\section{Introduction}
There is convincing evidence from astronomical and cosmological observations that $\approx 85\%$ of the matter in the Universe is in the form of cold, nonbaryonic dark matter (DM), see Ref.~\cite{Bertone:2016nfn} for a historical review. The study of Primordial Black Holes (PBHs), black holes formed via the collapse of large overdensities in the early Universe, dates back to the 1960s and 70s~\cite{zn,Hawking:1971ei}. It was realised early on that PBHs are a potential DM candidate~\cite{Hawking:1971ei,Chapline:1975ojl}.
As they form before matter-radiation equality, PBHs are non-baryonic. While PBHs are thought to evaporate via Hawking radiation~\cite{Hawking:1974sw,Hawking:1974rv}, those with initial mass $M_{\rm PBH} \gtrsim 5 \times 10^{14} \, {\rm g}$ have a lifetime longer than the age of the Universe~\cite{Page:1976df,MacGibbon:2007yq}. On cosmological scales PBH DM would behave like particle DM; however, on galactic and smaller scales its granularity can have observable consequences.
The MACHO collaboration's 2-year Large Magellanic Cloud microlensing results~\cite{Alcock:1996yv}
in the mid-late 1990s generated a wave of interest in PBH DM. They observed significantly more events than expected from known stellar populations. This excess was consistent with roughly half of the Milky Way (MW) halo being in 0.5 $M_{\odot}$ compact objects, with astrophysical compact objects excluded by baryon budget arguments~\cite{Fields:1999ar}. With subsequent data sets the allowed halo fraction decreased somewhat~\cite{Alcock:2000ph,Tisserand:2006zx} (see Sec.~\ref{sec:micro}). However, many of the ideas and models for producing PBH DM date back to this time.
In 2016 LIGO-Virgo announced the discovery of gravitational waves from mergers of tens of Solar mass black holes~\cite{Abbott:2016blz}. The possibility that these BHs could be primordial rather than astrophysical~\cite{Bird:2016dcv,Clesse:2016vqa,Sasaki:2016jop} has led to a second, larger, wave of interest in PBH DM. At that time Ref.~\cite{Carr:2016drx} carried out a comprehensive review of PBH DM, highlighting several mass windows where PBHs could make up all of the DM. Subsequently there have been significant refinements of observational constraints on the abundance of PBHs. New constraints have been proposed, while some existing constraints have been weakened, or even removed completely. There have also been significant developments in
the theoretical calculations of PBH formation.
Here we aim to provide a relatively concise overview of the current (Summer 2020) status of PBHs as a dark matter candidate,~\footnote{We confine our attention to PBHs themselves as a DM candidate. The evaporation of light ($M_{\rm PBH} \lesssim 5 \times 10^{14} \, {\rm g}$) PBHs can produce stable massive particles (e.g. Ref.~\cite{Fujita:2014hha}) or leave stable Planck mass relics~\cite{MacGibbon:1987my}, both of which could also constitute the dark matter.} aimed at readers outside the field. For a comprehensive recent review of constraints on the abundances of PBHs of all masses, with an extensive reference list, see Ref.~\cite{Carr:2020gox}. Reference~\cite{Carr:2020xqk} provides a recent overview of various potential observational consequences of PBHs, including dark matter. For a detailed review (circa 2018) of observational constraints on non-evaporated PBHs, PBH formation from large perturbations produced by inflation and PBH binaries as a source of gravitational waves see Ref.~\cite{Sasaki:2018dmp}. Reference~\cite{Khlopov:2008qy} covers formation mechanisms while Ref.~\cite{Ali-Haimoud:2019khd} focuses on future electromagnetic probes of PBHs as dark matter.
In Sec.~\ref{sec:dp}, we review the formation of PBHs, focusing mainly on the collapse of large density perturbations produced by inflation.
In Sec.~\ref{sec:constraints}, we overview the various constraints on the present day abundance of PBHs, including potential future constraints. Finally we conclude with a summary of the current status and open questions in Sec.~\ref{sec:summary}.
Throughout our intention is not to mention all (or even most) papers published on a topic, but to briefly describe the origin of a calculation, model or constraint and to summarise the current status.
We set $c=1$.
\section{PBH formation}
\label{sec:dp}
The most commonly considered PBH formation mechanism is the collapse, during radiation domination, of large (adiabatic) density perturbations generated by a period of inflation
in the very early Universe. We first overview the formation of PBHs via the collapse of density perturbations during radiation domination (Sec.~\ref{sec:raddom}) and during matter domination (Sec.~\ref{sec:matdom}). We then discuss how large perturbations can be generated by inflation (Sec.~\ref{sec:large}). Finally in Sec.~\ref{sec:otherform} we briefly review other PBH formation mechanisms. We start Secs.~\ref{sec:raddom} and ~\ref{sec:large} with an overview of the essential physics, so that readers who are not interested in the subsequent details can skip them.
\subsection{Collapse of density perturbations during radiation domination}
\label{sec:raddom}
In Sec.~\ref{sec:orig} we will first review the pioneering calculations by Carr~\cite{Carr:1975qj} of the criterion for PBH formation and the resulting PBH mass and abundance. These are sufficiently accurate for a rough understanding of the expected PBH properties. For the interested reader, we then look at refinements to the calculation of the criterion for PBH formation (Sec.~\ref{sec:deltac}), the mass of an individual PBH (Sec.~\ref{sec:mind}),
the PBH abundance, including their mass function (Sec.~\ref{sec:betamf}) and finally spin and clustering (Sec.~\ref{sec:spincluster}). For a more extensive review of PBH formation from large inflationary perturbations, see Ref.~\cite{Sasaki:2018dmp}.
For detailed recent studies of the relationship between the PBH abundance and the primordial power spectrum, see Refs.~\cite{Kalaja:2019uju,Gow:2020bzo}.
\subsubsection{Original calculation}
\label{sec:orig}
By considering the Jeans' length and using Newtonian gravity, Ref.~\cite{Carr:1975qj} found that a PBH will form if the density contrast $\delta \equiv \delta \rho / \rho$ in the comoving slice~\footnote{The density contrast at horizon crossing is a gauge dependent quantity. For a detailed discussion, see Sec. V of Ref.~\cite{Harada:2013epa}.}, evaluated when a given scale enters the horizon exceeds a critical, or threshold, value, $\delta_{\rm c}$, given by $\delta_{\rm c} = c_{\rm s}^2$.~\footnote{PBHs could also form on scales that never exit the horizon~\cite{Lyth:2005ze}.} Here, $c_{\rm s}$ is the sound speed, which is equal to $1/\sqrt{3}$ during radiation domination. The mass of the resulting PBH is of order the horizon mass at that time, $M_{\rm PBH} \sim M_{\rm H}$, where
\begin{equation}
M_{\rm H} \sim \frac{t}{G} \sim 10^{15} \, {\rm g} \left( \frac{t}{10^{-23} \, {\rm s}} \right) \,.
\end{equation}
See Eq.~(\ref{mhr}) in Sec.~\ref{sec:betamf} below for a more precise expression for $M_{\rm H}$. A PBH formed at around the QCD phase transition $(t \sim 10^{-6} \, {\rm s})$ would have a mass of order a Solar mass ($M_{\odot}= 2 \times 10^{33} \, {\rm g}$). These initial analytic PBH formation calculations were supported by numerical simulations a few years later~\cite{nnp}.
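As a quick numerical check of these scalings, the following minimal Python sketch restores $c$ explicitly (the text sets $c=1$); the prefactor in $M_{\rm H} \sim t/G$ is of order unity, so only orders of magnitude are meaningful, and the constants are rounded:
\begin{verbatim}
# Order-of-magnitude horizon mass M_H ~ c^3 t / G, cgs units throughout.
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10      # speed of light [cm s^-1]
M_SUN = 2e33      # solar mass [g]

def horizon_mass(t):
    """Mass inside the horizon at cosmic time t [s], in grams."""
    return c**3 * t / G

print(f"M_H(1e-23 s) ~ {horizon_mass(1e-23):.0e} g")             # of order 1e15 g
print(f"M_H(1e-6 s)  ~ {horizon_mass(1e-6) / M_SUN:.1f} M_sun")  # ~0.2 M_sun
\end{verbatim}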
Reference~\cite{Carr:1975qj} also calculated the initial (i.e.~at the time of formation) abundance of PBHs
\begin{equation}
\label{beta}
\beta(M_{\rm H}) \equiv \frac{\rho_{\rm PBH}}{\rho_{\rm tot}} = \int_{\delta_{\rm c}}^{\infty} P(\delta) \, {\rm d} \delta
\sim \sigma(M_{\rm H}) \exp{ \left( - \frac{\delta^2_{\rm c}}{2 \sigma^2(M_{\rm H})} \right)} \,,
\end{equation}
where the final step assumes that the probability distribution of primordial density perturbations, $P(\delta)$, is Gaussian with variance $\sigma^2(M_{\rm H}) \ll \delta^2_{\rm c}$. See Sec.~\ref{sec:betamf} below for an expansion on this calculation, including a more precise definition of the mass variance, $\sigma(M_{\rm H})$, in Eq.~(\ref{massvariance}). Since the abundance of PBHs, $\beta$, depends exponentially on the typical size of the fluctuations and the threshold for collapse, uncertainties in these quantities lead to large (potentially orders of magnitude) uncertainty in $\beta$.
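The exponential sensitivity noted above is easy to see numerically. The following sketch evaluates the Gaussian-tail estimate of Eq.~(\ref{beta}), taking the illustrative value $\delta_{\rm c}=0.5$ used for the rough calculation in Sec.~\ref{sec:infintro}; a $20\%$ change in $\sigma$ shifts $\beta$ by several orders of magnitude:
\begin{verbatim}
import math

def beta_tail(sigma, delta_c=0.5):
    """Gaussian-tail estimate beta ~ sigma * exp(-delta_c^2/(2 sigma^2))."""
    return sigma * math.exp(-delta_c**2 / (2.0 * sigma**2))

for sigma in (0.05, 0.055, 0.06):
    print(f"sigma = {sigma:.3f}  ->  beta ~ {beta_tail(sigma):.1e}")
# sigma = 0.050 gives beta ~ 1e-23, while sigma = 0.060 gives beta ~ 5e-17
\end{verbatim}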
\subsubsection{Criterion for PBH formation}
\label{sec:deltac}
There has been extensive work on the criterion for PBH formation in the past decade. For a more detailed review see the introduction of Ref.~\cite{Musco:2018rwt} and Sec.~2.1 of Ref.~\cite{Shibata:1999zs}.
Reference~\cite{Harada:2013epa} calculated the density threshold analytically finding, in the comoving slice, $\delta_{\rm c} = 0.41$ for radiation domination.
The threshold for collapse depends significantly on the shape of the density perturbation~\cite{Nakama:2013ica}. For a Gaussian density field the shape of rare peaks depends on the power spectrum~\cite{Bardeen:1985tr}. Consequently the threshold for PBH formation, and hence their abundance, depends on the form of the primordial power spectrum~\cite{Germani:2018jgr}. Recently Ref.~\cite{Musco:2018rwt} has studied a wide range of shapes and found that the threshold is lowest ($\delta_{\rm c} \approx 0.41$) for broad shapes where pressure gradients play a negligible role and highest ($\delta_{\rm c} \approx 0.67$) for peaked shapes where pressure gradients are large. The lower limit agrees well with the analytic estimate of the threshold in Ref.~\cite{Harada:2013epa}.
Subsequently Ref.~\cite{Escriva:2019phb} has shown that the criterion for collapse is, to an excellent approximation, universal when expressed in terms of the average of the compaction function (which quantifies the gravitational potential) inside the radius at which it is maximised. If the abundance of PBHs is calculated using peaks theory (see Sec.~\ref{sec:betamf}) rather than using Eq.~(\ref{beta}) (or its more accurate form Eq.~(\ref{betaimp}) below) then the peak amplitude, rather than the average value, of the perturbation is required and this is also calculated in Ref.~\cite{Musco:2018rwt}.
Deviations from spherical symmetry could in principle affect the threshold for collapse and hence the abundance of PBHs. However the ellipticity of large peaks in a Gaussian random field is small~\cite{doroshkevich,Bardeen:1985tr}, and numerical simulations have recently shown that the effect on the threshold for collapse is negligibly small~\cite{Yoo:2020lmg}.
While non-Gaussianity of the primordial perturbations can have a significant effect on the abundance of PBHs (see Sec.~\ref{sec:betamf}), its effect on the threshold for collapse alone is also relatively small, of order a few percent~\cite{Kehagias:2019eil}.
In principle it is also possible to calculate the abundance of PBHs using the primordial curvature perturbation (which is introduced in Sec.~\ref{sec:infintro}) rather than the density contrast. However, as emphasised in Ref.~\cite{Young:2014ana} (see also Ref.~\cite{Shibata:1999zs}), perturbations on scales larger than the cosmological horizon can not (because of causality) affect whether or not a PBH forms. Consequently care must be taken if using the curvature perturbation to ensure that super-horizon modes don't lead to unphysical results.
\subsubsection{Mass of an individual PBH}
\label{sec:mind}
It was realised in the late 1990s that, due to near critical gravitational collapse~\cite{Choptuik:1992jv}, the mass of a PBH depends on the amplitude of the fluctuation from which it forms~\cite{Niemeyer:1997mt,Niemeyer:1999ak}:
\begin{equation}
M_{\rm PBH} = \kappa M_{\rm H} (\delta - \delta_{\rm c})^{\gamma} \,,
\end{equation}
where the constants $\gamma$ and $\kappa$ depend on the shape of the perturbation and the background equation of state~\cite{Niemeyer:1999ak,Musco:2004ak}. Numerical simulations have verified that this power law scaling of the PBH mass holds down to $(\delta-\delta_{\rm c}) \sim 10^{-10}$~\cite{Musco:2008hv,Musco:2012au}. For PBHs formed from Mexican hat-shaped perturbations during radiation domination $\gamma=0.357$ and $\kappa = 4.02$~\cite{Musco:2008hv}.
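A short sketch of this scaling, using the Mexican-hat values of $\gamma$ and $\kappa$ quoted above (the threshold $\delta_{\rm c}=0.45$ is an illustrative choice, not a fixed prediction):
\begin{verbatim}
def pbh_mass(delta, M_H, delta_c=0.45, gamma=0.357, kappa=4.02):
    """Critical collapse: M_PBH = kappa * M_H * (delta - delta_c)^gamma."""
    assert delta > delta_c, "no PBH forms below threshold"
    return kappa * M_H * (delta - delta_c)**gamma

# Near-threshold fluctuations produce PBHs well below the horizon mass:
for excess in (1e-2, 1e-5, 1e-10):
    print(f"delta - delta_c = {excess:.0e}  ->  "
          f"M_PBH/M_H = {pbh_mass(0.45 + excess, 1.0):.1e}")
\end{verbatim}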
\subsubsection{PBH abundance and mass function}
\label{sec:betamf}
The fraction of the energy density of the Universe contained in regions overdense enough to form PBHs is usually calculated, as in Press-Schechter theory~\cite{Press:1973iz}, as~\footnote{The factor of 2 is usually included by hand to avoid the under-counting that otherwise
occurs in Press-Schechter theory.}
\begin{equation}
\beta(M_{\rm H}) = 2 \int_{\delta_{\rm c}}^{\infty} \frac{M_{\rm PBH}}{M_{\rm H}} \, P(\delta(R)) \, {\rm d} \delta(R) \,.
\end{equation}
Assuming that the probability distribution of the smoothed density contrast at horizon crossing, $\delta(R)$, is Gaussian with mass variance $\sigma(R)$ and that all PBHs form at the same time (i.e.~at the same value of $M_{\rm H}$), with the same mass $M_{\rm PBH} = \alpha M_{\rm H}$, then the PBH mass function is monochromatic and
\begin{equation}
\label{betaimp}
\beta(M_{\rm H}) = \sqrt{\frac{2}{\pi}} \frac{\alpha}{\sigma(R)} \int_{\delta_{\rm c}}^{\infty} \exp{ \left( - \frac{\delta^2(R)}{2 \sigma^2(R)}\right)} \, {\rm d} \delta(R) = \alpha \, {\rm erfc} \left( \frac{\delta_{\rm c}}{\sqrt{2} \sigma(R)}\right) \,.
\end{equation}
The mass variance is given by~\cite{Blais:2002gw,Josan:2009qn}
\begin{equation}
\label{massvariance}
\sigma^{2}(R) = \frac{16}{81} \int_{0}^{\infty} (k R)^4 W^2(kR) {\cal P}_{\cal R}(k) T^2(kR/\sqrt{3}) \, \frac{{\rm d} k}{k} \,,
\end{equation}
where $W(kR)$ is the Fourier transform of the window function used to smooth the density contrast on a comoving scale $R$, ${\cal P}_{\cal R}(k)$ is the power spectrum of the primordial comoving curvature perturbation (see e.g. Ref.~\cite{Malik:2008im}) and $T(y)$ is the transfer function which describes the evolution of the density perturbations on subhorizon scales:
\begin{equation}
T(y) = 3 \, \frac{ \left( \sin{y} - y \cos{y}\right) }{y^3} \,.
\end{equation}
The appropriate window function to use for PBH formation is not known and the relationship between the amplitude of the power spectrum and $\sigma(R)$ (and hence the abundance of PBHs formed) depends significantly on the choice of window function~\cite{Ando:2018qdb}. For a locally scale-invariant power spectrum with amplitude ${\cal P}_{\cal R}(k) = A_{\rm PBH}$, one finds $\sigma^2(R) = b A_{\rm PBH}$ with $b =1.1, 0.09$ and $0.05$ for real-space top-hat, Gaussian and k-space top-hat window functions respectively~\cite{Ando:2018qdb}.
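The $b$ coefficients quoted above can be reproduced by direct numerical integration of Eq.~(\ref{massvariance}). The sketch below sets ${\cal P}_{\cal R}(k) = 1$ and $R=1$, and assumes the Gaussian window convention $W(kR) = \exp{(-k^2R^2/2)}$; other Gaussian conventions rescale that result:
\begin{verbatim}
import numpy as np

def transfer(y):
    """Radiation-era transfer function T(y) from the text."""
    return 3.0 * (np.sin(y) - y * np.cos(y)) / y**3

windows = {
    "real-space top-hat": lambda x: 3.0 * (np.sin(x) - x*np.cos(x)) / x**3,
    "Gaussian":           lambda x: np.exp(-x**2 / 2.0),
    "k-space top-hat":    lambda x: 1.0 * (x <= 1.0),
}

k = np.logspace(-4, 3, 200_000)   # kR, dimensionless (R = 1)
for name, W in windows.items():
    integrand = (16.0/81.0) * k**4 * W(k)**2 * transfer(k/np.sqrt(3.0))**2 / k
    b = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    print(f"{name:20s} b ~ {b:.2f}")   # ~1.1, ~0.09, ~0.05 respectively
\end{verbatim}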
The horizon mass, $M_{\rm H}$, within a comoving radius, $R$, during radiation domination is given by (c.f. Appendix A of Ref.~\cite{Wang:2019kaf})
\begin{equation}
\label{mhr}
M_{\rm H} = 5.6 \times 10^{15} M_{\odot} \left( \frac{g_{\star, {\rm i}}}{106.75} \right)^{-1/6} (R k_{0})^2 \,.
\end{equation}
This expression has been normalised to a fiducial comoving wavenumber, $k_{0} = 0.05 \, {\rm Mpc}^{-1}$, corresponding to the Cosmic Microwave Background (CMB) pivot scale and assumes that the initial effective degrees of freedom for entropy and energy density are equal and denoted by $g_{\star, {\rm i}}$. Peaks theory~\cite{Bardeen:1985tr}, which uses the heights of peaks in the density field rather than their averaged value, can also be used to calculate the PBH abundance~\cite{Green:2004wb,Young:2014ana,MoradinezhadDizgah:2019wjf}.
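For orientation, Eq.~(\ref{mhr}) with $R = 1/k$ gives the horizon mass corresponding to a comoving scale $k$; a brief sketch (the wavenumbers below are chosen purely for illustration):
\begin{verbatim}
def horizon_mass_solar(k_mpc, g_star_i=106.75):
    """Eq. (mhr) with R = 1/k: horizon mass [M_sun] for k in Mpc^-1,
    normalised to the CMB pivot scale k_0 = 0.05 Mpc^-1."""
    k0 = 0.05
    return 5.6e15 * (g_star_i / 106.75)**(-1.0/6.0) * (k0 / k_mpc)**2

for k in (1e6, 1e9, 1e12):
    print(f"k = {k:.0e} Mpc^-1  ->  M_H ~ {horizon_mass_solar(k):.1e} M_sun")
# k ~ 1e6 Mpc^-1 corresponds to M_H ~ 10 M_sun
\end{verbatim}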
In the case of critical collapse, where the mass of a PBH depends on the size of the fluctuation from which it forms (see Sec.~\ref{sec:mind}), even if all PBHs form at the same time they have an extended mass function (MF)~\cite{Niemeyer:1997mt}. While the MF is peaked close to the horizon mass, it has a significant low mass tail.
If the mass of each PBH remains constant and mergers are negligible (see Sec.~\ref{sec:gwmergers} for discussion of the latter) then the PBH density evolves with the scale factor, $a$, as $\rho_{\rm PBH} \propto a^{-3}$. During matter domination the fraction of the total density in the form of PBHs remains constant, while during radiation domination it grows proportional to $a$. For a monochromatic mass function the present day (at $t=t_{0}$) PBH density parameter is given by~\cite{Carr:2009jm}
\begin{equation}
\label{omegapbh}
\Omega_{{\rm PBH}} \equiv \frac{\rho_{\rm PBH}(t_0)}{\rho_{\rm c}(t_0)} = \left( \frac{ \beta(M_{\rm H})}{1.1 \times 10^{-8}} \right)
\left( \frac{h}{0.7} \right)^{-2} \left( \frac{g_{\star, {\rm i}}}{106.75} \right)^{-1/4} \left( \frac{M_{\rm H}}{M_{\odot}} \right)^{-1/2} \,,
\end{equation}
where $\rho_{\rm c}$ is the critical density for which the geometry of the Universe is flat, $h$ is the dimensionless Hubble constant, $H_{0} = 100\, h \, {\rm km \, s}^{-1} \, {\rm Mpc}^{-1}$, and again it is assumed that the initial effective degrees of freedom for entropy and energy density are equal.
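A minimal sketch converting an initial mass fraction $\beta$ into the present-day PBH fraction via Eq.~(\ref{omegapbh}); the fiducial $\Omega_{\rm DM} \approx 0.26$ is an assumed value used only for illustration:
\begin{verbatim}
def omega_pbh(beta, M_H_solar, h=0.7, g_star_i=106.75):
    """Eq. (omegapbh): present-day PBH density parameter,
    monochromatic mass function, M_H in solar masses."""
    return ((beta / 1.1e-8) * (h / 0.7)**-2
            * (g_star_i / 106.75)**-0.25 * M_H_solar**-0.5)

OMEGA_DM = 0.26   # assumed fiducial dark matter density parameter
beta = 1e-8
print(f"f_PBH ~ {omega_pbh(beta, 1.0) / OMEGA_DM:.1f}")
# beta ~ 1e-8 at M_H ~ 1 M_sun gives f_PBH of order unity,
# as quoted later in the text.
\end{verbatim}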
Accretion of gas onto multi-Solar mass PBHs at late times can increase their mass significantly, and this needs to be taken into account when translating constraints on the present day PBH abundance into constraints on the initial PBH abundance~\cite{DeLuca:2020fpg}. Constraints arising from this accretion are discussed in Sec.~\ref{sec:accretionconstraint}.
If the primordial power spectrum has a finite width peak, then PBHs can be formed at a range of times and in this case the spread in formation times (or equivalently horizon masses at the time of formation) also needs to be taken into account. Often (e.g.~Refs.~\cite{Kuhnel:2015vtw,Clesse:2015wea,Carr:2016drx,Byrnes:2018txb,Wang:2019kaf}) the PBH mass function is calculated by binning the primordial power spectrum by horizon mass, calculating the mass function for each bin, and then summing these mass functions. The resulting mass functions can often be well fit by a lognormal distribution~\cite{Green:2016xgy,Kannike:2017bxn}. The accurate calculation of the MF resulting from a broad power spectrum is, however, an outstanding problem. In principle a region which is over-dense when smoothed on a scale $R_{1}$ could also be over-dense when smoothed on a scale $R_{2} > R_{1}$, and hence the original PBH with mass $M_{1}$ is then subsumed within a PBH of mass $M_{2}> M_{1}$. This general situation in structure formation is known as the `cloud in cloud' problem. Reference~\cite{MoradinezhadDizgah:2019wjf} argued that for a broad power spectrum the probability that a PBH with mass $M_{1}$ is subsumed within a PBH with mass $M_{2} \gg M_{1}$ is small. For work towards an accurate calculation of the PBH MF, see e.g.~Refs.~\cite{Suyama:2019npc,Germani:2019zez}.
During phase transitions the pressure is reduced and consequently
the threshold for PBH formation, $\delta_{\rm c}$, is reduced (e.g.~Refs.~\cite{Carr:1975qj,Harada:2013epa}) and PBHs form more abundantly. In particular (if the primordial power spectrum is close to scale-invariant on small scales) the QCD phase transition leads to enhanced formation of Solar mass PBHs~\cite{Jedamzik:1996mr,Jedamzik:1999am,Byrnes:2018clq} and other phase transitions may lead to enhanced formation of PBHs with other masses~\cite{Carr:2019kxo}.
It has long been realised that since PBHs form from the high amplitude tail of the density perturbation probability distribution, non-Gaussianity can have a significant effect on their abundance~\cite{Bullock:1996at,Ivanov:1997ia}.
Reference~\cite{Franciolini:2018vbk} presents a path-integral formulation for calculating the PBH abundance (in principle) exactly in the presence of non-Gaussianity. In practice this is non-trivial as the resulting expression depends on all of the $n$-point correlation functions. In many PBH-producing inflation models quantum diffusion is important (see Sec.~\ref{sec:large}) and in this case the high amplitude tail of the probability distribution is exponential rather than Gaussian~\cite{Pattison:2017mbe,Ezquiaga:2019ftu}.
It should be noted that even if the underlying curvature perturbations are Gaussian, the non-linear relationship between density and curvature perturbations inevitably renders the distribution of large density perturbations non-Gaussian~\cite{Kawasaki:2019mbl,DeLuca:2019qsy,Young:2019yug}.
\subsubsection{Spin and clustering}
\label{sec:spincluster}
The rare high peaks in the density field from which PBHs form are close to spherically symmetric. Therefore the torques on the collapsing perturbation, and the resulting angular momentum, are small. Consequently PBHs are formed with dimensionless spin parameters, $a = |{\bf S}|/(G M_{\rm PBH}^2)$ where ${\bf S}$ is the spin, of order $0.01$ or smaller~\cite{Mirbabayi:2019uph,DeLuca:2019buf}. Note however that accretion of gas at late times may increase the spin of massive ($M_{\rm PBH} \gtrsim 30 M_{\odot}$) PBHs~\cite{DeLuca:2020bjf}.
As PBHs are discrete objects there are Poissonian fluctuations in their distribution~\cite{Afshordi:2003zb}.
The initial clustering of PBHs was first studied in Refs.~\cite{Afshordi:2003zb,Chisholm:2005vm}, which found that PBHs would be formed in clusters. More recently it has been shown that if the primordial curvature perturbations are Gaussian and have a narrow peak then the PBHs are not initially clustered, beyond Poisson~\cite{Ali-Haimoud:2018dau,Desjacques:2018wuu,Ballesteros:2018swv}.
Reference~\cite{MoradinezhadDizgah:2019wjf} has argued that the initial clustering is also small for broad spectra. However local non-Gaussianity~\footnote{In local non-Gaussianity the probability distribution of the primordial fluctuations is a local function of one or more Gaussian random fields. Non-negligible local non-Gaussianity can arise if there are multiple light scalar fields present during inflation, e.g. Ref.~\cite{Wands:2010af}.} can lead to enhanced initial clustering~\cite{Tada:2015noa,Young:2015kda,Suyama:2019cst}.
\subsection{Collapse of density perturbations during matter domination}
\label{sec:matdom}
It is usually assumed that the evolution of the Universe is radiation dominated from the end of inflation up until matter-radiation equality at $t_{\rm eq} = 1.7 \times 10^{12} \, {\rm s}$. However it is possible that there could be a period of matter domination (with equation of state $p \approx 0$) prior to Big Bang Nucleosynthesis due to, for instance, long-lived particles dominating the Universe and then decaying (see e.g.~Refs.~\cite{Khlopov:1980mg,Georg:2016yxa,Carr:2017edp,Allahverdi:2020bys} for discussion in the context of PBH formation).
The criteria for PBH formation during matter domination are significantly different than during radiation domination. During matter domination the density contrast grows as $\delta \propto a$, so that in principle small perturbations can grow sufficiently to form a PBH. However a perturbation has to be (close to) spherical and homogeneous for a PBH to form~\cite{Khlopov:1980mg}. The modified expansion history of the Universe also modifies the relationship between the initial PBH mass fraction, $\beta$, and the present day PBH density parameter, $\Omega_{\rm PBH}$, Eq.~(\ref{omegapbh}).
The fraction of horizon-sized regions which collapse to form a PBH can be written as the product of the fraction of regions which separately satisfy the inhomogeneity and anisotropy criteria: $\beta = \beta_{\rm inhom} \times \beta_{\rm aniso}$~\cite{Kokubu:2018fxy}. If a perturbation is not sufficiently spherically symmetric it will collapse to form a pancake or cigar. Khlopov and Polnarev originally found $\beta_{\rm aniso} \approx 0.02 \sigma^{5}$, where $\sigma$ is the mass variance as defined in Eq.~(\ref{massvariance}). Reference~\cite{Harada:2016mhb} revisited this calculation numerically. Their result, which is valid for all $\sigma$, can be approximated by $\beta_{\rm aniso} \approx 0.056 \sigma^{5}$ for $\sigma \lesssim 0.01$.
Polnarev and Khlopov argued that for a PBH to form, a fluctuation must collapse to within its Schwarzschild radius before a caustic can form at its centre, and found that the fraction of regions which satisfy this criterion is given by $\beta_{\rm inhom} \approx \sigma^{3/2}$~\cite{Khlopov:1981}. Taking into account the finite propagation speed of information, $\beta_{\rm inhom} \approx 3.7 \sigma^{3/2}$ (for $\sigma \ll 1$)~\cite{Kokubu:2018fxy}. The final result (for $\sigma \ll 1$) is $\beta \approx 0.21 \sigma^{13/2}$~\cite{Kokubu:2018fxy}, and if $\sigma \lesssim 0.05$, PBHs form more abundantly during matter domination than during radiation domination.
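The crossover near $\sigma \sim 0.05$ can be seen by comparing the matter-domination result with the radiation-domination abundance from Eq.~(\ref{betaimp}); a sketch, taking $\delta_{\rm c} = 0.41$ and $\alpha = 1$ purely for illustration:
\begin{verbatim}
import math

def beta_matter(sigma):
    """beta ~ 0.21 sigma^(13/2), valid for sigma << 1."""
    return 0.21 * sigma**6.5

def beta_radiation(sigma, delta_c=0.41):
    """Eq. (betaimp) with alpha = 1."""
    return math.erfc(delta_c / (math.sqrt(2.0) * sigma))

for sigma in (0.1, 0.05, 0.02):
    print(f"sigma = {sigma:.2f}: matter ~ {beta_matter(sigma):.1e}, "
          f"radiation ~ {beta_radiation(sigma):.1e}")
# For sigma below ~0.05 the matter-domination power law dominates the
# exponentially suppressed radiation-domination tail.
\end{verbatim}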
As angular momentum plays a significant role in PBH formation during matter domination, the PBHs are formed with large spins: $a \gtrsim 0.5$, with the exact value depending on the duration of the period of matter domination and also on $\sigma$~\cite{Harada:2017fjm}. Since PBHs formed during matter domination don't form from the high peaks of the density field, local primordial non-Gaussianity would lead to smaller initial clustering than for formation during radiation domination; however it can still be much larger than the Poisson shot noise~\cite{Matsubara:2019qzv}.
\subsection{Generation of large primordial perturbations by inflation}
\label{sec:large}
Inflation is a period of accelerated expansion ($\ddot{a}> 0$, where $\dot{}$ denotes a derivative with respect to time) in the early Universe, originally proposed to solve various problems with the standard Big Bang. It also provides a mechanism for generating primordial perturbations, via quantum fluctuations of scalar fields.
For a comprehensive and comprehensible introduction to inflation see Baumann's lecture notes~\cite{Baumann:2009ds}.
In Sec.~\ref{sec:infintro} we review the aspects of inflation that are relevant to PBH formation and briefly discuss the requirements for generating large, PBH-producing perturbations. For the interested reader, we then look at PBH-producing single-field (Sec.~\ref{sec:single}) and multi-field (Sec.~\ref{sec:multi}) inflation models in more detail. A significant number of PBH-producing inflation models have been proposed. We will not attempt a detailed study of all possible models, but instead focus on examples of the types of models that can produce large primordial perturbations from a phenomenological point of view. In many cases the initial ideas (a plateau in the potential of a single field~\cite{Ivanov:1994pa}, hybrid inflation~\cite{Randall:1995dj,GarciaBellido:1996qt}, double inflation~\cite{Silk:1986vc,Kawasaki:1997ju} and a spectator field~\cite{Yokoyama:1995ex}) were proposed in the 1990s, motivated by the
excess of LMC microlensing events observed by the MACHO collaboration~\cite{Alcock:1996yv}. In recent years, motivated by the LIGO-Virgo discovery of massive BH binaries, these models have been revisited and refined, taking into account theoretical and observational developments in the intervening decades.
\subsubsection{Introduction}
\label{sec:infintro}
In most models of inflation the accelerated expansion is driven by a scalar field known as the inflaton. The Friedmann equation for the expansion of a universe dominated by a scalar field $\phi$ with potential $V(\phi)$ is
\begin{equation}
\label{Friedmann}
H^2 = \frac{8 \pi}{3 m_{\rm pl}^2} \left[ \frac{1}{2} \dot{\phi}^2 + V(\phi) \right] \,,
\end{equation}
and the evolution of the scalar field is governed by the Klein-Gordon equation
\begin{equation}
\label{kg}
\ddot{\phi} + 3 H \dot{\phi} + \frac{{\rm d} V}{{\rm d} \phi} =0 \,.
\end{equation}
The dynamics of inflation are often studied using the (potential) slow-roll parameters $\epsilon_{V}$ and $\eta_{V}$~\footnote{The Hubble slow-roll parameters, defined in terms of the Hubble parameter and its derivatives, allow for a more accurate calculation of the power spectrum and also a more accurate definition of the condition for accelerated expansion (see e.g.~Ref.~\cite{Baumann:2009ds}). We use the potential slow-roll parameters here because their definition in terms of the potential is initially more intuitive.}
\begin{equation}
\epsilon_{V} \equiv \frac{m_{\rm pl}^2}{16 \pi} \left( \frac{V^{\prime}}{V} \right)^2 \,, \hspace{2.0cm}
\eta_{V} \equiv \frac{m_{\rm pl}^2}{8 \pi} \left( \frac{V^{\prime \prime}}{V} \right) \,,
\end{equation}
where ${}^{\prime}$ denotes derivatives with respect to $\phi$. Accelerated expansion occurs when $\epsilon_{V} \lesssim 1 $. In the slow-roll approximation (SRA), $\epsilon_{V}$ and $ \eta_{V}$ are both much less than one, and the $\dot{\phi}$ term in the Friedmann equation (Eq.~(\ref{Friedmann})) and the $\ddot{\phi}$ term in the Klein-Gordon equation (Eq.~(\ref{kg})) are negligible. In this regime the power spectrum of the primordial curvature perturbation, ${\cal P}_{\cal R}(k) \equiv (k^3/2 \pi^2) \langle |{\cal R}_{k}|^2 \rangle$, is given by
\begin{equation}
\label{psv}
{\cal P}_{\cal R}(k) \approx \frac{ 8 }{3 m_{\rm pl}^4} \frac{V}{\epsilon_{V}} \,,
\end{equation}
where $V$ and $\epsilon_{V}$ are to be evaluated when the scale of interest exits the horizon during inflation, $k=aH$. It is common to use a Taylor
expansion of the spectral index to parameterise the primordial power spectrum on cosmological scales ($k_{\rm cos} \approx (10^{-3}-1) \, {\rm Mpc}^{-1}$):~\footnote{N.b.~this approach should not be used to extrapolate down to PBH DM forming scales, $k_{\rm PBH} \sim (10^{5}-10^{15}) \, {\rm Mpc}^{-1}$, as the expansion does not converge over this much wider range of scales~\cite{Green:2018akb}.}
\begin{equation}
{\cal P}_{\cal R}(k) = A_{\rm s} \left( \frac{k}{k_{0}} \right)^{(n_{\rm s}(k) -1)} \,,
\end{equation}
where $k_{0}$ is the pivot scale about which the expansion is carried out and
\begin{equation}
n_{\rm s}(k)= n_{\rm s}|_{k_{0}} + \frac{1}{2} \left. \frac{{\rm d} n_{\rm s}}{{\rm d} \ln{k}}
\right|_{k_{0}} \ln{\left( \frac{k}{k_{0}} \right)} + ... \,.
\end{equation}
In the SRA, $n_{\rm s} |_{k_{0}} -1 = 2 \eta_{V} - 6\epsilon_{V}$. Primordial tensor modes, which manifest as gravitational waves, are also generated by inflation and in the SRA the ratio of the amplitudes of the tensor and scalar power spectra on cosmological scales (known as the `tensor to scalar ratio') is given by $r \equiv {\cal P}_{\rm t}(k_{0})/{\cal P}_{\cal R}(k_{0}) = 16 \epsilon_V$.
The amplitude and scale dependence of the primordial perturbations on cosmological scales are now accurately measured. In particular a scale-invariant power spectrum ($n_{\rm s}=1$) is excluded at high confidence. From Planck 2018, combined with other CMB and large scale structure data sets~\cite{Akrami:2018odb}:
\begin{eqnarray}
\ln{(10^{10} A_{\rm s})} &=& 3.044 \pm 0.014 \,, \\
n_{\rm s} |_{0.05 \, {\rm Mpc}^{-1}} &=& 0.9668 \pm 0.0037 \,, \\
r &<& 0.063 \,,
\end{eqnarray}
where $1\sigma$ errors on measured parameters and a $95\%$ upper confidence limit on $r$ are stated.
The upper limit on $r$ leads to a relatively tight constraint on the slope of the potential in the region that corresponds to cosmological scales, $\epsilon_{V} < 0.0039$, and various inflation models are tightly constrained, or excluded (e.g. $V(\phi) \propto \phi^2$)~\cite{Akrami:2018odb}.
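As an example of how such exclusions arise, the following sketch evaluates the SRA predictions for the quadratic potential $V(\phi) \propto \phi^2$, for which $\epsilon_{V} = \eta_{V} = 1/(2N)$, with $N$ the number of e-folds before the end of inflation at which cosmological scales exit the horizon (a standard textbook result, e.g.~Ref.~\cite{Baumann:2009ds}):
\begin{verbatim}
# Slow-roll predictions for V(phi) ~ phi^2, using the SRA relations
# n_s - 1 = 2 eta_V - 6 eps_V and r = 16 eps_V from the text.
for N in (50, 60):
    eps = 1.0 / (2.0 * N)    # eps_V = eta_V for a quadratic potential
    n_s = 1.0 + 2.0 * eps - 6.0 * eps
    r = 16.0 * eps
    print(f"N = {N}: n_s = {n_s:.3f}, r = {r:.3f}")
# r ~ 0.13-0.16 violates the bound r < 0.063, excluding this model.
\end{verbatim}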
There are also constraints on smaller scales from spectral distortions of the CMB and induced gravitational waves (see Sec.~\ref{sec:indirect} for a more detailed discussion). However the current constraints on the amplitude of the power spectrum on small scales are fairly weak. The COBE/FIRAS limits on spectral distortions require ${\cal P}(k) \lesssim 10^{-4}$~\cite{Fixsen:1996nj,1994ApJ...420..439M} for $k \approx (10-10^{4}) \, {\rm Mpc}^{-1}$ and Pulsar Timing Array (PTA) limits on gravitational waves require ${\cal P}(k) \lesssim 10^{-2}$ for $k \approx (10^{6}-10^{7}) \, {\rm Mpc}^{-1}$~\cite{Byrnes:2018txb,Inomata:2018epa,Chen:2019xse}.
However a future PIXIE-like experiment could tighten the spectral distortion constraint to ${\cal P}(k) \lesssim 10^{-8}$~\cite{Chluba:2019nxa} and SKA and LISA will improve the current induced gravitational wave constraints over a wide range of smaller scales~\cite{Byrnes:2018txb,Inomata:2018epa}.
On cosmological scales the amplitude of the primordial curvature perturbation power spectrum has
been measured to be $A_{\rm s} =2.1 \times 10^{-9}$~\cite{Akrami:2018odb}. If the power spectrum were completely scale invariant~\footnote{As we saw above, on cosmological scales the power spectrum is in fact `red', $n_{\rm s} < 1$, with amplitude decreasing with increasing wavenumber $k$.} then using Eq.~(\ref{betaimp}), and assuming $\delta_{\rm c}=0.5$ and $\sigma^2 = A_{\rm s}$, which is sufficient for a rough calculation, the initial mass fraction of PBHs formed during radiation domination would be $\beta \approx {\rm erfc}(7000) \approx \exp{[-(7000)^2]}/7000$, i.e.~completely negligible.
Conversely if all of the DM is in PBHs with $M_{\rm PBH} \sim M_{\odot}$ then, from Eq.~(\ref{omegapbh}), the initial PBH mass fraction must be $\beta \sim 10^{-8}$. From Eq.~(\ref{betaimp}) this requires $\delta_{c}/(\sqrt{2} \sigma) \sim 4$, and hence the mass variance on the corresponding scale must be $\sigma \sim 0.1$. This requires the amplitude of the primordial power spectrum on this scale to be $A_{\rm PBH} \sim 0.01$, 7 orders of magnitude larger than its measured value on cosmological scales. We note here that since $\beta$ depends exponentially on the amplitude of the perturbations, fine-tuning is required to achieve an interesting (i.e.~neither negligible nor unphysically large, $\Omega_{\rm PBH} \gg 1$) abundance of PBHs. Furthermore to produce PBHs of a particular mass the peak in the power spectrum must occur on a specific scale, given by Eq.~(\ref{mhr}).
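This chain of estimates can be carried out explicitly by inverting Eq.~(\ref{betaimp}). The sketch below assumes $\delta_{\rm c} = 0.5$, $\alpha = 1$ and the real-space top-hat value $b \approx 1.1$ from Sec.~\ref{sec:betamf}:
\begin{verbatim}
import math
from scipy.special import erfcinv

beta_target = 1e-8              # beta needed for f_PBH ~ 1 at ~M_sun
delta_c = 0.5
x = erfcinv(beta_target)        # x = delta_c / (sqrt(2) sigma)
sigma = delta_c / (math.sqrt(2.0) * x)
A_pbh = sigma**2 / 1.1          # real-space top-hat window
print(f"x ~ {x:.1f}, sigma ~ {sigma:.2f}, A_PBH ~ {A_pbh:.0e}")
# x ~ 4, sigma ~ 0.1, A_PBH ~ 1e-2: seven orders of magnitude above
# the measured amplitude A_s ~ 2e-9 on cosmological scales.
\end{verbatim}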
Figure~\ref{fig:Pk} compares the amplitude of the power spectrum required on small scales to form PBH DM (${\cal P}_{\cal R}(k) \sim 10^{-2}$)~\footnote{The required amplitude is in fact scale dependent. Lighter PBHs form earlier and therefore their density relative to the total density grows for longer. Consequently the initial PBH density corresponding to a given present day density is smaller, and hence the amplitude of the power spectrum required for PBHs to make up all of the DM is smaller. However this scale dependence is smaller than the uncertainties discussed in Sec.~\ref{sec:raddom}, and therefore we do not include it here.} with the current measurements on cosmological scales from the CMB temperature angular power spectrum~\cite{Akrami:2018odb} and the Lyman--$\alpha$ forest~\cite{Bird:2010mp}, and current and future constraints on smaller scales~\cite{Byrnes:2018txb,Inomata:2018epa,Chluba:2019nxa}, which were mentioned above.~\footnote{To translate from $k$ to $M_{\rm H}$ we use Eq.~(\ref{mhr}), from Ref.~\cite{Wang:2019kaf}, which agrees with the conversion given in Ref.~\cite{Kalaja:2019uju}. However we note a discrepancy with Ref.~\cite{Byrnes:2018txb}.}
The steepest growth of the power spectrum which can be achieved in single field inflation~\cite{Byrnes:2018txb,Carrilho:2019oqg} (see Sec.~\ref{sec:single} for discussion) is also shown. As discussed in more detail in Sec.~\ref{sec:indirect}, the current constraints already indirectly exclude $M_{\rm PBH}/M_{\odot} \gtrsim 10^{3} $ and $ 10^{-2} \lesssim M_{\rm PBH}/M_{\odot} \lesssim 1 $ for PBH DM formed from the collapse of large inflationary density perturbations during radiation domination. Significant improvements in these indirect probes in the future will cover most of the mass range for PBHs formed via this mechanism.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/PS2.pdf}\\
\end{center}
\caption{Constraints on the primordial power spectrum ${\cal P}_{\cal R}(k)$ from the CMB temperature angular power spectrum~\cite{Akrami:2018odb} (red line), from the Lyman-$\alpha$ forest~\cite{Bird:2010mp} (blue), CMB spectral distortions~\cite{Fixsen:1996nj,1994ApJ...420..439M} (green) and pulsar timing array limits on gravitational waves~\cite{Byrnes:2018txb} (magenta). Potential constraints from a future PIXIE-like spectral distortion experiment~\cite{Chluba:2019nxa} (blue) and limits on gravitational waves from SKA, LISA and BBO (magenta)~\cite{Inomata:2018epa,Chluba:2019nxa} are shown as dotted lines. In each case the excluded regions are shaded. The spectral distortion and induced gravitational wave constraints depend on the shape of the primordial power spectrum, a $k^4$ growth followed by a sharp cut-off has been assumed here~\cite{Byrnes:2018txb}. The approximate amplitude, ${\cal P}_{\cal R}(k) \sim 10^{-2}$, required to form an interesting number of PBHs is shown as a black line (see text for details). The dotted black line shows the steepest growth possible in a single field inflation model~\cite{Byrnes:2018txb,Carrilho:2019oqg} (see Sec.~\ref{sec:single}). Adapted from Refs.~\cite{Byrnes:2018txb,Inomata:2018epa,Chluba:2019nxa}. }
\label{fig:Pk}
\end{figure}
As we saw in Eq.~(\ref{psv}), in the SRA the power spectrum is inversely proportional to $\epsilon_{V}$ and hence the slope of the potential, so that reducing the slope increases the amplitude of the perturbations. The potential then has to steepen again so that
$\epsilon_{V}$ becomes greater than 1 and inflation ends. This can be achieved by inserting a plateau or inflection point feature in the potential (see Sec.~\ref{sec:single}).
Alternatively large perturbations can be generated in multi-field models.
In this case typically a different field is responsible for the perturbations on small scales than on cosmological scales. This effectively decouples the constraints on cosmological scales from the requirements for generating large perturbations.
\subsubsection{Single-field models}
\label{sec:single}
In this subsection we discuss various ways of generating large PBH forming perturbations in single-field inflation models, namely features in the potential, running of the inflaton mass, hilltop models, and the reheating period at the end of inflation.
The possibility of producing PBHs by inserting a plateau in the potential was explored in the 1990s by Ivanov et al.~\cite{Ivanov:1994pa}. More recently Ref.~\cite{Garcia-Bellido:2017mdw} showed that a PBH-forming peak in the power spectrum
could be produced with a potential possessing an inflection point. Such a potential can be generated from a ratio of polynomials~\cite{Garcia-Bellido:2017mdw} or for a non-minimally coupled scalar field with a quartic potential~\cite{Ballesteros:2017fsr}.
Reference~\cite{Hertzberg:2017dkh} showed that if a quintic potential is fine-tuned so that a local minimum is followed by a small maximum (so that the field is slowed down but can still exit this region) a sufficiently large peak in the power spectrum can be produced.
As shown in Ref.~\cite{Motohashi:2017kbs}, for the power spectrum of canonical single-field models to grow by the required seven orders of magnitude the SRA has to be violated.
In the limit where the potential is flat a phase of non-attractor ultra-slow roll (USR) inflation occurs~\cite{Tsamis:2003px,Kinney:2005vj}, where the evolution of the inflaton (Eq.~(\ref{kg})) is governed by the expansion rate, rather than the slope of the potential. In this case the standard calculation of the power spectrum is not valid. In particular the curvature perturbations grow rapidly on superhorizon scales, rather than remaining constant, or `frozen out'. As emphasised by e.g.~Refs.~\cite{Germani:2017bcs,Ballesteros:2017fsr,Hertzberg:2017dkh}, in models with an inflection point, or shallow local minimum, a numerical calculation using the Sasaki-Mukhanov equations~\cite{Sasaki:1986hm,Mukhanov:1988jd} is required to accurately calculate the position and height of the resulting peak in the power spectrum. Also in this regime quantum diffusion (where quantum kicks of the inflaton field are larger than the classical evolution) occurs and has a significant effect on the probability distribution of the primordial perturbations, and hence the number of PBHs formed~\cite{Ivanov:1997ia,Pattison:2017mbe,Biagetti:2018pjj,Ezquiaga:2019ftu}.
Reference~\cite{Byrnes:2018txb} studied the fastest possible growth of the primordial power spectrum that can be achieved in principle in single-field inflation with a period of USR, finding ${\cal P}(k) \propto k^{4}$. Reference~\cite{Carrilho:2019oqg} subsequently showed that ${\cal P}(k) \propto k^{5} (\log{k})^2$ can be achieved for a specific form of the pre-USR inflationary expansion.
These constraints on the growth of the power spectrum are important because to form an interesting abundance of PBHs the amplitude has to grow significantly from its value on cosmological scales, while evading the constraints from spectral distortions on intermediate scales~\cite{Byrnes:2018txb}, see Fig.~\ref{fig:Pk}.
The running mass inflation model has a potential $V(\phi) = V_{0} + (1/2) m^2(\phi) \phi^2$, where the inflaton mass, $m$, depends on the value of the field~\cite{Stewart:1996ey,Stewart:1997wg}. The resulting power spectrum can grow sufficiently for PBHs to form while satisfying constraints on cosmological scales~\cite{Leach:2000ea,Kohri:2007qn}. However this model is not complete; it relies on a Taylor expansion of the potential around a maximum and does not contain a mechanism for ending inflation
(see discussion in e.g. Sec. IV of Ref.~\cite{Motohashi:2017kbs}).
Inflation can alternatively be studied using the hierarchy of (Hubble) slow-roll parameters rather than the potential. It is possible to `design' a functional form for $\epsilon(N)$, where $N$
is the number of e-foldings of inflation, which satisfies all of the observational constraints and produces PBHs~\cite{Kohri:2007qn}. The corresponding potential has a `hill-top' form~\cite{Kohri:2007qn,Alabidi:2009bk}, with inflation occurring as the field evolves away from a local maximum, towards a minimum with $V(\phi) \neq 0$.
However, as for running mass inflation, an auxiliary mechanism is required to terminate inflation, so this is not a complete single-field inflation model.
The reheating era at the end of inflation, where the inflaton oscillates around a minimum of its potential and decays, may offer a mechanism for generating PBHs~\cite{Green:2000he,Bassett:2000ha}. Reference~\cite{Martin:2019nuw} has shown that during oscillations about a parabolic minimum perturbations are enhanced sufficiently by a resonant instability for PBHs to be produced.
\subsubsection{Multi-field models}
\label{sec:multi}
In this subsection we discuss some multi-field scenarios which can generate large PBH-producing fluctuations: hybrid inflation, double inflation
and a curvaton field. Quantum diffusion is also often important in multi-field models, e.g. Refs.~\cite{Randall:1995dj,GarciaBellido:1996qt,Yokoyama:1998pt,Clesse:2015wea}.
The most commonly studied two-field model in the context of PBH production is hybrid inflation (e.g.~Refs.~\cite{Randall:1995dj,GarciaBellido:1996qt,Lyth:2010zq,Clesse:2015wea}).
In hybrid inflation one of the fields, $\phi$, initially slow-rolls while the accelerated expansion is driven by the false-vacuum energy of a second scalar field $\psi$. At a critical value of $\phi$ there is a phase transition, with $\psi$ undergoing a waterfall transition to a global minimum and inflation ending. Around the phase transition quantum fluctuations are large, and a spike in the power spectrum on small scales is generated, leading to a large abundance of light PBHs~\cite{Randall:1995dj,GarciaBellido:1996qt}.
For some parameter values, however, the waterfall transition can be `mild' so that there is a second phase of inflation as the $\psi$ field evolves to the minimum of its potential~\cite{Clesse:2015wea}. In this case during the initial stage of the waterfall transition when both fields are important, isocurvature perturbations are generated, leading to a broad peak in the curvature perturbation power spectrum. Perturbations on cosmological scales are generated during the initial phase of inflation and can be consistent with CMB observations.
In double inflation~\cite{Silk:1986vc,Kawasaki:1997ju,Yokoyama:1998pt,Clesse:2015wea} there are two separate periods of inflation, with perturbations on cosmological scales being generated during the first period, and those on small scales during the second.~\footnote{In the initial realizations of double inflation~\cite{Silk:1986vc} different fields were responsible for the two periods of inflation. However the single field models with a local minimum or inflection point in the potential described in Sec.~\ref{sec:single} can also be viewed as double inflation models in the sense that the potential changes shape and there are two (or more) distinct dynamical phases of inflation~\cite{Kannike:2017bxn}.} Hybrid inflation models with a mild waterfall transition, as discussed above, fall into this class.
A curvaton is a field which is dynamically unimportant during inflation and
acquires isocurvature perturbations, with adiabatic perturbations being generated when it later decays~\cite{Lyth:2001nq}. If the inflaton is responsible for the perturbations on cosmological scales, while the curvaton generates small-scale perturbations, it is easier to produce large PBH-producing perturbations than in standard single field models (where the inflaton is responsible for perturbations on all scales)~\cite{Yokoyama:1995ex,Kawasaki:2012wr}. A specific example is the `axion-like curvaton'~\cite{Kawasaki:2012wr}.
\subsection{Other formation mechanisms}
\label{sec:otherform}
There are a variety of other early Universe processes which can produce large, PBH-forming overdensities. These include bubble collisions, cosmic string loop- or cusp-collapse, domain wall collapse and scalar condensate fragmentation.
First order phase transitions occur through the formation of bubbles. If these bubbles collide, PBHs with mass of order the horizon mass can form~\cite{Crawford:1982yz,Hawking:1982ga,Kodama:1982sf}. However a non-negligible abundance of PBHs
is only formed if the bubble formation rate is fine-tuned so that bubble collisions occur, but the phase transition doesn't complete instantaneously. Recently Ref.~\cite{Kusenko:2020pcg} has studied PBH formation from the collapse of bubbles nucleated during inflation.
Cosmic strings are topological defects which may form during phase transitions in the early Universe~\cite{Kibble:1976sj}.
A network of cosmic strings is formed which quickly reaches a stable scaling solution, in which loops with size smaller than the Hubble
radius are constantly being produced via long string interactions and loop self-intercommutations. The loops oscillate and if a loop contracts under its own tension to become smaller than its Schwarzschild radius a PBH can form~\cite{Hawking:1987bn,Polnarev:1988dh}.
The loop collapse probability is independent of time, and the mass of the PBH formed is proportional to the typical loop mass, which is proportional to the horizon mass. Consequently the PBHs formed from loop collapse have an extended mass function of the form ${\rm d} n_{\rm PBH}/ {\rm d} M_{\rm PBH} \propto M_{\rm PBH}^{-5/2}$~\cite{MacGibbon:1997pu}. The fraction of loops that collapse, $f$, is not well known. Numerical simulations~\cite{Caldwell:1991jj} have found $f = 10^{4.9 \pm 0.2} (G \mu)^{4.1 \pm 0.1}$ for large tensions $ G \mu \sim 10^{-(2-3)}$.~\footnote{The stochastic gravitational wave background produced by loop oscillations now leads to a constraint $ G \mu < 1.5 \times 10^{-11}$~\cite{Blanco-Pillado:2017rnf}.}
Critical phenomena also arise in this PBH formation mechanism, with the PBH mass scaling as a power law of the difference between the loop radius and the Schwarzschild radius~\cite{Helfer:2018qgv}.
It has recently been argued that PBHs would form more abundantly from the collapse of cosmic string cusps~\cite{Jenkins:2020ctp}.
Large closed domain walls, produced during a second order phase transition, can collapse to form PBHs~\cite{Rubin:2000dq,Rubin:2001yw}. The PBHs have masses that depend on the parameters of the field which undergoes the phase transition, typically with a significant spread, and can be clustered.
A scalar field with a sufficiently flat potential (such as the multiple flat directions found in supersymmetric generalizations of the Standard Model of particle physics) forms a coherent condensate at the end of inflation. This condensate typically fragments into lumps,
such as oscillons or Q-balls. These lumps can come to dominate the Universe, and have large density fluctuations which can produce PBHs~\cite{Cotner:2016cvr,Cotner:2019ykd}. These PBHs are smaller (compared with the horizon mass) than those formed via the collapse of density perturbations during radiation domination and can have larger spin~\cite{Cotner:2019ykd}.
Baryogenesis scenarios with spontaneous breaking of charge symmetry during inflation generate high density regions that can collapse to form PBHs after the QCD phase transition~\cite{Dolgov:1992pu}. In this case the PBHs formed have a lognormal mass function~\cite{Dolgov:1992pu}, centered at $\sim 10 \,M_{\odot}$ or higher~\cite{Dolghov:2020hjk}.
\section{Abundance constraints}
\label{sec:constraints}
PBH DM has a wide range of potentially observable effects. In this section we review the constraints on the present day abundance of PBHs, expressed as the fraction of DM in the form of PBHs today: $f_{\rm PBH}= \Omega_{\rm PBH}/\Omega_{\rm DM}$.
We order the constraints roughly by increasing PBH mass:
evaporation (Sec.~\ref{sec:evap}), interactions with stars (Sec.~\ref{sec:stars}), gravitational lensing (Sec.~\ref{sec:lensing}), gravitational waves from mergers of PBH binaries (Sec.~\ref{sec:gwmergers}), dynamical effects (Sec.~\ref{sec:dynamical}), the consequences of accretion (Sec.~\ref{sec:accretionconstraint}) and large scale structure (Sec.~\ref{sec:lss}).
We then review indirect constraints which apply if PBHs are formed via the collapse of large density perturbations during radiation domination (Sec.~\ref{sec:indirect}) and potential future constraints (Sec.~\ref{sec:future}). For more detailed descriptions of the physics behind the constraints, including key equations, see Ref.~\cite{Sasaki:2018dmp}.
Figure~\ref{fig:constraints} shows all of the current limits discussed in the text, grouped by the type of constraint (evaporation, lensing, gravitational waves, dynamical effects, and accretion), while Fig.~\ref{fig:allconstraints} provides an overview, showing the envelope of each type of constraint. Where different constraints arise from different assumptions on e.g.~modeling and backgrounds, we aim to show the most conservative. Code for plotting the constraints is available online at \href{https://github.com/bradkav/PBHbounds}{github.com/bradkav/PBHbounds}.
We restrict our attention to PBHs with $M_{\rm PBH} \ll 10^{7} \, M_{\odot}$ which could, in principle, constitute the DM halos of small dwarf galaxies. There are various constraints on the abundance of more massive PBHs, for an overview, see Ref.~\cite{Carr:2020gox}. All limits quoted assume that the PBHs have a delta-function mass function and do not form clusters. We discuss the application of delta-function constraints to extended mass functions in Sec.~\ref{sec:emf}. As discussed in Sec.~\ref{sec:gwmergers} understanding the late time clustering of PBHs is an outstanding challenge. In this section we use `PBH' to denote limits which apply specifically to PBHs and `CO' to denote limits which apply to any compact object.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.32\textwidth]{figures/PBHbounds_evaporation_square.pdf}
\includegraphics[width=0.32\textwidth]{figures/PBHbounds_microlensing_square.pdf}
\includegraphics[width=0.32\textwidth]{figures/PBHbounds_GW_square.pdf}\\
\includegraphics[width=0.32\textwidth]{figures/PBHbounds_dynamical_square.pdf}
\includegraphics[width=0.32\textwidth]{figures/PBHbounds_accretion_square.pdf}
\end{center}
\caption{Constraints on the fraction of DM in the form of PBHs $f_{\rm PBH}$, with mass $M_{\rm PBH}$, or in the form of compact objects, $f_{\rm CO}$, with mass $M_{\rm CO}$ for each of the different types of constraint. In each case the excluded regions are shaded. \textit{Top left:} Evaporation constraints on PBHs (Sec.~\ref{sec:evap}): extragalactic gamma-ray background~\cite{Carr:2009jm}, CMB \cite{Poulin:2016anj,Clark:2016nst}, dwarf galaxy heating~\cite{Kim:2020ngi}, EDGES 21cm~\cite{Clark:2018ghm}, Voyager $e^{\pm}$~\cite{Boudaud:2018hqb}, $511 \, {\rm keV}$ gamma-ray line~\cite{DeRocco:2019fjq,Laha:2019ssq} and the MeV Galactic diffuse flux~\cite{Laha:2020ivk}. \textit{Top middle:} Gravitational lensing constraints on compact objects (Sec.~\ref{sec:lensing}): stellar microlensing (MACHO~\cite{Allsman:2000kg}, EROS~\cite{Tisserand:2006zx}, OGLE~\cite{Niikura:2019kqi}, HSC~\cite{Croon:2020ouk}), Icarus lensing event~\cite{Oguri:2017ock}, and supernovae magnification distribution~\cite{Zumalacarregui:2017qqd}. \textit{Top right:}
Constraints on PBHs from gravitational waves (Sec.~\ref{sec:gwmergers}) produced by individual mergers~\cite{Kavanagh:2018ggo,Authors:2019qbw} and the stochastic background of mergers~\cite{Chen:2019irf}. Note that there are substantial uncertainties on GW constraints, arising from the possible disruption of PBH binaries. \textit{Bottom left:} Dynamical constraints on compact objects (Sec.~\ref{sec:dynamical}): from dwarf galaxies~\cite{Brandt:2016aco} and wide binaries~\cite{mr}.
\textit{Bottom right:} Accretion constraints on PBHs (Sec.~\ref{sec:accretionconstraint}): CMB~\cite{Serpico:2020ehh}, EDGES 21cm~\cite{Hektor:2018qqw}, X-ray~\cite{Manshanden:2018tze}, radio~\cite{Manshanden:2018tze}, and dwarf galaxy heating~\cite{Lu:2020bmd}.
Digitised bounds and plotting codes are available online at \href{https://github.com/bradkav/PBHbounds}{\underline{PBHbounds}}.
}
\label{fig:constraints}
\end{figure}
\subsection{Evaporation}
\label{sec:evap}
PBHs with initial mass $M_{\rm PBH} < M_{\star} \approx 5 \times 10^{14} \, {\rm g}$ have completed their evaporation by the present day~\cite{Page:1976df,MacGibbon:2007yq}. The emission from slightly more massive PBHs ($M_{\star} < M_{\rm PBH} \lesssim 10^{17} \, {\rm g}$) is sufficient that limits on their evaporation products can be used to constrain their abundance~\cite{Page:1976wx}.
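The mass window quoted here follows from the standard $\tau \propto M_{\rm PBH}^3$ scaling of the Hawking evaporation lifetime. A sketch, normalising to $M_{\star} \approx 5 \times 10^{14} \, {\rm g}$ evaporating at $t \approx t_{0}$ (the order-unity dependence of the lifetime on the number of emitted species is neglected):
\begin{verbatim}
T0 = 4.4e17       # approximate age of the Universe [s]
M_STAR = 5e14     # initial mass [g] completing evaporation today

def lifetime(m_init):
    """Hawking lifetime tau ~ t_0 (M / M_*)^3, m_init in grams."""
    return T0 * (m_init / M_STAR)**3

for m in (5e14, 1e16, 1e17):
    print(f"M = {m:.0e} g  ->  tau ~ {lifetime(m):.1e} s")
# PBHs with M ~ 1e16-1e17 g survive today but still radiate strongly
# enough for their evaporation products to be constrained.
\end{verbatim}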
From the extragalactic gamma-ray background~\cite{Carr:2009jm,Carr:2016drx},
$f_{\rm PBH} \lesssim 2 \times 10^{-8} (M_{\rm PBH}/M_{\star})^{(3+ \epsilon)}$,
where $\epsilon \sim 0.1-0.4$ parameterizes the energy dependence of the observed gamma-ray intensity: $I^{\rm obs} \propto E_{\gamma}^{-(1+\epsilon)}$~\cite{Carr:2009jm}. This constraint can be tightened by a factor of ${\cal O}(10)$ by taking into account the contribution of known astrophysical sources, such as blazars~\cite{Ballesteros:2019exr}. There are also similar constraints from the
damping of CMB anisotropies due to energy injection during recombination~\cite{Poulin:2016anj,Clark:2016nst}, from
heating of neutral hydrogen, as probed by the EDGES measurements of 21cm absorption~\cite{Clark:2018ghm} and also from heating of the interstellar medium in dwarf galaxies~\cite{Kim:2020ngi}.
Constraints on the $e^{\pm}$ flux from Voyager 1 lead to a similar limit on the contribution of PBHs to the {\em local} dark matter density $f_{\rm PBH} < 0.001$ for $M_{\rm PBH} = 10^{16} \, {\rm g}$~\cite{Boudaud:2018hqb}, with the constraint on $f_{\rm PBH}$ varying with $M_{\rm PBH}$ in a similar way to the gamma-ray constraint.
Again subtraction of backgrounds, in this case supernova remnants and pulsar wind nebulae, leads to constraints that are tighter by $\sim 2$ orders of magnitude~\cite{Boudaud:2018hqb}.
Positrons produced by PBHs will also annihilate and contribute to the flux of the $511 \, {\rm keV}$ line~\cite{cg}. The SPI/INTEGRAL limits on this line lead to limits on the PBH fraction which are similar to the gamma-ray limit for $10^{16} \, {\rm g} \lesssim M_{\rm PBH} \lesssim 10^{17} \, {\rm g}$~\cite{DeRocco:2019fjq,Laha:2019ssq}. There are somewhat tighter constraints (that exclude $f_{\rm PBH} = 1$ up to $M_{\rm PBH} \approx 2 \times 10^{17} \, {\rm g}$) from INTEGRAL measurements of the Galactic diffuse flux in the MeV range~\cite{Laha:2020ivk}. There are also constraints from Super-Kamiokande measurements of the diffuse neutrino background~\cite{Dasgupta:2019cae}.
\subsection{Interactions with stars}
\label{sec:stars}
Asteroid mass PBHs can potentially be constrained by the consequences of their capture by, and transit through, stars~\cite{Capela:2013yf,Pani:2014rca,Graham:2015apa,Montero-Camacho:2019jte}. See Ref.~\cite{Montero-Camacho:2019jte} for detailed recent calculations and discussion.
As a PBH passes through a star it loses energy by dynamical friction, and may be captured. A captured PBH will sink to the centre of the star and also accrete matter, potentially destroying the star. A large capture probability requires a large DM density and low velocity dispersions. Stellar survival constraints have been applied to globular clusters~\cite{Capela:2013yf}. However, as emphasised by Ref.~\cite{Pani:2014rca}, (most) globular clusters are not thought to have a high DM density.
Moreover, Ref.~\cite{Montero-Camacho:2019jte} argues that the survival of stars does not in fact constrain the PBH abundance, but that the disruption of stars may lead to constraints, if the observational signatures are worked out (see Ref.~\cite{Genolini:2020ejw} for work in this direction).
The transit of a PBH through a carbon/oxygen white dwarf will lead to localized heating by dynamical friction, which could ignite the carbon and potentially cause a runaway explosion~\cite{Graham:2015apa,Montero-Camacho:2019jte}. Reference~\cite{Montero-Camacho:2019jte} again finds that the survival of white dwarfs does not constrain $f_{\rm PBH}$, but if white dwarf ignition by a PBH leads to a visible explosion there could be constraints.
\subsection{Gravitational lensing}
\label{sec:lensing}
\subsubsection{Stellar microlensing}
\label{sec:micro}
Stellar microlensing occurs when a compact object with mass in the range $5 \times 10^{-10} \, M_{\odot} \lesssim M_{\rm CO} \lesssim 10 \, M_{\odot}$ crosses the line of sight to a star, leading to a temporary, achromatic amplification of its flux~\cite{Paczynski:1985jf}. The duration of the microlensing event is proportional to $M_{\rm CO}^{1/2}$, therefore the range of masses constrained depends on the cadence of the microlensing survey. The EROS-2 survey of the Magellanic Clouds (MC) found $f_{\rm CO} \lesssim 0.1$ for masses in the range $10^{-6} \lesssim M_{\rm CO}/M_{\odot} \lesssim 1$. The constraint weakens above $M_\mathrm{CO} \approx 1\,M_\odot$, reaching $f_{\rm CO} \lesssim 1$ for $M_{\rm CO} \approx 30 M_{\odot}$~\cite{Tisserand:2006zx}. A MACHO collaboration search for long duration ($> 150$ days) events places a similar constraint on $f_{\rm CO}$, for $1 \lesssim M_{\rm CO}/M_{\odot} \lesssim 30$~\cite{Allsman:2000kg}. Uncertainties in the density and velocity distribution of the dark matter have a non-negligible effect on the MC microlensing constraints~\cite{Hawkins:2015uja,Green:2017qoa,Calcino:2018mwh}, and they would also be changed significantly if the compact objects are clustered~\cite{Calcino:2018mwh}.
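For orientation (standard microlensing relations, not written out in the text), the event timescale is the Einstein radius crossing time,
\begin{equation}
t_{\rm E} = \frac{R_{\rm E}}{v_{\perp}} \,, \qquad R_{\rm E} = \sqrt{\frac{4 G M_{\rm CO}}{c^{2}} \, \frac{D_{\rm L}\left(D_{\rm S}-D_{\rm L}\right)}{D_{\rm S}}} \,,
\end{equation}
where $D_{\rm L}$ and $D_{\rm S}$ are the distances to the lens and source and $v_{\perp}$ is the transverse velocity of the lens. This makes the $M_{\rm CO}^{1/2}$ scaling of the duration explicit; for Magellanic Cloud sources and typical halo velocities the timescale is of order months for $M_{\rm CO} \sim M_{\odot}$.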
Tighter constraints ($f_{\rm CO} \lesssim 10^{-2}$ for $M_{\rm CO} \sim 10^{-3} M_{\odot}$, weakening to $f_{\rm CO} \lesssim 0.1$ for $M_{\rm CO} \sim 10^{-5} \, M_{\odot}$ and $10^{-2} \, M_{\odot}$) have been obtained~\cite{Niikura:2019kqi} using the OGLE microlensing survey of the Galactic bulge~\cite{OGLE}. The OGLE data also contain 6 ultra-short ($0.1-0.3$ day) events which could be due to free-floating planets, or PBHs with $M_{\rm CO} \sim (10^{-4}-10^{-6}) M_{\odot}$ and $f_{\rm CO} \sim 0.01-0.1$~\cite{Niikura:2019kqi}.
A high cadence optical observation of M31 by Subaru HSC~\cite{Niikura:2017zjd} constrains $f_{\rm CO} \lesssim 10^{-2}$ for
$ 5 \times 10^{-10} \, M_{\odot} \lesssim M_{\rm CO} \lesssim 10^{-8} \, M_{\odot}$ weakening to $f_{\rm CO} \lesssim 1$ at
$M_{\rm CO} \sim 5 \times 10^{-12} \, M_{\odot}$ and $ 5 \times 10^{-6} \, M_{\odot}$~\cite{Croon:2020ouk}. The constraints are weaker than initially found, due to finite source and wave optics effects. For $M_{\rm PBH} \lesssim 10^{-10} M_{\odot}$, the Schwarzschild radius of the PBH is comparable to, or less than, the wavelength of the light and wave optics effects reduce the amplification~\cite{Sugiyama:2019dgt}. Furthermore the stars in M31 that are bright enough for microlensing to be detected are typically larger than assumed in Ref.~\cite{Niikura:2017zjd}, further weakening the constraint~\cite{Montero-Camacho:2019jte,Smyth:2019whb}.
There are also significantly weaker constraints for $M_{\rm CO} \approx (10^{-7}-10^{-9}) \, M_{\odot}$ from a search for low amplification microlensing events in Kepler data~\cite{Griest:2013aaa}.
When a background star crosses a caustic in a galaxy cluster it is magnified by orders of magnitude~\cite{miraldaescude}. Microlensing by stars or other compact objects in the cluster can lead to short periods of further enhanced magnification. On the other hand, if a significant fraction of the DM within the cluster is composed of compact objects, then the overall magnification is reduced
(see e.g.~Ref.~\cite{Venumadhav:2017pps} and references therein).
Icarus/MACS J1149LS1 is the first such microlensing event discovered (serendipitously) and is consistent with microlensing by an intracluster star of a source star at a redshift of 1.5~\cite{Kelly:2017fps,Diego:2017drh}. This leads to a constraint $f_{\rm CO} < 0.08$ for $ 10^{-5} < M_{\rm CO}/M_{\odot} < 10$, from the requirement that the compact object population does not reduce the magnification~\cite{Oguri:2017ock}. For more massive compact objects a constraint $f_{\rm CO} < 0.08\,(M_{\rm CO}/10 \,M_{\odot})^{1/3}$ arises from assuming the microlensing event was caused by a dark compact object rather than a star~\cite{Oguri:2017ock}. Both of these constraints have an uncertainty of order a factor of $2$, from the uncertainty in the lens-source transverse velocity.
\subsubsection{Quasar microlensing}
Microlensing by compact objects in the lens galaxy leads to variations in the brightness of multiple-image quasars~\cite{Chang:1979zz}. Using optical data from 24 lensed quasars, Ref.~\cite{Mediavilla:2017bok} finds that $(20 \pm 5) \%$ of the mass of the lens galaxies is in compact objects (including stars) with mass in the range $0.05 < M_{\rm CO}/M_{\odot} < 0.45$. This is consistent with the expected stellar component, with only a small contribution allowed from dark compact objects; however, no constraint on $f_{\rm CO}$ is stated.~\footnote{According to Ref.~\cite{schechter}, a similar analysis of X-ray flux ratios, taking into account the stellar contribution, places a constraint $f_{\rm CO} \lesssim 0.1$.}
\subsubsection{Type Ia supernovae} The effects of gravitational lensing on the magnification distribution of Type Ia supernovae (SNe Ia) depend on whether or not the DM is smoothly distributed~\cite{Metcalf:2006ms}. If the DM is in compact objects with $M_{\rm CO} \gtrsim 10^{-2} M_{\odot}$, then most SNe Ia would be dimmer than if the DM were smoothly distributed, while a few would be significantly magnified~\cite{Zumalacarregui:2017qqd}. Using the JLA and Union 2.1 SNe samples, Ref.~\cite{Zumalacarregui:2017qqd} finds a constraint $f_{\rm CO} \lesssim 0.4$, for all $M_{\rm CO} \gtrsim 10^{-2} M_{\odot}$. They argue that this result is robust to the finite size of SNe and to peculiar SNe. The former claim is contested by Ref.~\cite{Garcia-Bellido:2017imq}, which argues that the constraint does not apply for $M_{\rm CO} \lesssim 3 M_{\odot}$.
\subsubsection{Strong lensing of Fast Radio Bursts}
Strong gravitational lensing of Fast Radio Bursts (FRBs) by compact objects (COs) with $M_{\rm CO} \gtrsim (10-100) M_{\odot}$ would lead to two images, separated by a measurable (ms) time delay~\cite{Munoz:2016tmg}. No such signal has been seen in the $\sim 100$ FRBs observed to date, which leads to a constraint $f_{\rm CO} \lesssim 0.7$ for $M_{\rm CO} \gtrsim 10^{3} M_{\odot}$, weakening to $f_{\rm CO} \lesssim 1$ for $M_{\rm CO} \sim 10^{2} M_{\odot}$~\cite{Liao:2020wae}.
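For orientation (an order-of-magnitude estimate, not given explicitly in the references above), the delay between the two images of a point-mass lens is set by the Schwarzschild light-crossing time,
\begin{equation}
\Delta t \sim \frac{4 G M_{\rm CO}}{c^{3}} \simeq 2 \times 10^{-5} \, {\rm s} \left(\frac{M_{\rm CO}}{M_{\odot}}\right) \,,
\end{equation}
up to a redshift and geometry dependent factor of order unity, so millisecond delays indeed single out $M_{\rm CO} \gtrsim {\cal O}(10) \, M_{\odot}$.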
\subsubsection{Femtolensing}
\label{femto}
Reference~\cite{gould} proposed that asteroid mass compact objects
could be probed by femtolensing of gamma-ray bursts (GRBs), specifically via interference fringes in the frequency spectrum arising from the phase difference between the two (unresolved) images accumulated during propagation.
Using data from the Fermi Gamma-ray Burst Monitor, Ref.~\cite{Barnacka:2012bm} placed constraints on compact objects in the mass range $ 5 \times 10^{17} \, {\rm g} \lesssim M_{\rm CO} < 10^{20} \, {\rm g}$. However, Ref.~\cite{Katz:2018zrn} demonstrated that most GRB emission regions are too large to be modelled as point sources, and furthermore that wave optics effects~\cite{Ulmer:1994ij} need to be taken into account. Consequently there are in fact no current constraints on $f_{\rm CO}$ from femtolensing.
\subsection{Gravitational waves from mergers}
\label{sec:gwmergers}
In the late 1990s Nakamura et al.~\cite{Nakamura:1997sm,Ioka:1998nz} studied the formation of Solar mass PBH DM binaries in the early Universe, when pairs of PBHs may be close enough to decouple from the Hubble expansion before matter-radiation equality. Three-body interactions would impart a small angular momentum to the PBHs, leading to the formation of highly eccentric binaries.
If these binaries survive unaffected to the present day then the gravitational waves (GWs) resulting from their coalescence could be detected by LIGO.\footnote{An additional but sub-dominant contribution to the PBH merger rate comes from binaries formed by dynamical capture in the late Universe~\cite{Bird:2016dcv}.} In fact, the merger rate would be several orders of magnitude larger than measured by LIGO-Virgo~\cite{Abbott:2016blz}, which places a tight constraint, $f_{\rm PBH} < {\cal O} (10^{-3})$ for $10 \lesssim M_{\rm PBH}/M_{\odot} \lesssim 300$~\cite{Hayasaki:2009ug,Sasaki:2016jop,Ali-Haimoud:2017rtz,Kavanagh:2018ggo}. A dedicated LIGO-Virgo search for sub-Solar mass mergers constrains $f_{\rm PBH} < {\cal O} (10^{-1})$ down to $M_\mathrm{PBH} \sim 0.2\,M_\odot$~\cite{Authors:2019qbw}. There are also similar constraints from the stochastic gravitational wave background of such mergers~\cite{Wang:2016ana,Raidal:2017mfl,Chen:2019irf,LIGOScientific:2019vic}, as well as from searches for PBH binaries with large mass ratios~\cite{Nitz:2020bdb}.
If PBHs do not constitute all of the DM, then during matter domination stellar mass (and more massive) PBHs accrete halos of particle dark matter with a steep density profile~\cite{Mack:2006gz,Adamek:2019gns}.~\footnote{Consequently stellar mass PBHs and WIMP DM cannot coexist, as gamma-rays from WIMP annihilation in the WIMP halos around PBHs would have already been observed~\cite{Lacki:2010zf,Adamek:2019gns,Bertone:2019vsk}.}
These DM mini-halos affect the dynamical evolution of the PBH-binaries~\cite{Hayasaki:2009ug,Ali-Haimoud:2017rtz}, however this has a relatively small effect on the merger rate and resulting constraints~\cite{Kavanagh:2018ggo}.
A major outstanding problem in the calculation of GW constraints is the evolution, and survival, of PBH binaries between formation and merger. If PBHs make up a significant fraction of the DM then PBH clusters form not long after matter-radiation equality~\cite{Chisholm:2005vm,Chisholm:2011kn,Raidal:2018bbj,Inman:2019wvr}. While distant three-body interactions are expected to have little impact on isolated PBH binaries~\cite{Ali-Haimoud:2017rtz,Young:2020scc}, three-body interactions in PBH clusters could significantly affect the properties of binaries and hence the predicted merger rates~\cite{Vaskonen:2019jpv,Jedamzik:2020ypm,Jedamzik:2020omx,Trashorras:2020mwn}. Merger rates may also be increased (leading to stronger constraints) in scenarios where PBHs are formed with large initial clustering (as discussed in Sec.~\ref{sec:spincluster})~\cite{Ballesteros:2018swv,Bringmann:2018mxj}, though Ref.~\cite{Atal:2020igj} argues that clustering should instead weaken the constraints. Given these outstanding problems of PBH clustering and binary survival, in Fig.~\ref{fig:constraints} we show constraints~\cite{Kavanagh:2018ggo,Authors:2019qbw,Chen:2019irf} that assume no clustering and no disruption of the binaries.~\footnote{Note that the constraints in Ref.~\cite{Kavanagh:2018ggo} are derived using limits from LIGO's first observing run (O1). Stronger constraints can be derived from more recent data (O2), assuming that none of the observed binary BH mergers have a primordial origin~\cite{Vaskonen:2019jpv,Chen:2019irf}.}
\subsection{Dynamical}
\label{sec:dynamical}
\subsubsection{Dwarf galaxies}
Two-body interactions lead to the kinetic energies of different mass populations within a system becoming more equal. In a system made of stars and more massive compact objects, the stars gain energy and their distribution will expand. Ultra-faint dwarf galaxies (UFDGs) are particularly sensitive to this effect, due to their high mass to luminosity ratios. Reference~\cite{Brandt:2016aco} found that the observed sizes of UFDGs place a constraint $f_{\rm CO} \lesssim 0.002-0.004$ for $M_{\rm CO} \sim 10^{4} M_{\odot}$, weakening with decreasing mass to $f_{\rm CO} \lesssim 1 $ for $M_{\rm CO} \sim 10 M_{\odot}$. The uncertainty comes from uncertainties in the velocity dispersion of the compact objects and the assumed initial radius of the stellar distribution. Tighter constraints may be obtained from the survival of the star cluster near the centre of the dwarf galaxy Eridanus II, depending on its age~\cite{Brandt:2016aco}.
Reference~\cite{Koushiappas:2017chw} used the projected stellar surface density profile of Segue 1, and in particular the absence of a ring feature, to show that $f_{\rm CO} < 0.06 \, (0.20)$ for $M_{\rm CO} = 30 \, (10) M_{\odot}$ at 99.9\% confidence.
Subsequent Fokker-Planck simulations of dwarfs composed of stars and compact object dark matter have, however, not found such a feature~\cite{Zhu:2017plg}, and have also found a slightly slower growth of the stellar component than in Ref.~\cite{Brandt:2016aco}. Consequently their constraints~\cite{Zhu:2017plg} are slightly weaker than those of Refs.~\cite{Brandt:2016aco,Koushiappas:2017chw}: $f_{\rm CO} < 1$ for $M_{\rm CO} > 14 \, M_{\odot}$. Reference~\cite{Stegmann:2019wyz} has subsequently shown that collectively UFDGs exclude $f_{\rm CO} = 1$ for the mass range $(1-100) M_{\odot}$, for both delta-function and lognormal mass functions. While some low mass UFDGs have stellar populations that are individually consistent with $f_{\rm CO} \sim 1$ for $M_{\rm CO} \sim 1 M_{\odot}$ (see also Ref.~\cite{Zhu:2017plg}), most would be `puffed up' too much.
\subsubsection{Wide binaries}
The energy of wide binary stars is increased by multiple encounters with compact objects, potentially leading to disruption~\cite{bht}.
The separation distribution of wide binaries can therefore be used to constrain the abundance of compact objects~\cite{Yoo:2003fr}. Radial velocity measurements are required to confirm that wide binaries are genuine, as otherwise erroneously tight constraints can be obtained from spurious binaries~\cite{Quinn:2009zg}.
Using the 25 wide binaries in their catalogue~\cite{Allen} that spend the least time in the Galactic disk (and are hence least affected by encounters with the stars therein) Ref.~\cite{mr} finds $f_{\rm CO} \lesssim 0.1$ for $M_{\rm CO} \gtrsim 70 M_{\odot}$ with the limit weakening with decreasing mass to $f_{\rm CO} \lesssim 1 $ for $M_{\rm CO} = 3 M_{\odot}$. Tighter constraints may be possible using data from \textit{Gaia}, however radial velocity follow-up will be needed to confirm that candidate binaries are genuine (c.f. Ref.~\cite{pwos}).
\subsection{Accretion}
\label{sec:accretionconstraint}
\subsubsection{$z>0$}
Radiation emitted due to gas accretion onto PBHs can modify the recombination history of the Universe, and hence affect the anisotropies and spectrum of the CMB~\cite{1981MNRAS.194..639C,Ricotti:2007au}. There are significant theoretical uncertainties in the accretion rate and also the ionizing effects of the radiation~\cite{Ali-Haimoud:2016mbv,Bosch-Ramon:2020pcz}.
Reference~\cite{Poulin:2017bwe} argues that, contrary to the spherical accretion assumed in previous work, an accretion disk should form, in which case the resulting constraints are significantly tightened. Formation of (non-PBH) dark matter halos around PBHs tightens the constraints for $M_{\rm PBH} \gtrsim 10 M_{\odot}$~\cite{Serpico:2020ehh}. For spherical (disk) accretion $f_{\rm PBH} \lesssim 1$ for $M_{\rm PBH} \sim 10 \, (1) \, M_{\odot}$, tightening with increasing PBH mass to $f_{\rm PBH} < 3 \times 10^{-9}$ at $M_{\rm PBH} \sim 10^{4} M_{\odot}$~\cite{Serpico:2020ehh}. There are also model-dependent constraints on PBHs with $M_{\rm PBH} \gtrsim 10 \, M_{\odot}$ from their effects on the 21-cm spectrum as measured by EDGES~\cite{Hektor:2018qqw}.
\subsubsection{Present day}
Accretion of interstellar gas onto $M_{\rm PBH}> M_{\odot}$ PBHs in the Milky Way would lead to observable X-ray and radio emission~\cite{Gaggero:2016dpq}. Comparing predictions from numerical simulations of gas accretion onto isolated moving compact objects with known X-ray and radio sources in the Chandra and VLA Galactic centre surveys leads to a constraint $f_{\rm PBH} \lesssim 10^{-3}$ for $M_{\rm PBH} \sim (30-100) \, M_{\odot}$~\cite{Manshanden:2018tze}. Reference~\cite{Inoue:2017csr} uses the observed number density of compact X-ray objects to place a similar constraint on $f_{\rm PBH}$, valid up to $M_{\rm PBH} \sim 10^{7} \, M_{\odot}$. Reference~\cite{Lu:2020bmd} places a constraint $f_{\rm PBH} \lesssim 10^{-4}$ for $M_{\rm PBH} \sim 10^{3} M_{\odot}$, weakening to
$f_{\rm PBH} \lesssim 1$ for $M_{\rm PBH} \sim M_{\odot}$ and $10^{7} M_{\odot}$, from gas heating in dwarf galaxies.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/PBHbounds_review.pdf}\\
\end{center}
\caption{\label{fig:allconstraints} All constraints on the fraction of DM in the form of PBHs, $f_{\rm PBH}$, with mass $M_{\rm PBH}$, coming from PBH evaporation, microlensing, gravitational waves, PBH accretion and dynamical effects. Each region shows the envelope of constraints from the corresponding panel in Fig.~\ref{fig:constraints}. Digitised bounds and plotting codes are available online at \href{https://github.com/bradkav/PBHbounds} {\underline{PBHbounds}}. }
\end{figure}
\subsection{Large scale structure}
\label{sec:lss}
If massive PBHs make up a significant fraction of the DM, then Poisson fluctuations in their number density enhance the matter power spectrum at small scales~\cite{1975A&A....38....5M,Afshordi:2003zb}, which can be probed by observations of the Lyman-$\alpha$ forest~\cite{Afshordi:2003zb}. Using the latest data from MIKE/HIRES and high-resolution hydrodynamical simulations, Ref.~\cite{Murgia:2019duy} finds a conservative limit $f_{\rm PBH} \lesssim (100 M_{\odot}/ M_{\rm PBH})$.
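The parametric form of this limit can be understood as follows (a standard estimate, not spelled out above). PBHs of mass $M_{\rm PBH}$ making up a fraction $f_{\rm PBH}$ of the DM have mean comoving number density $\bar{n}_{\rm PBH} = f_{\rm PBH} \bar{\rho}_{\rm DM}/M_{\rm PBH}$, and their discreteness adds a scale-independent (shot noise) term to the matter power spectrum,
\begin{equation}
P_{\rm Poisson} = \frac{f_{\rm PBH}^{2}}{\bar{n}_{\rm PBH}} = \frac{f_{\rm PBH} M_{\rm PBH}}{\bar{\rho}_{\rm DM}} \,,
\end{equation}
so that observables bound the combination $f_{\rm PBH} M_{\rm PBH}$, as in the limit quoted above.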
\subsection{Indirect constraints}
\label{sec:indirect}
In this subsection we look at constraints on the amplitude of large primordial perturbations, which lead to indirect constraints on the abundance of PBHs formed via the collapse of large density perturbations during radiation domination (Sec.~\ref{sec:raddom}). These constraints do not apply to PBHs formed via other mechanisms (see Sec.~\ref{sec:otherform}). As discussed in Sec.~\ref{sec:raddom}, there are large uncertainties in the calculation of the abundance of PBHs formed from a given primordial power spectrum.
First order scalar perturbations generate tensor perturbations at second order~\cite{Ananda:2006af,Baumann:2007zm}. If the density perturbations are sufficiently large then the amplitude of the resulting `scalar induced gravitational waves' (SIGWs) is larger than that of the GWs generated by the primordial tensor perturbations. Constraints on the energy density of stochastic GWs, from e.g.~Pulsar Timing Arrays, therefore limit the abundance of PBHs formed via the collapse of large density perturbations~\cite{Saito:2008jc}. These constraints depend on the shape of the primordial power spectrum, and also the assumed probability distribution of the density perturbations, and are therefore (inflation) model dependent~\cite{Garcia-Bellido:2016dkw,Inomata:2016rbd,Orlofsky:2016vbd}. Models which produce a broad peak in the primordial power spectrum are most tightly constrained~\cite{Inomata:2016rbd,Orlofsky:2016vbd}.
For PBHs forming from large density perturbations during radiation domination, Refs.~\cite{Byrnes:2018txb,Inomata:2018epa} find $f_{\rm PBH} < 1$ for $10^{-2} \lesssim M_{\rm PBH}/M_{\odot} \lesssim 1$. Reference~\cite{Chen:2019xse} finds, using data from NANOGrav, $f_{\rm PBH} < 10^{-23}$ for $M_{\rm PBH} = 0.1 M_{\odot}$ and $f_{\rm PBH} < 10^{-6}$ for $0.002 < M_{\rm PBH}/M_{\odot} < 0.4$. However, this calculation makes approximations which have a very large effect on the constraint on $f_{\rm PBH}$ (including setting the PBH formation threshold equal to unity, and $\sigma^2 = A$). There are also tight constraints on the abundance of light, $M_{\rm PBH} \sim 10^{13-15} \, {\rm g}$, PBHs from limits on SIGWs from LIGO~\cite{Kapadia:2020pir}. Such light PBHs are expected to have evaporated by the present day; however, if Hawking evaporation is not realised in nature, they would be stable and otherwise viable as DM.
The amplitude of the primordial density perturbations can also be constrained by the CMB spectral distortions caused by the dissipation of large perturbations~\cite{Carr:1993aq}. Limits on CMB spectral distortions from COBE/FIRAS exclude $f_{\rm PBH} = 1$ for PBHs with
$ M_{\rm PBH}/M_{\odot} \gtrsim 10^{3}$
formed from large density perturbations with a Gaussian distribution~\cite{Kohri:2014lza}.~\footnote{Here we have used Eq.~(\ref{mhr}), taken from Ref.~\cite{Wang:2019kaf}, to translate from $k$ to $M_{\rm PBH}\approx M_{\rm H}$.}
Masses down to $M \sim 10^{3} M_{\odot}$ can similarly be excluded via the effects of dissipation
on Big Bang Nucleosynthesis~\cite{Inomata:2016uip}.
\subsection{Future constraints}
\label{sec:future}
In this subsection we discuss potential future constraints on PBH DM. We start with direct constraints in (roughly) increasing order of PBH mass probed, followed by indirect constraints.
Planned space observatories, such as e-ASTROGAM and ASTRO-H, will allow the flux of the isotropic gamma-ray and X-ray backgrounds to be measured more precisely and down to lower levels. This will allow improved constraints to be placed on PBHs in the mass range $M_{\rm PBH} \sim 10^{16-18} \, {\rm g}$ via the products of their evaporation~\cite{Ballesteros:2019exr}.
A small subset of GRBs with fast variability have small sizes and are therefore suitable targets for femtolensing~\cite{Katz:2018zrn}. A sample of 100 such GRBs with well-measured redshifts could be used to probe PBHs with $10^{17} \, {\rm g} \lesssim M_{\rm CO} \lesssim 10^{19} \, {\rm g}$. Proposals to measure the lensing parallax of GRBs -- the relative brightness measured by multiple telescopes at large spatial separations -- suggest that this approach could be sensitive to the entire unconstrained range $M_\mathrm{PBH} \sim 10^{17-23}\,\mathrm{g}$~\cite{Nemiroff:1995ak,Jung:2019fcs}. Very high cadence observations of white dwarfs in the LMC, by e.g.~LSST, could reduce the minimum mass probed by microlensing observations by a factor of a few~\cite{Sugiyama:2019dgt}. However, as discussed in detail in Ref.~\cite{Montero-Camacho:2019jte}, diffraction and finite source size effects make it difficult to extend the range of masses probed by optical microlensing below $M_{\rm CO} \sim 10^{22} \, {\rm g}$. Microlensing of X-ray pulsars, which have small source sizes, can avoid these restrictions and long observations by X-ray telescopes with large effective areas (e.g.\ AstroSat, LOFT) of SMC X-1 and other X-ray binaries could probe the range $10^{18} \, {\rm g} \lesssim M_{\rm CO} \lesssim 10^{22} \, {\rm g}$~\cite{Bai:2018bej}.
Per-cent level constraints on $f_{\rm CO}$ for $10^{-4} \lesssim M_{\rm CO}/M_{\odot} \lesssim 0.1$ could be achieved from future FRB detections, via the phase difference they produce between unresolved images (as in femtolensing)~\cite{Katz:2019qug}. Pulsar timing arrays can detect the gravitational redshift and acceleration induced by passing compact objects~\cite{Schutz:2016khr,Dror:2019twh}. Via various types of searches, SKA will be able to probe $f_{\rm PBH} \approx 1$ over the entire mass range $(10^{-12} -100)\, M_{\odot}$, with potentially sub-percent level constraints in some mass regions~\cite{Dror:2019twh}. Planned sub-Hertz gravitational wave observatories such as LISA and DECIGO will be sensitive to extreme mass ratio binaries, composed of astrophysical supermassive black holes and compact objects in the range $10^{-6} \lesssim M_{\rm CO}/M_{\odot} \lesssim 1$, down to the level of around $f_\mathrm{CO} \sim 10^{-3}$~\cite{Guo:2017njn,Wang:2019kzb}.
Detailed observations of caustic-crossing events (combined with improved theoretical modelling) could place very tight constraints on compact objects with planetary, stellar and larger masses~\cite{Diego:2017drh,Venumadhav:2017pps}.
Constraints on frequency-dependent gravitational lensing dispersions by future gravitational wave detectors could probe PBHs with $M \gtrsim 0.1 M_{\odot}$ via their effects on the matter power spectrum~\cite{Oguri:2020ldf}. Measurements of the 21cm power spectrum by HERA and SKA will potentially improve cosmological accretion constraints on PBHs with $M_{\rm PBH}> M_{\odot}$ by an order of magnitude~\cite{Mena:2019nhm}.
Accurate astrometric surveys can probe compact objects via the time-dependent weak lensing of stars (``astrometric microlensing")~\cite{Dominik:1998tn}.
By the end of its lifetime {\it Gaia} could place sub per-cent level constraints on $f_{\rm CO}$ for $M_{\rm CO}> 10 \, M_{\odot}$ via the large anomalous angular velocities and accelerations produced by close encounters~\cite{VanTilburg:2018ykj}. Similar constraints could be placed on stellar and planetary mass COs by {\it Gaia} via non-repeating proper motion anomalies (dubbed `blips'), with a future survey such as {\it Theia} capable of even tighter constraints~\cite{VanTilburg:2018ykj}.
Upcoming experiments, such as CHIME and SKA, should lead to constraints in the range $f_{\rm CO}< (0.01-0.1)$ for $M_{\rm CO} \gtrsim (10-100) M_{\odot}$ from strong gravitational lensing of FRBs by COs~\cite{Munoz:2016tmg,Liao:2020wae}.
Similar constraints could be obtained down to $M_{\rm CO} \approx 2 M_{\odot}$ from lensing of the burst microstructure~\cite{Laha:2018zav}. Strong lensing of GRBs by compact objects with $10 \lesssim M_{\rm CO}/M_{\odot} \lesssim 1000$ leads to superimposed images which could be detected by a future GRB observatory, leading to per-cent level constraints~\cite{Ji:2018rvg}.
Gravitational lensing of gravitational waves by compact objects with $M_{\rm CO} \gtrsim 5 M_{\odot}$ would produce fringes which could lead to sub per-cent level constraints from current gravitational wave detectors~\cite{Diego:2019rzc} and 3rd generation experiments like the Einstein Telescope~\cite{Liao:2020hnx}.
Future space-based laser interferometers will be able to indirectly constrain PBHs produced by the collapse of large density perturbations in the mass range $10^{20-26} \, {\rm g}$, via induced gravitational waves~\cite{Saito:2008jc,Cai:2018dig,Bartolo:2018evs,Inomata:2018epa}. A PIXIE-like CMB spectral distortion experiment could similarly constrain $M_{\rm PBH} \gtrsim \, M_{\odot}$~\cite{Chluba:2019nxa}.
\subsection{Application of constraints to extended mass functions}
\label{sec:emf}
The constraints described above are typically calculated assuming a delta-function (or monochromatic) mass function. However, as discussed in Sec.~\ref{sec:dp}, in many cases PBHs are expected to be produced with an extended mass function.
Carr et al.~\cite{Carr:2017jsz} devised a method for applying constraints calculated assuming a delta-function mass function to specific extended mass functions, without explicitly recalculating the constraint from scratch. The dependence of a given astrophysical observable, $A[\psi(M)]$, on the mass function, $\psi(M)$, defined so that
$f_{\rm PBH} = \int \psi(M) \, {\rm d} M$
can be expanded as
\begin{equation}
A[\psi(M)] = A_{0} + \int \psi(M) K_{1}(M) \, {\rm d} M + \int \psi(M_{1}) \, \psi(M_{2}) \, K_{2}(M_{1}, M_{2}) \, {\rm d} M_{1} \, {\rm d} M_{2} + \dots \,,
\label{apsi}
\end{equation}
where $A_{0}$ is the background contribution and the functions $K_{j}$ encode how the underlying physics of the observable depends on the PBH mass function. In many cases observations place a bound on a single observable: $A[\psi(M)] < A_{\rm exp}$. For instance, in the case of stellar microlensing the observable is the number of events. Also, for many constraints, PBHs with different masses contribute independently to the constraint, so that $K_{j} = 0$ for $j \geq 2$. In this case the constraint for a delta-function mass function as a function of mass, $f_{\rm max}(M)$, can be translated into a constraint on a specified extended mass function using:
\begin{equation}
\int \frac{ \psi(M)}{f_{\rm max} (M)} \, {\rm d} M \leq 1 \,.
\end{equation}
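As a concrete illustration (our own sketch, not part of Refs.~\cite{Carr:2017jsz,Bellomo:2017zsr}), this criterion can be evaluated numerically for, e.g., a lognormal mass function normalised to $f_{\rm PBH}$. Here \texttt{f\_max\_toy} is a made-up delta-function bound for demonstration purposes; in practice one would use the digitised bounds collected in the PBHbounds repository:
\begin{verbatim}
import numpy as np

def psi_lognormal(M, f_pbh, Mc, sigma):
    # normalised so that the integral over M gives f_pbh
    return (f_pbh * np.exp(-np.log(M / Mc)**2 / (2 * sigma**2))
            / (np.sqrt(2 * np.pi) * sigma * M))

def f_max_toy(M):
    # toy delta-function constraint, tightest at M ~ 10 Msun
    return np.minimum(1.0, 1e-3 * (1.0 + (np.log10(M) - 1.0)**4))

M = np.logspace(-2, 4, 2000)  # masses in solar masses
for f_pbh in [1.0, 0.1, 0.01]:
    I = np.trapz(psi_lognormal(M, f_pbh, 10.0, 1.0) / f_max_toy(M), M)
    print(f"f_PBH = {f_pbh}: integral = {I:.2f} ->",
          "allowed" if I <= 1 else "excluded")
\end{verbatim}
A mass function is viable only if the integral does not exceed unity.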
This procedure has to be implemented separately for each observable, and different constraints combined in quadrature. As emphasised in Ref.~\cite{Carr:2017jsz} there are some caveats in the application of this method. The PBH MF can evolve, due to mergers and accretion, so that the initial MF is not the same as the MF at the time the constraint applies.
For some constraints, e.g.~GWs from mergers, the higher order terms, $K_{j}$ for $j \geq 2$, are not zero and a detailed calculation is required for each mass function~\cite{Raidal:2017mfl,Chen:2018czv}. In some cases, such as the effects of PBH accretion on the CMB, the observable is not a single quantity, and if this is not taken into account artificially tight constraints will be obtained.
This method has been further developed by Bellomo et al.~\cite{Bellomo:2017zsr}. They showed that, for any specific mass function, the effects on each observable are equivalent to those of a delta-function MF with a particular `equivalent mass'. They also emphasised that when considering MFs with extended tails, for instance a lognormal distribution, care should be taken not to apply constraints beyond the limits of their validity.
When applied to extended mass functions the constraints are effectively `smeared out'~\cite{Carr:2017jsz}. Consequently, when multiple constraints are considered, extended mass functions are more tightly constrained than the delta-function mass function, and small mass windows between constraints are closed~\cite{Green:2016xgy,Carr:2017jsz}.
\section{Summary}
\label{sec:summary}
The LIGO-Virgo discovery of gravitational waves from multi-Solar mass black holes has led to a resurgence of interest in primordial black holes (PBHs) as a dark matter candidate. Consequently there have been significant improvements in the theoretical calculations of PBH formation and also the observational constraints on their abundance. In this final section we summarise the current status and highlight key open questions.
The most popular PBH formation mechanism is the collapse of large density perturbations, generated by a period of inflation in the early Universe, during radiation domination. There have been significant recent improvements in our understanding of the threshold for collapse, $\delta_{\rm c}$, and its dependence on the shape of the perturbations (see Sec.~\ref{sec:deltac}). To produce an interesting number of PBHs the primordial power spectrum must grow by $\sim 7$ orders of magnitude from its measured value on cosmological scales. This can be achieved in single-field models with an inflection point or a shallow local minimum in the potential (Sec.~\ref{sec:single}), or in some multi-field models (Sec.~\ref{sec:multi}), and there has been significant recent work revisiting these models and refining calculations. The standard calculation of the abundance and mass function of PBHs (Sec.~\ref{sec:betamf}) assumes that the primordial density perturbations have a Gaussian probability distribution. However this assumption is not valid for large, PBH-producing perturbations. An accurate calculation of the non-Gaussian probability distribution, which for many models needs to take into account quantum diffusion, is an open issue. Detailed calculations of the mass function and initial clustering of PBHs produced by broad power spectra are also an outstanding problem.
There is a wide range of different observational constraints. The lightest stable PBHs are constrained by the products of the early stages of their evaporation (Sec.~\ref{sec:evap}); planetary mass, stellar mass and heavier PBHs are constrained by gravitational lensing observations (Sec.~\ref{sec:lensing}), with stellar mass PBHs also being constrained by their dynamical effects (Sec.~\ref{sec:dynamical}), the consequences of accretion onto them (Sec.~\ref{sec:accretionconstraint}) and gravitational waves from their mergers (Sec.~\ref{sec:gwmergers}). The abundance of PBHs formed by the collapse of large density perturbations is also constrained indirectly via constraints on the amplitude of the power spectrum (Sec.~\ref{sec:indirect}). In the past few years there has been significant activity on PBH abundance constraints, with new constraints being proposed and existing constraints being revisited. In some cases (e.g.~interactions with stars, Sec.~\ref{sec:stars}, or GRB femtolensing, Sec.~\ref{femto}) `old' constraints have been shown not to hold. While individual constraints may have significant modelling uncertainties, the Solar mass region is now subject to multiple constraints. If the constraints are taken at face value, PBHs with masses in the planetary to multi-Solar mass range can only make up a subdominant fraction of the DM. However, the robustness of this conclusion depends on the late-time clustering of the PBH population, which remains unclear.
The asteroid mass region ($10^{17} \, {\rm g} \lesssim M_{\rm PBH} \lesssim 10^{22} \, {\rm g}$) remains open. This largely reflects the difficulty of detecting such light compact objects.
Primordial Black Holes, in particular asteroid mass ones, remain a viable dark matter candidate.
Further improvements in the theoretical calculations of the production and evolution of PBHs are required to reliably predict the abundance and properties of PBHs from a given model.
Even so, it seems clear that a cosmologically interesting number of PBHs can only be produced in specific models of the early Universe, and often fine-tuning is required.
However, whether or not PBHs are a significant component of the DM is a question that has to be answered observationally. Novel ideas are needed here to either detect or rule out the remaining open parameter space.
\section*{Acknowledgments}
AMG is supported by STFC grant ST/P000703/1. BJK thanks the Spanish Agencia Estatal de Investigaci\'on (AEI, MICIU) for the support to the Unidad de Excelencia Mar\'ia de Maeztu Instituto de F\'isica de Cantabria, ref. MDM-2017-0765.
We acknowledge
the use of {\sc NumPy} \citep{vanderWalt:2011bqk} and {\sc Matplotlib} \citep{Hunter:2007ouj}.
\href{https://automeris.io/WebPlotDigitizer}{\underline{WebPlotDigitizer}} has been used to extract data from publications.
We are grateful to Chris Byrnes, Bernard Carr, Karim Malik, Jordi Miralda-Escude and Teruaki Suyama for useful discussions and/or comments.
BJK thanks Adam Coogan, Zu-Cheng Chen and Mohsen Ghazi for contributions to the PBHbounds code.
\bibliographystyle{apsrev4-1}
\section{Introduction}
When dealing with correlated Fermi systems, one very frequently has to face the breaking of one or more symmetries of the model through the development of some kind of order. Mean-field theory provides a relatively simple but often qualitatively correct description of \textit{ground state} properties in the ordered phase. Remarkably enough, this is not limited to weak coupling calculations but may survive at strong coupling~\cite{Eagles1969,Leggett1980}. On the other hand, fluctuations of the order parameter play a key role at \textit{finite temperature} $T$~\cite{Nozieres1985} and in low dimensionalities. In particular, they are fundamental in two-dimensional systems, where they prevent continuous symmetry breaking at any $T\neq0$~\cite{Mermin1966,Hohenberg1967}. In the specific case of U(1) or SO(2) symmetry groups, fluctuations are responsible for the formation of the Berezinskii-Kosterlitz-Thouless (BKT) phase, characterized by quasi long-range order~\cite{Berezinskii1971,Kosterlitz1973}.
The functional renormalization group (fRG) provides a framework to deal with interacting Fermi systems and ordering tendencies~\cite{Metzner2012,Kopietz2010book}. The inclusion of an infrared cutoff in the bare model allows for the treatment of different energy scales $\Lambda$ in a unified approach. In the most typical cases of symmetry breaking, such as those associated with the onset of magnetic or superfluid/superconducting orders, at high energies the system is in its symmetric phase, while, by decreasing the scale $\Lambda$, the effective two fermion interaction grows until it reaches a divergence at a scale $\Lambda_c$ in one (or more) specific momentum channel~\cite{Zanchi2000,Halboth2000,Honerkamp2001}. This divergence, however, can be an artifact of a poor approximation of the flow equations, such as the 1-loop truncation. Indeed, better approximations, such as the 2-loop or the multiloop truncation, can significantly reduce the value of $\Lambda_c$, even down to zero~\cite{Freire2008,Eberlein2014,Hille2020}. In order to continue the flow into the low energy regime, $\Lambda<\Lambda_c$, one has to explicitly introduce an order parameter taking into account spontaneous symmetry breaking.
Various approaches are possible. One can, for example, decouple the bare interaction via a Hubbard-Stratonovich transformation and run a flow for a mixed boson-fermion system above~\cite{Schuetz2005,Bartosch2009_II,Isidori2010,Streib2013,Lange2015,Lange2017} and below the critical scale. In this way one is able to study fluctuation effects both in the symmetric and in the ordered phases~\cite{Diehl2007,Diehl2007_II,Strack2008,Bartosch2009,Obert2013,Schuetz2006}. Moreover, the two fermion effective interaction generated by the flow can be re-bosonized scale by scale, with a technique called \textit{flowing bosonization}, either decoupling the bare interaction from the beginning~\cite{Baier2004,Floerchinger2008} or keeping it along the flow, reassigning to the bosonic sector only contributions that arise on top of it~\cite{Krahl2009,Friederich2010,Friederich2011}. A different approach to fluctuation effects consists in including below the critical scale $\Lambda_c$ the anomalous terms arising from the breaking of the global symmetry, by keeping only fermionic degrees of freedom~\cite{Salmhofer2004,Gersch2008,Eberlein2013,Eberlein2014_II,Maier2014}. If one is not interested in the effects of bosonic fluctuations, as it could be for ground state calculations, a relatively simple truncation of flow equations can reproduce a mean-field (MF) like solution~\cite{Salmhofer2004,Gersch2005,Wang2014,Yamase2016}.
Concerning the symmetric phase above the critical scale, recent developments have made the fRG a more reliable method for quantitative and/or strong coupling calculations. We refer, in particular, to the development of the multiloop fRG, which has been proven to be equivalent to the parquet approximation~\cite{Kugler2018_I,Kugler2018_II,Tagliavini2019,Hille2020}, and the fusion of the fRG with the dynamical mean-field theory (DMFT)~\cite{Georges1996} in the so-called DMF\textsuperscript{2}RG scheme~\cite{Taranto2014,Vilardi2019}. Within these frameworks, the full dependence of the effective two fermion interaction on all three Matsubara frequencies is often kept.
On the other hand, many efforts have been made in order to reduce the computational complexity of the effective interaction with a full dependence on its fermionic arguments. This is mainly achieved by describing the fermion-fermion interaction process through the exchange of a small number of bosons. Many works treat this aspect not only within the fRG~\cite{Friederich2010,Friederich2011,Denz2020}, but also within the DMFT, in the recently introduced single boson exchange approximation~\cite{Krien2019_I,Krien2019_II}, its nonlocal extensions, the TRILEX approach for example~\cite{Ayral2016}, or the dual boson theory~\cite{Rubtsov2012,Stepanov2018,Stepanov2019,Peters2019}. Describing the fermionic interactions in terms of exchanged bosons is important not only to reduce the computational complexity, but also to identify those collective fluctuations that play a fundamental role in the ordered phase.
In this paper, we present a truncation of the fRG flow equations, in which a bosonic field is explicitly introduced, and we prove it to be equivalent to the fusion of the fRG with MF theory introduced in Refs.~\cite{Wang2014,Yamase2016}. These flow equations fulfill fundamental constraints such as the Goldstone theorem and the global Ward identity connected with spontaneous symmetry breaking (SSB), and they can be integrated, reducing the calculation of correlation functions in the ordered phase to a couple of self consistent equations: one for the bosonic field expectation value, and another one for the Yukawa coupling between a fermion and the Goldstone mode. In order to perform the Hubbard-Stratonovich transformation, we decompose the effective two fermion interaction in terms of an exchanged boson, which becomes massless at the critical scale, and a residual interaction, and we present a technique to factorize the fRG vertex when its full dependence on fermionic Matsubara frequencies is kept. We prove the feasibility and efficiency of our formalism by applying it to the two-dimensional half-filled attractive Hubbard model, calculating the superfluid gap, Yukawa couplings and residual two fermion interactions in the SSB phase, and comparing our results with previous fRG and quantum Monte Carlo studies.
One notable aspect of our formalism is that the full dependence on fermionic momenta and/or frequencies can be retained. This makes it suitable for a combination with the newly developed methods within the fRG, to continue the flow with a simple truncation in those cases in which the effective two fermion interaction diverges. In the one loop truncation, both in plain fRG~\cite{Vilardi2017} and in the DMF\textsuperscript{2}RG~\cite{Vilardi2019}, these divergences are actually found at finite temperature, indicating the onset of spontaneous symmetry breaking.
Our method can be also combined with the multiloop fRG, where no divergences are found at finite temperature in 2D~\cite{Hille2020}, to study three-dimensional systems or zero temperature phases.
Furthermore, the introduction of the bosonic field makes our method a convenient starting point for the inclusion of order parameter fluctuations on top of the MF, and paves the way for the study of the SSB phases with a full treatment of fermionic Matsubara frequency dependencies.
This paper is organized as follows. In Sec.~\ref{sec: fRG} we give a short overview of the fRG and its application to correlated Fermi systems. In Sec.~\ref{sec: model} we introduce the attractive Hubbard model, which will be the prototypical model for the application of our method. In Sec.~\ref{sect: fermionic formalism} we review the MF approximation within the fRG by making use only of fermionic degrees of freedom. In Sec.~\ref{sect: bosonic formalism} we introduce our method by reformulating the fermionic MF approach with the introduction of a bosonic field, and we prove the equivalence of the two methods. In Sec.~\ref{sec: vertex bosonization} we present a strategy to extract a factorizable part from the effective two fermion interactions, which is necessary to implement the Hubbard-Stratonovich transformation.
This strategy is suitable for application to the most frequently used schemes within the fRG.
In Sec.~\ref{sec: results} we present some exemplary results for the attractive Hubbard model. A conclusion in Sec.~\ref{sec: conclusion} closes the presentation.
\section{Functional renormalization group}
\label{sec: fRG}
In this section we present a short review of the fRG applied to interacting Fermi systems and we refer to Ref.~\cite{Metzner2012} for further details.
Providing the bare fermionic action with a regulator $R^\Lambda$,
\begin{equation}
\mathcal{S}\left[\psi,\overline{\psi}\right]\rightarrow \mathcal{S}\left[\psi,\overline{\psi}\right] + \left(\overline{\psi}R^\Lambda,\psi\right),
\end{equation}
where the symbol $(\cdot,\cdot)$ indicates a sum over quantum numbers and fermionic Matsubara frequencies $\nu=(2j+1)\pi T$, with $j\in \mathbb{Z}$, one can derive an exact differential equation for the effective action as a function of the scale $\Lambda$~\cite{Wetterich1993,Berges2002}:
\begin{equation}
\partial_\Lambda\Gamma^\Lambda[\psi,\overline{\psi}]=-\frac{1}{2}\widetilde{\partial}_\Lambda\text{tr}\ln\left[\mathbf{\Gamma}^{(2)\Lambda}[\psi,\overline{\psi}]+R^\Lambda\right],
\label{eq: Wetterich eq. ferm}
\end{equation}
where $\mathbf{\Gamma}^{(2)\Lambda}$ is the matrix of second derivatives of the effective action w.r.t. the fermionic fields, $\widetilde{\partial}_\Lambda$ is a derivative acting only on the explicit $\Lambda$-dependence of $R^\Lambda$ and the trace is intended to run over all the quantum numbers and Matsubara frequencies. In general, the regulator can be any generic function of the scale $\Lambda$ and the fermionic ``$d+1$ momentum'' $k=(\mathbf{k},\nu)$ (with $\mathbf{k}$ being the spatial momentum), provided that $R^{\Lambda\rightarrow\Lambda_\text{init}}\rightarrow \infty$ and $R^{\Lambda\rightarrow\Lambda_\text{fin}}\rightarrow 0$. In this way, Eq.~\eqref{eq: Wetterich eq. ferm} can be complemented with the initial condition
\begin{equation}
\Gamma^{\Lambda=\Lambda_\text{init}}[\psi,\overline{\psi}]=\mathcal{S}[\psi,\overline{\psi}].
\end{equation}
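As an illustration (one possible choice among many, not necessarily the scheme adopted later in this work), an additive frequency regulator of the form
\begin{equation}
R^\Lambda(k) = i\,\text{sgn}(\nu)\left(\sqrt{\nu^2+\Lambda^2}-|\nu|\right)
\end{equation}
fulfills both requirements, since it diverges for $\Lambda\rightarrow\infty$ and vanishes for $\Lambda\rightarrow 0$.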
Eq.~\eqref{eq: Wetterich eq. ferm} is, however, very hard to tackle. A common procedure is to expand the effective action $\Gamma^\Lambda$ in polynomials of the fields up to a finite order, so that one works with a finite number of scale dependent couplings. Rather often, in the context of correlated Fermi systems, this truncation is restricted to a flow equation for the self-energy $\Sigma^\Lambda$ and a vertex $V^\Lambda$, describing the two fermion effective interaction. The differential equations for these couplings can be inferred directly from Eq.~\eqref{eq: Wetterich eq. ferm}. Furthermore, when working with systems that possess U(1) charge, SU(2) spin rotation and translational symmetries, the vertex $V^\Lambda$ as a function of the spin variables $\sigma_i$ and the four $d+1$ momenta $k_i$ of the fermions (two incoming, two outgoing) can be written as
\begin{equation}
\begin{split}
&V^\Lambda_{\sigma_1\sigma_2\sigma_3\sigma_4}(k_1,k_2,k_3)=\\
&V^\Lambda(k_1,k_2,k_3)\delta_{\sigma_1\sigma_4}\delta_{\sigma_2\sigma_3}-V^\Lambda(k_2,k_1,k_3)\delta_{\sigma_1\sigma_3}\delta_{\sigma_2\sigma_4},
\end{split}
\end{equation}
where the fermions labeled as 1-2 are considered as incoming and the ones labeled as 3-4 as outgoing in the scattering process. Furthermore, thanks to translational invariance, the vertex is nonzero only when the total momentum is conserved, that is when $k_1+k_2=k_3+k_4$. So, one can shorten the momentum dependence to three momenta, the fourth being fixed by the conservation law. By exploiting the relation above, one is left with the calculation of a single coupling function $V^\Lambda$ that summarizes all possible spin combinations. Its flow equation reads (dropping momentum dependencies for the sake of compactness)
\begin{equation}
\partial_\Lambda V^\Lambda = \mathcal{T}_\text{pp}^\Lambda+\mathcal{T}_\text{ph}^\Lambda+\mathcal{T}_\text{phx}^\Lambda+\Gamma^{(6)\Lambda} \circ \widetilde{\partial}_\Lambda G^\Lambda,
\label{eq: vertex flow equation symm}
\end{equation}
where the last term contains the 3-fermion coupling $\Gamma^{(6)\Lambda}$ contracted with the \textit{single scale propagator} $\widetilde{\partial}_\Lambda G^\Lambda$. This term is often neglected or treated in an approximate fashion in most applications. The remaining three terms can be expressed as loop integrals involving two fermionic propagators and two vertices $V^\Lambda$. They are grouped in three channels, namely \textit{particle-particle} ($ \mathcal{T}_\text{pp}^\Lambda$), \textit{particle-hole} ($ \mathcal{T}_\text{ph}^\Lambda$) and \textit{particle-hole-crossed} ($ \mathcal{T}_\text{phx}^\Lambda$), depending on which combination of momenta is transported by the loop. For the expressions of all the terms in Eq.~\eqref{eq: vertex flow equation symm} see Ref.~\cite{Metzner2012}.
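Schematically (up to prefactors and sign conventions, which we do not track here), the particle-particle contribution has the structure
\begin{equation}
\mathcal{T}_\text{pp}^\Lambda(k_1,k_2,k_3) \sim \int_{p} V^\Lambda(k_1,k_2,p)\, \widetilde{\partial}_\Lambda\!\left[G^\Lambda(p)\, G^\Lambda(k_1+k_2-p)\right] V^\Lambda(p,k_1+k_2-p,k_3)\,,
\end{equation}
that is, two vertices connected by a fermionic loop carrying the total momentum $k_1+k_2$; the particle-hole terms have an analogous structure, with the loop transporting a momentum transfer instead of the total momentum.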
In numerous applications of the fRG to various systems, the vertex function $V^\Lambda$ diverges before the numerical integration of Eq.~\eqref{eq: vertex flow equation symm} reaches the final scale $\Lambda_\text{fin}$. This fact signals the tendency of the system to develop some kind of order by spontaneously breaking one (or more) of its symmetries. One can often trace back the nature of the ordering tendency by looking at which of the terms in Eq.~\eqref{eq: vertex flow equation symm} contributes the most to the flow of $V^\Lambda$ near the critical scale $\Lambda_c$, where the divergence occurs.
\section{Model}
\label{sec: model}
In this section we present the prototypical model that we use for the application of our method. This is the two-dimensional (2D) attractive Hubbard model, that exhibits an instability in the particle-particle channel, signaling the onset of spin-singlet superfluidity. Our formalism, however, can be extended to a wide class of models, including the 2D repulsive Hubbard model, to study the phases in which (generally incommensurate) antiferromagnetism and/or d-wave superconductivity appear.
The bare action of the model describes spin-$\frac{1}{2}$ fermions on a 2D lattice experiencing an attractive on-site interaction
\begin{equation}
\begin{split}
\mathcal{S}=&-\int_{k,\sigma} \overline{\psi}_{k,\sigma} \left[i\nu-\xi_\mathbf{k}\right]\psi_{k,\sigma} \\
&+ U \int_{k,k',q} \overline{\psi}_{k,\uparrow} \overline{\psi}_{q-k,\downarrow} \psi_{q-k',\downarrow} \, \psi_{k',\uparrow},
\end{split}
\label{eq: bare Hubbard action}
\end{equation}
where $\nu$ is a fermionic Matsubara frequency, $\xi_\mathbf{k}$ is the bare band dispersion measured relative to the chemical potential $\mu$, and $U<0$ is the local interaction. The symbol $\int_k=T\sum_\nu\int\frac{d^2\mathbf{k}}{(2\pi)^2}$ ($T$ being the temperature) denotes an integral over the Brillouin zone and a sum over Matsubara frequencies.
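For instance (a standard choice, not specified at this point in the text), on a square lattice with nearest-neighbour hopping amplitude $t$ the dispersion reads
\begin{equation}
\xi_\mathbf{k} = -2t\left(\cos k_x + \cos k_y\right) - \mu \,,
\end{equation}
which at half filling ($\mu=0$) is particle-hole symmetric.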
This model, in $d=2$ or 3, at zero or finite temperature, has been the subject of extensive studies with several methods, in particular the fRG~\cite{Eberlein2013,Obert2013}, quantum Monte Carlo~\cite{Randeria1992,dosSantos1994,Trivedi1995,Singer1996,Karakuzu2018}, and DMFT and its extensions~\cite{Keller2001,Capone2002,Toschi2005,DelRe2019}.
In the next sections, we will assume that an fRG flow is run for this model, up to a stopping scale $\Lambda_s$, very close to a critical scale $\Lambda_c$ where the vertex $V^\Lambda$ diverges due to a pairing tendency, but still in the symmetric regime. From now on, we will also assume an infrared regulator such that the scale $\Lambda$ is lowered from $\Lambda_\text{init}$ to $\Lambda_\text{fin}$, so that the inequality $\Lambda_\text{init}>\Lambda_s\gtrsim\Lambda_c>\Lambda_\text{fin}$ holds.
\section{Broken symmetry phase: fermionic formalism}
\label{sect: fermionic formalism}
In this section we will present a simple truncation of flow equations that allows one to continue the flow beyond $\Lambda_s$ in the superfluid phase within an MF-like approximation, which neglects any kind of order parameter (thermal or quantum) fluctuations. This approximation can be formulated by working only with the physical fermionic degrees of freedom. \\
In order to tackle the breaking of the global U(1) symmetry, we introduce the Nambu spinors
\begin{equation}
\Psi_k=\left(
\begin{array}{c}
\psi_{k,\uparrow} \\
\overline{\psi}_{-k,\downarrow}
\end{array}
\right) \hskip 1cm
\overline{\Psi}_k=\left(
\begin{array}{c}
\overline{\psi}_{k,\uparrow} \\
\psi_{-k,\downarrow}
\end{array}
\right).
\label{eq: Nambu spinors}
\end{equation}
\subsection{Flow equations and integration}
In the SSB phase, the vertex function $V$ acquires anomalous components due to the violation of particle number conservation. In particular, besides the normal vertex describing scattering processes with two incoming and two outgoing particles ($V_{2+2}$), in the superfluid phase also components with three ($V_{3+1}$) or four ($V_{4+0}$) incoming or outgoing particles can arise. We do not treat the 3+1 components, since they are related to the coupling of the order parameter to charge fluctuations~\cite{Eberlein2013}, which do not play any role in an MF-like approximation for the superfluid state. It turns out to be useful to work with the combinations
\begin{equation}
\begin{split}
&V_\mathcal{A}=\Re\left\{V_{2+2}+V_{4+0}\right\}\\
&V_\Phi=\Re\left\{V_{2+2}-V_{4+0}\right\},
\label{eq: A and Phi vertex combination}
\end{split}
\end{equation}
which represent two fermion interactions in the longitudinal and transverse order parameter channels, and are related to the amplitude and phase fluctuations of the superfluid order parameter, respectively. In principle, a longitudinal-transverse mixed interaction can also appear, from the imaginary parts of the vertices in Eq.~\eqref{eq: A and Phi vertex combination}, but it has no effect in the present MF approximation because it vanishes at zero center of mass frequency~\cite{Eberlein_Thesis}.
Below the stopping scale, $\Lambda <\Lambda_s$, we consider a truncation of the effective action of the form
\begin{equation}
\begin{split}
\Gamma^{\Lambda}_{\text{SSB}}[\Psi,\overline{\Psi}]=-&\int_{k} \overline{\Psi}_{k} \, \left[\mathbf{G}^{\Lambda}(k)\right]^{-1} \Psi_{k}\\
+&\int_{k,k',q}V^{\Lambda}_{\mathcal{A}}(k,k';q)\, S^1_{k,q}\,S^1_{k',-q}\\
+&\int_{k,k',q}V^{\Lambda}_{\Phi}(k,k';q)\, S^2_{k,q}\,S^2_{k',-q} ,
\end{split}
\label{eq: fermionic SSB truncation}
\end{equation}
with the Nambu bilinears defined as
\begin{equation}
S^\alpha_{k,q}=\overline{\Psi}_{k}\, \tau^\alpha \,\Psi_{k-q},
\label{eq: fermion bilinear}
\end{equation}
where the Pauli matrices $\tau^\alpha$ are contracted with Nambu spinor indexes. The fermionic propagator $\mathbf{G}^\Lambda(k)$ is given by the matrix
\begin{equation}
\left(
\begin{array}{cc}
Q_{0}^\Lambda(k)-\Sigma^\Lambda(k) & \Delta^\Lambda(k)\\
\Delta^\Lambda(k) & -Q_0^\Lambda(-k)+\Sigma^\Lambda(-k)
\end{array}
\right)^{-1},
\end{equation}
where $Q_{0}^\Lambda(k)=i\nu-\xi_\mathbf{k}+R^\Lambda(k)$, $R^\Lambda(k)$ is the regulator, $\Sigma^\Lambda(k)$ is the normal self energy and $\Delta^\Lambda(k)$ is the superfluid gap. The initial conditions at the scale $\Lambda=\Lambda_s$ require $\Delta^{\Lambda_s}$ to be zero and both $V^{\Lambda_s}_\mathcal{A}$ and $V^{\Lambda_s}_\Phi$ to equal the vertex $V^{\Lambda_s}$ in the symmetric phase.
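For completeness, the $2\times2$ inversion can be written out explicitly (elementary algebra, not spelled out in the equations above):
\begin{equation}
\mathbf{G}^{\Lambda}(k)=\frac{1}{D^{\Lambda}(k)}\left(
\begin{array}{cc}
Q_{0}^{\Lambda}(-k)-\Sigma^{\Lambda}(-k) & \Delta^{\Lambda}(k)\\
\Delta^{\Lambda}(k) & -Q_{0}^{\Lambda}(k)+\Sigma^{\Lambda}(k)
\end{array}
\right),
\end{equation}
with $D^{\Lambda}(k)=\left[Q_{0}^{\Lambda}(k)-\Sigma^{\Lambda}(k)\right]\left[Q_{0}^{\Lambda}(-k)-\Sigma^{\Lambda}(-k)\right]+\left[\Delta^{\Lambda}(k)\right]^{2}$. Its off-diagonal element is the anomalous propagator $F^{\Lambda}(k)$ of Eq.~\eqref{eq: F definition} below, and for $\Delta^{\Lambda}\rightarrow0$ the diagonal elements reduce to the normal propagators $\pm G^{\Lambda}(\pm k)$.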
We are now going to introduce the MF approximation to the symmetry-broken state, which means that we focus on the $q=0$ component of $V_\mathcal{A}$ and $V_\Phi$ and neglect all the rest. So, from now on we drop all the $q$-dependencies. We neglect the flow of the normal self-energy below $\Lambda_s$, which would require the inclusion of charge fluctuations in the SSB phase, beyond the present MF approximation. In order to simplify the presentation, we introduce a matrix-vector notation for the gaps and vertices. In particular, the functions $V_\mathcal{A}$ and $V_\Phi$ are matrices in the indices $k$ and $k'$, while the gap and the fermionic propagator behave as vectors. For example, in this notation an object of the type $\int_{k'}V_\mathcal{A}^\Lambda(k,k')\Delta^\Lambda(k')$ can be viewed as a matrix-vector product, $V_\mathcal{A}^\Lambda \Delta^\Lambda$.
Within our MF approximation, we keep in our set of flow equations only the terms that involve the $q=0$ components of the functions $V_\mathcal{A}$ and $V_\Phi$. This means that, in a generalization of Eq.~\eqref{eq: vertex flow equation symm} to the SSB phase, we consider only the particle-particle contributions. In formulas, we have:
\begin{align}
&\partial_\Lambda V_\mathcal{A}^\Lambda=V_\mathcal{A}^\Lambda\left[\widetilde{\partial}_\Lambda\Pi^\Lambda_{11}\right] V_\mathcal{A}^\Lambda+\Gamma^{(6)\Lambda} \circ \widetilde{\partial}_\Lambda G^\Lambda,
\label{eq: flow eq Va fermionic}\\
&\partial_\Lambda V_\Phi^\Lambda=V_\Phi^\Lambda \left[\widetilde{\partial}_\Lambda\Pi^\Lambda_{22}\right] V_\Phi^\Lambda+\Gamma^{(6)\Lambda} \circ \widetilde{\partial}_\Lambda G^\Lambda,
\label{eq: flow eq Vphi fermionic}
\end{align}
where we have defined the bubbles
\begin{equation}
\Pi^\Lambda_{\alpha\beta}(k,k')=-\frac{1}{2}\Tr\left[\tau^\alpha\,\mathbf{G}^\Lambda(k)\,\tau^\beta\,\mathbf{G}^\Lambda(k)\right]\delta_{k,k'},
\end{equation}
where $\delta_{k,k'}=(2\pi)^2/T \,\delta^{(2)}(\mathbf{k}-\mathbf{k}')\delta_{\nu\nu'}$, and the trace runs over Nambu spin indexes.
The last terms of Eqs.~\eqref{eq: flow eq Va fermionic} and~\eqref{eq: flow eq Vphi fermionic} involve the 6-particle interaction, which we treat here in the Katanin approximation, which allows us to replace the derivative acting on the regulator, $\widetilde{\partial}_\Lambda$, in the bubbles with the full scale derivative $\partial_\Lambda$~\cite{Katanin2004}. This approach is useful because it provides the exact solution of mean-field models, such as the reduced BCS model, in which the bare interaction is restricted to the zero center of mass momentum channel~\cite{Salmhofer2004}.
In this way, the flow equation~\eqref{eq: flow eq Va fermionic} for the vertex $V_\mathcal{A}$, together with the initial condition $V_\mathcal{A}^{\Lambda_s}=V^{\Lambda_s}$ can be integrated analytically, giving
\begin{equation}
\begin{split}
V_\mathcal{A}^\Lambda = &\left[1+V^{\Lambda_s}(\Pi^{\Lambda_s}-\Pi_{11}^\Lambda)\right]^{-1}V^{\Lambda_s}\\ =&\left[1-\widetilde{V}^{\Lambda_s}\Pi_{11}^\Lambda\right]^{-1}\widetilde{V}^{\Lambda_s},
\end{split}
\label{eq: Va solution fermionic}
\end{equation}
where
\begin{equation}
\Pi^{\Lambda_s}(k,k')=G^{\Lambda_s}(k)G^{\Lambda_s}(-k)\delta_{k,k'},
\label{eq: bubble at Lambda_s}
\end{equation}
is the (normal) particle-particle bubble at zero center of mass momentum,
\begin{equation}
G^{\Lambda}(k)=\frac{1}{Q_0^{\Lambda}(k)-\Sigma^{\Lambda_s}(k)},
\label{eq: G at Lambda_s}
\end{equation}
is the fermionic normal propagator, and
\begin{equation}
\widetilde{V}^{\Lambda_s}=\left[1+V^{\Lambda_s}\Pi^{\Lambda_s}\right]^{-1}V^{\Lambda_s}
\label{eq: irr vertex fermionic}
\end{equation}
is the irreducible (normal) vertex in the particle-particle channel at the stopping scale. The flow equation for the transverse vertex $V_\Phi$ exhibits a formal solution similar to the one in Eq.~\eqref{eq: Va solution fermionic}, but the matrix inside the square brackets is not invertible. We will return to this point below.
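In the matrix-vector notation, the two inversions above amount to linear solves on a discretized Matsubara grid. The following minimal Python sketch illustrates this with purely illustrative placeholder inputs for $V^{\Lambda_s}$ and the (diagonal) bubbles, assuming that the integration measure from $\int_{k'}$ has been absorbed into the diagonal of the bubbles:
\begin{verbatim}
import numpy as np

# Placeholder inputs on a grid of n Matsubara frequencies (q = 0).
n = 64
rng = np.random.default_rng(0)
V_s = -1.0 + 0.05 * rng.standard_normal((n, n))
V_s = 0.5 * (V_s + V_s.T)                  # vertex symmetric in k, k'
Pi_s = np.diag(rng.uniform(0.1, 0.3, n))   # normal bubble at Lambda_s
Pi11 = np.diag(rng.uniform(0.2, 0.5, n))   # bubble at Lambda < Lambda_s

I = np.eye(n)
# Irreducible vertex: V~ = [1 + V Pi]^{-1} V
V_irr = np.linalg.solve(I + V_s @ Pi_s, V_s)
# Integrated flow: V_A = [1 - V~ Pi_11]^{-1} V~
V_A = np.linalg.solve(I - V_irr @ Pi11, V_irr)
print(V_A.shape)
\end{verbatim}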
\subsection{Gap equation}
Similarly to the flow equations for vertices, in the flow equation of the superfluid gap we neglect the contributions involving the vertices at $q\neq 0$. We are then left with
\begin{equation}
\partial_\Lambda\Delta^\Lambda(k)=\int_{k'}V_\mathcal{A}^\Lambda(k,k')\,\widetilde{\partial}_\Lambda F^\Lambda(k'),
\label{eq: gap flow equation}
\end{equation}
where
\begin{equation}
F^\Lambda(k)=\frac{\Delta^\Lambda(k)}{[G^\Lambda(k)\,G^\Lambda(-k)]^{-1}+\left[\Delta^\Lambda(k)\right]^2}
\label{eq: F definition}
\end{equation}
is the anomalous fermionic propagator, with $G$ defined as in Eq.~\eqref{eq: G at Lambda_s}, and with the normal self-energy kept fixed at its value at the stopping scale. By inserting Eq.~\eqref{eq: Va solution fermionic} into Eq.~\eqref{eq: gap flow equation} and using the initial condition $\Delta^{\Lambda_s}=0$, we can analytically integrate the latter, obtaining the gap equation~\cite{Wang2014}
\begin{equation}
\Delta^\Lambda(k)=\int_{k'}\widetilde{V}^{\Lambda_s}(k,k')\, F^\Lambda(k').
\label{eq: gap equation fermionic}
\end{equation}
In the particular case in which the contributions to the vertex flow from channels other than the particle-particle one, as well as the 3-fermion interaction and the normal self-energy, are neglected also above the stopping scale, the irreducible vertex is simply $-U$, the (sign-reversed) bare interaction, and Eq.~\eqref{eq: gap equation fermionic} reduces to the standard Hartree-Fock approximation to the SSB state.
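As an illustration of this limit, a minimal Python sketch, assuming a half-filled square lattice with illustrative parameters and $\Sigma=0$ at the final scale $\Lambda=0$ (where the regulator is removed), solves the resulting frequency-independent BCS-like gap equation by fixed-point iteration:
\begin{verbatim}
import numpy as np

# Half-filled square lattice; parameters for illustration only.
t, U, T = 1.0, -4.0, 0.1
nk, n_nu = 64, 256

k = 2 * np.pi * np.arange(nk) / nk
kx, ky = np.meshgrid(k, k)
xi = -2 * t * (np.cos(kx) + np.cos(ky))            # mu = 0
nu = (2 * np.arange(-n_nu, n_nu) + 1) * np.pi * T  # fermionic freqs

delta = 0.1
for _ in range(500):
    # Delta = -U T sum_nu int_k Delta / (nu^2 + xi^2 + Delta^2)
    F = delta / (nu[:, None, None]**2 + xi[None]**2 + delta**2)
    new = -U * T * F.sum(axis=0).mean()            # BZ mean = int_k
    if abs(new - delta) < 1e-10:
        break
    delta = new
print(f"BCS gap: Delta_0 = {delta:.4f} (units of t)")
\end{verbatim}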
\subsection{Goldstone Theorem}
In this subsection we prove that the present truncation of the flow equations fulfills the Goldstone theorem. We turn our attention back to the transverse vertex $V_\Phi$. Its flow equation, Eq.~\eqref{eq: flow eq Vphi fermionic}, can be (formally) integrated as well, with the initial condition $V_\Phi^{\Lambda_s}=V^{\Lambda_s}$, giving
\begin{equation}
\begin{split}
V_\Phi^\Lambda = &\left[1+V^{\Lambda_s}(\Pi^{\Lambda_s}-\Pi_{22}^\Lambda)\right]^{-1}V^{\Lambda_s}\\ =&\left[1-\widetilde{V}^{\Lambda_s}\Pi_{22}^\Lambda\right]^{-1}\widetilde{V}^{\Lambda_s}.
\end{split}
\label{eq: Vphi solution fermionic}
\end{equation}
However, by using the relation
\begin{equation}
\Pi_{22}^\Lambda(k,k')=\frac{F^\Lambda(k)}{\Delta^\Lambda(k)}\,\delta_{k,k'},
\label{eq: Pi22=F/delta}
\end{equation}
one can rewrite the matrix in square brackets in the second line of Eq.~\eqref{eq: Vphi solution fermionic} as
\begin{equation}
\delta_{k,k'}-\widetilde{V}^{\Lambda_s}(k,k')\,\frac{F^\Lambda(k')}{\Delta^\Lambda(k')}.
\end{equation}
Multiplying this expression by $\Delta^\Lambda(k')$ and integrating over $k'$, we see that the result vanishes if the gap equation~\eqref{eq: gap equation fermionic} is obeyed. Thus, the matrix in square brackets in Eq.~\eqref{eq: Vphi solution fermionic} has a zero eigenvalue, with the superfluid gap as the corresponding eigenvector. In matrix notation, this fact can be expressed as
\begin{equation}
\left[ 1 - \widetilde{V}^{\Lambda_s}\Pi^\Lambda_{22}\right]\Delta^\Lambda=0.
\end{equation}
Due to the presence of this zero eigenvalue, the above matrix is not invertible. This is a manifestation of the Goldstone theorem: because of the breaking of the global U(1) symmetry, transverse fluctuations of the order parameter become massless at $q=0$, leading to the divergence of the transverse two fermion interaction $V_\Phi$.
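Numerically, this zero mode provides a convenient consistency check. A minimal sketch, reusing the arrays \texttt{t}, \texttt{U}, \texttt{T}, \texttt{nu}, \texttt{xi} and the converged \texttt{delta} from the BCS example above, verifies that $\Delta$ is annihilated by $\left[1-\widetilde{V}^{\Lambda_s}\Pi^\Lambda_{22}\right]$ once the gap equation is satisfied:
\begin{verbatim}
import numpy as np  # reuses t, U, T, nu, xi, delta from above

# k-integrated transverse bubble Pi_22(nu) = F(nu)/Delta (diagonal)
chi = T * (1.0 / (nu[:, None, None]**2 + xi[None]**2
                  + delta**2)).mean(axis=(1, 2))
V_irr = -U * np.ones((len(nu), len(nu)))  # constant irred. vertex
D_vec = delta * np.ones(len(nu))          # frequency-independent gap

residual = D_vec - V_irr @ (chi * D_vec)  # [1 - V~ Pi_22] Delta
print(np.abs(residual).max())             # ~ 0 at the solution
\end{verbatim}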
\section{Broken symmetry phase: bosonic formalism}
\label{sect: bosonic formalism}
The SSB phase can also be accessed via the introduction of a bosonic field, which describes the fluctuations of the order parameter and whose finite expectation value is related to the formation of anomalous components in the fermionic propagator. In order to introduce this bosonic field, we express the vertex at the stopping scale in the following form:
\begin{equation}
V^{\Lambda_s}(k,k';q)=\frac{h^{\Lambda_s}(k;q)\,h^{\Lambda_s}(k';q)}{m^{\Lambda_s}(q)}+\mathcal{Q}^{\Lambda_s}(k,k';q).
\label{eq: vertex at Lambda crit}
\end{equation}
We assume from now on that the divergence of the vertex, due to the appearance of a massless mode, is absorbed into the first term, while the second one remains finite. In other words, we assume that as the stopping scale $\Lambda_s$ approaches the critical scale $\Lambda_c$ at which the vertex is formally divergent, the (inverse) bosonic propagator $m^{\Lambda_s}(q)$ at zero frequency and momentum vanishes, while the Yukawa coupling $h^{\Lambda_s}(k;q)$ and the residual two fermion interaction $\mathcal{Q}^{\Lambda_s}(k,k';q)$ remain finite.
In Sec.~\ref{sec: vertex bosonization} we will introduce a systematic scheme to extract the decomposition~\eqref{eq: vertex at Lambda crit} from a given vertex at the stopping scale.
\subsection{Hubbard-Stratonovich transformation and truncation}
\hgl{Since the effective action at a given scale $\Lambda$ can be viewed as a bare action with bare propagator $G_0-G_0^\Lambda$ (with $G_0^\Lambda$ the regularized bare propagator)~\cite{note_HS_gamma}, one can decouple the factorized (and singular) part of the vertex at $\Lambda_s$ via a Gaussian integration, thus introducing a bosonic field. By adding source terms which couple linearly to this field and to the fermionic ones, one obtains the generating functional of connected Green's functions, whose Legendre transform reads, at the stopping scale}
\begin{equation}
\begin{split}
&\Gamma^{\Lambda_s}[\psi,\overline{\psi},\phi]=
\int_{k,\sigma} \overline{\psi}_{k,\sigma} \left[G^{\Lambda_s}(k)\right]^{-1} \psi_{k,\sigma}\\
&+\int_{q} \phi^*_q \, m^{\Lambda_s}(q)\, \phi_q\\
&+\int_{k,k',q}\mathcal{Q}^{\Lambda_s}(k,k';q)\,\overline{\psi}_{k,\uparrow} \overline{\psi}_{q-k,\downarrow} \psi_{q-k',\downarrow} \psi_{k',\uparrow}\\
&+\int_{k,q}h^{\Lambda_s}(k;q)\left[ \overline{\psi}_{k,\uparrow} \overline{\psi}_{q-k,\downarrow} \phi_q + \text{h.c.}\right],
\end{split}
\label{eq: gamma lambda crit bos}
\end{equation}
\hgl{where $\phi$ represents the expectation value (in the presence of sources) of the Hubbard-Stratonovich field.}
Note that we have refrained from introducing an interaction between equal-spin fermions. Indeed, since we focus on a spin-singlet superconducting order parameter, within the MF approximation this interaction does not contribute to the flow equations.
\hgl{The Hubbard-Stratonovich transformation introduced in Eq.~\eqref{eq: gamma lambda crit bos} is free of the so-called Fierz ambiguity, according to which different ways of decoupling the bare interaction can lead to different mean-field results for the gap (see, for example, Ref.~\cite{Baier2004}). Indeed, through the inclusion of the residual two fermion interaction, we are able to recover the same equations that one would obtain without bosonizing the interactions, as proven in Sec.~\ref{subsec: equivalence bos and fer}. In essence, the only ambiguity lies in deciding what to assign to the bosonized part of the vertex and what to $\mathcal{Q}$; by keeping both of them along the entire flow, the results do not depend on this choice.}
We introduce Nambu spinors as in Eq.~\eqref{eq: Nambu spinors} and we decompose the bosonic field into its (flowing) expectation value plus longitudinal ($\sigma$) and transverse ($\pi$) fluctuations~\cite{Obert2013}:
\begin{equation}
\begin{split}
&\phi_q=\alpha^\Lambda\,\delta_{q,0} + \sigma_q + i\, \pi_q \\
&\phi^*_q=\alpha^\Lambda\,\delta_{q,0} + \sigma_{-q} - i\, \pi_{-q},
\end{split}
\end{equation}
where we have chosen $\alpha^\Lambda$ to be real. For the effective action at $\Lambda<\Lambda_s$ in the SSB phase, we use the following \textit{ansatz}
\begin{equation}
\begin{split}
\Gamma^{\Lambda}_\text{SSB}[\Psi,\overline{\Psi},\sigma,\pi]&=\Gamma^\Lambda_{\Psi^2}+\Gamma^\Lambda_{\sigma^2}+\Gamma^\Lambda_{\pi^2}\\
&+\Gamma^\Lambda_{\Psi^2\sigma} + \Gamma^\Lambda_{\Psi^2\pi}
+\Gamma^\Lambda_{\Psi^4},
\end{split}
\label{eq: bosonic eff action}
\end{equation}
where the first three quadratic terms are given by
\begin{equation}
\begin{split}
&\Gamma^\Lambda_{\Psi^2}=-\int_{k} \overline{\Psi}_{k} \left[\mathbf{G}^{\Lambda}(k)\right]^{-1} \Psi_{k}\\
&\Gamma^\Lambda_{\sigma^2}=-\frac{1}{2}\int_q \sigma_{-q}\,m_\sigma^{\Lambda}(q)\, \sigma_q\\
&\Gamma^\Lambda_{\pi^2}=-\frac{1}{2}\int_q \pi_{-q}\,m_\pi^{\Lambda}(q)\, \pi_q,
\end{split}
\end{equation}
and the fermion-boson interactions are
\begin{equation}
\begin{split}
&\Gamma^\Lambda_{\Psi^2\sigma}=\int_{k,q}h^{\Lambda}_\sigma(k;q)\left\{S^1_{k,-q}\,\sigma_q+ \text{h.c.} \right\}\\
&\Gamma^\Lambda_{\Psi^2\pi}=\int_{k,q}h^{\Lambda}_\pi(k;q)\left\{S^2_{k,-q}\,\pi_q+ \text{h.c.} \right\},
\end{split}
\end{equation}
with $S^\alpha_{k,q}$ as in Eq.~\eqref{eq: fermion bilinear}.
The residual two fermion interaction term is written as
\begin{equation}
\begin{split}
\Gamma^\Lambda_{\Psi^{4}}=&
\int_{k,k',q}\mathcal{A}^{\Lambda}(k,k';q)\,S^1_{k,q}\,S^1_{k',-q}\\
&+\int_{k,k',q}\hskip - 5mm\Phi^{\Lambda}(k,k';q) \,S^2_{k,q}\,S^2_{k',-q}.
\end{split}
\end{equation}
As in the fermionic formalism, in the truncation in Eq.~\eqref{eq: bosonic eff action} we have neglected any longitudinal-transverse mixing in the Yukawa couplings, bosonic propagators and two fermion interactions, because at $q=0$ such terms are identically zero. In the bosonic formulation, as in the fermionic one, the MF approximation amounts to focusing on the $q=0$ components of the various terms appearing in the effective action and neglecting all the rest. So, from now on we drop all $q$-dependencies. We make use of the matrix notation introduced in Sec.~\ref{sect: fermionic formalism}, in which the newly introduced Yukawa couplings behave as vectors and the bosonic inverse propagators as scalars.
\subsection{Flow equations and integration}
Here we focus on the flow equations for the two fermion interactions, Yukawa couplings and bosonic inverse propagators in the longitudinal and transverse channels within the MF approximation, that is, we focus only on the Cooper channel ($q=0$) and neglect all diagrams containing internal bosonic lines or the couplings $\mathcal{A}$ and $\Phi$ at $q\neq 0$. Furthermore, we introduce a generalized Katanin approximation to account for higher order couplings in the flow equations; we refer to Appendix~\ref{app: flow eqs} for its derivation. We now show that our reduced set of flow equations can be integrated analytically. We first focus on the longitudinal channel; the flow equations in the transverse channel possess the same structure.
The flow equation for the longitudinal bosonic mass (inverse propagator at $q=0$) reads
\begin{equation}
\begin{split}
\partial_\Lambda m_\sigma^\Lambda=&\int_{k,k'} h^\Lambda_\sigma(k) \left[\partial_\Lambda\Pi^\Lambda_{11}(k,k')\right] h^\Lambda_\sigma(k') \\
\equiv &\left[h^\Lambda_\sigma\right]^T\left[\partial_\Lambda\Pi^\Lambda_{11}\right] h^\Lambda_\sigma.
\end{split}
\label{eq: flow P sigma}
\end{equation}
Similarly, the equation for the longitudinal Yukawa coupling is
\begin{equation}
\partial_\Lambda h^\Lambda_\sigma=\mathcal{A}^\Lambda\left[\partial_\Lambda\Pi^\Lambda_{11}\right]h^\Lambda_\sigma,
\label{eq: flow h sigma}
\end{equation}
and the one for the residual two fermion longitudinal interaction is given by
\begin{equation}
\partial_\Lambda\mathcal{A}^\Lambda=\mathcal{A}^\Lambda\left[\partial_\Lambda\Pi^\Lambda_{11}\right]\mathcal{A}^\Lambda.
\label{eq: A flow eq}
\end{equation}
The above flow equations are pictorially shown in Fig.~\ref{fig: flow eqs}. The initial conditions at $\Lambda=\Lambda_s$ read, for both channels,
\begin{equation}
\begin{split}
&m_\sigma^{\Lambda_s}=m_\pi^{\Lambda_s}=m^{\Lambda_s}\\
&h_\sigma^{\Lambda_s}=h_\pi^{\Lambda_s}=h^{\Lambda_s}\\
&\mathcal{A}^{\Lambda_s}=\Phi^{\Lambda_s}=\mathcal{Q}^{\Lambda_s}.
\end{split}
\end{equation}
We start by integrating the flow equation for the residual two fermion longitudinal interaction $\mathcal{A}$. Eq.~\eqref{eq: A flow eq} can be solved exactly as done in the fermionic formalism, yielding
\begin{equation}
\mathcal{A}^\Lambda = \left[1-\widetilde{\mathcal{Q}}^{\Lambda_s}\Pi_{11}^\Lambda\right]^{-1}\widetilde{\mathcal{Q}}^{\Lambda_s},
\label{eq: A}
\end{equation}
where we have introduced a reduced residual two fermion interaction $\widetilde{\mathcal{Q}}$
\begin{equation}
\widetilde{\mathcal{Q}}^{\Lambda_s}=\left[1+\mathcal{Q}^{\Lambda_s}\Pi^{\Lambda_s}\right]^{-1}\mathcal{Q}^{\Lambda_s}.
\label{eq: reduced C tilde}
\end{equation}
We can now plug this result into Eq.~\eqref{eq: flow h sigma} for the Yukawa coupling, which can then be integrated as well. Its solution reads
\begin{equation}
h_\sigma^\Lambda= \left[1-\widetilde{\mathcal{Q}}^{\Lambda_s}\Pi_{11}^\Lambda\right]^{-1}\widetilde{h}^{\Lambda_s},
\label{eq: h_sigma}
\end{equation}
where the introduction of a ``reduced'' Yukawa coupling
\begin{equation}
\widetilde{h}^{\Lambda_s}=\left[1+\mathcal{Q}^{\Lambda_s}\Pi^{\Lambda_s}\right]^{-1}h^{\Lambda_s}
\label{eq: reduced yukawa}
\end{equation}
is necessary. This Bethe-Salpeter-like equation for the Yukawa coupling is similar in structure to the parquetlike equations for the three-leg vertex derived in Ref.~\cite{Krien2019_II}.
Finally, we can use the two results of Eqs.~\eqref{eq: A} and~\eqref{eq: h_sigma} and plug them in the equation for the bosonic mass, whose integration provides
\begin{equation}
m_\sigma^\Lambda=\widetilde{m}^{\Lambda_s}-\left[\widetilde{h}^{\Lambda_s}\right]^T\Pi_{11}^\Lambda\,h_\sigma^\Lambda,
\label{eq: P_sigma}
\end{equation}
where, following the definitions introduced above, the ``reduced'' bosonic mass is given by
\begin{equation}
\widetilde{m}^{\Lambda_s}=m^{\Lambda_s}+\left[\widetilde{h}^{\Lambda_s}\right]^T\Pi^{\Lambda_s}\,h^{\Lambda_s}.
\label{eq: reduced mass P tilde}
\end{equation}
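In practice, the three ``reduced'' quantities follow from linear solves with one and the same matrix $1+\mathcal{Q}^{\Lambda_s}\Pi^{\Lambda_s}$, as in the following minimal Python sketch with placeholder inputs:
\begin{verbatim}
import numpy as np

# Placeholders: Q (matrix), h (vector), m (scalar), diagonal bubble.
n = 64
rng = np.random.default_rng(1)
Q = 0.1 * rng.standard_normal((n, n)); Q = 0.5 * (Q + Q.T)
h = 1.0 + 0.2 / (1.0 + np.arange(n))  # Yukawa -> 1 at large frequency
m = 1e-3                              # nearly critical bosonic mass
Pi_s = np.diag(rng.uniform(0.1, 0.3, n))

A = np.eye(n) + Q @ Pi_s              # same matrix enters both solves
Q_red = np.linalg.solve(A, Q)         # Eq. (reduced C tilde)
h_red = np.linalg.solve(A, h)         # Eq. (reduced yukawa)
m_red = m + h_red @ Pi_s @ h          # Eq. (reduced mass P tilde)
print(m_red)
\end{verbatim}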
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{fig1_paper.png}
\caption{Schematic representation of flow equations for the mass and the couplings in the longitudinal channel. Full lines represent Nambu matrix propagators, triangles the Yukawa coupling $h_\sigma$ and squares the residual interaction $\mathcal{A}$. The black dots over fermionic legs represent full derivatives with respect to the scale $\Lambda$.}
\label{fig: flow eqs}
\end{figure}
In the transverse channel, the equations have the same structure and can be integrated in the same way. Their solutions read
\begin{align}
&\Phi^\Lambda = \left[1-\widetilde{\mathcal{Q}}^{\Lambda_s}\Pi_{22}^\Lambda\right]^{-1}\widetilde{\mathcal{Q}}^{\Lambda_s},
\label{eq: Phi}\\
&h_\pi^\Lambda= \left[1-\widetilde{\mathcal{Q}}^{\Lambda_s}\Pi_{22}^\Lambda\right]^{-1}\widetilde{h}^{\Lambda_s},
\label{eq: h_pi}\\
&m_\pi^\Lambda=\widetilde{m}^{\Lambda_s}-\left[\widetilde{h}^{\Lambda_s}\right]^T\Pi_{22}^\Lambda\,h_\pi^\Lambda.
\label{eq: goldstone mass}
\end{align}
Eq.~\eqref{eq: goldstone mass} provides the mass of the transverse mode which, according to the Goldstone theorem, must be zero. We will show below that this is indeed the case.
It is worthwhile to point out that the combinations
\begin{equation}
\begin{split}
&\frac{h_\sigma^\Lambda \left[h_\sigma^\Lambda\right]^T}{m_\sigma^\Lambda}+\mathcal{A}^\Lambda\\
&\frac{h_\pi^\Lambda \left[h_\pi^\Lambda\right]^T}{m_\pi^\Lambda}+\Phi^\Lambda
\end{split}
\label{eq: eff fer interactions}
\end{equation}
obey the same flow equations, Eqs.~\eqref{eq: flow eq Va fermionic} and~\eqref{eq: flow eq Vphi fermionic}, as the vertices in the fermionic formalism, and share the same initial conditions. Therefore the solutions for these quantities coincide with the expressions~\eqref{eq: Va solution fermionic} and~\eqref{eq: Vphi solution fermionic}, respectively. In light of this equivalence, it is interesting to express the irreducible vertex $\widetilde{V}^{\Lambda_s}$ of Eq.~\eqref{eq: irr vertex fermionic} in terms of the quantities $\mathcal{Q}^{\Lambda_s}$, $h^{\Lambda_s}$ and $m^{\Lambda_s}$ introduced in the factorization in Eq.~\eqref{eq: vertex at Lambda crit}:
\begin{equation}
\widetilde{V}^{\Lambda_s}=\frac{\widetilde{h}^{\Lambda_s}\left[\widetilde{h}^{\Lambda_s}\right]^T}{\widetilde{m}^{\Lambda_s}}+\widetilde{\mathcal{Q}}^{\Lambda_s},
\label{eq: irr V bosonic formalism}
\end{equation}
where $\widetilde{\mathcal{Q}}^{\Lambda_s}$, $\widetilde{h}^{\Lambda_s}$ and $\widetilde{m}^{\Lambda_s}$ were defined in Eqs.~\eqref{eq: reduced C tilde},~\eqref{eq: reduced yukawa} and~\eqref{eq: reduced mass P tilde}; for a proof see Appendix~\ref{app: irr V bosonic formalism}. Relation~\eqref{eq: irr V bosonic formalism} is of particular interest because it states that when the full vertex is expressed as in Eq.~\eqref{eq: vertex at Lambda crit}, the irreducible one obeys a similar decomposition, with the bosonic propagator, Yukawa coupling and residual two fermion interaction replaced by their ``reduced'' counterparts. This relation holds even for $q\neq 0$.
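Relation~\eqref{eq: irr V bosonic formalism} can also be verified numerically. The following sketch builds a vertex of the form of Eq.~\eqref{eq: vertex at Lambda crit} from placeholder ingredients (a symmetric $\mathcal{Q}$ and a diagonal bubble are assumed) and compares the two sides:
\begin{verbatim}
import numpy as np

# Placeholder ingredients; Q symmetric, Pi diagonal.
n = 48
rng = np.random.default_rng(2)
Q = 0.1 * rng.standard_normal((n, n)); Q = 0.5 * (Q + Q.T)
h = 1.0 + rng.uniform(0.0, 0.5, n)
m = 0.01
Pi = np.diag(rng.uniform(0.1, 0.3, n))

V = np.outer(h, h) / m + Q                      # factorized vertex
V_irr = np.linalg.solve(np.eye(n) + V @ Pi, V)  # [1 + V Pi]^{-1} V

A = np.eye(n) + Q @ Pi
Q_red = np.linalg.solve(A, Q)
h_red = np.linalg.solve(A, h)
m_red = m + h_red @ Pi @ h
print(np.allclose(V_irr,
                  np.outer(h_red, h_red) / m_red + Q_red))  # True
\end{verbatim}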
\subsection{Ward identity for the gap and Goldstone theorem}
We now focus on the flow of the fermionic gap and the bosonic expectation value and express a relation that connects them. Their flow equations are given by (see Appendix~\ref{app: flow eqs})
\begin{equation}
\partial_\Lambda \alpha^\Lambda=\frac{1}{m_\sigma^\Lambda}\left[h_\sigma^\Lambda\right]^T\widetilde{\partial}_\Lambda F^\Lambda,
\label{eq: dalpha dLambda main text}
\end{equation}
and
\begin{equation}
\begin{split}
\partial_\Lambda \Delta^\Lambda &= \partial_\Lambda \alpha^\Lambda\, h_\sigma^\Lambda+\mathcal{A}^\Lambda\widetilde{\partial}_\Lambda F^\Lambda\\
&= \left[\frac{h_\sigma^\Lambda \left[h_\sigma^\Lambda\right]^T}{m_\sigma^\Lambda}+\mathcal{A}^\Lambda\right]\widetilde{\partial}_\Lambda F^\Lambda,
\label{eq: gap eq main text}
\end{split}
\end{equation}
with $F^\Lambda$ given by Eq.~\eqref{eq: F definition}. In Fig.~\ref{fig: flow eqs gaps} we show a pictorial representation.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{fig2_paper.png}
\caption{Schematic representation of flow equations for the bosonic expectation value $\alpha^\Lambda$ and fermionic gap $\Delta^\Lambda$. Besides the slashed lines, representing Nambu matrix propagators with a scale derivative acting only on the regulator, the conventions for the symbols are the same as in Fig.~\ref{fig: flow eqs}.}
\label{fig: flow eqs gaps}
\end{figure}
Eq.~\eqref{eq: dalpha dLambda main text} can be integrated, with the help of the previously obtained results for $\mathcal{A}$, $h_\sigma$ and $m_\sigma$, yielding
\begin{equation}
\alpha^\Lambda=\frac{1}{\widetilde{m}^{\Lambda_s}}\left[\widetilde{h}^{\Lambda_s}\right]^T F^\Lambda.
\label{eq: alpha solution}
\end{equation}
In the last line of Eq.~\eqref{eq: gap eq main text}, as previously discussed, the object in square brackets equals the full vertex $V_\mathcal{A}$ of the fermionic formalism. The gap equation can thus be integrated, and the result is simply Eq.~\eqref{eq: gap equation fermionic} of the fermionic formalism. However, if we now insert the expression~\eqref{eq: irr V bosonic formalism} for the irreducible vertex into the ``fermionic'' form of the gap equation, Eq.~\eqref{eq: gap equation fermionic}, and use relation~\eqref{eq: Pi22=F/delta}, we get:
\begin{equation}
\Delta^\Lambda(k)=\alpha^\Lambda h_\pi^\Lambda(k).
\label{eq: Ward Identity}
\end{equation}
This equation is the Ward identity for the mixed boson-fermion system related to the global U(1) symmetry~\cite{Obert2013}. In Appendix~\ref{app: loop} we propose a self-consistent loop for the calculation of $\alpha$ and $h_{\pi}$, through Eqs.~\eqref{eq: alpha solution} and~\eqref{eq: h_pi}, and subsequently of the superfluid gap $\Delta$.
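While the loop is specified in Appendix~\ref{app: loop}, a minimal Python sketch of one possible implementation, under the simplifying assumptions of vanishing self-energy, regulator already removed at $\Lambda=0$, and placeholder reduced couplings, alternates Eq.~\eqref{eq: alpha solution}, Eq.~\eqref{eq: h_pi} and the Ward identity~\eqref{eq: Ward Identity}:
\begin{verbatim}
import numpy as np

# Assumptions: Sigma = 0, Lambda = 0 (regulator removed), and
# placeholder reduced couplings on a Matsubara grid.
t, T = 1.0, 0.1
nk, n_nu = 64, 64
k = 2 * np.pi * np.arange(nk) / nk
kx, ky = np.meshgrid(k, k)
xi = -2 * t * (np.cos(kx) + np.cos(ky))
nu = (2 * np.arange(-n_nu, n_nu) + 1) * np.pi * T
n = len(nu)

rng = np.random.default_rng(3)
Q_red = 0.05 * rng.standard_normal((n, n))
Q_red = 0.5 * (Q_red + Q_red.T)
h_red = np.ones(n)                  # reduced Yukawa (placeholder)
m_red = 0.25                        # reduced bosonic mass (placeholder)

def pi22(delta):
    # k-integrated Pi_22(nu) = F(nu)/Delta(nu) at the final scale
    return T * (1.0 / (nu[:, None, None]**2 + xi[None]**2
                       + delta[:, None, None]**2)).mean(axis=(1, 2))

alpha, h_pi = 0.1, np.ones(n)
for _ in range(100):
    delta = alpha * h_pi                       # Ward identity
    chi = pi22(delta)
    alpha_new = h_red @ (chi * delta) / m_red  # Eq. (alpha solution)
    h_pi = np.linalg.solve(np.eye(n) - Q_red * chi[None, :], h_red)
    if abs(alpha_new - alpha) < 1e-7:
        alpha = alpha_new
        break
    alpha = alpha_new
print(f"alpha = {alpha:.5f}, Delta(pi T) = {(alpha*h_pi)[n//2]:.5f}")
\end{verbatim}
Convergence can be monitored on $\alpha$ alone, since $h_\pi$ and $\Delta$ are updated consistently at each step.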
Let us now come back to the question of the Goldstone theorem. For the Goldstone boson to be massless, the right-hand side of Eq.~\eqref{eq: goldstone mass} must vanish. We show that this is indeed the case. With the help of Eq.~\eqref{eq: Pi22=F/delta}, we can rewrite the equation for the transverse mass in the form
\begin{equation}
\begin{split}
m^\Lambda_\pi &= \widetilde{m}^{\Lambda_s}-\int_k \widetilde{h}^{\Lambda_s}(k)F^\Lambda(k)\frac{h^\Lambda_\pi(k)}{\Delta^\Lambda(k)}\\
&=\widetilde{m}^{\Lambda_s}-\frac{1}{\alpha^{\Lambda}}\int_k \widetilde{h}^{\Lambda_s}(k)F^\Lambda(k),
\end{split}
\end{equation}
where the Ward identity $\Delta=\alpha h_\pi$ was applied in the last line. We see that the expression for the Goldstone boson mass vanishes when $\alpha$ obeys its self-consistent equation, Eq.~\eqref{eq: alpha solution}. This proves that our truncation of the flow equations fulfills the Goldstone theorem. \\
\hgl{Constructing a truncation of the fRG flow equations which fulfills the Ward identities and the Goldstone theorem is, in general, a nontrivial task. In Ref.~\cite{Bartosch2009}, where order parameter fluctuations were included on top of the Hartree-Fock solution, no distinction was made between the longitudinal and transverse Yukawa couplings, and the Ward identity~\eqref{eq: Ward Identity} as well as the Goldstone theorem were enforced by construction, by computing the gap and the bosonic expectation value from them rather than from their flow equations. Similarly, in Ref.~\cite{Obert2013}, in order for the flow equations to fulfill the Goldstone theorem, it was necessary to impose $h_\sigma=h_\pi$ and to use only the flow equation of $h_\pi$ for both Yukawa couplings. Within the present approach, due to the mean-field-like nature of the truncation, the Ward identity~\eqref{eq: Ward Identity} and the Goldstone theorem are automatically fulfilled by the flow equations.}
\subsection{Equivalence of bosonic and fermionic formalisms}
\label{subsec: equivalence bos and fer}
As we have proven in the previous sections, within the MF approximation the fully fermionic formalism of Sec.~\ref{sect: fermionic formalism} and the bosonized approach introduced in the present section provide the same results for the superfluid gap and for the effective two fermion interactions.
Notwithstanding the formal equivalence, the bosonic formulation relies on a further requirement. In Eqs.~\eqref{eq: Phi} and~\eqref{eq: h_pi} we assumed the matrix $\left[1-\widetilde{\mathcal{Q}}^{\Lambda_s}\Pi_{22}^\Lambda\right]$ to be invertible, which is equivalent to asserting that the residual two fermion interaction $\Phi$ remains finite. Otherwise the Goldstone mode would reside in this coupling and not (only) in the Hubbard-Stratonovich boson. This cannot occur if the flow is stopped at a scale $\Lambda_s$ coinciding with the critical scale $\Lambda_c$ at which the (normal) bosonic mass $m^\Lambda$ vanishes, but it could take place if one considers symmetry breaking in more than one channel. In particular, if one allows the system to develop two different orders and stops the flow when the mass of one of the two associated bosons becomes zero, it can happen that, within a MF approximation for both order types, the appearance of a finite gap in the first channel makes the transverse two fermion residual interaction in the other channel divergent. In that case one can apply the technique of \textit{flowing bosonization}~\cite{Friederich2010,Friederich2011}, reassigning the (most singular part of the) two fermion interactions generated during the flow to the bosonic sector. It can be proven that this approach also gives the same results for the gap and the effective fermionic interactions in Eq.~\eqref{eq: eff fer interactions} as the fully fermionic formalism.
\section{Vertex bosonization}
\label{sec: vertex bosonization}
In this section we present a systematic procedure to extract the quantities in Eq.~\eqref{eq: vertex at Lambda crit} from a given vertex, within an approximate framework. The full vertex in the symmetric phase can be written as~\cite{Husemann2009,Husemann2012}
\begin{equation}
\begin{split}
V^\Lambda(k_1,k_2,k_3)&=V^{\Lambda_\text{ini}}(k_1,k_2,k_3)\\
&+\phi^\Lambda_p\left(k_1,k_3;k_1+k_2\right)\\
&-\phi^\Lambda_m\left(k_1,k_2;k_2-k_3\right)\\
&-\frac{1}{2}\phi^\Lambda_m\left(k_1,k_2;k_3-k_1\right)\\
&+\frac{1}{2}\phi^\Lambda_c\left(k_1,k_2;k_3-k_1\right),
\end{split}
\label{eq: channel decomposition}
\end{equation}
where $V^{\Lambda_\text{ini}}$ is the vertex at the initial scale, and $\phi_p$, $\phi_m$ and $\phi_c$ denote the pairing, magnetic and charge channel, respectively. Each of these functions depends on one bosonic and two fermionic variables.
Within the so-called 1-loop approximation, where one neglects the 3-fermion coupling in Eq.~\eqref{eq: vertex flow equation symm}, in the Katanin scheme~\cite{Katanin2004}, or in more involved schemes, such as the 2-loop~\cite{Eberlein2014} or multiloop~\cite{Kugler2018_I,Kugler2018_II} ones, one is able to assign one or more terms of the flow equation~\eqref{eq: vertex flow equation symm} for $V^\Lambda$ to each of the channels, in such a way that the last bosonic argument enters only parametrically in the formulas. This is the reason why the decomposition in Eq.~\eqref{eq: channel decomposition} is useful. The vertex at the initial scale can be set equal to the bare (sign-reversed) Hubbard interaction $-U$ in a weak-coupling approximation, or, as in the recently introduced DMF\textsuperscript{2}RG scheme, to the vertex computed via DMFT~\cite{Taranto2014,Vilardi2019}.
In order to simplify the treatment of the dependence on fermionic spatial momenta of the various channels, one often introduces a complete basis of Brillouin zone form factors $\{f^\ell_\mathbf{k}\}$ and expands each channel in this basis~\cite{Lichtenstein2017}
\begin{equation}
\begin{split}
\phi^\Lambda_X(k,k';q)=\sum_{\ell\ell'} \phi^{\Lambda}_{X,\ell\ell'}(\nu,\nu';q)f^\ell_{\mathbf{k}+(\text{sgn}X)\mathbf{q}/2}\,f^{\ell'}_{\mathbf{k'}-\mathbf{q}/2},
\end{split}
\label{eq: form factor expansion}
\end{equation}
with $X=p$, $m$ or $c$, and $\text{sgn}\,p=-1$, $\text{sgn}\,c=\text{sgn}\,m=+1$. For practical calculations the above sum is truncated to a finite number of form factors, and often only the diagonal terms, $\ell=\ell'$, are considered. Within the truncated form factor expansion, one is left with the calculation of a finite number of channels that depend on a bosonic ``$d$+1 momentum'' $q=(\mathbf{q},\Omega)$ and two fermionic Matsubara frequencies $\nu$ and $\nu'$.
We will now show how to obtain the decomposition introduced in Eq.~\eqref{eq: vertex at Lambda crit} within the form factor expansion.
We focus on only one of the channels in Eq.~\eqref{eq: channel decomposition}, depending on the type of order we are interested in, and factorize its dependence on the two fermionic Matsubara frequencies. We introduce the so-called channel asymptotics, that is, the functions that describe the channels at large $\nu$, $\nu'$. From now on we adopt the shorthand $\lim_{\nu\rightarrow\infty}g(\nu)=g(\infty)$ for any function $g$ of $\nu$. By considering only the diagonal terms in the form factor expansion in Eq.~\eqref{eq: form factor expansion}, we can write the channels as~\cite{Wentzell2016}:
\begin{equation}
\begin{split}
\phi_{X,\ell}^\Lambda(\nu,\nu';q)&=\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)+\mathcal{K}_{X,\ell}^{(2)\Lambda}(\nu;q)\\
&+\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}(\nu';q)
+\delta\phi^\Lambda_{X,\ell}(\nu,\nu';q),
\end{split}
\label{eq: vertex asymptotics}
\end{equation}
with
\begin{equation}
\begin{split}
&\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)=\phi_{X,\ell}^\Lambda(\infty,\infty;q)\\
&\mathcal{K}_{X,\ell}^{(2)\Lambda}(\nu;q)=\phi_{X,\ell}^\Lambda(\nu,\infty;q)-\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)\\
&\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}(\nu';q)=\phi_{X,\ell}^\Lambda(\infty,\nu';q)-\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)\\
&\delta\phi^\Lambda_{X,\ell}(\nu,\infty;q)=\delta\phi^\Lambda_{X,\ell}(\infty,\nu';q)=0.
\end{split}
\label{eq: asymptotics properties}
\end{equation}
According to Ref.~\cite{Wentzell2016}, these functions are related to physical quantities: $\mathcal{K}_{X,\ell}^{(1)}$ turns out to be proportional to the susceptibility of the corresponding channel, and the combination $\mathcal{K}_{X,\ell}^{(1)}+\mathcal{K}_{X,\ell}^{(2)}$ (or $\mathcal{K}_{X,\ell}^{(1)}+\overline{\mathcal{K}}_{X,\ell}^{(2)}$) to the so-called boson-fermion vertex, which describes both the response of the Green's function to an external field~\cite{VanLoon2018} and the coupling between a fermion and an effective boson. In principle one should be able to calculate the above quantities diagrammatically (see Ref.~\cite{Wentzell2016} for details) without performing any limit. However, it is well known that fRG truncations, in particular the 1-loop approximation, do not properly weight all the Feynman diagrams contributing to the vertex, so that the diagrammatic calculation and the high-frequency limit give two different results. To preserve the property in the last line of Eq.~\eqref{eq: asymptotics properties}, we choose to perform the limits. We rewrite Eq.~\eqref{eq: vertex asymptotics} in the following way:
\begin{equation}
\begin{split}
&\phi_{X,\ell}^\Lambda(\nu,\nu';q)=\\
&=\frac{\left[\mathcal{K}_{X,\ell}^{(1)\Lambda}+\mathcal{K}_{X,\ell}^{(2)\Lambda}\right]\left[\mathcal{K}_{X,\ell}^{(1)\Lambda}+\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}\right]}{\mathcal{K}_{X,\ell}^{(1)\Lambda}}+\mathcal{R}_{X,\ell}^\Lambda\\
&=\frac{\phi_{X,\ell}^\Lambda(\nu,\infty;q)\phi_{X,\ell}^\Lambda(\infty,\nu';q)}{\phi_{X,\ell}^\Lambda(\infty,\infty;q)}+\mathcal{R}_{X,\ell}^\Lambda(\nu,\nu';q),
\end{split}
\label{eq: vertex separation}
\end{equation}
where we have made the frequency and momentum dependencies explicit only in the second line and we have defined
\begin{equation}
\mathcal{R}_{X,\ell}^\Lambda(\nu,\nu';q)=\delta\phi^\Lambda_{X,\ell}(\nu,\nu';q)-\frac{\mathcal{K}_{X,\ell}^{(2)\Lambda}(\nu;q)\overline{\mathcal{K}}_{X,\ell}^{(2)\Lambda}(\nu';q)}{\mathcal{K}_{X,\ell}^{(1)\Lambda}(q)}.
\end{equation}
From the definitions given above, it is obvious that the rest function $\mathcal{R}_{X,\ell}$ decays to zero in all frequency directions.
Since the first term of Eq.~\eqref{eq: vertex separation} is separable by construction, we identify it with the first term of Eq.~\eqref{eq: vertex at Lambda crit}. Indeed, in many cases the vertex divergence is already manifest in the asymptotic $\mathcal{K}_{X,\ell}^{(1)\Lambda}$, which we recall to be proportional to the susceptibility of the channel. There are, however, situations in which the functions $\mathcal{K}^{(1)}$ and $\mathcal{K}^{(2)}$ are zero even close to an instability in the channel, an important example being the d-wave superconducting instability in the repulsive Hubbard model. In general, this occurs for those channels that, within a Feynman diagram expansion, cannot be constructed via a ladder resummation with the bare vertex. In the Hubbard model, due to the locality of the bare interaction, this happens for every $\ell\neq 0$, that is, for every term in the form factor expansion other than the s-wave contribution. In this case one should adopt a different approach and, for instance, replace the limits to infinity in Eq.~\eqref{eq: vertex separation} by some fixed values of the Matsubara frequencies, such as $\pm \pi T$.
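As a concrete illustration of the extraction, the following Python sketch applies Eq.~\eqref{eq: vertex separation} to a mock channel on a finite Matsubara grid, approximating the limits $\nu\to\infty$ by the largest frequencies kept, as done in the calculations of Sec.~\ref{sec: results}; both the separable part and the rest of the mock input are arbitrary placeholders:
\begin{verbatim}
import numpy as np

# Mock channel: separable part plus a rest decaying in all directions.
n = 64
nu = (2 * np.arange(-n // 2, n // 2) + 1) * np.pi * 0.1
g = 1.0 + 1.0 / (1.0 + nu**2)        # mock boson-fermion vertex
phi = 3.0 * np.outer(g, g) + 0.1 / (1.0 + np.add.outer(nu**2, nu**2))

# Largest |nu| on the grid (index 0) approximates nu -> infinity.
K1 = phi[0, 0]                            # phi(inf, inf; q)
sep = np.outer(phi[:, 0], phi[0, :]) / K1 # separable part
rest = phi - sep                          # rest function R
print(np.abs(rest).max() / np.abs(phi).max())  # small: factorizable
\end{verbatim}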
\section{Results for the attractive Hubbard model at half-filling}
\label{sec: results}
In this section we report some exemplary results of the equations derived within the bosonic formalism, for the attractive two-dimensional Hubbard model. We focus on the half-filled case. For pure nearest-neighbor hopping with amplitude $-t$, the band dispersion $\xi_\mathbf{k}$ is given by
\begin{equation}
\xi_\mathbf{k} = - 2 t \left( \cos k_x + \cos k_y \right) -\mu,
\label{eq: dispersion band}
\end{equation}
with $\mu=0$ at half-filling. We choose the onsite attraction and the temperature to be $U=-4t$ and $T=0.1t$. All results are presented in units of the hopping parameter $t$.
\subsection{Symmetric phase}
In the symmetric phase, in order to run a fRG flow, we introduce the $\Omega$-regulator~\cite{Husemann2009}
\begin{equation}
R^\Lambda(k) = \left(i\nu-\xi_\mathbf{k}\right) \frac{\Lambda^2}{\nu^2},
\end{equation}
so that the initial scale is $\Lambda_\text{init}=+\infty$ (fixed to a large number in the numerical calculation) and the final one is $\Lambda_\text{fin}=0$. We choose a 1-loop truncation, that is, we neglect the last term of Eq.~\eqref{eq: vertex flow equation symm}, and use the decomposition in Eq.~\eqref{eq: channel decomposition} with a form factor expansion. We truncate Eq.~\eqref{eq: form factor expansion} to the first term, that is, we use only s-wave, $f^{(0)}_\mathbf{k}\equiv 1$, form factors. Within these approximations, the vertex reads
\begin{equation}
\begin{split}
V^\Lambda(&k_1,k_2,k_3) = - U + \mathcal{P}^{\Lambda}_{\nu_1\nu_3}(k_1+k_2) \\
- &\mathcal{M}^{\Lambda}_{\nu_1\nu_2}(k_2-k_3)\\
-&\frac{1}{2} \mathcal{M}^{\Lambda}_{\nu_1\nu_2}(k_3-k_1)
+\frac{1}{2} \mathcal{C}^{\Lambda}_{\nu_1\nu_2}(k_3-k_1),
\end{split}
\label{eq: channel decomposition attractive model}
\end{equation}
where $\mathcal{P}$, $\mathcal{M}$ and $\mathcal{C}$ are referred to as the pairing, magnetic and charge channel, respectively.
Furthermore, we focus only on the spin-singlet component of the pairing (the triplet one is very small in the present parameter region), so that we require the pairing channel to obey~\cite{Rohringer2012}
\begin{equation}
\mathcal{P}^{\Lambda}_{\nu\nu'}(q) = \mathcal{P}^{\Lambda}_{\Omega-\nu,\nu'}(q) = \mathcal{P}^{\Lambda}_{\nu,\Omega-\nu'}(q),
\end{equation}
with $q=(\mathbf{q},\Omega)$.
The initial condition for the vertex reads
\begin{equation}
V^{\Lambda_\text{init}}(k_1,k_2,k_3) = - U,
\end{equation}
so that $\mathcal{P}^{\Lambda_\text{init}}=\mathcal{M}^{\Lambda_\text{init}}=\mathcal{C}^{\Lambda_\text{init}}=0$.
Neglecting the fermionic self-energy, $\Sigma^\Lambda(k)\equiv0$, we run a flow for these three quantities until one (or more) of them diverges. From a technical point of view, each channel is computed by keeping 50 positive and 50 negative values for each of the three Matsubara frequencies (two fermionic, one bosonic) on which it depends. Frequency asymptotics are also taken into account, following Ref.~\cite{Wentzell2016}. The momentum dependence of the channels is treated by discretizing the region $\mathcal{B}=\{(k_x,k_y): 0\leq k_y\leq k_x\leq\pi\}$ of the Brillouin zone with 38 patches and extending to the other regions by using lattice symmetries. The expressions of the flow equations are reported in Appendix~\ref{app: flow eqs symm phase}.
Due to the particle-hole symmetry occurring at half-filling, pairing fluctuations at $\mathbf{q}=0$ combine with charge fluctuations at $\mathbf{q}=(\pi,\pi)$ to form an order parameter with SO(3) symmetry~\cite{Micnas1990}. Indeed, with the help of a canonical particle-hole transformation, one can map the attractive half-filled Hubbard model onto the repulsive one. Within this duality, the SO(3)-symmetric magnetic order parameter is mapped onto the above-mentioned combined charge-pairing order parameter and vice versa. This is the reason why we find a critical scale, $\Lambda_c$, at which both $\mathcal{C}((\pi,\pi),0)$ and $\mathcal{P}(\mathbf{0},0)$ diverge, as shown in Fig.~\ref{fig: flow channels}. On a practical level, we define the stopping scale $\Lambda_s$ as the scale at which one channel (or more, in this case) exceeds $10^3t$. With our choice of parameters, we find that at $\Lambda_s \simeq 0.378t$ both $\mathcal{C}$ and $\mathcal{P}$ cross this threshold.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig3_paper.png}
\caption{Flow of the maximum values of the pairing, magnetic and charge channel. The maximum value of the charge channel at zero frequency and momentum $(\pi,\pi)$ and the one for the pairing channel at $q=0$ coincide, within the numerical accuracy, and both exceed the threshold $10^3t$ at the stopping scale, signaling an instability to the formation of an order parameter given by any linear combination of the superfluid and the charge density wave ones.}
\label{fig: flow channels}
\end{figure}
In the SSB phase, we choose to restrict the ordering to the pairing channel, thus excluding the formation of charge density waves. This choice is always possible because we have the freedom to choose the ``direction'' in which our order parameter points. Indeed, in the particle-hole dual repulsive model, our choice is equivalent to choosing the (antiferro-)magnetic order parameter to lie in the $xy$ plane. It is implemented by selecting the particle-particle channel as the only one contributing to the flow in the SSB phase, as discussed in Secs.~\ref{sect: fermionic formalism} and~\ref{sect: bosonic formalism}.
In order to access the SSB phase within our bosonic formalism, we need to perform the decomposition in Eq.~\eqref{eq: vertex at Lambda crit} of the vertex at $\Lambda_s$. Before proceeding, to be consistent with our form factor expansion in the SSB phase, we project $V$ in Eq.~\eqref{eq: channel decomposition attractive model} onto the s-wave form factors, since we want the quantities in the ordered phase to be functions of Matsubara frequencies only. We therefore define the total vertex projected onto s-wave form factors
\begin{equation}
\overline{V}^{\Lambda_s}_{\nu\nu'}(q)=\int_{\mathbf{k},\mathbf{k}'}V^{\Lambda_s}\hskip -1mm\left(k,q-k,k'\right).
\end{equation}
Furthermore, since we are interested only in spin singlet pairing, we symmetrize it with respect to one of the two fermionic frequencies, so that in the end we are dealing with
\begin{equation}
V^{\Lambda_s}_{\nu\nu'}(q)=\frac{\overline{V}^{\Lambda_s}_{\nu\nu'}(q)+\overline{V}^{\Lambda_s}_{\nu,\Omega-\nu'}(q)}{2}.
\end{equation}
In order to extract the Yukawa coupling $h^{\Lambda_s}$ and the bosonic propagator $m^{\Lambda_s}$, we employ the strategy described in Sec.~\ref{sec: vertex bosonization}. Here, however, instead of factorizing the pairing channel $\mathcal{P}^{\Lambda_s}$ alone, we subtract from it the bare interaction $U$. In principle, $U$ can be assigned either to the pairing channel, to be factorized, or to the residual two fermion interaction, giving rise to the same results in the SSB phase. However, when in an actual calculation the vertices are computed on a finite frequency grid, it is more convenient to keep the residual two fermion interaction $\mathcal{Q}^{\Lambda_s}$ as small as possible, in order to reduce finite size effects in the matrix inversions needed to extract the reduced couplings in Eqs.~\eqref{eq: reduced C tilde},~\eqref{eq: reduced yukawa} and~\eqref{eq: reduced mass P tilde}, and in the calculation of $h_\pi$ in Eq.~\eqref{eq: h_pi}.
Furthermore, since it is always possible to rescale the bosonic propagators and Yukawa couplings by a constant such that the vertex constructed from them (Eq.~\eqref{eq: vertex separation}) is invariant, we impose the normalization condition $h^{\Lambda_s}(\nu\rightarrow\infty;q)=1$.
In formulas, we have
\begin{equation}
m^{\Lambda_s}(q)=\frac{1}{\mathcal{K}_{p,\ell=0}^{(1)\Lambda_s}(q)-U}=\frac{1}{\mathcal{P}^{\Lambda_s}_{\infty,\infty}(q)-U},
\end{equation}
and
\begin{equation}
\begin{split}
h^{\Lambda_s}(\nu;q)&=\frac{\mathcal{K}_{p,\ell=0}^{(2)\Lambda_s}(\nu;q)+\mathcal{K}_{p,\ell=0}^{(1)\Lambda_s}(q)-U}{\mathcal{K}_{p,\ell=0}^{(1)\Lambda_s}(q)-U}\\
&=\frac{\mathcal{P}^{\Lambda_s}_{\nu,\infty}(q)-U}{\mathcal{P}^{\Lambda_s}_{\infty,\infty}(q)-U}.
\end{split}
\end{equation}
The limits are numerically performed by evaluating the pairing channel at large values of the fermionic frequencies.
The extraction of the factorizable part from the pairing channel minus the bare interaction defines the rest function
\begin{equation}
\mathcal{R}^{\Lambda_s}_{\nu\nu'}(q)=\mathcal{P}^{\Lambda_s}_{\nu\nu'}(q)-U-\frac{h^{\Lambda_s}(\nu;q)h^{\Lambda_s}(\nu';q)}{m^{\Lambda_s}(q)},
\end{equation}
and the residual two fermion interaction $\mathcal{Q}$
\begin{equation}
\begin{split}
\mathcal{Q}^{\Lambda_s}_{\nu\nu'}(q)=&\left[V^{\Lambda_s}_{\nu\nu'}(q)-\mathcal{P}^{\Lambda_s}_{\nu\nu'}(q)+U\right]+\mathcal{R}^{\Lambda_s}_{\nu\nu'}(q)\\
=&V^{\Lambda_s}_{\nu\nu'}(q)-\frac{h^{\Lambda_s}(\nu;q)h^{\Lambda_s}(\nu';q)}{m^{\Lambda_s}(q)}.
\end{split}
\end{equation}
We are now in the position to extract the reduced couplings, $\widetilde{\mathcal{Q}}^{\Lambda_s}$, $\widetilde{h}^{\Lambda_s}$ and $\widetilde{m}^{\Lambda_s}$, defined in Eqs.~\eqref{eq: reduced C tilde},~\eqref{eq: reduced yukawa},~\eqref{eq: reduced mass P tilde}. This is achieved by numerically inverting the matrix (we drop the $q$-dependence from now on, assuming always $q=0$)
\begin{equation}
\delta_{\nu\nu'} + \mathcal{Q}^{\Lambda_s}_{\nu\nu'}\, \chi^{\Lambda_s}_{\nu'},
\end{equation}
with
\begin{equation}
\chi^{\Lambda_s}_{\nu} = T\int_{\mathbf{k}}G_0^{\Lambda_s}(k)G_0^{\Lambda_s}(-k),
\end{equation}
and
\begin{equation}
G_0^{\Lambda_s}(k) = \frac{1}{i\nu-\xi_\mathbf{k}+R^{\Lambda_s}(k)} =\frac{\nu^2}{\nu^2+\Lambda_s^2}\frac{1}{i\nu-\xi_\mathbf{k}}.
\end{equation}
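For reference, the following minimal Python sketch evaluates $\chi^{\Lambda_s}_\nu$ with the $\Omega$-regulator, using the parameters quoted in the text; the grid sizes are illustrative:
\begin{verbatim}
import numpy as np

# Parameters as in the text; grid sizes are illustrative.
t, T, Lam_s = 1.0, 0.1, 0.378
nk, n_nu = 128, 64
k = 2 * np.pi * np.arange(nk) / nk
kx, ky = np.meshgrid(k, k)
xi = -2 * t * (np.cos(kx) + np.cos(ky))
nu = (2 * np.arange(-n_nu, n_nu) + 1) * np.pi * T

# G0(k) G0(-k) = [nu^2/(nu^2+Lam^2)]^2 / (nu^2 + xi^2); then T int_k
reg2 = (nu**2 / (nu**2 + Lam_s**2))**2
chi = T * (reg2[:, None, None] / (nu[:, None, None]**2
                                  + xi[None]**2)).mean(axis=(1, 2))
print(chi[n_nu])  # value at the lowest positive Matsubara frequency
\end{verbatim}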
In Fig.~\ref{fig: vertices Lambda s} we show the results at the stopping scale for the pairing channel minus the bare interaction, the rest function, the residual two fermion interaction $\mathcal{Q}$ and the reduced one $\widetilde{\mathcal{Q}}$. One can see that in the present parameter region the pairing channel (minus $U$) is highly factorizable: although it takes very large values due to the vicinity of the instability, the rest function $\mathcal{R}$ remains very small, a sign that the pairing channel is well described by the exchange of a single boson. Furthermore, thanks to our choice of assigning the bare interaction to the factorized part, both $\mathcal{Q}$ and $\widetilde{\mathcal{Q}}$ exhibit frequency structures on top of a vanishing background, as seen in Fig.~\ref{fig: vertices Lambda s}.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{fig4_paper.png}
\caption{Couplings contributing to the total vertex at the stopping scale. \\
%
\textit{Upper left}: pairing channel minus the bare interaction. At the stopping scale this quantity acquires very large values due to the vicinity to the pairing instability. \\
%
\textit{Upper right}: rest function of the pairing channel minus the bare interaction. In the present regime the pairing channel is very well factorizable, giving rise to a small rest function.\\
%
\textit{Lower left}: residual two fermion interaction. The choice of factorizing $\mathcal{P}^{\Lambda_s}-U$ instead of $\mathcal{P}^{\Lambda_s}$ alone makes the background of this quantity zero.\\
%
\textit{Lower right}: reduced residual two fermion interaction. Like the full one, this coupling has a zero background value, making the calculation of couplings in the SSB phase more precise by reducing finite size effects in the matrix inversions.}
\label{fig: vertices Lambda s}
\end{figure}
Finally, the full bosonic mass at the stopping scale is close to zero, $m^{\Lambda_s}\simeq10^{-3}$, due to the vicinity of the instability, while the reduced one is finite, $\widetilde{m}^{\Lambda_s}\simeq 0.237$.
\subsection{SSB Phase}
In the SSB phase, instead of running the fRG flow, we employ the analytical integration of the flow equations described in Sec.~\ref{sect: bosonic formalism}. On a practical level, we implement the self-consistent loop described in Appendix~\ref{app: loop}, which allows for the calculation of the bosonic expectation value $\alpha$, the transverse Yukawa coupling $h_\pi$ and subsequently the fermionic gap $\Delta$ through the Ward identity $\Delta=\alpha h_\pi$. In this section we drop the dependence on the scale, since we have reached the final scale $\Lambda_\text{fin}=0$. Note that, as discussed previously, in the half-filled attractive Hubbard model the superfluid phase sets in by breaking an SO(3) rather than a U(1) symmetry. This means that one should expect the appearance of two massless Goldstone modes. Indeed, besides the Goldstone boson present in the (transverse) particle-particle channel, another one appears in the particle-hole channel, related to the divergence of the charge channel at momentum $(\pi,\pi)$. However, within our choice of considering only superfluid order and within the MF approximation, this mode is decoupled from our equations.
With our previously discussed choice of bosonizing $\mathcal{P}^{\Lambda_s}-U$ instead of $\mathcal{P}^{\Lambda_s}$ alone, the self-consistent loop introduced in Appendix~\ref{app: loop} converges extremely fast: for example, 15 iterations are sufficient to reach a precision of $10^{-7}$ in $\alpha$.
Once convergence is reached and the gap $\Delta(\nu)$ obtained, we are in the position to evaluate the remaining couplings introduced in Sec.~\ref{sect: bosonic formalism} through their integrated flow equations. In Fig.~\ref{fig: gap} we show the computed frequency dependence of the gap. It interpolates between $\Delta_0=\Delta(\nu\rightarrow 0)$, its value at the Fermi level, and its asymptotic value, which equals the (sign-reversed) bare interaction times the \hgl{condensate fraction $\langle\psi_{\downarrow}\psi_{\uparrow}\rangle=$}$\int_\mathbf{k}\langle \psi_{-\mathbf{k},\downarrow}\psi_{\mathbf{k},\uparrow}\rangle$. $\Delta_0$ also represents the gap between the upper and lower Bogoliubov bands. Magnetic and charge fluctuations above the critical scale significantly renormalize the gap with respect to the Hartree-Fock calculation ($\widetilde{V}=-U$ in Eq.~\eqref{eq: gap equation fermionic}), which in the present case coincides with Bardeen-Cooper-Schrieffer (BCS) theory. \hgl{This effect is reminiscent of the Gor'kov-Melik-Barkhudarov correction in weakly coupled superconductors~\cite{Gorkov1961}. The computed frequency dependence of the gap compares qualitatively well with Ref.~\cite{Eberlein2013}, where a more sophisticated truncation of the flow equations has been carried out.}\\
Since $\Delta$ is a spin singlet superfluid gap, and we have chosen $\alpha$ to be real, it obeys
\begin{equation}
\Delta(\nu) = \Delta(-\nu) = \Delta^*(-\nu),
\end{equation}
where the first equality comes from the spin-singlet nature of the gap and the second from the time reversal symmetry of the effective action. Therefore, the imaginary part of the gap is always zero. By contrast, a magnetic gap would in general acquire a finite (and antisymmetric in frequency) imaginary part.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig5_paper.png}
\caption{Frequency dependence of the superfluid gap. It interpolates between its value at the Fermi level, $\Delta_0$, and its asymptotic one. The dashed line marks the BCS value, while the dotted one marks $-U$ times the Cooper pair expectation value.}
\label{fig: gap}
\end{figure}
In Fig.~\ref{fig: final vertices} we show the results for the residual two fermion interactions in the longitudinal and transverse channels, together with the total effective interaction in the longitudinal channel, defined as
\begin{equation}
V_{\mathcal{A},\nu\nu'}=\frac{h_\sigma(\nu)h_\sigma(\nu')}{m_\sigma}+\mathcal{A}_{\nu\nu'}.
\label{eq: VA SSB bosonic}
\end{equation}
The analogue of Eq.~\eqref{eq: VA SSB bosonic} for the transverse channel cannot be computed, because the transverse mass $m_\pi$ is zero, in agreement with the Goldstone theorem. The key result is that the residual interactions $\mathcal{A}_{\nu\nu'}$ and $\Phi_{\nu\nu'}$ inherit the frequency structures of $\widetilde{\mathcal{Q}}^{\Lambda_s}_{\nu\nu'}$ and $\mathcal{Q}^{\Lambda_s}_{\nu\nu'}$, respectively, and remain close to them in value (compare with Fig.~\ref{fig: vertices Lambda s}).
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{fig6_paper.png}
\caption{Effective interactions calculated in the SSB phase as functions of Matsubara frequencies.\\
\textit{Upper left}: longitudinal residual two fermion interaction $\mathcal{A}$.\\
\textit{Upper right}: transverse residual two fermion interaction $\Phi$.\\
\textit{Lower left}: longitudinal effective two fermion interaction $V_\mathcal{A}$.\\
\textit{Lower right}: longitudinal residual two fermion interaction $\mathcal{A}$ with its reduced counterpart $\widetilde{\mathcal{Q}}$ at the stopping scale subtracted (left), and transverse residual two fermion interaction $\Phi$ minus its equivalent, $\mathcal{Q}$, at $\Lambda_s$ (right). Both quantities exhibit very small values, showing that $\mathcal{A}$ and $\Phi$ do not deviate significantly from $\widetilde{\mathcal{Q}}$ and $\mathcal{Q}$, respectively.}
\label{fig: final vertices}
\end{figure}
The same occurs for the Yukawa couplings, as shown in Fig.~\ref{fig: hs}. Indeed, the calculated transverse coupling $h_\pi$ does not differ at all from the Yukawa coupling at the stopping scale, $h^{\Lambda_s}$. In other words, if instead of solving the self-consistent equations one runs a flow in the SSB phase, the transverse Yukawa coupling stays the same from $\Lambda_s$ to $\Lambda_\text{fin}$. Furthermore, the longitudinal coupling $h_\sigma$ develops a frequency dependence which does not differ significantly from that of $\widetilde{h}^{\Lambda_s}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig7_paper.png}
\caption{Frequency dependence of Yukawa couplings both at the stopping scale $\Lambda_s$ and in the SSB phase. While $h_\pi$ coincides with $h^{\Lambda_s}$, the longitudinal coupling $h_\sigma$ does not differ significantly from the reduced one at the stopping scale, $\widetilde{h}^{\Lambda_s}$. The continuous lines for $h^{\Lambda_s}$ and $\widetilde{h}^{\Lambda_s}$ are an interpolation through the data calculated on the Matsubara frequencies.}
\label{fig: hs}
\end{figure}
This feature, at least for our choice of parameters, can lead to some simplifications of the flow equations of Sec.~\ref{sect: bosonic formalism}. Indeed, when running a fRG flow in the SSB phase, one might let only the bosonic inverse propagators flow, keeping the Yukawa couplings and residual interactions fixed at their stopping-scale values (reduced or not, depending on the channel). This can be crucial to reduce the computational cost when including bosonic fluctuations of the order parameter, which, similarly, do not significantly renormalize the Yukawa couplings in the SSB phase~\cite{Obert2013,Bartosch2009}.
\hgl{\subsection{Gap and condensate fraction dependence on the coupling}
In this section, we analyze the dependence of the zero-frequency gap $\Delta_0$ on the coupling $U$. In order to obtain a zero temperature estimate, we perform a finite temperature calculation and check that the condition $\Delta_0>T$ is fulfilled. Indeed, when this is the case, the superfluid gap is not expected to change significantly upon further lowering the temperature, at least within a MF-like calculation.
In Fig.~\ref{fig: D vs U}, we show the zero-frequency extrapolation of the superfluid gap and the bosonic expectation value $\alpha$, compared with the BCS (mean-field) result. The inclusion of magnetic and charge correlations above the stopping scale $\Lambda_s$ renormalizes $\Delta_0$ with respect to the BCS result. In particular, as proven by second order perturbation theory in Ref.~\cite{Gorkov1961}, even in the $U\rightarrow 0$ limit the ratio between the ground state gap and its BCS value is expected to be smaller than 1 due to particle-hole fluctuations. By contrast, $\alpha$ does not deviate significantly from the mean-field result, as this quantity is not particularly influenced by magnetic and charge fluctuations, but rather by fluctuations of the order parameter, which, in particular at strong coupling, can significantly reduce it~\cite{Bartosch2009}. In the present approach, we include the effect of particle-hole fluctuations and we treat the frequency dependence of the gap, neither of which is addressed in Refs.~\cite{Bartosch2009,Diehl2007} (which focus on the BEC-BCS crossover in the three-dimensional continuum) and~\cite{Obert2013} (two-dimensional lattice model). In these works, however, fluctuations of the order parameter, which are not included in our method, are taken into account. In Ref.~\cite{Eberlein2013}, both particle-hole and order parameter fluctuations, together with the frequency dependence of the gap, are treated in a rather involved fRG approach to the 2D attractive Hubbard model, in which, however, the Goldstone theorem and the Ward identity turn out to be violated to some extent. We believe our approach represents a convenient starting point on top of which fluctuations can be included in a systematic manner so as to fulfill the above-mentioned fundamental constraints.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig8_paper.png}
\caption{\hgl{Low temperature estimate of the ground state superfluid gap $\Delta_0$ and of the bosonic expectation value $\alpha$ as functions of the coupling $U$. For $|U|>2t$, the calculations have been performed at $T=0.1t$, while for $t\leq |U|\leq 2t$ we have chosen $T=0.01t$. Results for $|U|<t$ are not shown because the temperature at which $\Delta_0>T$ is fulfilled is hardly reachable by our numerics. The dashed line shows the zero temperature BCS result (for which $\Delta_0=\alpha$).}}
\label{fig: D vs U}
\end{figure}
Furthermore, it is interesting to consider the coupling dependence of the condensate fraction $\langle\psi_{\downarrow}\psi_{\uparrow}\rangle$. Within mean-field theory, it evolves from an exponentially small value at weak coupling to $\frac{1}{2}$ at strong coupling, indicating that all the fermions are bound in bosonic pairs which condense. This is an aspect of the well-known paradigm of the BEC-BCS crossover~\cite{Eagles1969,Leggett1980,Nozieres1985}. At half filling, by including quantum fluctuations in the strong coupling regime, the condensate fraction is known to be reduced to a value of 0.3, as obtained from spin wave theory for the Heisenberg model~\cite{Anderson1952}, onto which the particle-hole symmetric attractive Hubbard model can be mapped at large $U$.
Within our approach, the condensate fraction is given by
\begin{equation}
\langle\psi_{\downarrow}\psi_{\uparrow}\rangle=-\lim_{\nu\rightarrow\infty}\frac{\Delta(\nu)}{U}=-\frac{\alpha}{U},
\end{equation}
where in the last equality we have used the Ward identity~\eqref{eq: Ward Identity} and the fact that $h_\pi\rightarrow1$ for $\nu\rightarrow\infty$. In Fig.~\ref{fig: QMC comparison} we show the computed condensate fraction and compare it with the BCS result and with auxiliary-field quantum Monte Carlo (AFQMC) data taken from Ref.~\cite{Karakuzu2018}. At weak coupling ($U=-2t$) we find good agreement with AFQMC. This is because in this regime the order parameter fluctuations, which we do not treat, are weaker, and the stopping scale $\Lambda_s$ is small, so that particle-hole fluctuations are better captured. At moderate couplings ($U=-3t$, $-4t$) the deviation from the Monte Carlo data increases due to the increasing strength of fluctuations, the larger stopping scales and the reduced accuracy of the 1-loop truncation performed in the symmetric phase.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig9_paper.png}
\caption{\hgl{Condensate fraction $\langle\psi_{\downarrow}\psi_{\uparrow}\rangle$ \emph{vs} coupling $U$. We indicate the present approach as fRG+MF, and we compare it with BCS theory and AFQMC data, taken from Ref.~\cite{Karakuzu2018}.}}
\label{fig: QMC comparison}
\end{figure}
\section{Conclusion}
\label{sec: conclusion}
We have introduced a truncation of the fRG flow equations which, upon the introduction of a Hubbard-Stratonovich boson, has been proven equivalent to the MF equations obtained in Refs.~\cite{Wang2014,Yamase2016}. These flow equations satisfy fundamental requirements such as the Goldstone theorem and the Ward identities associated with global symmetries, and they can be integrated analytically, reducing the calculation of correlation functions in the SSB phase to a pair of self-consistent equations for the bosonic expectation value $\alpha$ and the transverse Yukawa coupling $h_\pi$. A necessary step for the Hubbard-Stratonovich transformation, on which our method relies, is to extract a factorizable dependence on the fermionic variables $k$ and $k'$ from the vertex at the critical scale. We have suggested a strategy to accomplish this for a vertex whose dependence on the spatial momenta $\mathbf{k}$ and $\mathbf{k}'$ is treated by a form-factor expansion, making use of the vertex asymptotics introduced in Ref.~\cite{Wentzell2016}.
Furthermore, we have tested the feasibility and efficiency of our method on a prototypical model, namely the half-filled attractive Hubbard model in two dimensions, focusing on the frequency dependence of the two-fermion interactions, Yukawa couplings, and fermionic gap. We have found good convergence of the proposed iterative scheme. The remaining couplings introduced in our method have been computed, after loop convergence, from their integrated flow equations. \hgl{Moreover, we have analyzed the dependence of the gap and of the condensate fraction on the coupling $U$, comparing our method with previous fRG works and with quantum Monte Carlo data.}
Our method leaves room for applications and extensions.
First, one can directly apply the MF method, as formulated in this paper, to access the SSB phase in calculations for which the dependencies on fermionic momenta and/or frequencies cannot be neglected. Examples are fRG calculations with a full treatment of fermionic frequencies within a 1-loop truncation~\cite{Vilardi2017}, the recent implementations of multiloop fRG~\cite{Tagliavini2019,Hille2020}, and the DMF\textsuperscript{2}RG scheme~\cite{Vilardi2019}. These combinations can be applied to two- or three-dimensional systems. In the former case, even though order-parameter fluctuations are expected to play a decisive role in 2D, our method can be useful to obtain a first, albeit approximate, picture of the phase diagram. Of particular relevance is the 2D repulsive Hubbard model, used in the context of high-$T_c$ superconductors. An interesting system for the latter case, where bosonic fluctuations are expected to be less relevant, is the 3D attractive Hubbard model, which, thanks to modern techniques, can be realized experimentally in cold-atom setups.
Second, our method constitutes a convenient starting point for the inclusion of bosonic fluctuations of the order parameter, as done for example in Refs.~\cite{Friederich2011,Obert2013}, with the full dependence of the gap, Yukawa couplings, and vertices on the fermionic momenta and/or frequencies kept. In particular, by providing the Hubbard-Stratonovich boson with its own regulator, our MF truncation of the flow equations can be extended to include order-parameter fluctuations, which in two spatial dimensions and at finite temperature restore the symmetric phase, in agreement with the Mermin-Wagner theorem. One may also adapt the bosonic field at every fRG step through \textit{flowing bosonization}~\cite{Friederich2010,Friederich2011}. This can be done while keeping the full frequency dependence of the vertex and the Yukawa coupling, by applying the strategy discussed in Sec.~\ref{sec: vertex bosonization} to the flow equation for the vertex.
Finally, our MF method does not necessarily require a vertex coming from an fRG flow. In particular, one can employ the DMFT vertex, extract the pairing channel~\cite{Rohringer2012} (or any other channel in which symmetry breaking occurs) from it, and apply the strategy described in this paper to extract the Yukawa and other couplings. This application can be useful for computing transport quantities~\cite{Bonetti2020} and response functions in the SSB phase which, within DMFT, require a calculation of vertex corrections~\cite{Georges1996}. The anomalous vertices can also be computed at finite $q$ through a simple generalization of our formulas.
\section*{Acknowledgments}
I am grateful to W. Metzner and D. Vilardi for stimulating discussions and a critical reading of the manuscript.
\section{Trotter Error}\label{app:trot-error}
Given the Lie-Trotter formula for our parameterised operator:
\begin{equation}
U_{\rho}(\vec{t}) = (\prod_{j}e^{\frac{t_j}{\rho}(\tau_j-\tau_j^{\dagger})})^{\rho}
\end{equation}
The Trotter error bound for this expansion, $\delta_{\rho}$, is
described in Low et al. \cite{low2019wellconditioned}:
\begin{equation}
\|\delta_{\rho}\| = \mathcal{O} \Big( ( \frac{1}{\rho}\sum_j \|t_j(\tau_j-\tau_j^{\dagger}) \| )^2 \Big)
\end{equation}
Given a well-chosen reference state with large overlap with the
exact wavefunction, the ansatz will be parameterised by small
amplitudes, i.e. $t_j \ll 1$ for all $j$ \cite{Romero_2018}.
Therefore, the $\|t_j(\tau_j-\tau_j^{\dagger})\|$ terms will be small,
particularly when compared to current two-qubit gate errors of the
order of $0.1\%$ or greater \cite{Wright:2019aa, Arute:2019aa}.
Conveniently, Gui et al. give evidence that term grouping by mutual
commutation, such as we describe in this paper, reduces Trotter
error compared to other grouping methods \cite{gui2020term}. For a
rigorous theory of Trotter error, see Childs et al.
\cite{childs2019theory}.
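As a concrete illustration, the quantity inside the big-$\mathcal{O}$ can be estimated with the triangle inequality $\|t_j(\tau_j-\tau_j^{\dagger})\| \leq |t_j|\sum_k |a_{jk}|$, since each Pauli string has unit operator norm (here we allow one coefficient per string). A minimal Python sketch, with placeholder amplitudes rather than values from any molecule:
\begin{verbatim}
def trotter_bound(amplitudes, coeffs, steps=1):
    # ||delta_rho|| = O((1/rho * sum_j ||t_j (tau_j - tau_j^dag)||)^2),
    # bounded via ||tau_j - tau_j^dag|| <= sum_k |a_jk|.
    total = sum(abs(t) * sum(abs(a) for a in c)
                for t, c in zip(amplitudes, coeffs))
    return (total / steps) ** 2  # up to the constant hidden in the big-O

# Placeholder data: two excitations expanded into 2 and 8 Pauli strings.
print(trotter_bound([0.02, -0.01], [[0.5] * 2, [0.125] * 8]))
\end{verbatim}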
\section{Phase Gadgets}\label{app:phase-gadgets}
\begin{theorem}
We have the following laws for decomposition, commutation, and
fusion of phase gadgets \cite{Cowtan:2019aa}.
\begin{align*}
{\inltf{PhaseGadgetCNOT-lhs}} & \quad = \quad
{\inltf{PhaseGadgetCNOT}} \\ \\
{\tikzfig{PhaseGadgetCommute0}} & \quad = \quad
{\tikzfig{PhaseGadgetCommute1}} \\ \\
{\tikzfig{PhaseGadgetFusion0}} & \quad = \quad
{\tikzfig{PhaseGadgetFusion1}}
\end{align*}
\end{theorem}
Phase gadgets are invariant under qubit permutation. For an $n$-qubit
phase gadget, this gives a choice of $C_{n-1}n!$ different \ensuremath{\textsc{cx}}\xspace
arrangements, where $C_n$ is the $n$-th Catalan number.
Figure~\ref{fig:ladder_vs_tree} shows example arrangements.
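For reference, this count can be tabulated directly from the closed form for Catalan numbers; a short sketch:
\begin{verbatim}
from math import comb, factorial

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def cx_arrangements(n):
    # Number of CX arrangements for an n-qubit phase gadget: C_{n-1} n!
    return catalan(n - 1) * factorial(n)

for n in range(2, 7):
    print(n, cx_arrangements(n))   # 2, 12, 120, 1680, 30240
\end{verbatim}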
\begin{figure}[th!]
\begin{equation*}
{\inltf{PhaseGadgetLadder}} = {\inltf{PhaseGadgetTree}}
\end{equation*}
\caption{Comparing worst-case and best-case patterns for constructing
phase gadgets with respect to \ensuremath{\textsc{cx}}\xspace depth. The left shows a \ensuremath{\textsc{cx}}\xspace ladder,
with a linear \ensuremath{\textsc{cx}}\xspace depth, and the right shows the balanced-tree form,
with a logarithmic \ensuremath{\textsc{cx}}\xspace depth.}
\label{fig:ladder_vs_tree}
\end{figure}
\section{Clifford Gates and Pauli Gadgets}\label{app:cliff-commutation}
\begin{theorem}
\label{fig:exponential_clifford_rules}
We have the following laws for commuting single-qubit Clifford gates
through Pauli gadgets \cite{Cowtan:2019aa}.
\begin{figure}[ht!]
\begin{subfigure}[b]{0.5\textwidth}
\begin{equation*}
{\tikzfig{PauliExpZZa}} = {\tikzfig{PauliExpZZb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpZXa}} = {\tikzfig{PauliExpZXb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpZYa}} = {\tikzfig{PauliExpZYb}}
\end{equation*}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\begin{equation*}
{\tikzfig{PauliExpXZa}} = {\tikzfig{PauliExpXZb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpXXa}} = {\tikzfig{PauliExpXXb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpXYa}} = {\tikzfig{PauliExpXYb}}
\end{equation*}
\end{subfigure}
\begin{equation*}
{\tikzfig{PauliExpHZa}} = {\tikzfig{PauliExpHZb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpHXa}} = {\tikzfig{PauliExpHXb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpHYa}} = {\tikzfig{PauliExpHYb}}
\end{equation*}
\begin{subfigure}[b]{0.5\textwidth}
\begin{equation*}
{\tikzfig{PauliExpSZa}} = {\tikzfig{PauliExpSZb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpSXa}} = {\tikzfig{PauliExpSXb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpSYa}} = {\tikzfig{PauliExpSYb}}
\end{equation*}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\begin{equation*}
{\tikzfig{PauliExpVZa}} = {\tikzfig{PauliExpVZb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpVXa}} = {\tikzfig{PauliExpVXb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpVYa}} = {\tikzfig{PauliExpVYb}}
\end{equation*}
\end{subfigure}
\end{figure}
\end{theorem}
\begin{theorem}
\label{fig:exponential_cx_rules}
We have the following laws for commuting \ensuremath{\textsc{cx}}\xspace gates through Pauli
gadgets \cite{Cowtan:2019aa}.
\begin{figure}[ht!]
\begin{subfigure}[b]{0.5\textwidth}
\begin{equation*}
{\tikzfig{PauliExpCXZra}} = {\tikzfig{PauliExpCXZrb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpCXXra}} = {\tikzfig{PauliExpCXXrb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpCXYra}} = {\tikzfig{PauliExpCXYrb}}
\end{equation*}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\begin{equation*}
{\tikzfig{PauliExpCX_ZIa}} = {\tikzfig{PauliExpCX_ZIb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpCX_XIa}} = {\tikzfig{PauliExpCX_XIb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpCX_YIa}} = {\tikzfig{PauliExpCX_YIb}}
\end{equation*}
\end{subfigure}
\par\bigskip
\begin{equation*}
{\tikzfig{PauliExpCX_ZXa}} = {\tikzfig{PauliExpCX_ZXb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpCX_XZa}} = {\tikzfig{PauliExpCX_XZb}}
\end{equation*}
\begin{equation*}
{\tikzfig{PauliExpCX_XYa}} = {\tikzfig{PauliExpCX_XYb}}
\end{equation*}
\end{figure}
\end{theorem}
\section{Proof of Corollary 5.4}
\label{app:corollary54proof}
All commuting sets of $m$ Pauli gadgets over $n$ qubits are
diagonalisable using Theorem~\ref{thm:pauli-chain} if all sets of $m$
pairs of Paulis are either compatible with
Theorem~\ref{thm:pauli-chain} or already contain a diagonal qubit. We
prove by enumerating over these sets of pairs that this compatibility
is satisfied for the case $m = 3$, and therefore for $m < 4$.
Compatibility is not satisfied for $m = 4$, and therefore any $m > 3$.
A short script to verify this can be found at
\url{https://github.com/CQCL/tket_benchmarking/blob/master/compilation_strategy/corollaries/corollary54.py}.
\section{Proof of Corollary 5.5}
\label{app:corollary55proof}
Enumerating over all commuting sets of gadgets over 4 qubits and
finding at least one pair of compatible qubits for each commuting set
is sufficient proof, as each commuting set of gadgets over fewer than
4 qubits is just a special case of a 4-qubit set. However, each unique
commuting set is defined by a Clifford circuit. There are more than
$4.7 \times 10^{10}$ Clifford circuits over 4 qubits, ignoring global
phase \cite{PhysRevA.88.052307}. As an optimisation, we instead search
over all the \textit{generators} of each commuting set of gadgets.
Each commuting set over 4 qubits can be generated by taking products
from a commuting set of 4 Pauli strings. It is therefore sufficient to
find at least one pair of compatible qubits for each commuting set of
4 Pauli strings.
A short script to verify this can be found at
\url{https://github.com/CQCL/tket_benchmarking/blob/master/compilation_strategy/corollaries/corollary55.py}.
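Both enumerations ultimately rest on the elementary commutation test for Pauli strings: two strings commute if and only if they anti-commute on an even number of qubits. A self-contained sketch of this primitive (not taken from the linked scripts):
\begin{verbatim}
def strings_commute(p, q):
    # Two Pauli letters anti-commute iff both are non-identity and
    # differ; the strings commute iff the count of such positions is even.
    anti = sum(1 for a, b in zip(p, q)
               if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

assert strings_commute("IIXY", "IIYX")      # two anti-commuting positions
assert not strings_commute("XYII", "ZYII")  # one anti-commuting position
\end{verbatim}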
\section{Operator generation}
\label{app:qboperator}
For the Jordan-Wigner and Bravyi-Kitaev encodings, \ensuremath{\mathsf{t}|\mathsf{ket}\rangle}\xspace
\texttt{QubitPauliOperator} objects were produced using EUMEN, a
software platform for quantum chemistry on quantum computers that is
currently under development.
molecules' spin orbitals, after which they are converted into
\texttt{QubitPauliOperator} objects, which represent the $U(\vec{t})$
operators from Equation~\ref{eq:sum-expand}. These objects contain a
python dictionary from Pauli string to symbolic expression
representing $t^{'}_j$. The coefficients $a_j$ are dependent on the
molecular geometry, and are unimportant to the compilation strategy.
The qubit operators for the parity encoding were obtained from Qiskit
\cite{Qiskit}, and converted to \ensuremath{\mathsf{t}|\mathsf{ket}\rangle}\xspace native
\texttt{QubitPauliOperator} objects.
All \texttt{QubitPauliOperator} objects are serialised and stored at
\url{https://github.com/CQCL/tket_benchmarking/tree/master/compilation_strategy/operators}.
For the templated lexicographical operator sequence (TLOS) method, the
circuits are generated from the excitation operators, rather than a
dictionary of Pauli strings to expressions, and therefore bypass the
$U(\vec{t})$ operators stage of Equation~\ref{eq:sum-expand} entirely.
Rather than serialising the corresponding operators, the relevant TLOS
circuits are stored in the OpenQASM format
\cite{Cross2017Open-Quantum-As} at
\url{https://github.com/CQCL/tket_benchmarking/tree/master/compilation_strategy/TLOS_qasm_files}.
We do not include the operations required to generate a chosen
reference state $\ket{\Phi_0}$, as they are irrelevant to the
strategy.
\section{Introduction}
\label{sec:intro}
Many computational problems in quantum chemistry are classically
intractable for systems which are large and strongly correlated
\cite{Szalay:2012aa}. Instead, quantum algorithms have been proposed
\cite{Kassal:2011aa} to simulate and calculate chemical properties of
such systems. These algorithms leverage useful features of quantum
mechanics to perform calculations which would either take too long or
yield results too inaccurate using the best known classical
algorithms.
However, the resources required by such algorithms tend to be too
large for current quantum computers \cite{Troyer:2015:trotterstepsize}, which are
limited in the number of qubits and the available circuit depth before
decoherence and gate errors overwhelm the system and extracting a
correct result from the noise becomes infeasible. These machines are
known as Noisy Intermediate Scale Quantum (NISQ) devices
\cite{Preskill2018quantumcomputingin}.
A standard approach to reduce the resource requirements enough
to run algorithms successfully on NISQ devices is to only run the
quantum circuit as a subroutine in a larger, classical algorithm
\cite{1367-2630-18-2-023023}. In this model, the quantum circuit
prepares a parameterised state and measures the expectation value of a
relevant operator. The classical routine then performs an optimisation
algorithm, using the expectation value as an objective function, and
attempts to minimise this value with respect to the circuit's
parameters.
The Variational Quantum Eigensolver (VQE) is an archetypal hybrid
quantum-classical algorithm, designed for the estimation of ground
state energies of quantum systems on NISQ devices
\cite{PhysRevA.92.042303}. The expectation value of a molecular
Hamiltonian is the objective function, and VQE employs the variational
principle to approximate the ground state of this Hamiltonian using
the parameterised quantum circuit as an ansatz.
In this paper, we focus on the Unitary Coupled Cluster (UCC) ansatz
\cite{Romero_2018}, which is motivated by the orbital transitions
allowed by the simulated system. We present a compilation strategy for
reducing the major source of error affecting the algorithm: noise of
the quantum device. This compilation strategy increases circuit
fidelity by reducing circuit depth, hence minimising the number of
noisy gates and the qubits' exposure to decoherence.
For NISQ devices, two-qubit gates typically have error rates around an
order of magnitude higher than one-qubit gates, as well as taking
2--5 times as long \cite{Wright:2019aa, Arute:2019aa}. Defining the two-qubit
gate depth as the number of two-qubit parallel layers required to
complete the circuit, we aim to minimise this metric specifically with
our compilation strategy, along with two-qubit gate count. We
approximate the hardware-native two-qubit gate metrics with the
corresponding \ensuremath{\textsc{cx}}\xspace metrics, assuming that each two-qubit gate must be a
\ensuremath{\textsc{cx}}\xspace, noting that in certain scenarios this overstates the number of
required gates. Two-qubit gates which are not maximally entangling,
particularly tunable ones, can reduce the number of gates required for
certain algorithms compared to using \ensuremath{\textsc{cx}}\xspace gates \cite{arute2020quantum,
Nam2019groundstate}.
We begin by partitioning the terms in the UCC ansatz into mutually
commuting sets. We describe a well-known equivalence between this
sequencing problem and graph colouring. We then show that approximate
solutions to this problem enable large-scale synthesis of Pauli
exponentials into one- and two-qubit gates, and propose heuristics for
performing this synthesis to generate low depth circuits.
Our compilation strategy is valid for any ansatz which is generated by
Trotterization of an operator made up of a sum of Pauli tensor
products: this means that it is valid for $k$-UpCCGSD and other
variations on the UCC ansatz, such as UCCGSD \cite{Lee:2019aa} and the
parity-disregarding particle-exchange UCC \cite{xia2020coupled}. It is
also valid for fault-tolerant product formula algorithms for
Hamiltonian simulation \cite{Berry_2006}, although the benefits in
this setting are less clear. Our strategy is not intended for the
hardware efficient ansatz \cite{Kandala:2017aa} or iterative qubit
coupled-cluster ans{\"a}tze \cite{Ryabinkin:2020aa}. The strategy is
generic, and requires no prior knowledge of the qubit encoding, target
operator or basis set beyond the validity condition above.
We implemented the strategy in \ensuremath{\mathsf{t}|\mathsf{ket}\rangle}\xspace \cite{TKETPAPERHERE},
our retargetable compiler, and present benchmarks for a variety of UCC
circuits for realistic molecules to demonstrate empirically that the
strategy significantly reduces the \ensuremath{\textsc{cx}}\xspace gate count and depth compared
to previous strategies.
\paragraph{Related work:}A similar strategy for optimizing Hamiltonian
simulation circuits was recently presented by van den Berg \& Temme
\cite{Berg:2020aa}. This strategy uses different diagonalisation
methods. In addition, the strategy is intended for fault-tolerant
circuits for Hamiltonian simulation, and the two-qubit gate reduction
is obtained by targeting an ancilla qubit with every \ensuremath{\textsc{cx}}\xspace from each
diagonal set, as previously described in Hastings et al.
\cite{Troyer:2014:improvedchemistry}, which is impractical for some
NISQ devices. However, a thorough comparison of strategies for Pauli
partitioning and diagonalisation is presented, which can be applied in
the NISQ setting.
\paragraph{Notation:}In order to reason about and represent the
synthesis of Pauli exponentials, we use notation inspired by the
\textsc{zx}-calculus\xspace \cite{Coecke:2009aa}, although our strategy can be
followed without requiring any knowledge of the inference rules of the
calculus. A brief introduction to the \textsc{zx}-calculus\xspace is found in Fagan \&
Duncan \cite{EPTCS287.5}; for a complete treatment see Coecke \&
Kissinger \cite{Coecke2017Picturing-Quant}.
\paragraph{Terminology:}We refer to an $n$-qubit operator of the form
$\{I,X,Y,Z\}^{\otimes n}$ as a \emph{Pauli string}, composed of
\emph{letters} from the alphabet $\{I,X,Y,Z\}$. The \textit{weight} of
a Pauli string is the number of non-$I$ letters.
\section{The Unitary Coupled Cluster Ansatz}
\label{sec:ucc_ansatz}
The UCC ansatz is defined by the excitation of some reference state by
an operator parameterised with coupled cluster amplitudes $\vec{t}$:
\begin{equation}
\ket{\Psi (\vec{t})} = U(\vec{t})\ket{\Phi_0} = e^{T(\vec{t})-T^{\dagger}(\vec{t})}\ket{\Phi_0}
\end{equation}
where operator $T$ is a linear combination of fermionic excitation operators
$\vec{\tau}$ such that the parameterised operator can be rewritten:
\begin{equation}\label{eq:excite-sum}
U(\vec{t}) = e^{\sum_j t_j (\tau_j - \tau^{\dagger}_{j})}
\end{equation}
This parameterised operator cannot be directly implemented on
a gate-based quantum computer. It must be mapped to qubits and
decomposed into native gates.
In order to generate a quantum circuit, we employ Trotterization,
justified by Lloyd \cite{Lloyd:1996aa}. Here we show the first order
Lie-Trotter expansion:
\begin{equation}\label{eq:trotter-expr}
U(\vec{t}) \approx U_{Trott}(\vec{t}) = (\prod_{j}e^{\frac{t_j}{\rho}(\tau_j-\tau_j^{\dagger})})^{\rho}
\end{equation}
where $\rho$ is the number of Trotter steps. Since our focus is on the
NISQ setting, we will assume that only one Trotter step is taken. It
is straightforward to extend the presented techniques to arbitrary
step size.
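To make the effect of the step count concrete, the following numerical sketch compares the exact exponential of a sum of two non-commuting terms with its first-order Trotter approximation; the generators and amplitudes are illustrative toys, not excitation operators of any molecule.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
A, B = 0.05j * X, 0.03j * Z   # anti-Hermitian, like t_j (tau_j - tau_j^dag)

exact = expm(A + B)
for rho in (1, 2, 4, 8):
    step = expm(A / rho) @ expm(B / rho)
    err = np.linalg.norm(np.linalg.matrix_power(step, rho) - exact, 2)
    print(rho, err)   # the error shrinks roughly as 1/rho
\end{verbatim}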
To implement the Trotterized expression shown in
Equation~\ref{eq:trotter-expr} on a quantum computer, we map the
$\tau_j$ in the product to operations acting on qubits. This can be
performed using a variety of qubit encodings, such as Bravyi-Kitaev
(BK), Jordan-Wigner (JW) and parity (P) \cite{Steudtner_2018}. These
encodings have different resource requirements and the qubits
represent different physical properties, but regardless of our choice
we obtain:
\begin{equation}\label{eq:excite-expand}
(\tau_j - \tau_j^{\dagger}) = ia_j\sum_k P_{jk}
\end{equation}
where $a_{j} \in \mathbb{R}$ and $P_{jk} \in \{I,X,Y,Z\}^{\otimes n}$.
It can be shown that the Pauli strings $P_{jk}$ from a given
excitation operator $\tau_j$ always commute under multiplication
\cite{Romero_2018}. This gives a simpler expression for the
Trotterized operator,
\begin{equation}\label{eq:prod-formula}
U_{Trott}(\vec{t}) = \prod_j \prod_k e^{it_ja_jP_{jk}}
\end{equation}
where $e^{it_j a_{j}P_{jk}}$ terms are parameterised with some angle
$t_j$ which will be adjusted by the variational algorithm. We refer to
these terms as \emph{Pauli exponentials}, and relabel our coefficients
$t^{'}_j = t_j a_j$.
Pauli exponentials can be implemented on a quantum computer by
decomposition into one- and two-qubit native gates, discussed in
Section~\ref{sec:reppauli}. These gates are appended to a short,
constant depth circuit generating the reference state.
\section{Term Sequencing by Graph Colouring}
\label{sec:termseq}
Looking again at Equation~\ref{eq:excite-sum}, note that we can expand
the fermionic excitation operators at this stage into Pauli strings
using Equation \ref{eq:excite-expand}, i.e. our chosen qubit encoding:
\begin{equation}\label{eq:sum-expand}
U(\vec{t}) = e^{i\sum_j \sum_k t^{'}_j P_{jk}}
\end{equation}
Since addition is commutative we can freely choose the ordering of
$P_{jk}$ terms in this expression. After
Trotterization the Pauli exponentials do not in general commute, so it
is sensible at this stage to sequence the Pauli strings in a
beneficial order, such that our Trotterization incurs minimal Trotter
error and our circuit has low resource requirements. The implications
of the ordering of terms for chemical accuracy have been studied by H.
R. Grimsley et al. \cite{Grimsley_2019}. We justify in
Appendix~\ref{app:trot-error} that reducing Trotter error should be a
secondary concern for near-term VQE, and focus on minimising \ensuremath{\textsc{cx}}\xspace gate
count and depth.
Our strategy to reduce \ensuremath{\textsc{cx}}\xspace gate count and depth relies on partitioning
the set of Pauli exponentials into a small number of subsets, such
that within a given subset every Pauli exponential commutes.
\footnote{This problem is common in the literature on measurement
reduction \cite{jena2019pauli, crawford2019efficient,
zhao2019measurement, verteletskyi2019measurement}.} This partitioning
problem can be represented as the well-known graph colouring problem.
\begin{figure}
\centering
\tikzfig{Pauli_graph1}
\caption{Graph colouring to partition Pauli terms into sets of
mutually commuting strings. While the parameters are not shown, they
must be tracked for synthesis later.}
\label{fig:pauli_graph1}
\end{figure}
We represent each Pauli string as a vertex in an undirected graph. An
edge is given between any two vertices that correspond to Pauli
strings which anti-commute. Figure~\ref{fig:pauli_graph1} shows an
example of this graph representation.
Finding the minimum number of mutually commuting sets which cover all
vertices in this graph is then equivalent to the \textit{colouring
problem}, a well-known NP-hard problem \cite{Garey:1974aa}. In this
instance, the colour assigned to the vertex corresponds to the subset
the corresponding Pauli exponential is placed in, and since no two
adjacent vertices can have the same colour, all Pauli exponentials
within a subset will mutually commute.
We use a simple greedy colouring algorithm to partition the Pauli
strings. The complexity of this algorithm is $\mathcal{O}(m)$, with
$m$ the number of Pauli strings, although building the
anti-commutation Pauli graph in the first place scales as
$\mathcal{O}(m^2n)$, with $n$ the number of qubits.
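A minimal sketch of this greedy partitioning, applied to the strings of Figure~\ref{fig:pauli_graph1} (listed in the next paragraph); since greedy colouring depends on the insertion order, the output is a valid partition into mutually commuting sets, though not necessarily a minimum one:
\begin{verbatim}
def anticommute(p, q):
    return sum(a != b and 'I' not in (a, b)
               for a, b in zip(p, q)) % 2 == 1

def greedy_partition(strings):
    colour = {}
    for s in strings:
        used = {colour[t] for t in colour if anticommute(s, t)}
        colour[s] = next(c for c in range(len(strings)) if c not in used)
    sets = {}
    for s, c in colour.items():
        sets.setdefault(c, []).append(s)
    return list(sets.values())

strings = ["IIXY", "IIYX", "XYII", "YXII", "XXXY", "XXYX",
           "XYXX", "XYYY", "YXXX", "YXYY", "YYXY", "YYYX"]
for subset in greedy_partition(strings):
    print(subset)   # each printed subset is mutually commuting
\end{verbatim}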
Once the vertices have been assigned colours, the UCC reference state
$\ket{\Phi_0}$ is prepared and the corresponding Pauli exponential
terms are appended, colour by colour, in lexicographical order. For
example, given the graph colouring solution from
Figure~\ref{fig:pauli_graph1}, a valid ordering of strings is: IIXY,
IIYX, XYII, YXII, XXXY, XXYX, XYXX, XYYY, YXXX, YXYY, YYXY, YYYX.
Neither the order of the sets nor the order of terms within each set
is considered important for optimisation; lexicographical order was an
arbitrary choice.
\section{Pauli Exponentials}
\label{sec:reppauli}
A translation of relevant gates between the quantum circuit
and \textsc{zx}-calculus\xspace formalisms is given in Figure~\ref{fig:zx_gates}.
\begin{figure}[t]
\centering
\begin{tabular}{ccc}
\begin{minipage}{.3\textwidth}
\begin{equation*}
\inltf{GatesRZ} \simeq \inltf{ZXCalcRZ}
\end{equation*}
\end{minipage} &
\begin{minipage}{.3\textwidth}
\begin{equation*}
\inltf{GatesRX} \simeq \inltf{ZXCalcRX}
\end{equation*}
\end{minipage} &
\multirow{2}{*}{
\begin{minipage}{.3\textwidth}
\begin{equation*}
\inltf{GatesH} \simeq \inltf{ZXCalcH}
\end{equation*}
\end{minipage}
}
\\
\begin{minipage}{.25\textwidth}
\begin{equation*}
\inltf{GatesZ} \simeq \inltf{ZXCalcZ}
\end{equation*}
\end{minipage} &
\begin{minipage}{.25\textwidth}
\begin{equation*}
\inltf{GatesX} \simeq \inltf{ZXCalcX}
\end{equation*}
\end{minipage} & \\
\begin{minipage}{.25\textwidth}
\begin{equation*}
\inltf{GatesS} \simeq \inltf{ZXCalcS}
\end{equation*}
\end{minipage} &
\begin{minipage}{.25\textwidth}
\begin{equation*}
\inltf{GatesV} \simeq \inltf{ZXCalcV}
\end{equation*}
\end{minipage} &
\multirow{2}{*}{
\begin{minipage}{.25\textwidth}
\begin{equation*}
\inltf{GatesCX} \simeq \inltf{ZXCalcCX}
\end{equation*}
\end{minipage}
}
\\
\begin{minipage}{.25\textwidth}
\begin{equation*}
\inltf{GatesSdg} \simeq \inltf{ZXCalcSdg}
\end{equation*}
\end{minipage} &
\begin{minipage}{.25\textwidth}
\begin{equation*}
\inltf{GatesVdg} \simeq \inltf{ZXCalcVdg}
\end{equation*}
\end{minipage} &
\end{tabular}
\caption{Common circuit gates and their representations in the
scalar-free \textsc{zx}-calculus\xspace. The $S$ gate corresponds to
$\ensuremath{R_Z}\xspace(\frac{\pi}{2})$, and the $V$ gate to
$\ensuremath{R_X}\xspace(\frac{\pi}{2})$. }
\label{fig:zx_gates}
\end{figure}
Recall the notation of \textit{phase gadgets} $\Phi_n(\alpha)$,
equivalent to the operator $e^{-i\frac{\alpha}{2} Z^{\otimes n}}$.
These gadgets were described in Kissinger \& van de Wetering
\cite{Kissinger:2019aa}, and have a natural representation in the
\textsc{zx}-calculus\xspace.
\begin{definition}\label{def:phasegadget}
In \textsc{zx}-calculus\xspace notation we have:
\[
\Phi_n(\alpha) := \inltf{PhaseGadgetDef} = \inltf{PhaseGadgetDecomp}
\]
\end{definition}
The algebra for phase gadgets and alternate decompositions into one-
and two-qubit gates are given in Appendix~\ref{app:phase-gadgets}.
Note that $\Phi_1(\alpha) = \ensuremath{R_Z}\xspace(\alpha)$.
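The equivalence between a phase gadget and its \ensuremath{\textsc{cx}}\xspace-plus-$\ensuremath{R_Z}\xspace$ decomposition is easy to confirm numerically; a small sketch for $n=2$, with an arbitrary test angle:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1., -1.])
CX = np.array([[1., 0, 0, 0], [0, 1., 0, 0],
               [0, 0, 0, 1.], [0, 0, 1., 0]])

def rz(a):
    return np.diag([np.exp(-0.5j * a), np.exp(0.5j * a)])

alpha = 0.7                                   # arbitrary test angle
ladder = CX @ np.kron(I2, rz(alpha)) @ CX     # CX; Rz on the target; CX
target = expm(-0.5j * alpha * np.kron(Z, Z))  # e^{-i alpha/2 Z (x) Z}
print(np.allclose(ladder, target))            # True
\end{verbatim}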
The correspondence between phase gadgets and Pauli-$Z$ exponentials
generalises to any Pauli exponential $e^{-i\frac{\alpha}{2} P}$, by
conjugating the phase gadget with single-qubit Clifford gates. We
recall the \textit{Pauli gadget} diagrammatic notation for the Pauli
exponential from Cowtan et al. \cite{Cowtan:2019aa}.
\begin{definition} Pauli exponentials are represented succinctly as:
\[
e^{-i\frac{\alpha}{2} I X Y Z} = {\tikzfig{PhaseGadgetIXYZ}} = {\tikzfig{PauliExpDef}}
\]
\label{def:pauli_exp_gadget}
\end{definition}
The red, mixed-colour, and green boxes respectively represent the
Pauli gadget acting on a qubit in the $X$, $Y$, and $Z$ bases. These
are formed by a phase gadget on the qubits (generating $Z$-only
interactions), then conjugating the qubits with appropriate
single-qubit Cliffords.
Clifford gates may be commuted through Pauli gadgets, but may incur a
phase flip or basis change. The exhaustive set of diagrammatic rules
required to perform this procedure for relevant Clifford gates are
shown in Appendix~\ref{app:cliff-commutation}, although they are
simple to calculate using linear algebra.
The definitions above imply a naive method of circuit synthesis for
Pauli gadgets. For a set of $m$ Pauli gadgets over $n$ qubits, this
naive synthesis requires $\mathcal{O}(nm)$ \ensuremath{\textsc{cx}}\xspace gates. More precisely,
we require at most $2m(n-1)$ \ensuremath{\textsc{cx}}\xspace gates, attained when all the Pauli
strings have maximal weight. This gives the baseline performance
against which we will compare the method introduced in the next section.
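This baseline is easy to tabulate for a given set of strings, counting $2(w-1)$ \ensuremath{\textsc{cx}}\xspace gates for a gadget of weight $w$ (the sketch assumes every string has weight at least one):
\begin{verbatim}
def naive_cx_count(strings):
    # A weight-w Pauli gadget costs 2(w-1) CX gates (a CX ladder down
    # and back up), so m maximal-weight strings cost 2m(n-1).
    return sum(2 * (sum(c != 'I' for c in s) - 1) for s in strings)

print(naive_cx_count(["IIXY", "XXXY", "YYYX"]))   # 2 + 6 + 6 = 14
\end{verbatim}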
\section{Set-based Synthesis}
\label{sec:setsynth}
The effect of the transformations in Section~\ref{sec:termseq} is to
partition our ansatz into large commuting sets of Pauli gadgets. The
next step is to synthesise circuits for these sets, while minimising
the \ensuremath{\textsc{cx}}\xspace overhead.
The approach we propose here has two steps:
\begin{enumerate}
\item Diagonalisation: every Pauli gadget in a given commuting set is
simultaneously converted into a phase gadget by conjugating with an
appropriate Clifford circuit.
\item Phase gadget synthesis: the resulting phase gadgets are
converted into \ensuremath{\textsc{cx}}\xspace and $\ensuremath{R_Z}\xspace$ gates using the well-studied \textit{phase
polynomial} formalism \cite{Amy2014Polynomial-Time}.
\end{enumerate}
While diagonalisation incurs a gate overhead, in practice we find that
the gate reduction from synthesising using this technique more than
makes up for this overhead.
Figure~\ref{fig:complexities} summarises the relevant complexities of
the different subroutines in our strategy.
\begin{figure}[t]
\centering
\begin{tabular}{l|c|c}
& Time complexity & \ensuremath{\textsc{cx}}\xspace complexity \\ \hline
Graph Colouring & $\mathcal{O}(m^2n)$ & - \\
Diagonalisation & $\mathcal{O}(mn^3)$ & $\mathcal{O}(n^2)$ \\
GraySynth \cite{Amy_2018, AmyEmail:2020aa} & $\mathcal{O}(mn^3)
$ & $\mathcal{O}(mn)$ \\ \hline
\end{tabular}
\caption{Summary of subroutine complexities, where $m$ is the total
number of Pauli exponentials and $n$ is the number of qubits. Time
complexity refers to the compilation time, while \ensuremath{\textsc{cx}}\xspace complexity is
defined as the maximum number of \ensuremath{\textsc{cx}}\xspace gates required for circuit
synthesis. Graph colouring does not perform circuit synthesis so has no \ensuremath{\textsc{cx}}\xspace complexity.}
\label{fig:complexities}
\rule{\textwidth}{0.5pt}
\end{figure}
\subsection{Diagonalisation}
\label{sec:diag}
Phase gadgets -- that is, Pauli gadgets whose Pauli strings contain
only the letters $Z$ and $I$ -- define unitary maps which are diagonal
in the computational basis. For this reason, we'll call a \emph{set}
of Pauli gadgets diagonal when it contains only phase gadgets. Abusing
terminology slightly, given a set of Pauli gadgets, we call a
\emph{qubit} diagonal over that set when the Pauli letter for that
qubit is either $Z$ or $I$ for every Pauli string in the set.
Evidently, if every qubit is diagonal then the set as a whole is too.
A set $S$ of commuting Pauli gadgets can be simultaneously
diagonalised using only Clifford operations. Our goal is to find
a Clifford circuit $C$ and a diagonal set $S'$ such that
\begin{equation}\label{eq:basic-diag-relation}
S = CS'C^{\dagger}
\end{equation}
where $C$ is as small as possible. Several methods have been proposed
\cite{jena2019pauli,crawford2019efficient,Scott-Aaronson:2004yf,
Maslov2017Shorter-stabili} to compute a suitable polynomially-sized
circuit $C$.
Note that, since $[A,B] = 0 \iff [UAU^{\dagger},UBU^{\dagger}] = 0$
for unitaries $A$, $B$ and $U$, conjugating the gadgets preserves
commutation, so the required $C$ can be constructed by conjugation
with Cliffords. Below, we use this approach on \emph{compatible
pairs} of qubits, where one can qubit can be used to diagonalise the
other. In the worst case $C$ has $\mathcal{O}(n^2)$ \ensuremath{\textsc{cnot}}\xspace gates;
however in practice, on realistic examples of UCC ansatz circuits, we
find our method typically produces Clifford diagonalisers much smaller
than the asymptotic worst case.
\begin{remark}\label{rem:diag-complexity-related-works}
Jena et al.~\cite{jena2019pauli} presented an algorithm guaranteed to
give such a $C$ for qudits of any dimension, of size quadratic in
the number of qudits.
For $m$ Pauli gadgets, Crawford et al.~\cite{crawford2019efficient}
recently presented two efficient constructions of $C$ with a bound of
$mn-m(m+1)/2$ and $\mathcal{O}(mn/\log m)$ \ensuremath{\textsc{cx}}\xspace gates respectively,
when $m < n$.
When $m \geq n$, the construction
provided by Aaronson \& Gottesman requires $\mathcal{O}(n^2/\log n)$
\ensuremath{\textsc{cx}}\xspace gates \cite{Scott-Aaronson:2004yf, Maslov2017Shorter-stabili}.
\end{remark}
\subsubsection{Diagonalising a compatible pair}
\begin{figure}
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
\inltf{theorem_example1}
\end{equation*}
\caption{A compatible pair.}
\label{fig:theorem_example1}
\end{subfigure}
\vspace{6mm}
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
\inltf{theorem_example2}
\end{equation*}
\caption{Conjugation with appropriate single-qubit Cliffords.}
\label{fig:theorem_example2}
\end{subfigure}
\vspace{6mm}
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
\inltf{theorem_example3}
\end{equation*}
\caption{Conjugation with \ensuremath{\textsc{cx}}\xspace gates to diagonalise the second qubit.}
\label{fig:theorem_example3}
\end{subfigure}
\vspace{6mm}
\caption{Application of Theorem~\ref{thm:pauli-chain} to diagonalise a qubit.}
\label{fig:theorem_example}
\end{figure}
In the following, let $S$ be a set of $m$ commuting Pauli gadgets
acting on $n$ qubits, and let $\sigma_{kl}$ denote the Pauli letter on
qubit $k$ from gadget $l$.
\begin{definition}\label{def:compatible-pair}
Let $i,j \in \{1, \ldots, n\}$ with $i\neq j$. Qubits $i$ and $j$
are called \emph{compatible} in $S$ if the following relation holds:
\begin{equation}\label{eq:pauli-diag-condition}
\exists A, B \in \{X,Y,Z\} \:\: s.t. \:\: \forall l \in
\{1,...,m\}, \sigma_{il} \in \{I,A\} \iff \sigma_{jl} \in \{I,B\} \;\;;
\end{equation}
In this case $i$ and $j$ are called a \emph{compatible pair}.
\end{definition}
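Relation~(\ref{eq:pauli-diag-condition}) can be checked by direct search over the nine choices of $(A,B)$; a minimal sketch, with Pauli strings encoded as Python strings of letters:
\begin{verbatim}
def compatible(strings, i, j):
    # Search for Paulis A, B such that, for every gadget,
    # sigma_i in {I, A}  <=>  sigma_j in {I, B}.
    for A in 'XYZ':
        for B in 'XYZ':
            if all((s[i] in ('I', A)) == (s[j] in ('I', B))
                   for s in strings):
                return (A, B)
    return None

print(compatible(["ZY", "IY", "ZI", "II"], 0, 1))   # ('Z', 'Y')
\end{verbatim}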
\begin{theorem}\label{thm:pauli-chain}
If qubits $i$ and $j$ form a compatible pair in $S$ then one of them
can be diagonalised by conjugating $S$ with a single \ensuremath{\textsc{cnot}}\xspace and at most
two single-qubit Cliffords acting on qubits $i$ and $j$.
\begin{proof}
Without loss of generality, assume $i=1$ and $j=2$, and let $P_2$
be a 2-qubit Pauli string. Applying the transformation $ P_2
\mapsto \ensuremath{\textsc{cnot}}\xspace \circ P_2 \circ \ensuremath{\textsc{cnot}}\xspace $ (with the second qubit the
target) will diagonalise the second qubit when $P_2 \in \{II, IZ,
XX, XY, YX, YY, ZI, ZZ\}$. This set satisfies the property
$\sigma_{1} \in \{Z,I\} \iff \sigma_{2} \in \{Z,I\}$, and all
other sets of 2-qubit Paulis which satisfy this property are
subsets of this one. (Note that the control qubit will be diagonal
after conjugation iff it was diagonal before.) Conjugating the
first and/or second qubit by an additional single qubit Clifford
allows this relation to be generalised to any pair of Paulis as in
(\ref{eq:pauli-diag-condition}), giving the result.
\end{proof}
\end{theorem}
If $i$ and $j$ are a compatible pair, then the specific values of $A$
and $B$ in relation (\ref{eq:pauli-diag-condition}) determine which
single-qubit Cliffords are required before conjugation by \ensuremath{\textsc{cx}}\xspace gates
will diagonalise a qubit. An example is shown in
Figure~\ref{fig:theorem_example1}. The first two qubits are
compatible, with $A = Y$ and $B = Y$, which implies that $V$ and
$V^{\dagger}$ gates are required to prepare the second qubit for
diagonalisation as shown in Figure~\ref{fig:theorem_example2}. The
diagonalisation is completed by \ensuremath{\textsc{cnot}}\xspace conjugation as shown in
Figure~\ref{fig:theorem_example3}.
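The case analysis in the proof of Theorem~\ref{thm:pauli-chain} can also be verified numerically, by conjugating each two-qubit Pauli with a \ensuremath{\textsc{cnot}}\xspace and reading off the resulting string; a brute-force sketch:
\begin{verbatim}
import numpy as np

P = {'I': np.eye(2), 'X': np.array([[0., 1.], [1., 0.]]),
     'Y': np.array([[0., -1j], [1j, 0.]]), 'Z': np.diag([1., -1.])}
CX = np.array([[1., 0, 0, 0], [0, 1., 0, 0],
               [0, 0, 0, 1.], [0, 0, 1., 0]])

def conjugate(pair):
    # CX (P1 x P2) CX is again a Pauli string up to sign; find it.
    M = CX @ np.kron(P[pair[0]], P[pair[1]]) @ CX
    for a in 'IXYZ':
        for b in 'IXYZ':
            for phase in (1, -1):
                if np.allclose(M, phase * np.kron(P[a], P[b])):
                    return a + b

for s in ["II", "IZ", "XX", "XY", "YX", "YY", "ZI", "ZZ"]:
    print(s, "->", conjugate(s))   # the target letter is always I or Z
\end{verbatim}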
Applying Theorem~\ref{thm:pauli-chain} to compatible pairs of qubits
is the key subroutine in our diagonalisation algorithm, described in
the next section.
\begin{corollary}
For any commuting set of $m$ gadgets over $n$ qubits, if $m < 4$ a
Clifford circuit $C$ exists which diagonalises this set of gadgets
using at most $n-1$ \ensuremath{\textsc{cx}}\xspace gates.
\begin{proof}
See Appendix~\ref{app:corollary54proof}.
\end{proof}
\end{corollary}
\begin{corollary}
For any commuting set of gadgets over $n$ qubits, if $n < 5$ a
Clifford circuit $C$ exists which diagonalises this set of gadgets
using at most $n-1$ \ensuremath{\textsc{cx}}\xspace gates.
\begin{proof}
See Appendix~\ref{app:corollary55proof}.
\end{proof}
\end{corollary}
\subsubsection{Diagonalising a commuting set}
\begin{figure}
\begin{algorithmic}
\Function{GadgetDiag}{$S$}
\State $Q \gets $ Qubits($S$)
\State $C \gets $ EmptyCircuit(Q)
\While {$Q$ non-empty}
\State ($S$, $Q$, $C$) $\gets $ UpdateSingleQubits($S$, $Q$, $C$)
\If {$Q$ empty}
\State \textbf{break}
\EndIf
\State $Q^{'} \gets Q$
\State ($S$, $Q$, $C$) $\gets $ UpdatePairQubits($S$, $Q$, $C$)
\If {$Q = Q^{'}$}
\State ($S$, $Q$, $C$) $\gets $ GreedyDiagonalisation($S$, $Q$, $C$)
\EndIf
\EndWhile
\State \textbf{return} ($S$, $C$)
\EndFunction
\Function{UpdateSingleQubits}{$S$, $Q$, $C$}
\For {$q \in Q$}
\State $p$ $\gets $ FindCommonPauli($S$, $q$) \Comment{$p$ : Maybe Pauli}
\If {$p \neq $ None}
\State $S \gets $ UpdateGadgetsSingleQubit($S$, $p$, $q$)
\State $Q \gets Q \setminus \{ q \} $
\State $C \gets $ AddCliffordsSingleQubit($C$, $p$, $q$)
\EndIf
\EndFor
\State \textbf{return} ($S$, $Q$, $C$)
\EndFunction
\vspace{3mm}
\Function{UpdatePairQubits}{$S$, $Q$, $C$}
\For {$q_a \in Q$}
\For {$q_b \in Q \setminus \{q_a\}$}
\State ($p_a$, $p_b$) $\gets $ FindValidPaulis($S$, $q_a$, $q_b$)
\Comment{($p_a$, $p_b$) : Maybe Pair Pauli}
\If {($p_a$, $p_b$) $\neq$ None}
\State $S \gets $ UpdateGadgetsPairQubit($S$, $p_a$, $p_b$, $q_a$, $q_b$)
\State $Q \gets Q \setminus \{ q_b \} $
\State $C \gets $ AddCliffordsPairQubit($C$, $p_a$, $p_b$, $q_a$, $q_b$)
\State \textbf{return} ($S$, $Q$, $C$)
\EndIf
\EndFor
\EndFor
\State \textbf{return} ($S$, $Q$, $C$)
\EndFunction
\vspace{3mm}
\end{algorithmic}
\caption{Diagonalisation algorithm}
\label{alg:diagonalise}
\end{figure}
This section describes our method for diagonalising a set of commuting
Pauli gadgets. The basic approach is to repeatedly apply three
methods which diagonalise a single qubit:
\begin{enumerate}
\item Diagonalise the trivially diagonalisable qubits
\item Diagonalise qubits in compatible pairs
\item Synthesise a single gadget to diagonalise one of its qubits.
\end{enumerate}
Detailed pseudo-code for the algorithm\footnote{
We have omitted the greedy diagonalisation method from the
pseudo-code, as it is straightforward.
} is presented in Figure~\ref{alg:diagonalise}. The overall time
complexity for this algorithm is $\mathcal{O}(mn^3)$, with $m$ the
number of Pauli gadgets in the commuting set and $n$ the number of
qubits.
To make the algorithm clearer, we'll go through a worked example in
Figures~\ref{fig:set_exampleA} and \ref{fig:set_exampleB}. Initially
we have the mutually commuting gadgets shown in
Figure~\ref{fig:set_example1}, corresponding to the Pauli strings
$IXZIZ$, $IYIZY$, $XXIYI$, $YYXII$, $ZIYXX$, $ZXIZZ$, and $ZYZIY$. We
proceed as follows.
\begin{enumerate}
\item First, check whether there is a trivially diagonalisable qubit:
that is, a qubit $i$ for which $\exists P \in \{ X, Y, Z\}$ s.t.
$\forall l, \sigma_{il} \in \{I, P\}$. Any such qubits may be
diagonalised with only single-qubit Clifford gates, and ignored from
now on (a minimal version of this check is sketched after this list).
This check takes time $\mathcal{O}(mn)$.
Figure~\ref{fig:set_example1} contains no such qubits.
\item Now search for a compatible pair of qubits
(Defn.~\ref{def:compatible-pair}) satisfying
Theorem~\ref{thm:pauli-chain} for some choice of Paulis $A$ and $B$,
and apply the conjugation of Theorem~\ref{thm:pauli-chain} to
diagonalise a qubit and remove it from consideration. The choice of
qubit within the pair is arbitrary.
This search can be performed in $\mathcal{O}(mn^2)$ time. The example of
Figure~\ref{fig:set_example1} does not contain a compatible pair.
\item If no compatible pair is found, we adopt a greedy approach as a
backup strategy. In $\mathcal{O}(m)$ time, find the Pauli string with
the lowest weight; if there are multiple, pick one arbitrarily. Conjugate
the corresponding Pauli gadget with single-qubit Clifford and \ensuremath{\textsc{cx}}\xspace
gates to convert the Pauli string to $II\ldots IZ$, demonstrated in
Figure~\ref{fig:set_example2}. Then, commute the Clifford gates
through the rest of the gadgets, as shown in
Figure~\ref{fig:set_example3}, until all Clifford gates are outside
the adjacent Pauli gadgets. Every gadget must still commute with the
$II\ldots IZ$ string, and therefore the last qubit must be diagonal.
This is a similar method to Jena et al. \cite{jena2019pauli}.
\end{enumerate}
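A sketch of the check in step 1 (the single-qubit Cliffords that then map $P$ to $Z$ are omitted):
\begin{verbatim}
def trivially_diagonalisable(strings, q):
    # Qubit q is diagonalisable by single-qubit Cliffords alone iff its
    # letters across the whole set are {I} or {I, P} for a single P.
    letters = {s[q] for s in strings} - {'I'}
    return len(letters) <= 1

example = ["IXZIZ", "IYIZY", "XXIYI", "YYXII",
           "ZIYXX", "ZXIZZ", "ZYZIY"]
print([q for q in range(5)
       if trivially_diagonalisable(example, q)])   # [] for this set
\end{verbatim}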
These steps are repeated until all qubits are diagonal over the set
of Pauli gadgets. Following our example, we find that
Figure~\ref{fig:set_example3} has the same two-qubit chain on the
first and second qubits as our example from
Figure~\ref{fig:theorem_example1}, and can therefore be diagonalised
in the same way, resulting in the circuit in
Figure~\ref{fig:set_example5}. The backup strategy is not required for
the remaining qubits. See Figure~\ref{fig:set_example6} for the
circuit after full diagonalisation.
Since each iteration will diagonalise at least one qubit,
$\mathcal{O}(n)$ repetitions are required,
so Algorithm~\ref{alg:diagonalise} has time complexity
$\mathcal{O}(mn^3)$.
In the worst case, the greedy approach is required repeatedly, so $C$
will require at most $\frac{1}{2}n(n-1)$ \ensuremath{\textsc{cx}}\xspace gates. If the greedy
approach is not required at all, $C$ will require at most $n-1$ \ensuremath{\textsc{cx}}\xspace
gates. For our small, 5-qubit example circuit, the greedy approach was
required at one iteration, and $C$ used 5 \ensuremath{\textsc{cx}}\xspace gates.
\afterpage{
\begin{figure}[h]
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
\inltf{commuting_set1}
\end{equation*}
\caption{Example set of adjacent commuting Pauli gadgets.}
\label{fig:set_example1}
\end{subfigure}
\vspace{6mm}
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
{\inltf{decomp_gadget1}} = {\inltf{decomp_gadget2}}
\end{equation*}
\caption{Leftmost Pauli gadget before and after decomposition.}
\label{fig:set_example2}
\end{subfigure}
\vspace{6mm}
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
{\inltf{decomp_set2}}
\end{equation*}
\caption{Pauli gadget set after commuting Cliffords through.}
\label{fig:set_example3}
\end{subfigure}
\vspace{6mm}
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
{\inltf{decomp_set_YY_1}}
\end{equation*}
\caption{Theorem~\ref{thm:pauli-chain} is satisfied by $A=Y$ and $B=Y$ for the first two qubits. Single-qubit rotations are applied to convert to the $Z$-basis.}
\label{fig:set_example4}
\end{subfigure}
\caption{Set diagonalisation example.}
\label{fig:set_exampleA}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
{\inltf{decomp_set_YY_2}}
\end{equation*}
\caption{Diagonalise the second qubit with a \ensuremath{\textsc{cx}}\xspace.}
\label{fig:set_example5}
\end{subfigure}
\vspace{10mm}
\begin{subfigure}[b]{\textwidth}
\centering
{\inltf{decomp_set_YY_4}}
\caption{Repeat the procedure for the remaining qubits to fully diagonalise the set.}
\label{fig:set_example6}
\end{subfigure}
\vspace{10mm}
\begin{subfigure}[b]{\textwidth}
\begin{equation*}
{\inltf{decomp_set_YY_5}}
\end{equation*}
\caption{Convert phase gadgets to \ensuremath{\textsc{cx}}\xspace and $\ensuremath{R_Z}\xspace$ gates using GraySynth \cite{Amy_2018}.}
\label{fig:set_example7}
\end{subfigure}
\caption{Set diagonalisation example (cont'd).}
\label{fig:set_exampleB}
\end{figure}
\clearpage
}
\subsection{Phase Polynomials}
\label{sec:phasepoly}
After diagonalisation, we have a circuit $CS'C^\dag$ where the
interior section $S'$ is composed entirely of phase gadgets,
with each gadget acting on a different set of qubits. This
phase gadget circuit can be expressed as a \emph{phase polynomial}.
\begin{proposition}[Nam et al. \cite{Nam:2018aa}]
\label{prop:phase-poly}
Let $D$ be a quantum circuit containing only \ensuremath{\textsc{cx}}\xspace gates and the gates
$\ensuremath{R_Z}\xspace(\theta_1)$, $\ensuremath{R_Z}\xspace(\theta_2)$,..., $\ensuremath{R_Z}\xspace(\theta_m)$. The action of $D$
on a basis state $\ket{x_1,x_2...x_n}$ has the form:
\begin{equation}\label{eq:phase-poly-i}
D\ket{x_1,x_2...x_n} = e^{ip(x_1,x_2,...,x_n)}\ket{h(x_1,x_2,...,x_n)}
\end{equation}
where $h(x_1,x_2,...,x_n)$ is a linear reversible function and
\begin{equation}\label{eq:phase-poly-ii}
p(x_1,x_2,...,x_n) = \sum_{i=1}^{m} \theta_i f_i(x_1,x_2,...,x_n)
\end{equation}
is a linear combination of linear Boolean functions $f_i$: $\{0,1\}^n$
$\rightarrow$ $\{0,1\}$.
\end{proposition}
\begin{definition}\label{def:phase-poly}
Given $D$ as above, the \emph{phase polynomial} of circuit $D$ is
$p(x_1,x_2,...,x_n)$, and each $f_i$ is called a \textit{parity}.
\end{definition}
\begin{example}\label{ex:phase-poly}
The circuit shown below has the required form.
\[
\inltf{phase_poly_example}
\qquad \equiv \qquad
\ket{q_1,q_2} \ \mapsto \ {e^{i \alpha(q_1 \oplus q_2)} \ket{q_1,q_2}}
\]
We can read that the corresponding phase polynomial is
$p(q_1,q_2) = \alpha(q_1 \oplus q_2)$, defined on the single parity
$q_1 \oplus q_2$.
\end{example}
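The proposition can be illustrated by tracking parities through a circuit; a short sketch, using the convention $\ensuremath{R_Z}\xspace(\theta)\ket{x} = e^{i\theta x}\ket{x}$, i.e. working up to global phase:
\begin{verbatim}
def phase_polynomial(circuit, x):
    # Gates: ('CX', control, target) or ('RZ', theta, qubit). An RZ
    # contributes theta times the current parity held on its qubit.
    x = list(x)
    phase = 0.0
    for gate in circuit:
        if gate[0] == 'CX':
            _, c, t = gate
            x[t] ^= x[c]
        else:
            _, theta, q = gate
            phase += theta * x[q]
    return phase, x   # the phase p(x) and the linear output h(x)

alpha = 0.3
circ = [('CX', 0, 1), ('RZ', alpha, 1), ('CX', 0, 1)]
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, phase_polynomial(circ, x))   # p = alpha*(x0 XOR x1), h = id
\end{verbatim}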
Phase polynomials have been studied for the purposes of circuit
optimisation\footnote{The representation of phase gadget circuits as
phase polynomials has also inspired circuit optimisation techniques
\cite{Beaudrap:2019aa}. }
\cite{Nam:2018aa, Maslov2017Shorter-stabili, Amy2014Polynomial-Time,
6516700}. Phase polynomial \emph{synthesis} refers to the task of
generating a circuit over a chosen gate set, usually \ensuremath{\textsc{cnot}}\xspace and
$\ensuremath{R_Z}\xspace(\theta)$, which implements a given phase polynomial with minimal
resources. Optimal synthesis is NP-complete in specific cases
\cite{Amy_2018}, but the time complexity of the general case remains open.
In practice, heuristic methods such as the GraySynth algorithm of Amy
et al. \cite{Amy_2018} can achieve significant reductions in \ensuremath{\textsc{cnot}}\xspace count.
The circuit of Example \ref{ex:phase-poly} can be equivalently written
as a phase gadget over two qubits. In fact, every $n$-qubit phase
gadget is equivalent to a phase polynomial with a single term in the
summation of Eq.~(\ref{eq:phase-poly-ii}), so each phase gadget
corresponds to a parity $f_i$ and a rotation $\theta_i$.
\[
\inltf{gadget_poly_n}
\qquad \equiv \qquad
\ket{q_1,q_2,...,q_n}
\ \mapsto \
{e^{i \alpha \big( \bigoplus_{j=1}^{n}q_j \big)} \ket{q_1,q_2,...,q_n}}
\]
More generally, a circuit comprising only phase gadgets can be
represented by a phase polynomial
where the linear reversible function $h(x_1,x_2,...,x_n)$ of
Eq.~(\ref{eq:phase-poly-i}) is the identity. This allows us to use
techniques based on phase polynomials to synthesise a circuit for $S'$.
While any synthesis method could be used, for the results described
here we chose the heuristic GraySynth method \cite{Amy_2018} because
it produces an efficient circuit\footnote{
Across a suite of Clifford+$T$ benchmark circuits, the
implementation of Amy et al. reduced the \ensuremath{\textsc{cx}}\xspace gate count by 23\%
with a maximum reduction of 43\% \cite{Amy_2018}.
}
at reasonable computational cost. If a specific qubit architecture
was required, then an architecture-aware synthesis method would be more
appropriate \cite{Nash:2019aa,Arianne-Meijer-van-de-Griend:2020aa}.
GraySynth runs in time $\mathcal{O}(mn^3)$, and requires a maximum of
$\mathcal{O}(mn)$ \ensuremath{\textsc{cx}}\xspace gates when the linear reversible function $h$ is
identity \cite{AmyEmail:2020aa}. For reasons of space we omit the
algorithm.
Returning to the running example, the synthesised circuit generated
from the interior phase gadgets is shown in
Figure~\ref{fig:set_example7}. Using a naive decomposition, as
described in Definitions~\ref{def:phasegadget} and
\ref{def:pauli_exp_gadget}, the initial set from
Figure~\ref{fig:set_example1} would have required 34 \ensuremath{\textsc{cx}}\xspace gates, and 34
\ensuremath{\textsc{cx}}\xspace depth. Our strategy has reduced the \ensuremath{\textsc{cx}}\xspace count to 22, and the \ensuremath{\textsc{cx}}\xspace
depth to 18.
\section{Results and Discussion}
\label{sec:results}
We implemented our strategy in \ensuremath{\mathsf{t}|\mathsf{ket}\rangle}\xspace \cite{TKETPAPERHERE},
our retargetable compiler. We benchmarked this implementation on a
suite of ansatz circuits for electronic structure UCCSD (Unitary
Coupled Cluster Singles and Doubles) VQE problems. We included the
molecules $\mathrm{H}_2$, $\mathrm{H}_4$, $\mathrm{H}_8$,
$\mathrm{LiH}$, $\mathrm{BeH}_2$, $\mathrm{NH}$,
$\mathrm{H}_2\mathrm{O}$, $\mathrm{CH_2}$, $\mathrm{NH}_3$,
$\mathrm{HNO}$, $\mathrm{HCl}$, $\mathrm{N}_2$, $\mathrm{C}_2$,
$\mathrm{H}_2\mathrm{CO}$, $\mathrm{CO_2}$ and $\mathrm{C}_2
\mathrm{H}_4$ in the `sto-3g' basis set. For the smaller molecules, we
also used the `631g' basis. We tested using the Bravyi-Kitaev (BK),
Jordan-Wigner (JW) and parity (P) encodings.
The comparisons made are: \footnote{We would additionally like to
compare to the low-rank decomposition methods of Motta et al.
\cite{motta2018low}, as the circuit depths and gate counts are stated
to have lower complexity than the standard method described herein.
However, we could not obtain a working implementation of the method.
We would also like to compare to van den Berg \& Temme
\cite{Berg:2020aa}, but data is available only for Hamiltonian
simulation circuits.}
\begin{enumerate}
\item Naive decomposition: circuits generated from
Equation~\ref{eq:prod-formula} by decomposing Pauli gadgets naively
into \ensuremath{\textsc{cx}}\xspace and single-qubit gates, as described in
Section~\ref{sec:reppauli}.
\item Pairwise synthesis: circuits generated by graph colouring and
then synthesising Pauli gadgets within a set in a pairwise manner with
\ensuremath{\textsc{cx}}\xspace balanced trees using the methods from Cowtan et al.
\cite{Cowtan:2019aa}.
\item Set synthesis: our full compilation strategy. Graph colouring,
diagonalisation and phase polynomial synthesis.
\item Templated lexicographical operator sequence (TLOS): for ansatz
circuits generated using the JW encoding we compare against a mock
implementation of the best known previous strategy for JW circuit
synthesis: operator sequencing methods from Hastings et al.
\cite{Troyer:2014:improvedchemistry} allowing for \ensuremath{\textsc{cx}}\xspace cancellation
between excitations, with templated excitation operators from Nam et
al.~\cite{Nam2019groundstate} for low \ensuremath{\textsc{cx}}\xspace count excitations
\footnote{We do not allow the use of ancilla qubits for this method,
which Hastings et al. showed can reduce \ensuremath{\textsc{cx}}\xspace overhead significantly.
Additionally, Nam et al. used a bosonic excitation technique relating
molecular and spin orbitals, which we do not include here.}.
We are not aware of similar strategies for the BK or P encoding.
\end{enumerate}
Circuits in our test set were chosen to have a \ensuremath{\textsc{cx}}\xspace count and depth of
less than $10^6$ when decomposed naively. All results were obtained
using \texttt{pytket v0.5.5}, on a machine with a 2.3~GHz Intel Core
i5 processor and 8~GB of 2133~MHz LPDDR3 memory, running MacOS
Mojave~v10.14.
A benchmark script for reproducing the results, along with the input
operators, can be found at
\url{https://github.com/CQCL/tket_benchmarking/tree/master/compilation_strategy}.
The methodology for generating and serialising these operators is
described in Appendix~\ref{app:qboperator}.
\begin{figure}
\centering
\begin{subfigure}[b]{\textwidth}
\begin{tabular}{cc}
\includegraphics[scale=0.5]{figures/Compare_BK_CX_count} &
\includegraphics[scale=0.5]{figures/Compare_BK_CX_depth}
\end{tabular}
\caption{Bravyi-Kitaev qubit encoding.}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\begin{tabular}{cc}
\includegraphics[scale=0.5]{figures/Compare_JW_CX_count} &
\includegraphics[scale=0.5]{figures/Compare_JW_CX_depth}
\end{tabular}
\caption{Jordan-Wigner qubit encoding.}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\begin{tabular}{cc}
\includegraphics[scale=0.5]{figures/Compare_P_CX_count} &
\includegraphics[scale=0.5]{figures/Compare_P_CX_depth}
\end{tabular}
\caption{Parity qubit encoding.}
\end{subfigure}
\caption{Comparison of compilation strategies for molecules with
varying active spin orbital counts using different qubit encoding
methods. A 4th-degree polynomial least-squares fit has been added to
suggest scaling.}
\label{fig:main_results}
\end{figure}
A comparison of \ensuremath{\textsc{cx}}\xspace metrics for different compilation strategies,
active spin orbitals and qubit encoding methods is shown in
Figure~\ref{fig:main_results}.
The set-based synthesis strategy outperforms pairwise and naive
strategies on all encodings, but is on average outperformed by the
TLOS method for the JW encoding, particularly for larger systems with
more active spin orbitals.
\begin{figure}[tbh]
\centering
\begin{subfigure}[b]{\textwidth}
\begin{tabular}{l|c|c}
& Mean \ensuremath{\textsc{cx}}\xspace count reduction (\%) & Mean \ensuremath{\textsc{cx}}\xspace depth reduction (\%)
\\ \hline
Pairwise Synthesis & $40.0$ & $56.9$ \\
Set-based Synthesis & $63.6$ & $71.9$ \\ \hline
\end{tabular}
\caption{Bravyi-Kitaev qubit encoding.}
\end{subfigure}
\vspace{3.5mm}
\begin{subfigure}[b]{\textwidth}
\begin{tabular}{l|c|c}
& Mean \ensuremath{\textsc{cx}}\xspace count reduction (\%) & Mean \ensuremath{\textsc{cx}}\xspace depth reduction (\%) \\\hline
Pairwise Synthesis & $49.9$ & $67.9$ \\
Set-based Synthesis & $78.0$ & $82.1$ \\
TLOS Synthesis & $82.6$ & $84.5$ \\\hline
\end{tabular}
\caption{Jordan-Wigner qubit encoding.}
\end{subfigure}
\vspace{3.5mm}
\begin{subfigure}[b]{\textwidth}
\begin{tabular}{l|c|c}
& Mean \ensuremath{\textsc{cx}}\xspace count reduction (\%) & Mean \ensuremath{\textsc{cx}}\xspace depth reduction (\%) \\\hline
Pairwise Synthesis & $38.1$ & $55.7$ \\
Set-based Synthesis & $65.3$ & $72.1$ \\\hline
\end{tabular}
\caption{Parity qubit encoding.}
\end{subfigure}
\vspace{3.5mm}
\begin{subfigure}[b]{\textwidth}
\begin{tabular}{l|c|c}
& Mean \ensuremath{\textsc{cx}}\xspace count reduction (\%) & Mean \ensuremath{\textsc{cx}}\xspace depth reduction (\%) \\\hline
Pairwise Synthesis & $42.7$ & $60.2$ \\
Set-based Synthesis & $69.0$ & $75.4$ \\\hline
\end{tabular}
\caption{All encodings. }
\end{subfigure}
\caption{Mean \ensuremath{\textsc{cx}}\xspace metric reductions. All reductions are measured against the naive decomposition method. }
\label{fig:mean_results}
\rule{\textwidth}{0.5pt}
\end{figure}
Set-based synthesis gives greater fractional reductions for larger
circuits than for smaller ones. For the largest circuits, up to 89.9\%
\ensuremath{\textsc{cx}}\xspace depth reduction can be achieved, compared to the mean \ensuremath{\textsc{cx}}\xspace
depth reduction of 75.4\% shown in Figure~\ref{fig:mean_results}. As
the compilation strategy is composed of several heuristics in
sequence, we do not at this stage argue that the asymptotic complexity
of the UCC ansatz can be reduced; to do so, we would need
to prove sufficient bounds on the size of sets found by graph
colouring, the \ensuremath{\textsc{cx}}\xspace complexities of Clifford circuits required for
diagonalisation and the number of \ensuremath{\textsc{cx}}\xspace gates produced by phase
polynomial synthesis.
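
To make the first of these steps concrete, the following minimal
sketch (plain Python; the helper names are ours and no quantum SDK is
assumed) greedily partitions Pauli strings into mutually commuting
sets, using the fact that two Pauli strings commute if and only if
they differ on an even number of positions where both are non-identity:
\begin{verbatim}
def commutes(p, q):
    # Paulis commute iff they differ on an even number of
    # positions where both letters are non-identity.
    anti = sum(1 for a, b in zip(p, q)
               if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

def commuting_sets(paulis):
    # Greedy colouring: each returned set is mutually commuting.
    sets = []
    for p in paulis:
        for s in sets:
            if all(commutes(p, q) for q in s):
                s.append(p)
                break
        else:
            sets.append([p])
    return sets

print(commuting_sets(["XXYY", "YYXX", "ZZII", "XYZI"]))
\end{verbatim}
A production implementation would colour the full anti-commutation
graph rather than insert greedily in input order, but the sketch
captures the partitioning step that precedes diagonalisation.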
\noindent
\textbf{Remark:\ } This compilation strategy assumes that the qubits
have all-to-all connectivity, so \ensuremath{\textsc{cx}}\xspace gates are allowed between any two
qubits. When connectivity is constrained, \textit{routing} is required
to ensure the circuit conforms to the constraints. The typical
approach to this problem is SWAP network insertion
\cite{Childs:2019aa,Zulehner:2017aa,Lao:2019aa,Alexander-Cowtan:2019aa}.
\noindent
\textbf{Remark:\ } While VQE using the UCC ansatz is a candidate for
quantum advantage, there are no complexity-theoretic guarantees of
success. Should the advantage be sufficiently small, the low-degree
polynomial compilation time required for this strategy could be too
slow. In this case, we emphasise that \textit{co-design} of a
compilation strategy with the qubit encoding can give large
reductions, shown by the TLOS method, while reducing compilation time.
\section{Conclusions and Future Work}
\label{sec:conc}
The primary contribution of our paper is an empirically successful
method to efficiently synthesise the UCC ansatz to one- and two-qubit
gates. We have shown large average reductions in \ensuremath{\textsc{cx}}\xspace metrics for the
Bravyi-Kitaev, Jordan-Wigner, and parity qubit encodings; although
alternative methods are competitive with ours for the JW encoding, we
emphasise that our strategy is valid for any other qubit encodings
which generate similar Trotterized excitation operators. We note that
the reductions for the JW encoding are the greatest, with respect to
both metrics and both the pairwise and set-based synthesis methods.
This may suggest that this encoding has more exploitable redundancy
than the BK or P encodings.
We briefly discuss four future directions to explore.
\subsection{Applications to Measurement Reduction}
Measurement reduction for VQE is a method to simultaneously measure
terms in a Hamiltonian which commute, and therefore reduce the number
of circuits required to run \cite{jena2019pauli,
crawford2019efficient, zhao2019measurement}. For realistic devices,
assuming that the only available native measurements are single-qubit
$Z$-basis measurements, generating a Clifford circuit to diagonalise
this set is required. Minimising this Clifford circuit using
applications of Theorem~\ref{thm:pauli-chain} can reduce the \ensuremath{\textsc{cx}}\xspace
overhead required for measurement reduction.
\subsection{Architecture-Aware Synthesis}
Instead of introducing a SWAP network to enforce connectivity
constraints on NISQ devices, recent work has explored the possibility
of resynthesising the circuit in a topologically aware manner, for
limited gate sets \cite{Kissinger:2019ac, Nash:2019aa,
wu2019optimization}. This constrained synthesis has been found to
typically produce lower \ensuremath{\textsc{cx}}\xspace counts than SWAP networks, and phase
polynomials are a viable class of circuit for constrained synthesis
\cite{amy2019staq, Arianne-Meijer-van-de-Griend:2020aa}. If topologically
constrained phase polynomials can be composed with Clifford regions in
a manner that respects architecture, this would appear to be a
suitable strategy for those devices with limited connectivity.
\subsection{Applications to QAOA}
The Quantum Approximate Optimisation Algorithm (QAOA)
\cite{Farhi:2014aa} for combinatorial optimisation problems consists
of repeated blocks of `mixing' and `driver' exponentiated
Hamiltonians. The driver Hamiltonians are already diagonal, as they
encode a classical system, and typically the mixing Hamiltonians
correspond to single-qubit gates only. However, recent work on a
so-called Quantum Alternating Operator Ansatz \cite{Hadfield_2019}
introduces more complicated mixing Hamiltonians. These mixing
Hamiltonians could be amenable to our compilation strategy.
\subsection{Applications to Fault Tolerant Computation}
\label{sec:appftqc}
While this strategy was designed specifically for VQE, it can be
directly ported over to non-variational quantum algorithms for
Hamiltonian dynamics which require gate counts and qubit numbers too
high for NISQ computers. For the case where the Hamiltonian evolution
is approximated using product formulae, Gui et al.~\cite{gui2020term}
and van den Berg \& Temme~\cite{Berg:2020aa} have performed term
sequencing similar to our work in Section~\ref{sec:termseq} for
digital quantum simulation, in which a quantum evolution defined by a
time-dependent Hamiltonian is mapped to a quantum circuit. Reducing
Trotter error is more important for fault-tolerant algorithms than for
VQE, as it is the only significant non-correctable source of error,
and Gui et al.~argue that our term sequencing method would also
minimise Trotter error.
The efficacy of our proposed compilation strategy is greatly dependent
on the model of fault-tolerant computation. For example, in the model
presented by Litinski \cite{Litinski2019gameofsurfacecodes}, all
non-Clifford Pauli exponentials are performed natively by performing
ancilla measurements. In this model, the exponentials do not need to
be converted to one- and two-qubit gates at all.
Even for models which perform individual gates explicitly, our
proposed compilation strategy is optimised to reduce two-qubit gate
count and depth, which are not considered as important on planned
fault tolerant devices as non-Clifford gates, as the \ensuremath{\textsc{cx}}\xspace gate can be
performed without magic state distillation. However, on surface
code-based computers performing lattice surgery, a two-qubit gate
between distant logical qubits can be as costly in the worst case as
magic state distillation \cite{Litinski_2019}; in general, two-qubit
gates increase the overhead from routing on surface codes. Therefore,
two-qubit gate reduction may still be a valuable optimisation.
Moreover, the circuits produced by the strategy are structured such
that all non-Clifford rotations reside in partitioned phase
polynomials, and will be approximated with $T$ gates and one-qubit
Cliffords. $T$-count and $T$-depth optimisation has been successfully
performed using the phase polynomial formalism via matroid
partitioning \cite{Amy2014Polynomial-Time}. The $T$-count of phase
polynomials generated via diagonalisation cannot be optimised using
phase-folding, as the parities are guaranteed to be unique, but
$T$-depth reduction could still be enabled by our strategy.
\section*{Acknowledgements}
The authors would like to thank John van de Wetering, Arianne Meijer,
David Zsolt-Manrique and Irfan Khan for helpful discussions, and
Matthew Amy for correspondence on the GraySynth algorithm.
\section{Introduction}
\label{intro}
\input{Sections/2-introduction}
\section{State Of The Art}
\label{related_work}
\input{Sections/3-state_of_the_art}
\section{An Overview of the Proposed Architecture}
\label{architecture}
\input{Sections/4-architecture}
\section{Authentication}
\label{authentication}
\input{Sections/5-authentication}
\section{Data and Performance Evaluation}
\label{data and evaluation}
\input{Sections/6-evaluation}
\section{Discussion}
\label{discussion}
\input{Sections/7-discussion}
\section{Conclusion}
\label{conclusion}
\input{Sections/8-conclusion}
\section*{Acknowledgments}
We would like to thank the Information Technology at Purdue (ITaP) department for their support in managing the security, networking and operating system. Specifically, we appreciate the genuine support from Pascal Meunier and Andrew Thomas.
\bibliographystyle{ACM-Reference-Format}
\subsection{Experimental Setup}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/response_time_simple.pdf}
\caption{Response time vs. number of records for a simple query using default and ORC tables: The response time increases as we increase the size of the data for both types of tables. ORC tables show a substantial improvement in response time thanks to their optimized storage format.}
\label{fig:rtime-simple}
\end{figure}
We have deployed our architecture with an on-premise private cloud system at the Regenstrief Center for Healthcare Engineering (RCHE) at Purdue University. The system has 10 nodes in total, 8 worker nodes and 2 master nodes, to perform the performance evaluation. Each worker node is equipped with $188$ GB memory, $24$ processors each with $12$ cores and $30$ TB disk space. Each master node is equipped with $250$ GB memory and $40$ processors with $10$ cores for each processor. We use HDP-2.6.5.0 for the Hadoop cluster with Apache Spark version $2$, and client version 1.15.0 for the Kubernetes cluster.
\subsection{Datasets}
Cerner Health Facts (CHF) data is a clinical database that includes diagnostic information, demographics, medical history, admissions, discharges, drug prescriptions, and laboratory tests associated with over 69 million patients for the 19-year period of 2000 to 2018. This longitudinal data for individual patients comes from the electronic health records (EHR) of over $750$ hospitals. CHF is HIPAA compliant and de-identified and is used by selected community and academic facilities across the United States. Purdue University has Data Use Agreements to use the data for research purposes. The use of the CHF data has been approved by the Purdue University Human Research Protections Program (HRPP) Institutional Review Board (IRB) with an exemption determination (PROPEL Determination No. 29007411). The dataset has previously been used in other environments to answer some specific clinical questions \cite{petrick2016temporal}. However, no such computational platform as described here has been developed and implemented for the CHF dataset \cite{miao2018assessment}. \\
A second source of data, MIMIC III and associated Physionet tools, is a publicly available system for EHR data and high resolution time series data from medical devices. It is created and maintained by the Laboratory for Computational Physiology at MIT \cite{johnson2016mimic,doi:10.1093/jamia/ocx084}. The database contains high resolution waveform data and clinical information on patients admitted to the Intensive Care Unit (ICU) since 2001 at the Beth Israel Deaconess Medical Center.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/response_time_complex.pdf}
\caption{Response time vs. number of records for complex queries using default and ORC tables: The response time increases as we increase the size of the data for both types of tables. Even for complex queries containing multiple Join operations and aggregation functions, ORC tables show a substantial improvement in response time thanks to their optimized storage format.}
\label{fig:rtime-complex}
\end{figure}
\subsection{Performance Evaluation}
We evaluate the functionality and performance of the proposed architecture from different perspectives. As the performance metric, we measure the response time for a query which is submitted from the JupyterLab instance on the Kubernetes cluster to the Hadoop cluster. To capture the response time, we used the function \emph{time()} from the ``time'' library in Python, which returns the number of seconds elapsed since the epoch (on Unix systems, January 1, 1970, 00:00:00). By measuring the time before and after submitting the job to Hadoop and calculating their difference, we obtain the approximate response time in seconds; a minimal sketch of this harness is given after the scenario list below. For the benchmark, we chose tables \verb|Encounter| and \verb|Lab_procedure| from CHF. The \verb|Lab_procedure| table is one of the largest tables in CHF and has information on lab events, and the \verb|Encounter| table has information on events associated with each patient and is linked to \verb|Lab_procedure|. The tables are stored in HDFS as CSV files but are accessible through Hive external tables. In this case, the default format of Hive tables is TEXTFILE, but we also created Optimized Row Columnar (ORCFILE) tables, which store the collection of rows in a single file in a columnar layout. Therefore, specific columns can be accessed faster and in parallel. Furthermore, we define both a simple and a complex query. The simple query is an aggregation function to simply count the number of records in the \verb|Lab_procedure| table. The complex query joins two tables, categorizes a specific lab result value, and gives the distribution of the number of patients over the categories. Using the different types of tables and queries, we define the following three scenarios:
\begin{enumerate}
\item \textbf{Scenario One: Simple Query} In this scenario we measured the response time for the simple query to count the number of records against both types of Hive tables for various sizes of data. As the results in Figure \ref{fig:rtime-simple} show, response time goes up as the size of the data increases, but the increase in latency is significantly slower using ORC tables. Because the ORC format groups the data rows into stripes along with indexes, it improves performance when Hive processes the data.
\item \textbf{Scenario Two: Complex Query} In this scenario, we considered the complex query that joins two tables, categorizes a specific lab result value, and gives the distribution of the number of patients over the categories, and ran it on the cluster using both types of tables for different sizes of data. The trend is similar to scenario one, but overall the response time is larger because of the complexity of the query and the joining of two large tables (Figure \ref{fig:rtime-complex}).
\item \textbf{Scenario Three: Spark Parameter Optimization} We evaluate the impact of the number of Spark executors on the response time. Since ORC tables have shown a drastic improvement in response time, we use them as the base table format to evaluate the impact of the Spark parameters on performance. As Figure \ref{fig:rtime-exec1} shows, increasing the number of Spark instances does not improve the response time for small data sizes, independent of the complexity of the query. Increasing the number of executors for large data and complex queries (i.e. Complex Query - \#Records = 1046046081 or Complex Query - \#Records = 4299492713) improves the response time. However, after a certain number of executors, the response time increases due to the overhead of distributed communication.
\end{enumerate}
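
The timing harness referenced above can be sketched as follows
(PySpark; the table name and session setup are illustrative rather
than the exact benchmark code):
\begin{verbatim}
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Simple query of Scenario One against the ORC-backed table;
# the table name is illustrative.
query = "SELECT COUNT(*) FROM lab_procedure_orc"

start = time.time()
spark.sql(query).collect()
print("response time: %.2f s" % (time.time() - start))
\end{verbatim}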
In order to tune the Spark parameters to achieve optimal performance, a user should be aware of the size of the data, the distribution of the data across the cluster and the complexity of the query. Otherwise, simply increasing the number of executors, the amount of memory or the number of virtual cores for each executor would not necessarily lead to an improvement. For example, having executors with large amounts of memory often leads to excessive garbage collection overhead. Conversely, running executors with a single core and memory sufficient for only one task forgoes the benefits of parallelism.
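
For illustration, the executor settings swept in Scenario Three can be
configured as follows (PySpark; the memory and core values mirror the
caption of Figure \ref{fig:rtime-exec1}, while the instance count is
one point of the sweep):
\begin{verbatim}
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.executor.instances", "8")   # swept value
         .config("spark.executor.memory", "256m")
         .config("spark.executor.cores", "3")
         .enableHiveSupport()
         .getOrCreate())
\end{verbatim}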
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/response_time_orc_exec.pdf}
\caption{Response time vs. number of Spark executors using ORC tables for simple and complex queries with various sizes of data (spark-executor-mem = 256M, spark-executor-cores = 3): There is a trade-off in increasing the number of Spark executors; as the parallelism increases, so does the overhead from communications and garbage collection.}
\label{fig:rtime-exec1}
\end{figure}
\subsection{Patient-Centered Informatics Common: Standard Unification of Research Elements (PIC-SURE)} PIC-SURE\cite{bd2k,murphy2017grappling,GS2017Rcupcake} is an open source software platform to incorporate multiple heterogeneous patient-level data, including clinical, -omics and environmental data. The core idea of PIC-SURE is to utilize distributed data resources of various types and protocols, such as SciDB, i2b2\cite{murphy2007architecture} and any other data systems, through a single communication interface to perform queries and computations across different resources. For this purpose, PIC-SURE developed the \emph{Inter Resource Communication Tool (IRCT)}. IRCT is a resource-driven system and allows new resources to be integrated quickly. Furthermore, the PIC-SURE API provides several pre-defined API resources that users can use to define and run a query, and the results generated by a user are available to that specific user only\cite{Jeremy2016BD2KPIC-SURE}. However, the PIC-SURE API is not responsible for authentication and governance of individual access. While it provides a programming API that can be used in R and Python within an environment such as Jupyter Notebook, it is limited only to the pre-defined resources. In addition, to satisfy reproducibility requirements (e.g., hard reproducibility in terms of the same data and same environment), the proposed API should be integrated with a cloud-native development environment in which users can develop and deploy their programs in a reproducible manner.
\subsection{Informatics for Integrating Biology and the Bedside (i2b2)}
i2b2\cite{murphy2007architecture} is an open source analytic query tool built on web services. i2b2 consists of a set of server-side software modules called \emph{Cells} and uses XML messages for inter-cell communication, as illustrated in Figure \ref{fig:i2b2}.
Data is stored in a relational database such as Oracle using a common star schema data model. i2b2 is used by more than 200 healthcare institutions for cohort selection. Although using a relational database gives the advantage of SQL, with a web-based architecture the exposed services are not as flexible as SQL itself. Furthermore, relational databases suffer from horizontal scalability issues and are not optimized for large unstructured data. In the context of reproducibility, i2b2 allows users to share their queries within a group, to be repeated with the same data or used for a new set of data. However, this notion of reproducibility is only at the query level, and there is no mechanism to share statistical methods and analytics pipelines. Although i2b2 provides a set of pre-loaded machine learning and statistical algorithms, its web-based architecture allows advanced programmers or engineers to develop more sophisticated and complex algorithms as new web services.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{Figures/i2b2.pdf}
\caption{i2b2 Architecture\cite{murphy2007architecture}: The Component and Connector view shows a Client/Server view of i2b2's instances and the protocols they use for connection.}
\label{fig:i2b2}
\end{figure}
\subsection{Observational Health Data Science and Informatics (OHDSI)} OHDSI\cite{hripcsak2015observational} is an open network of multiple observational data holders such as healthcare providers, hospitals, insurance companies, etc. It requires the network participants to translate their data into a common data model \emph{(OMOP\footnote{Observational Medical Outcomes Partnership})} in order to reuse the same query across different systems. Figure \ref{fig:ohdsi} shows the layered architecture for OHDSI consisting of three layers \emph{Client Tier}, \emph{Server Tier} and \emph{Data Tier}.
\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{Figures/OHDSI.pdf}
\caption{OHDSI Layered Architecture\cite{hripcsak2015observational}: This layered architecture shows a visual illustration of the main components of Client Tier, Service Tier and Data Tier of OHDSI's technology stack.}
\label{fig:ohdsi}
\end{figure}
Similar to i2b2, OHDSI only works with relational databases such as Oracle and PostgreSQL and has the same challenges of being limited to structured data and issues with scalability. In terms of reproducibility, the OHDSI community has developed methods, libraries and tools consisting of R packages and shared them in the community's repository to be accessible by everyone in the community.
\subsection{Unified Platform for Big Trajectory Data Management and Analytics (UlTraMan)} In \cite{DingXin:2018}, the authors propose a unified platform for big trajectory data management and analytics called UlTraMan. UlTraMan extends Apache Spark in both its data storage and computing aspects, by integrating Spark with Chronicle Map and an enhanced MapReduce. Chronicle Map is a key-value store used for data storage and random data access. The authors also improve MapReduce with random-data-access optimizations for computation. The goal is to handle the pipeline of transforming, processing and analyzing big trajectory data, such as the data generated by cars and mobile devices.
\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{Figures/UltraMan.pdf}
\caption{ULTraMan Layered Architecture\cite{DingXin:2018}: The Storage and Computation layers show the underlying unified engine of UltraMan which is an integration of Apache Spark, TrajDataset abstraction and Chronicle Map.}
\label{fig:ultraman}
\end{figure}
Chronicle Map is a high-performance, in-memory key-value data store that serves as Spark's internal block manager in this platform. To utilize random-access-based techniques and optimizations such as hash maps and indexes, the authors enhanced MapReduce with an abstraction called TrajDataset. TrajDataset enables random access at both the local and global levels. The platform consists of four layers: storage, computation, operation, and application. The storage layer handles the data and the indexes. The computation layer is responsible for the distributed computations, using the TrajDataset abstraction to enable random access. The operation layer supports a programming language interface to develop reusable components to analyze and process the data. In the application layer, UlTraMan offers multiple ways of interaction for users, such as the Spark shell and an HTTP server for web requests. However, it does not propose any mechanism to provide reproducibility in terms of sharing and reproducing data pipelines and analytics. Also, UlTraMan is not designed for highly sensitive data such as health data.
\subsection{WaveformECG}
Winslow et al. \cite{winslow2016waveformecg} developed WaveformECG, an open source web-based platform that supports interactive analysis, data visualization, and annotation of electrocardiogram (ECG) data. ECG is a well-known time-series data type in cardiovascular research. It can contain high-frequency data and is primarily used for monitoring heart condition or diagnosing diseases such as atrial fibrillation.
\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{Figures/WaveFormECG.pdf}
\caption{WaveformECG Architecture\cite{winslow2016waveformecg}: WaveformECG uses a web based architecture to provide access to data, analysis algorithms, analysis results and data annotation.}
\label{fig:waveformecg}
\end{figure}
Users can log in to WaveformECG through a portal developed using Liferay Portal Community Edition that is extended to use the federated identity provider Globus Nexus for authentication and authorization. After the authentication layer, there are four portlet interfaces to upload, visualize, analyze and download, supported by several backend libraries. The upload and visualization interfaces utilize OpenTSDB\cite{sigoure2012opentsdb}, an open source distributed time-series database built on Apache Hadoop and HBase. Together with Apache ZooKeeper, this architecture provides an interface to process real-time streaming ECG data. In addition, OpenTSDB provides RESTful APIs to access its storage and retrieve data, which makes it possible to query ECG data from other software. Analysis algorithms are available as web services accessed through Apache Axis2. When a user selects data file(s) and executes algorithm(s), data is retrieved by an HTTP request from OpenTSDB and written in the desired format (e.g., XML or WFDB) by the algorithm.
Visualization services let users examine the actual ECG data directly and annotate them manually. WaveformECG is integrated with the i2b2 clinical data warehouse, so the selected cohort in i2b2 can be sent to WaveformECG for further analysis.
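
As an illustration of this access path, a time-series retrieval
against OpenTSDB's HTTP API might look as follows (Python; the host,
metric and tag names are hypothetical, not WaveformECG's actual schema):
\begin{verbatim}
import requests

query = {
    "start": "2015/01/01-00:00:00",
    "queries": [{
        "aggregator": "avg",
        "metric": "ecg.lead.ii",        # hypothetical metric
        "tags": {"subject": "p000123"}, # hypothetical tag
    }],
}
resp = requests.post("http://opentsdb:4242/api/query", json=query)
samples = resp.json()
\end{verbatim}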
\subsection{Starfish} The ability to perform cost-effective analysis in a timely fashion over big heterogeneous data is one of the purposes of the Hadoop software stack. Hadoop provides different parameters, such as the number of map and reduce tasks, that can be tuned based on the job specification for optimal performance. However, most Hadoop users lack the expertise needed to tune the system for good performance. Starfish \cite{chen2012interactive} is a self-tuning system for big data analytics that is built on top of Hadoop. It tunes the system based on the user's needs and the system workloads. Starfish tunes at three different levels: the \emph{job} level, by approximating the job's statistics and performance model; the \emph{workflow} level, by handling the unbalanced data layout caused by Hadoop's data-local execution; and the \emph{workload} level, by optimizing workflows based on shared data-flows or re-using intermediate data and handing them to the workflow scheduler.
\subsection{Cerner's HealtheDataLab}
Cerner's HealtheDataLab\cite{ehwerhemuepha2020healthedatalab} is a cloud computing solution utilizing Fast Healthcare Interoperability Resources (FHIR) for data standardization, and distributed computing solutions for advanced data analysis. It is designed to enable researchers to develop data analysis and machine learning models in a HIPAA-compliant, high-performance and cloud-based computing environment. Jupyter Notebook is the front-end of the platform and provides a web-based interface to develop, document, and execute code in the Python and R programming languages. Apache Spark is the core component of the backend as a computing engine. Apache Spark is an in-memory, parallel analytic engine that provides big data analysis and rich machine learning libraries. HealtheDataLab is deployed in Amazon AWS to provide scalability and elasticity, and data is stored in Amazon Simple Storage Service (S3). The primary source of data is Cerner's HealtheIntent platform.
\subsection{Infrastructure Layer}
\label{Infrastructure Layer}
The Infrastructure layer is the underlying equipment, made up of a cluster of computers with high capacity in memory, storage and processors. This layer can be equipped with specialized hardware such as \emph{Graphical Processing Units (GPUs)} or \emph{Solid-State Drives (SSDs)}. GPUs are specialized processors capable of executing certain highly parallel tasks in a very short time. SSDs are drives that increase performance for read/write (I/O) operations.
\subsection{Storage Layer}
\label{Storage Layer}
The Storage layer is responsible for storing and managing large data with heterogeneous data types (e.g. images, time series and structured data) of varying size. Figure \ref{fig:Architecture} shows that this layer consists of two major components, keeping storage management separate for the data and for users' files. The first component is the \emph{Hadoop Distributed File System (HDFS)}. HDFS is the storage system used by Hadoop applications and provides high-throughput data access for large and heterogeneous data sets. Different types of data, such as structured, unstructured and image data, can be stored on HDFS and accessed by the components in the computation layer. In particular, clinical data such as medical images, time series from medical devices (e.g., ECGs and physiological monitoring devices), genomics and structured clinical records can be stored and processed for analysis.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{Figures/Diagram.pdf}
\caption{RCHE-HUB Layered Architecture: This layered architecture shows the integration of open-source cloud based technologies to provide a data-type agnostic, programming language agnostic, scalable and reproducible environment in a privacy-preserving manner suitable for highly sensitive data.}
\label{fig:Architecture}
\end{figure}
The second component is Ceph \cite{weil2006ceph} storage, which is used directly by the service layer. Services in a data processing environment are mostly stateful and hence need a place to store their intermediate data and files. For this purpose, we have used Ceph as the storage platform to provide persistent volumes for services' persistent data. In order to ease the administrative overhead, Ceph storage is managed automatically by a Rook \cite{rook} cluster, which is deployed in the service layer. We explain the Rook cluster in more detail in Section \ref{Service Layer}.
\subsection{Computation Layer}
\label{Computation Layer}
To utilize a cluster of computers and processors, Hadoop provides a framework that holds a collection of open-source software and tools. The computation layer holds the tools we use to support the distributed processing and computing for a large amount of data.
\emph{Hadoop Image Processing Interface (HIPI)} \cite{hipi} is an image processing library that can be used with Apache Hadoop MapReduce for parallel processing. HIPI helps to store a large collection of images on HDFS and makes them available for distributed processing. Furthermore, we can use HIPI with well-known open source image processing libraries such as OpenCV, which provides a variety of computer vision algorithms.
To store the temporal and waveform data, we use HBase (a non-relational database) that runs on top of HDFS. HBase stores data as key/value pairs in a columnar fashion and provides real-time read and write access with low latency. However, executing queries against HBase is not convenient for a relational data schema. To support SQL-like queries for a relational database schema (e.g., for electronic health records (EHR)) we use Apache Hive. Apache Hive is data warehouse software on top of Hadoop that provides SQL-like queries (i.e. HiveQL). Hive by default uses Hadoop MapReduce to execute queries and analyze the data. MapReduce performs the processing on disk, which is I/O-intensive and very slow. Hive can also use other distributed computation engines, such as Apache Spark. In order to improve performance and processing speed, we use Apache Spark on top of Hive. The biggest advantage of Spark over MapReduce is that it performs the processing in memory and in parallel using \emph{Resilient Distributed Datasets (RDDs)}. This improves performance due to low disk communication needs.
\subsection{Service Layer}
\label{Service Layer}
The Service layer provides a container based environment for scalability, self-healing, auto-managing and monitoring services running in containers (\emph{micro-services}). A popular open-source container technology is Docker\cite{docker}, which allows us to create, run and manage containers on a single operating system. However, for the case of a cluster of hosts, it is hard to keep track of all of the containers on different hosts. In such a scenario, we can leverage the open-source container orchestration platform Kubernetes\cite{kubernetes} to automate the management of the containerized applications across the cluster of hosts.
Kubernetes was originally developed to support stateless applications with no need for storage. In order to deploy applications that need to store their data in persistent storage (known as \emph{stateful} applications) on a Kubernetes cluster, we need storage that is available anytime and anywhere the containers can be deployed. Cloud providers offer their own storage services to provide persistent volumes for persistent data. Unfortunately, for on-premise systems with Kubernetes, we cannot rely on these convenient storage services. In order to address this issue we use the storage orchestrator Rook. Rook \cite{rook} is an open-source cloud-native (container based) storage orchestrator that takes advantage of the underlying container based environment for Kubernetes to facilitate managing, monitoring, scaling, provisioning, healing, and recovery of the storage services. Rook supports multiple storage solutions, such as Ceph, the \emph{Network File System (NFS)} and Cassandra, to be integrated with cloud-native environments. For a production environment the Ceph storage system is recommended, as it is more stable; most other solutions are still in alpha or beta versions. Ceph is a highly scalable distributed storage system that provides block storage, object storage, and shared file systems. We use block storage, which can be mounted to a single pod to store its persistent data, and the shared file system, which is shared between multiple pods.
Another issue to address for on-premise installations of Kubernetes is routing traffic into the cluster using load balancers. Public clouds such as GCP or AWS have convenient services for routing traffic to a Kubernetes cluster. However, most of the standard load balancers are only deployable on public cloud providers and are not supported for on-premise installation. Fortunately, MetalLB\cite{metallb} has been developed to address this issue. MetalLB is an on-premise load balancer that provides two different configurations, BGP-based and Layer-2-based load balancing. MetalLB is mostly responsible for distributing the network or application traffic across multiple servers to increase the capacity and reliability of the applications.
Such a scalable container-based environment that handles storage and service load balancing is ready to deploy services and applications in a reproducible manner. One of the primary services we provide for researchers in our platform is JupyterHub\cite{jupyterhub}, an environment for developing applications to analyze and process data. JupyterHub is an open source, multi-user, web-based programming interface that supports multiple programming-language kernels such as Python, R and Scala. Deploying JupyterHub on the Kubernetes cluster makes it manageable and scalable, and allows a group of users to share single-user JupyterLab servers. Every JupyterLab instance is deployed inside a Docker container in our Kubernetes cluster with two different storage spaces: a block storage volume used as a local directory, and a shared file system shared between all JupyterLab instances. In addition, containerization helps to provide reproducible applications which can be shared easily among researchers.
\section{Introduction}
\label{sec:intro}
Due to the sheer amount of bandwidth available in millimeter-wave
(mmwave) bands, they are becoming an increasingly attractive choice
for various types of wireless networks. They are already included in
the standardization efforts for 5G \cite{zorzi-tutorial, 3gpp-tech},
and, as such, they are set to become an essential resource for
5G-enabled vehicular networks. Indeed, the increasing need for
on-board high-definition maps, their real-time updates, as well as the
data generated by the vehicles themselves, make connected cars
prime consumers and producers of network traffic, which can be catered to only by a substantial amount of bandwidth, readily available in mmwaves.
However, due to the high operating frequency, these bands are known to experience harsh propagation conditions and are highly susceptible to blockages, which had rendered them unusable until very recently \cite{rappaport-chanmodels}. Advances in large smart antenna systems, composed of many antenna elements, can overcome these problems by establishing communication links over narrow beams with high beamforming gains. In addition, these advanced systems can support several simultaneous beams of varying width \cite{kulkarni-hybridbf}.
The fact that the communications are conducted over highly directed
beams
introduces both challenges and opportunities. Narrow beams, when efficiently used, can
ensure a high degree of isolation from interference due to other
ongoing communications, but they also
significantly curb coverage and range of the mmwave base stations
(gNBs). This means that beam management aspects in future networks
will be no small feat. For this reason, it is not expected that
mmwave service will be deployed in a stand-alone fashion, but rather
in tandem with networks operating in sub-6 GHz frequencies, to
alleviate shortcomings, especially during initial access and link establishment \cite{zorzi-lte}.
A significant body of work has addressed this issue,
either in the context of initial access and link establishment, or
beam alignment/configuration and user association.
In \cite{perfecto-v2v} the authors present a framework that combines matching
theory and swarm intelligence to dynamically and efficiently
perform user association and beam alignment in a vehicle-to-vehicle communication network.
Methods aided by location information have been proposed in \cite{location-allerton}, as have methods which use information from road-side and vehicle sensors \cite {discover-5g, arxivali2019}.
In our own previous work \cite{noi-wowmom19}, a method leveraging traffic regulating signals was proposed to alleviate the need for real-time beam realignment.
An optimal mutli-user,
non-interactive beam alignment approach is proposed in
\cite{optimal-niba}, which however focuses on a single-cell network.
In this work we
take a novel graph-based approach to the beam management problem. In
particular, we address it by casting the joint beam design and user
association task as a {\em conflict-aware} matching problem in a
weighted bipartite graph, with the goal of ensuring broad coverage
while maximizing the network data rate.
Few works in the literature have
applied graph theory in general, to address beam management in mmwaves
\cite{mmwave-graph-icc, mmwave-graph-tvt}.
In both \cite{mmwave-graph-icc, mmwave-graph-tvt},
the authors propose a graph-based approach to reduce the inter-cell
interference, whereby each mmwave cell/link is modelled as a vertex and the
edges between them represent the mutual interference caused. The goal
is to find a subgraph that minimizes the number of beam
collisions.
To summarize, in this work we make the following main contributions:
\begin{itemize}
\item we formulate the beam design problem, i.e., the joint selection
of the number, width and direction of beams, as an
optimization problem, aiming at maximizing the rate of the covered
users, while respecting all practical constraints;
\item we develop a graph-based model of the mmwave system, which
captures the most essential features. The optimization problem is
thus turned into a problem of {\em bipartite weighted matching with
conflicts}, which can be solved in linear time using heuristic algorithms. In particular, the introduction of {\em{conflicts}} within the graph model enables us to accommodate the practical constraints of mmwave communications;
\item we evaluate our approach leveraging a large-scale
trace, including the real-world urban topology and realistic
vehicular mobility of Luxembourg City, Luxembourg, and compare it against a state-of-the-art cluster-based beam design approach.
Our results show that the proposed scheme is able to provide better performance, in particular thanks to its ability to accommodate practical constraints in the solution mechanism.
\end{itemize}
Unlike previous work, which mainly addressed a single parameter at a
time, our approach jointly selects, for each gNB, three beam-design
parameters: the number of simultaneous beams, their width, and their
direction.
Note that, while weighted bipartite matching {\em without} conflicts
is a fairly well-studied problem, the
same problem {\em with conflicts} has been much less
explored and applied.
The remainder of the paper is organized as follows. After
detailing the system
model and our vehicular network in Sec.~\ref{sec:system-trace},
we formulate the optimization problem and introduce our graph-based
approach in Sec.~\ref{sec:problem}.
In Sec.~\ref{sec:results},
through extensive simulations using real-world vehicular traces,
we present the performance evaluation. Finally, Sec.~\ref{sec:conclusion}
concludes the paper.
\section{The mmwave vehicular network}
\label{sec:system-trace}
We consider a realistic vehicular network in
an urban setting, based on real-world publicly available data for the
city of Luxembourg \cite{lust}. This data contains sufficient
information about the topology of the city, the road layout (e.g.,
regulated intersections), as well as the mobility traces of around
14,000 vehicles traveling within the city center, accumulated over a
12-hour window. Based on this data, we construct a scenario as
depicted in Fig.~\ref{fig:scenario}, in which a set of gNBs, denoted by $\mathcal{G}$, are colocated with traffic lights to serve a set of
vehicles, i.e., the mmwave {\em{users}}, denoted by $\mathcal{V}$.
gNBs and users are equipped with uniform planar array (UPA) antennas, composed of a grid of $N_t$ and $N_r$ antenna elements, respectively,
spaced by $\lambda/2$. Array antennas at the gNB can support up to $N$
beams simultaneously, limited by the number of available RF chains, while vehicles can use only one beam at a time.
For the mmwave communication between gNBs and users to be successful,
the beams need to be fully aligned at both the transmitting and
receiving end, or in the case of no line of sight (nLoS), directed in such a manner that the angle of arrival of the incoming waves coincides with the direction of the receiving beam. Moreover, the directivity and gain of a beam are inversely proportional to the width of the beam, i.e., the narrower the beam, the higher the gain. The number and width of the beams determine the range and coverage of the gNB, while the direction of the beam ensures alignment with the receiving beam, and isolation from interfering signals. It is clear therefore that the number, width, and directions of the beams are critical aspects that need to be addressed in a mmwave vehicular network.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{lux_scenario.png}
\caption{\label{fig:scenario} Real-world scenario: Luxembourg city
center. The red circles represent the locations of the traffic
lights, i.e., of the gNBs.}
\end{figure}
In this paper, we focus on downlink communications, although the work
can easily be extended to the uplink direction as well.
Furthermore, multiple vehicles can be multiplexed within the same beam, using multiple access techniques.
Coordinated multi point transmission (CoMP) is also supported, i.e., we assume that a vehicle can receive data through its single beam from several gNBs.
Finally,
to make the model more tractable, we divide the network area under
consideration into equal-sized square zones. We denote the set of
zones by $\mathcal{Z}$ and make them sufficiently small so that their size is negligible with respect to the footprint of any
beam. It follows that we can
consider that all vehicles within a specific zone experience the same
propagation and LoS conditions with respect to the surrounding gNBs.
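
A minimal sketch of this discretisation follows (plain Python; the
zone size is an assumed parameter):
\begin{verbatim}
import math

ZONE_SIZE = 10.0  # metres; illustrative value

def zone_of(x, y):
    # Map a planar position to the index of its square zone.
    return (int(math.floor(x / ZONE_SIZE)),
            int(math.floor(y / ZONE_SIZE)))
\end{verbatim}
Vehicles mapping to the same index share the zone's propagation and
LoS conditions in the model.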
\section{Beam matching and user association}
\label{sec:problem}
We now focus on the main aspects of the mmwave vehicular network we
address and present our approach to overcome the existing hurdles. In
particular, Sec.\,\ref{sub:opt} formally formulates the optimization
problem for the user-gNB association, stating its objective and
constraints. Then
Sec.\,\ref{sub:graph} introduces our graph-based
model with constraints and a heuristic algorithm that effectively
solves the problem in linear time.
\subsection{The optimization problem\label{sub:opt}}
Given the set of gNBs $\mathcal{G}$, $N$, the number of supported beams at
each gNB, and the set of zones $\mathcal{Z}$, our aim is to jointly address
the two following questions while maximizing the overall achieved
network rate: i) what beam design should each gNB employ, i.e., how many beams, of what width, and direction; and ii) which zones should be associated to which gNB and scheduled on which beam.
To this end, we formulate an optimization problem that needs to be
solved periodically at every time step $k\in\mathcal{K}$. The solution is
then provided to the set of gNBs, which update their beam design
accordingly.
Since the problem formulation holds at every time step, to simplify
the notation, in the following we drop the time index $k$.
Let us first define a set of beams $\mathcal{B}$ available at every gNB.
Each beam $b\in\mathcal{B}$ is defined by a direction $\delta_b$ and
half-power beamwidth $\alpha_b$.
If we consider a finite set of possible directions $D$ and a finite
set of possible beamwidths $A$, then the set of beams, $\mathcal{B}$, at the gNB would contain $|D||A|$ potential beams.
Let $\pi(g,b)$ be the binary variable indicating
whether beam $b$ at gNB $g$ is employed or not, and let
$\gamma(g,b,z)$ be the binary variable indicating that zone $z$ is
associated with gNB $g$ on beam $b$.
To assess whether a beam $b$ at gNB $g$ can cover a zone $z$, we
proceed as follows.
First, for each zone $z$, we derive geometrically the LoS direction from a
gNB $g$, i.e., the angle of departure, denoted as
$\theta(g,z)$\footnote{All directions are defined in reference to a
global coordinate system.}.
We then denote with $\mathcal{C}_z$ the set of all $(g,b)$ tuples
covering zone $z$, i.e., all tuples for which $\pi(g,b)=1$ and which
fulfil the condition\footnote{Recall that a zone size is negligible
with respect to any beam footprint.} $|\theta(g,z)-\delta_b|\leq\frac{\alpha_b}{2}$.
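
As an implementation note, the membership test for $\mathcal{C}_z$
reduces to a simple angular check. The sketch below (plain Python;
function names are ours) also handles the wrap-around at
$360^{\circ}$, which the absolute-difference notation above leaves
implicit:
\begin{verbatim}
def angular_distance(a, b):
    # Shortest angular distance between two directions in degrees.
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def covers(theta_gz, delta_b, alpha_b):
    # Beam (direction delta_b, width alpha_b) covers zone z when
    # the LoS angle theta(g, z) lies within half a beamwidth.
    return angular_distance(theta_gz, delta_b) <= alpha_b / 2.0
\end{verbatim}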
The optimization problem is then defined as follows:
\begin{equation}\label{eq:opt-obj}
\max_{\bm{\pi},\bm{\gamma}}\sum_{g\in\mathcal{G}}\sum_{b\in\mathcal{B}}\sum_{z\in\mathcal{Z}} \pi(g,b)\gamma(g,b,z)R(g,b,z)
\end{equation}
where $R(g,b,z)$ is the achievable rate at zone $z$ from gNB $g$ on beam $b$, given the set of indicator variables $\bm{\pi}$ and $\bm{\gamma}$.
The achievable rate at zone $z$ is given by the following expression:
\begin{multline}\label{eq:rate}
R(g,b,z)\mathord{=}W\sum_{v\in\mathcal{V}}\log_2\left(1 +\frac{P(g,b)\left|\tilde{h}(g,b,v)\right|^2}{N_0+I_v}\right)
\end{multline}
where $W$ is the system bandwidth, while the second term within the
logarithmic function is the signal-to-interference-and-noise (SINR)
ratio.
In the numerator, $\tilde{h}(g,b,v)$ represents the channel gain between gNB $g$ and
a vehicle $v$ in zone $z$ on beam $b$, while $P(g,b)$ is the
power allocated to beam $b$ by $g$. At the denominator,
$N_0$ represents the white noise power, while $I_v$ represents the
interference experienced by the vehicle in the zone from all other
active $(g',b')$ tuples in $\mathcal{C}_z$ to which $z$ is not associated
with.
For any vehicle $v$ in zone $z$, $I_v$ can be expressed as:
\begin{equation}
I_v\mathord{=}\sum_{(g',b')\in \mathcal{C}_z}[1-\gamma(g',b',z)]\pi(g',b')P(g',b')|\tilde{h}(g',b',v)|^2\,.
\end{equation}
The channel gains $\tilde{h}(g,b,v)$, which account for propagation losses and the beamforming gains, are derived according to \cite{zorzi-channel}.
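
A minimal numerical sketch of the per-vehicle term inside \Eq{rate}
follows (plain Python; all inputs are linear-scale placeholders rather
than gains derived from \cite{zorzi-channel}):
\begin{verbatim}
import math

W = 400e6  # system bandwidth [Hz]

def vehicle_rate(p_gb, h2, n0, interferers):
    # interferers: (power, |h|^2) pairs from the active (g', b')
    # tuples in C_z to which zone z is not associated.
    i_v = sum(p * g2 for p, g2 in interferers)
    return W * math.log2(1.0 + p_gb * h2 / (n0 + i_v))

# Example with placeholder values: one interfering tuple.
print(vehicle_rate(1.0, 1e-9, 1e-12, [(1.0, 1e-12)]))
\end{verbatim}
The rate of a zone in \Eq{rate} is then the sum of this quantity over
the vehicles it contains.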
Next, we define the constraints characterizing the system under
study. First, we limit the number of simultaneous beams that can be used by the gNB:
\begin{equation}
\sum_{b\in\mathcal{B}}\pi(g,b)\leq N, \forall g\in \mathcal{G} \,.
\end{equation}
gNBs must also adhere to a power budget $P^t(g)$ and ensure that power is not allocated to unused beams, namely:
\begin{equation}
\sum_{b\in\mathcal{B}}P(g,b)\leq P^t(g), \forall g\in \mathcal{G}\,.
\end{equation}
\begin{equation}
P(g,b)\leq \pi(g,b) P^t(g), \forall g\in \mathcal{G}, b\in\mathcal{B} \,.
\end{equation}
Finally, we must ensure that beams do not overlap with each other, i.e., for every two beams $b_i, b_j$ at $g$, for which $\pi(g,b)=1$, the following condition must hold:
\begin{equation}
|\delta_{b_i}-\delta_{b_j}|\geq\frac{\alpha_{b_i}+\alpha_{b_j}}{2}\,.
\end{equation}
We then impose constraints on the receiving end of the
communication.
First, we ensure that no zone $z$ is associated with a $(g,b)$ tuple that
cannot cover that zone:
\begin{equation}
\sum_{(g,b)\notin \mathcal{C}_z}\gamma(g,b,z)\leq 0, \forall z\in\mathcal{Z}\,,
\end{equation}
and that no zone $z$ is scheduled on an inactive beam:
\begin{equation}
\gamma(g,b,z)\leq\pi(g,b), \forall g\in\mathcal{G},b\in\mathcal{B}, z\in\mathcal{Z}\,.
\end{equation}
In addition,
for CoMP-like communications, in which several gNBs coordinate to
transmit the same data to a certain zone, we impose that:
\begin{equation}
\sum_{(g,b)\in \mathcal{C}_z}\gamma(g,b,z)\leq L, \forall z\in\mathcal{Z},
\end{equation}
where $L$ is the maximum number of gNBs that can partake in the
coordinated transmission.
Clearly, when no CoMP is enabled, $L=1$.
The problem contains nonlinear equations, e.g., \Eq{rate}, as well as
integer variables, namely,~$\pi$ and~$\gamma$; it therefore falls into
the category of nonlinear integer
programming~\cite{hemmecke2010nonlinear}. Such problems are more complex
than mixed-integer linear programming (MILP) problems, which are
themselves known to be NP-hard~\cite{boyd}.
While there are algorithms that do solve MILP problems to the
optimum,
solution strategies for non-linear integer problems only find
local optima in the general case.
\subsection{A graph-based approach\label{sub:graph}}
To overcome the complexity of the above problem, we proceed as
follows.
First, we develop a graph-based
model of the system that captures all the essential aspects of the
mmwave vehicular networks. Second, we leverage such a model to devise
an effective,
linear-complexity heuristic.
\begin{figure}
\centering
\includegraphics[width=.9\columnwidth]{img/bipartite3.png}
\caption{\label{fig:bipartite}
Example of bipartite graph. Left-hand, blue vertices represent beam steering decisions; right-hand, orange vertices represent zones. Green edges represent coverage opportunities, and their weights represent the corresponding rate. Red conflict edges join mutually-exclusive options.
}
\end{figure}
As already mentioned, an effective beam design requires to
{\em{jointly}} set (i)
the number of beams of each gNB,
(ii) their width, and
(iii) their direction,
while respecting the system constraints.
Making these decisions sequentially, e.g., first deciding the width of beams and then their directions, is possible but likely to result in suboptimal solutions. On the other hand, making them jointly requires recognizing and accounting for the {\em conflicts} between decisions, i.e., the fact that several options are mutually exclusive and taking one renders the others invalid.
To this end, we first cast the task of beam design and user association into a problem of {\em bipartite weighted matching with conflicts}. Specifically, we build a bipartite graph similar to the one in \Fig{bipartite}, where:
\begin{itemize}
\item left-hand side vertices represent $(g,b)$ tuples;
\item right-hand side vertices represent zones;
\item edges between left- and right-hand side vertices represent
the fact that a certain beam covers a certain zone, i.e., that
$(g,b)\in \mathcal{C}_z$;
\item the edge weights correspond to the achievable rate $R(g,b,z)$ between left-hand vertex $(g,b)$ and right-hand vertex $z$;
\item conflict edges, drawn between left-hand side vertices, denote combinations that are mutually exclusive.
\end{itemize}
With reference to the example in \Fig{bipartite}, we can observe that
conflict edges are drawn between vertices corresponding to the same
gNB, with incompatible beam choices, e.g., combinations of decisions
that would result in beams of the same gNB overlapping. To make the
matching problem tractable, the weights of the edges, i.e., the
achievable rates, are calculated by taking into account only the
noise-limited rate, which is a fair assumption considering that mmwave networks have in general very limited interference. The selection of an edge between the left-hand and
right-hand vertices corresponds to setting both binary variables $\pi(g,b)=\gamma(g,b,z)=1$.
Although the problem of weighted bipartite matching {\em with
conflicts} is NP-hard~\cite{chen2016conflict}, some heuristic
algorithms are available in the literature, which can be used for
an efficient and effective solution of the beam design problem.
Specifically, we leverage the algorithm presented in~\cite[Sec.~4.3]{chen2016conflict}, which operates in linear time, at the cost of a linear competitive ratio.
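
For illustration, a greedy variant in the spirit of that heuristic can
be sketched as follows (plain Python; it assumes $L=1$, i.e., no CoMP,
and is not the exact algorithm of \cite{chen2016conflict}):
\begin{verbatim}
def greedy_match(edges, conflicts, n_beams=4):
    # edges: (rate, (g, b), z) triples; conflicts: set of
    # frozensets of mutually exclusive (g, b) choices.
    chosen, active, taken, beams = [], set(), set(), {}
    for rate, gb, z in sorted(edges, key=lambda e: e[0],
                              reverse=True):
        if z in taken:
            continue
        if gb not in active:
            g = gb[0]
            if beams.get(g, 0) >= n_beams:
                continue
            if any(frozenset((gb, o)) in conflicts
                   for o in active):
                continue
            active.add(gb)
            beams[g] = beams.get(g, 0) + 1
        chosen.append((gb, z))
        taken.add(z)
    return chosen
\end{verbatim}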
\section{Numerical results}
\label{sec:results}
The performance of our approach is evaluated through numerical
simulations of an urban vehicular network constructed through the
publicly available traces for the city of Luxembourg as described in
Sec.~\ref{sec:system-trace}.
We limit our scenario to a 4\,km$^2$ area of the city, covering most
of its centre,
in which 51 gNBs are distributed as depicted in
Fig.~\ref{fig:scenario}. The system parameters are as follows. The
central frequency is set at $f_c=76$\,GHz \cite{comm-radar}, while the
bandwidth is set at $W=400$\,MHz as foreseen in 5G networks
\cite{zorzi-tutorial}. All gNBs are equipped with $32\times32$ UPA, with antenna elements spaced by $\lambda/2$, that
can transmit up to $N=4$ beams simultaneously. The transmit power is
limited to $P^t=30$\,dBm for any gNB. The vehicles are equipped with
$8\times8$ UPA, and can receive on a single beam at a time, which is
always directed towards the associated gNB. The composite effect of
channel and beamforming gains is modeled in accordance with
\cite{zorzi-channel}, as are the LoS and outage probabilities, which are tailored to the Luxembourg
scenario.
Furthermore, we consider three supported beamwidths $A=\{5^{\circ},
10^{\circ}, 15^{\circ}\}$, while beam directions can be any integer
number between $0^{\circ}$ and $359^{\circ}$.
We compare the performance of the proposed approach against a
clustering-based technique, using the low-complexity but efficient
DBSCAN algorithm \cite{dbscan} to generate the number and directions
of the beams. This is used as a benchmark approach. It should be noted
that DBSCAN cannot determine the width of the beams;
therefore, we try all possible values and take the one resulting in the best performance.
Both algorithms are executed periodically
every 1~s, and the total simulation duration is 20~s. The underlying
allocation of resources is performed using the Proportional-Fair
scheduling algorithm. In the following plots, CAWBM denotes the
proposed conflict-aware weighted bipartite matching approach.
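
For reference, the beam-generation step of the benchmark can be
sketched as follows (Python with scikit-learn; the \verb|eps| and
\verb|min_samples| values are illustrative, and the sketch ignores
angular wrap-around for simplicity):
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

# LoS angles (degrees) of the vehicles around one gNB.
angles = np.array([[12.0], [14.5], [15.2], [200.0], [203.1]])
labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(angles)
for k in set(labels) - {-1}:  # -1 marks noise points
    print("beam direction: %.1f deg" % angles[labels == k].mean())
\end{verbatim}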
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{img/total_data_users}
\caption{\label{fig:totaldata} Total amount of data downloaded [Tb] and number of users served [$10^3$]. }
\end{figure}
We first look at the performance of both approaches in absolute terms, by evaluating the total number of users served over the 20-s period and the amount of data downloaded, as shown in \Fig{totaldata}. CAWBM serves around 35\% (roughly 1.4~Tb) more data than DBSCAN, and around 7\% more users. In particular, CAWBM serves 92\% of all vehicles in the network, while DBSCAN serves around 85\% of them.
\begin{figure}
\centering
\includegraphics[width=.23\textwidth]{img/sinr_cdf}
\includegraphics[width=.23\textwidth]{img/rate_cdf}
\caption{CDF of average SINR experienced by vehicles (left) and effective rate experienced by served vehicles (right).
\label{fig:rate}
}
\end{figure}
Next, we look at the cumulative distribution function (CDF) of the
experienced average SINR and data rates achieved by the
vehicles. During the simulations, the SINR was calculated taking into
account both the noise and interference, and then mapped into the data
rate by using the 4-bit channel quality indicator (CQI) table in
\cite{3gpp-tech}. It is interesting to note that while DBSCAN can
ensure slightly better SINR values for the top 40\% of the users, as
seen in \Fig{rate}(left), this behavior is not reflected in the achieved rates by the vehicles, shown in \Fig{rate}(right).
This can be explained by the fact that the CAWBM approach serves both more vehicles per beam (as shown in \Fig{beamtime}(left)) and for a longer period of time (\Fig{beamtime}(right)).
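
For reference, the SINR-to-rate mapping used in the evaluation can be
sketched as follows (plain Python; the spectral efficiencies follow
the 3GPP 4-bit CQI table, while the SINR thresholds are assumed here
purely for illustration):
\begin{verbatim}
# (assumed SINR threshold [dB], spectral efficiency [bit/s/Hz])
CQI = [(-6.7, 0.1523), (-4.7, 0.2344), (-2.3, 0.3770),
       (0.2, 0.6016),  (2.4, 0.8770),  (4.3, 1.1758),
       (5.9, 1.4766),  (8.1, 1.9141),  (10.3, 2.4063),
       (11.7, 2.7305), (14.1, 3.3223), (16.3, 3.9023),
       (18.7, 4.5234), (21.0, 5.1152), (22.7, 5.5547)]

def rate_from_sinr(sinr_db, bandwidth_hz=400e6):
    eff = 0.0
    for thr, e in CQI:
        if sinr_db >= thr:
            eff = e
    return bandwidth_hz * eff
\end{verbatim}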
\begin{figure}
\centering
\includegraphics[width=.23\textwidth]{img/user_served_by_beam.png}
\includegraphics[width=.23\textwidth]{img/timeserved.png}
\caption{CDF of number of users served by each beam (left) and CDF of amount of time each vehicle is served (right).
\label{fig:beamtime}
}
\end{figure}
The improvement in performance brought by CAWBM when compared to
DBSCAN can be attributed to the fact that the former takes into
account the rates of all potential vehicles within the small area of
the zone. DBSCAN, on the other hand, acts only on the information
regarding the vehicle relative LoS direction towards the gNBs. In
addition, CAWBM is likely to favor beams that cover zones that are
both more frequented and well positioned to experience higher levels
of SINR. This is the reason why over 50\% of the vehicles are served
for more than 5~s with CAWBM, while, under DBSCAN, the corresponding
percentage of vehicles is just 15\%.
\section{Conclusions and future work}
\label{sec:conclusion}
While mmWave communications have emerged as a promising candidate technology
for future vehicular networks, the performance of mmWave networks heavily depends upon beam management aspects.
Adequate alignment of the beams between gNBs and vehicles is critical, and, as such,
efficient beam design becomes paramount. We addressed both beam design
and user association through a graph-based approach. Once we
modeled our system as a weighted bipartite graph, we were able to
cast the problem at hand as a conflict-aware matching problem, which
can be solved efficiently, in linear time, through heuristic algorithms. Our performance evaluation, based on real-world
topology and mobility information, has provided relevant
insights. Thanks to the conflict-aware approach, the solution we
proposed significantly outperforms our benchmark scheme leveraging a clustering algorithm.
Future work will focus on further improving the mmWave graph
model, and further investigating the interaction between gNBs during
the beam design phase.
\section*{Acknowledgement}
This work has been performed in the framework of the European Union’s
Horizon 2020 project 5G-CARMEN co-funded by the EU under grant
agreement No.\, 825012, and has also been partially supported by the Academy of Arts and
Sciences of Kosovo.
\makeatletter
\def\section{\@startsection{section}{1}%
  \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
  {\normalfont\bfseries\centering}}
\def\@secnumfont{\bfseries}
\makeatother
\setlength{\textheight}{19.5 cm}
\setlength{\textwidth}{12.5 cm}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\theoremstyle{definition}
\newtheorem{definition}[thm]{Definition}
\newtheorem{example}{Example}
\newtheorem{exercise}[thm]{Exercise}
\theoremstyle{remark}
\newtheorem{rem}[thm]{Remark}
\numberwithin{equation}{section}
\def\title#1{{\Large\bf \begin{center} #1 \vspace{0pt} \end{center} } }
\def\authors#1{{\large\bf \begin{center} #1 \vspace{0pt} \end{center} } }
\def\university#1{{\sl \begin{center} #1 \vspace{0pt} \end{center} } }
\def\inst#1{\unskip $^{#1}$}
\newcommand{\vertiii}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert
#1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}}
\usepackage{amsfonts}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\begin{document}
\title{First order sensitivity analysis of symplectic eigenvalues}
\bigskip
\authors{Hemant Kumar Mishra}
\smallskip
\university{Indian Statistical Institute, New Delhi 110016,
India\\ hemant16r@isid.ac.in}
\date{\today}
\begin{abstract}
\sloppy
For every $2n \times 2n$ positive definite matrix $A$ there are $n$ positive numbers $d_1(A) \leq \ldots \leq d_n(A)$ associated with $A$ called the symplectic eigenvalues of $A.$
It is known that $d_m$ are continuous functions of $A$ but are not differentiable in general.
In this paper, we show that the directional derivative of $d_m$ exists and derive its expression.
We also discuss various subdifferential properties of $d_m$ such as Clarke and Michel-Penot subdifferentials.
\end{abstract}
\makeatletter
\@setabstract
\makeatother
\vskip0.3in
\footnotetext{\noindent {\bf AMS Subject Classifications:} 15A18, 15A48, 26B05, 26B27.}
\footnotetext{\noindent {\bf Keywords : } Positive definite matrix, symplectic eigenvalue, Fenchel subdifferential, Clarke subdifferential, Michel-Penot subdifferential, directional derivative.}
\section{Introduction}
\sloppy
Let $\mathbb{S}(n)$ be the space of $n \times n$ real symmetric matrices with the usual inner product defined by $\langle A, B \rangle = \text {\rm tr} AB,$ for all $A,B$ in $ \mathbb{S}(n).$
Let $\mathbb{P}(2n)$ be the subset of $\mathbb{S}(2n)$ consisting of the positive definite matrices.
Denote by $J$ the $2n\times 2n$ matrix
\begin{equation*}
J=\begin{pmatrix}O & I_n\\
-I_n & O\end{pmatrix},
\end{equation*}
where $I_n$ is the $n\times n$ identity matrix.
A $2n\times 2n$ real matrix $M$ is called a {\it symplectic matrix} if
$$M^TJM=J.$$
Williamson's theorem \cite{dms, will} states that for every element $A$ in $ \mathbb{P}(2n)$ there exists a symplectic matrix $M$ such that
\begin{equation} \label{2eqn31}
M^TAM=\begin{pmatrix}D & O\\
O & D\end{pmatrix},
\end{equation}
where $D$ is an $n\times n$ positive diagonal matrix with diagonal elements $d_1(A) \le \cdots \le d_n(A).$
The diagonal entries of $D$ are known as symplectic eigenvalues (Williamson parameters) of $A.$
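For instance, when $n=1$ every symplectic matrix has determinant one, so taking determinants in $(\ref{2eqn31})$ shows that $d_1(A) = \sqrt{\det A}$ for every $A$ in $\mathbb{P}(2).$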
Symplectic eigenvalues occur in various fields of mathematics and physics.
In quantum information theory, von Neumann entropy is determined by the symplectic eigenvalues of a generic covariance matrix \cite{arl, demarie, p, sis}.
See also \cite{sanders, koenig, safranek}.
\sloppy
Symplectic eigenvalues have attracted much interest from mathematicians and physicists in the past few years due to their importance in many areas such as
symplectic topology \cite{hofer}, quantum mechanics \cite{degosson} and Hamiltonian mechanics \cite{ar, nss}.
Some interesting work has been done on symplectic eigenvalues recently.
Various inequalities about these numbers, some variational principles, a perturbation theorem,
and some inequalities between symplectic eigenvalues and ordinary eigenvalues are obtained in \cite{bj}. In a more recent work \cite{jm},
many results on the differentiability and analyticity of symplectic eigenvalues, as well as some inequalities about these numbers involving two matrices, are obtained.
It is known that the ordered symplectic eigenvalue maps $d_1, \ldots, d_n$ are continuous but not differentiable in general.
This is illustrated in \cite[Example 1]{jm}.
Our goal is to further investigate these maps.
In this paper, we show that the first order directional derivatives of these maps exist, and compute the expression of their directional derivatives.
We also discuss some subdifferential properties of $d_1, \ldots, d_n,$ namely, Fenchel subdifferential, Clarke subdifferential, and Michel-Penot subdifferential.
Subdifferentials are useful in the field of optimization and non-smooth analysis.
They provide various characterisations of optimality conditions such as local minimizer, local sharp minimizer, local blunt minimizer \cite{penot, borwein, roshchina, zalinescu}.
In numerical methods, subdifferentials are useful in minimizing local Lipschitzian functions \cite{bagirov}.
The class of convex functions enjoys many useful and interesting differential properties and they are widely studied \cite{borwein}.
A positive number $d$ is a symplectic eigenvalue of $A$ if and only if $\pm d$ are eigenvalues of the Hermitian matrix $\iota A^{1/2}J A^{1/2}$ \cite[Lemma 2.2]{jm}.
Eigenvalues of Hermitian matrices have rich theory, and have been studied for a long time.
Therefore it seems natural to study the properties of symplectic eigenvalues by applying the well-developed theory of eigenvalues.
But it is difficult to obtain results on symplectic eigenvalues by the direct approach due to the complicated form of $\iota A^{1/2}J A^{1/2}.$
For instance, the map $A \mapsto \iota A^{1/2} J A^{1/2}$ from $\mathbb{P}(2n)$ to the space of $2n \times 2n$ Hermitian matrices is neither convex nor concave (see Example \ref{2ex1}).
Therefore it is not apparent whether the sums of eigenvalues of $\iota A^{1/2}JA^{1/2}$ are convex or concave functions of $A.$
Our methods make use of independent theory for symplectic eigenvalues developed in \cite{bj, jm}.
Eigenvalue maps of Hermitian matrices can be written as difference of convex functions using Ky Fan's extremal characterisation \cite[Theorem 1]{fan} of sum of eigenvalues of Hermitian matrices.
It is this property of eigenvalues that plays a key role in the study of their various subdifferential and directional derivative properties \cite{hiriart1999, hiriart1995, torki}.
$\text{Theorem } 5$ of \cite{bj} gives an extremal characterisation of sum of symplectic eigenvalues,
and this enables us to write symplectic eigenvalue maps as difference of convex maps.
This characterisation plays a key role in our paper.
The work by Hiriart-Urruty et al. \cite{hiriart1999, hiriart1995} for eigenvalues was the motivation for our present work.
\begin{example} \label{2ex1}
Let $\Phi(A) = \iota A^{1/2} J A^{1/2}$ for all $A$ in $\mathbb{P}(2n).$
Let $I$ be the $2 \times 2$ identity matrix and
\[ A = \begin{pmatrix}
1 & 0 \\
0 & 4
\end{pmatrix}. \]
We have $\Phi(I)= \iota J,$
$$ \Phi(A) = \iota \begin{pmatrix}
0 & 2 \\
- 2 & 0
\end{pmatrix},$$
and
$$ \Phi\left( \frac{I+A}{2} \right) = \frac{\iota}{2} \begin{pmatrix}
0 & \sqrt{10} \\
- \sqrt{10} & 0
\end{pmatrix}.$$
This gives
$$ \Phi\left( \frac{I+A}{2} \right) - \frac{1}{2} ( \Phi(I) + \Phi(A) ) = \frac{\iota}{2} \begin{pmatrix}
0 & \sqrt{10}- 3 \\
-(\sqrt{10}- 3) & 0
\end{pmatrix}.$$
Here $\Phi\left( \frac{I+A}{2} \right) - \frac{1}{2} ( \Phi(I) + \Phi(A) )$ is neither negative semidefinite nor positive semidefinite.
Therefore, $\Phi$ is neither convex nor concave.
\end{example}
The paper is organized as follows:
In $\text{Section } \ref{2prel},$ we recall the definitions of $\text{Fenchel } (\ref{2eqn25}),$ $\text{Clarke } (\ref{2eqn23})$ and $\text{Michel-Penot } (\ref{2eqn24})$ subdifferentials
and derivatives, and some of their properties.
We also discuss some basic properties of symplectic eigenvalues that are useful later in the paper.
In $\text{Section } \ref{2fensub},$ for every positive integer $m \leq n,$
we introduce a map $\sigma_m: \mathbb{S}(2n) \to (-\infty, \infty]$
such that $\sigma_m(A) = -2 \sum_{j=1}^{m} d_j(A)$ for all $A \in \mathbb{P}(2n).$
We calculate the Fenchel subdifferential of $\sigma_m$ $(\text{Theorem } \ref{2mainthm1}).$
We also derive the expressions for the directional derivatives of $\sigma_m$ $(\text{Theorem } \ref{2thm4})$ and $d_m$ $(\text{Theorem } \ref{2thm1}).$
In $\text{Section } \ref{2mpsub},$ we find the Clarke and Michel-Penot subdifferentials of $-d_m.$
We show that these subdifferentials coincide at $A,$ and are independent of the choices of $m$ corresponding to equal symplectic eigenvalues of $A.$
As an application of the Clarke and Michel-Penot subdifferentials, we give an alternate proof of the monotonicity property of symplectic eigenvalues $(\text{Corollary } \ref{2cor3}).$
\section{Preliminaries} \label{2prel}
In this section, we recall the definitions and some properties of three kinds of subdifferentials, and discuss some simple properties of symplectic eigenvalues.
The notion of various subdifferentials and directional derivatives exist for more general spaces.
For our present work we will only be discussing subdifferentials of maps on the space of symmetric matrices.
Let $\mathcal{O}$ be an open subset of $\mathbb{S}(n)$ and $A$ be an element of $ \mathcal{O}.$
Let $f: \mathbb{S}(n) \rightarrow (-\infty, \infty]$ be a function such that $f(\mathcal{O}) \subseteq \mathbb{R}.$
For $H$ in $ \mathbb{S}(n),$ if the limit
$$\lim_{t \to 0^+} \dfrac{f(A+tH)-f(A)}{t}, $$
exists in $\mathbb{R}$, we say that $f^{\prime} (A;H)$ is defined and is equal to the limit.
We say that $f$ is directionally differentiable at $A$ if $f^{\prime} (A;H)$ exists for all $H$ in $ \mathbb{S}(n).$
In this case, we call the map $f^{\prime} (A; \cdot): \mathbb{S}(n) \to \mathbb{R}$ the directional derivative of $f$ at $A.$
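For example, for $n=1$ the map $f(A) = |A|$ on $\mathbb{S}(1) = \mathbb{R}$ is directionally differentiable at $A = 0$ with $f^{\prime}(0;H) = |H|,$ even though $f$ is not differentiable there.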
The {\it Fenchel subdifferential } of $f$ at $A$ is defined by
\begin{align} \label{2eqn25}
\partial f(A) = \{ X \in \mathbb{S}(n): \langle X, H-A \rangle \leq f(H)-f(A) \ \ \forall H \in \mathbb{S}(n) \}.
\end{align}
If $f$ is a convex map then $\partial f(A)$ is a non-empty, convex and compact set \cite{penot, zalinescu}.
The {\it Clarke directional derivative} of $f$ at $A$ is defined as the function
\begin{equation*}
H \in \mathbb{S}(n) \mapsto f^{\circ} (A;H) = \limsup_{X \to A, t \to 0^+} \dfrac{f(X+tH)-f(X)}{t},
\end{equation*}
and the {\it Clarke subdifferential} of $f$ at $A$ is given by
\begin{equation} \label{2eqn23}
\partial^{\circ} f(A) = \{X \in \mathbb{S}(n): \langle X, H \rangle \leq f^{\circ}(A;H) \ \ \forall H \in \mathbb{S}(n) \}.
\end{equation}
If the function $f$ is directionally differentiable at every point in $\mathcal{O},$ then
\begin{equation} \label{2eqn21}
f^{\circ}(A;H) = \limsup_{X \to A} f^{\prime}(X;H).
\end{equation}
The {\it Michel-Penot directional derivative} of $f$ at $A$ is defined by the function
\begin{equation*}
H \in \mathbb{S}(n) \mapsto f^{\diamond}(A;H)= \sup_{X \in \mathbb{S}(n)} \limsup_{t \to 0^+} \dfrac{f(A+tX+tH)-f(A+tX)}{t},
\end{equation*}
and the {\it Michel-Penot subdifferential} of $f$ at $A$ is defined as
\begin{equation} \label{2eqn24}
\partial^{\diamond} f(A) = \{X \in \mathbb{S}(n): \langle X, H \rangle \leq f^{\diamond}(A;H) \ \ \forall H \in \mathbb{S}(n) \}.
\end{equation}
If the function $f$ is directionally differentiable at $A,$ then we have
\begin{equation} \label{2eqn22}
f^{\diamond}(A;H)= \sup_{X \in \mathbb{S}(n)} \{ f^{\prime}(A;H+X)- f^{\prime}(A;X)\}
\end{equation}
for all $H$ in $ \mathbb{S}(n).$
Let $A$ be an element of $\mathbb{P}(2n)$ and $m \leq n$ be a positive integer.
Let $u_1, \ldots, u_n,$ $ v_1, \ldots, v_n$ represent the $2n$ columns of $M$ in Williamson's theorem.
The equation $(\ref{2eqn31})$ is equivalent to the following conditions
\begin{align}
&Au_j = d_j(A)Jv_j, Av_j =-d_j(A)Ju_j, \label{2eq01} \\
&\langle u_j, Ju_k \rangle = \langle v_j, Jv_k \rangle =0, \label{2eq02} \\
& \langle u_j, Jv_k \rangle = \delta_{jk} \label{2eq03}
\end{align}
for all $1 \leq j,k \leq n.$
Here $\delta_{jk}=1$ if $j=k,$ and $0$ otherwise.
A pair of vectors $(u_j, v_j)$ satisfying $(\ref{2eq01})$ is called a pair of symplectic eigenvectors of $A$ corresponding to the symplectic eigenvalue $d_j(A),$
and is called a { \it normalised } pair of symplectic eigenvectors of $A$ corresponding to the symplectic eigenvalue $d_j(A)$ if it also satisfies $\langle u_j, J v_j \rangle = 1.$
We call a set $\{u_j,v_j \in \mathbb{R}^{2n}: 1 \leq j \leq m \}$ {\it symplectically orthogonal} if it satisfies $(\ref{2eq02})$,
and we call it {\it symplectically orthonormal} if it also satisfies $(\ref{2eq03})$ for $1 \leq j, k \leq m.$
A symplectically orthonormal set with $2n$ vectors is called a {\it symplectic basis} of $\mathbb{R}^{2n}.$
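For example, the columns $e_1, \ldots, e_{2n}$ of the $2n \times 2n$ identity matrix form a symplectic basis of $\mathbb{R}^{2n}$ (with $u_j = e_j$ and $v_j = e_{n+j}$), since $\langle e_j, Je_{n+k} \rangle = \delta_{jk}$ and $\langle e_j, Je_k \rangle = \langle e_{n+j}, Je_{n+k} \rangle = 0$ for all $1 \leq j, k \leq n.$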
We denote by $Sp(2n)$ the set of $2n \times 2n $ symplectic matrices.
Let $Sp(2n,2m)$ denote the set of real $2n \times 2m$ matrices $S=[u_1, \ldots, u_m, v_1, \ldots, v_m]$ such that its columns form a symplectically orthonormal set,
and denote by $Sp(2n, 2m, A)$ the subset of $Sp(2n,2m)$ with the extra condition (\ref{2eq01}) for all $j=1,2, \ldots, m.$
We call a matrix that is both symplectic and orthogonal an {\it orthosymplectic} matrix.
One can find more on symplectic matrices and symplectic eigenvalues in \cite{dms, degosson, jm}.
\begin{definition}
Let $S$ be a real matrix with $2m$ columns and $\alpha_1, \ldots, \alpha_k$ be positive integers with $\alpha_1 + \ldots+ \alpha_k=m.$
Let $\mathcal{I}_1, \ldots, \mathcal{I}_k$ be the partition of $\{1, \ldots, 2m\}$ given by
\begin{align*}
\mathcal{I}_1 &= \{1, \ldots, \alpha_1, m+1, \ldots, m+\alpha_1\}, \\
\mathcal{I}_2&= \{ \alpha_1+1, \ldots, \alpha_1+\alpha_2, m+\alpha_1+1, \ldots, m+\alpha_1+\alpha_2 \}, \\
\vdots \\
\mathcal{I}_k &= \{ (\alpha_1+ \ldots + \alpha_{k-1})+1, \ldots, (\alpha_1+ \ldots + \alpha_{k-1})+\alpha_k , \\
& \ \ \ \ \ \ \ \ m+(\alpha_1+ \ldots + \alpha_{k-1})+1, \ldots, m+(\alpha_1+ \ldots + \alpha_{k-1})+\alpha_k \}.
\end{align*}
By the expression
$$S=S_{\mathcal{I}_1} \diamond \ldots \diamond S_{\mathcal{I}_k}$$
we mean that the submatrix of $S$ consisting of the columns of $S$ indexed by $\mathcal{I}_j$ is $S_{\mathcal{I}_j},$ $j=1,\ldots, k.$
We call it symplectic column partition of $S$ of order $(\alpha_1, \ldots, \alpha_k).$
\end{definition}
For example, let $m=6$ and $ \alpha_1=2, \alpha_2=3, \alpha_3=1.$
Then we have
\begin{align*}
\mathcal{I}_1 &= \{1, 2, 7, 8\}, \\
\mathcal{I}_2 &=\{3, 4, 5, 9, 10, 11 \}, \\
\mathcal{I}_3 &= \{6, 12\}.
\end{align*}
We observe that if $I$ is the $2m \times 2m$ identity matrix then
$$S_{\mathcal{I}_1} = S I_{\mathcal{I}_1}, \ldots, S_{\mathcal{I}_k} = S I_{\mathcal{I}_k}.$$
So the symplectic column partition of $S$ of order $(\alpha_1, \ldots, \alpha_k)$ is given by
$$S=S I_{\mathcal{I}_1} \diamond \ldots \diamond S I_{\mathcal{I}_k}.$$
The following proposition gives a property of symplectic column partition.
The proof is straightforward, so we omit it.
\begin{prop} \label{2prop4}
Let $S$ be a real matrix with $2m$ columns and $\alpha_1, \ldots, \alpha_k$ be positive integers whose sum is $m.$
Let $S=S_{\mathcal{I}_1}\diamond \ldots \diamond S_{\mathcal{I}_k}$ be the symplectic column partition of $S$ of order $(\alpha_1, \ldots, \alpha_k).$
We have
$$TS=TS_{\mathcal{I}_1} \diamond \ldots \diamond TS_{\mathcal{I}_k},$$
where $T$ is a matrix of appropriate size.
\end{prop}
\begin{prop} \label{2prop2}
Let $A \in \mathbb{P}(2n)$ and $m \leq n$ be any positive integer.
Every symplectically orthogonal set consisting of $m$ symplectic eigenvector pairs of $A$ can be extended to a
symplectically orthogonal set
consisting of $n$ symplectic eigenvector pairs of $A.$
\end{prop}
\begin{proof}
We know that any orthogonal subset of $\mathbb{C}^{2n}$ consisting of eigenvectors of a
Hermitian matrix can be extended to an orthogonal basis of $\mathbb{C}^{2n}.$
Therefore, the result easily follows from \cite[Proposition 2.3]{jm}.
\end{proof}
\begin{cor} \label{2cor2}
Let $A \in \mathbb{P}(2n)$ and $m \leq n$ be any positive integer.
Every symplectically orthonormal set consisting of $m$ symplectic eigenvector pairs of $A$
can be extended to a symplectic basis of $\mathbb{R}^{2n}$ consisting of symplectic eigenvector pairs of $A.$
\end{cor}
\begin{proof}
Let $\{u_j, v_j \in \mathbb{R}^{2n}: j= 1, \ldots, m\}$ be a symplectically orthonormal set
consisting of $m$ symplectic eigenvector pairs of $A.$
By $\text{Proposition } \ref{2prop2}$ we can extend the above set to a symplectically orthogonal set
$$\{u_j, v_j \in \mathbb{R}^{2n}: j= 1, \ldots, m\} \cup \{\tilde{u}_j, \tilde{v}_j \in \mathbb{R}^{2n}: j= m+1, \ldots, n\}$$
consisting of $n$ pairs of symplectic eigenvectors of $A.$
Let $\tilde{d}_{j} $ be the symplectic eigenvalue of $A$ corresponding to
the symplectic eigenvector pair $(\tilde{u}_j, \tilde{v}_j)$ for $j=m+1, \ldots, n.$
Therefore we have $$\langle \tilde{u}_j, J \tilde{v}_j \rangle =\dfrac{1}{\tilde{d}_j} \langle \tilde{u}_j, A \tilde{u}_j \rangle > 0.$$
Define $$ u_j = \langle \tilde{u}_j, J \tilde{v}_j \rangle^{-1/2} \tilde{u}_j, $$
$$v_j = \langle \tilde{u}_j, J \tilde{v}_j \rangle^{-1/2} \tilde{v}_j$$ for all $j = m+1, \ldots, n.$
This implies $\langle u_j, Jv_j \rangle = 1$ for all $j= m+1, \ldots, n.$
The set $\{u_j, v_j \in \mathbb{R}^{2n}: j= 1, \ldots, n\}$ is the desired symplectic basis
of $\mathbb{R}^{2n}$ consisting of symplectic eigenvector pairs of $A$
which is an extension of the given symplectically orthonormal set.
\end{proof}
\section{Fenchel subdifferential and directional derivatives} \label{2fensub}
In this section we show that the directional derivative of $d_m$ exists, and derive its expression.
For every positive integer $m \leq n$ define a map $\sigma_m: \mathbb{P}(2n) \to \mathbb{R}$ by
$$\sigma_m(P)= -2 \sum_{j=1}^{m} d_j(P)$$ for all $P$ in $ \mathbb{P}(2n).$
By $\text{Theorem }5$ of \cite{bj} we have
$$\sigma_m(P)= \max\left\lbrace -\text {\rm tr} S^TPS : S \in Sp(2n,2m) \right\rbrace. $$
Being a pointwise maximum of linear functions of $P,$ $\sigma_m$ is therefore a convex function on $\mathbb{P}(2n).$
Extend the function $\sigma_m$ to the whole space $\mathbb{S}(2n)$ by setting $\sigma_m(P)= \infty$ if $P$ is not in $\mathbb{P}(2n).$
Thus $\sigma_m: \mathbb{S}(2n) \rightarrow (-\infty, \infty]$ is a convex function.
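For illustration, if $A = \text {\rm diag}(d_1, \ldots, d_n, d_1, \ldots, d_n)$ with $0 < d_1 \leq \cdots \leq d_n,$ then the matrix $S = [e_1, \ldots, e_m, e_{n+1}, \ldots, e_{n+m}]$ formed from columns of the $2n \times 2n$ identity matrix is an element of $Sp(2n, 2m)$ attaining the above maximum, and indeed $-\text {\rm tr} S^TAS = -2\sum_{j=1}^{m} d_j = \sigma_m(A).$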
For any subset $\Omega$ of symmetric matrices, its closed convex hull will be denoted by $\text {\rm conv} \ \Omega.$
\begin{prop} \label{2prop3}
Let $A$ be an element of $\mathbb{P}(2n)$ and $M$ be an element of $Sp(2n, 2n, A).$
The Fenchel subdifferential of $\sigma_m$ at $A$ is given by
\begin{equation*}
\partial \sigma_m (A)= \text {\rm conv} \left\lbrace -SS^T: S \in Sp(2n, 2m, A) \right\rbrace.
\end{equation*}
\end{prop}
\begin{proof}
Let $\mathcal{Q}=\text {\rm conv} \left\lbrace -SS^T: S \in Sp(2n, 2m, A) \right\rbrace $.
For any $S \in Sp(2n, 2m, A)$ and $B \in \mathbb{S}(2n)$ we have
\begin{align*}
\langle -SS^T, B-A \rangle
& = - \text {\rm tr} SS^TB + \text {\rm tr} SS^TA \\
&= - \text {\rm tr} S^TBS + \text {\rm tr} S^TAS \\
&= - \text {\rm tr} S^TBS - \sigma_m(A) \\
& \leq \sigma_m(B)- \sigma_m(A).
\end{align*}
The last equality follows from the fact that $S \in Sp(2n, 2m, A),$ and the last inequality follows from the definition of $\sigma_m.$
This implies that $-SS^T \in \partial \sigma_m(A).$
We know that $\partial \sigma_m(A)$ is a closed convex set.
Thus we have $\mathcal{Q} \subseteq \partial \sigma_m(A).$
For the reverse inclusion, we assume $\partial \sigma_m(A) \backslash \mathcal{Q} \neq \emptyset$ and derive a contradiction.
Let $B \in \partial \sigma_m(A) \backslash \mathcal{Q}.$
By $\text{Theorem } 1.1.5$ in \cite{zalinescu} we get a $\delta > 0$ and $C_0 \in \mathbb{S}(2n)$ such that for all $S \in Sp(2n, 2m, A),$
\begin{equation}
\langle B, C_0 \rangle \geq \langle -SS^T, C_0 \rangle + \delta. \label{2eq2}
\end{equation}
Let $(a,b)$ be an open interval containing $0$ such that $A(t) = A + tC_0$ is in $\mathbb{P}(2n)$ for all $t \in (a,b).$
By $\text{Theorem 4.7}$ of \cite{jm} we get an $\varepsilon > 0$ and continuous maps $d_j, u_j, v_j$ on $[0, \varepsilon) \subset (a,b)$ for $j=1,2, \ldots, n$
such that $d_j(t)= d_j(A(t))$ and $\{u_j(t), v_j(t): j=1, \ldots, n \}$ is a symplectic basis of $\mathbb{R}^{2n}$
consisting of symplectic eigenvector pairs of $A(t)$ for all $t \in [0, \varepsilon).$
Therefore the matrix $$S(t) = [u_1(t), \ldots, u_m(t), v_1(t), \ldots, v_m(t)]$$
is an element of $ Sp(2n, 2m, A(t))$ for all $t \in [0, \varepsilon).$
For any $t$ in $ (0,\varepsilon)$ we have
\begin{align*}
\langle -S(t) S(t)^T, C_0 \rangle &=- \text {\rm tr} S(t)^T C_0S(t) \\
&= \dfrac{- \text {\rm tr} S(t)^T (A+t C_0) S(t)+ \text {\rm tr} S(t)^TAS(t)}{t}\\
&= \dfrac{- \text {\rm tr} S(t)^T A(t) S(t)+ \text {\rm tr} S(t)^TAS(t)}{t}\\
&= \dfrac{\sigma_m(A(t))+ \text {\rm tr} S(t)^TAS(t)}{t} \\
& \geq \dfrac{\sigma_m(A(t))- \sigma_m(A)}{t} \\
& \geq \langle C_0, B \rangle.
\end{align*}
The first inequality follows because $S(t)$ is an element of $ Sp(2n,2m)$ for all $t \in (0, \varepsilon),$ and the last inequality follows from the fact that $B \in \partial \sigma_m(A).$
By continuity we get
\begin{align*}
\langle -S(0) S(0)^T, C_0 \rangle \geq \langle B, C_0 \rangle.
\end{align*}
But $S(0) \in Sp(2n, 2m, A)$ and hence we get a contradiction by (\ref{2eq2}).
Therefore our assumption $\partial \sigma_m(A) \backslash \mathcal{Q} \neq \emptyset$ is wrong.
This completes the proof.
\end{proof}
We will now provide a more transparent expression of the Fenchel subdifferential of $\sigma_m.$
A symplectic eigenvalue $d$ of $A$ has {\it multiplicity $m$} if
the set $\{i:d_i(A)=d\}$ has exactly $m$ elements.
For $A \in \mathbb{P}(2n),$ let us define non-negative integers $i_m, j_m, r_m$ as follows.
Let $r_m=i_m+j_m$ be the multiplicity of $d_m(A)$ and $i_m \geq 1.$ Further,
\begin{align*}
d_{m-i_m}(A) < d_{m-i_m+1}(A) = \ldots = d_{m+j_m}(A) < d_{m+j_m+1}(A).
\end{align*}
In particular, $i_1=1, j_1=r_1-1$ and $i_n=r_n, j_n=0.$
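For illustration, if $n=5$ and $d_1(A) < d_2(A) = d_3(A) = d_4(A) < d_5(A),$ then for $m=3$ we have $i_3 = 2,$ $j_3 = 1$ and $r_3 = 3.$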
Define $\Delta_m(A)$ to be the set of $2n \times 2m$ real matrices of the form
\begin{equation}
\begin{pmatrix}
\begin{matrix}
&I \ & 0 \\
&0 \ & U \\
&0 \ & 0
\end{matrix}
& \vline & \begin{matrix}
&0 & \ 0 \\
&0 & \ V \\
&0 & \ 0
\end{matrix} \\
\hline
\begin{matrix}
\ &0 & 0 \\
\ &0 & -V \\
\ &0 & 0
\end{matrix}
& \vline & \begin{matrix}
& I \ & 0 \\
& 0 \ & U \\
& 0 \ & 0
\end{matrix}
\end{pmatrix},
\end{equation}
where $I$ is the $(m-i_m) \times (m-i_m)$ identity matrix, and
$U,V$ are $r_m \times i_m$ real matrices such that the columns of $U+\iota V$ are orthonormal.
\begin{thm} \label{2mainthm1}
Let $A$ be an element of $\mathbb{P}(2n)$ and $M$ be an element of $Sp(2n, 2n, A).$
The Fenchel subdifferential of $\sigma_m$ at $A$ is given by
\begin{equation*}
\partial \sigma_m (A)= \text {\rm conv} \left\lbrace -M HH^TM^T: H \in \Delta_m(A) \right\rbrace .
\end{equation*}
\end{thm}
\begin{proof}
We first show that $$\partial \sigma_m (A) \subseteq \text {\rm conv} \left\lbrace -M HH^TM^T: H \in \Delta_m(A) \right\rbrace.$$
By $\text{Proposition } \ref{2prop3}$ it suffices to show that for every $S \in Sp(2n, 2m, A)$ there exists some $H \in \Delta_m(A)$ such that $SS^T = MHH^TM^T.$
Let $I$ denote the $2n \times 2n$ identity matrix and $I=\overline{I} \diamond \widetilde{I} \diamond \widehat{I}$ be the symplectic column partition of $I$
of order $(m-i_m, r_m, n-m-j_m).$
Let $\overline{M}=M\overline{I},$ $\widetilde{M}=M \widetilde{I}$ and
$\widehat{M}=M \widehat{I}.$
The columns of $\widetilde{M}$ consist of symplectic eigenvector pairs of $A$ corresponding to the symplectic eigenvalue $d_m(A).$
Let $S \in Sp(2n, 2m, A)$ be arbitrary and $S=\overline{S} \diamond \widetilde{S}_1$ be the symplectic column partition of $S$ of order $(m-i_m, i_m).$
Extend $S$ to a matrix $S \diamond \widetilde{S}_2$ in $ Sp(2n, 2(m+j_m), A)$ by $\text{Corollary } \ref{2cor2}.$
The columns of $\overline{S}$ consist of symplectic eigenvector pairs of $A$ corresponding to $d_1(A), \ldots, d_{m-i_m}(A),$
and the columns of $\widetilde{S}_1 \diamond \widetilde{S}_2$ consist of symplectic eigenvector pairs of $A$ corresponding to $d_m(A).$
By $\text{Corollary }5.3$ of \cite{jm} we can find orthosymplectic matrices $Q$ and $R$ of orders $2(m-i_m) \times 2(m-i_m)$ and $2r_m \times 2r_m$
respectively such that $\overline{S}= \overline{M}Q$ and $\widetilde{S}_1 \diamond \widetilde{S}_2= \widetilde{M}R.$
Let $R = \overline{R} \diamond \widetilde{R}$ be the symplectic column partition of $R$ of order $(i_m,j_m).$
By $\text{Proposition } \ref{2prop4}$ we have $\widetilde{S}_1 \diamond \widetilde{S}_2= \widetilde{M}\overline{R} \diamond \widetilde{M} \widetilde{R}.$
This implies $\widetilde{S}_1= \widetilde{M} \overline{R}.$
Therefore
\begin{equation*}
S = \overline{S} \diamond \widetilde{S}_1 = \overline{M}Q \diamond \widetilde{M} \overline{R}.
\end{equation*}
So we have
\begin{equation*}
S= M (\overline{I} Q \diamond \widetilde{I} \overline{R} ).
\end{equation*}
By \cite{dms}, there exist $r_m \times r_m$ real matrices $X,Y$ such that $X+\iota Y$ is unitary and
\[ R= \begin{pmatrix}
X & Y \\ -Y & X
\end{pmatrix}.\]
Let $U,V$ be the $r_m \times i_m$ matrices consisting of the first $i_m$ columns of $X,Y$ respectively.
Therefore
\begin{equation} \label{2eqn4}
\overline{R} = \begin{pmatrix}
U & V \\
-V & U
\end{pmatrix}.
\end{equation}
We have
\begin{align*}
SS^T &= M (\overline{I} Q \diamond \widetilde{I} \overline{R} )(\overline{I} Q \diamond \widetilde{I} \overline{R} )^TM^T \\
&= M \left( (\overline{I} Q)(\overline{I} Q)^T+ (\widetilde{I} \overline{R})(\widetilde{I} \overline{R})^T \right)M^T\\
&= M \left( \overline{I} QQ^T \overline{I}^T+ (\widetilde{I} \overline{R})(\widetilde{I} \overline{R})^T \right)M^T \\
&= M \left( \overline{I} \overline{I}^T+ (\widetilde{I} \overline{R})(\widetilde{I} \overline{R})^T \right)M^T \\
&= M (\overline{I} \diamond \widetilde{I} \overline{R} )(\overline{I} \diamond \widetilde{I} \overline{R} )^TM^T.
\end{align*}
The second and the last equalities follow from $\text{Proposition } \ref{2prop4}.$
The fourth equality follows from the fact that $Q$ is an orthogonal matrix.
Let $H=\overline{I} \diamond \widetilde{I} \overline{R}.$
By the definition of $\Delta_m(A)$ and $(\ref{2eqn4})$ we have $H \in \Delta_m(A).$
Therefore $SS^T=MHH^TM^T,$ where $H \in \Delta_m(A).$
Now we prove the reverse inclusion.
By definition, observe that any $H \in \Delta_m(A)$ is of the form
$$ H = \overline{I} \diamond \widetilde{I} \begin{pmatrix}
U & V \\
-V & U
\end{pmatrix}. $$
By $\text{Proposition }\ref{2prop4}$ we thus have
$$ MH = \overline{M} \diamond \widetilde{M} \begin{pmatrix}
U & V \\
-V & U
\end{pmatrix}.$$
We know that the columns of $\overline{M}$ correspond to the
symplectic eigenvalues $d_1(A), \ldots, d_{m-i_m}(A).$
By using the fact that the columns of $\widetilde{M}$ correspond to the
symplectic eigenvalue $d_m(A)$ we get
\begin{align*}
\begin{pmatrix} U & V \\ -V & U \end{pmatrix}^T \widetilde{M}^T A \widetilde{M} \begin{pmatrix} U & V \\ -V & U \end{pmatrix}
&= d_m(A) \begin{pmatrix} U & V \\ -V & U \end{pmatrix}^T \begin{pmatrix} U & V \\ -V & U \end{pmatrix} \\
&= d_m(A) I_{2i_m},
\end{align*}
where $I_{2i_m}$ is the $2i_m \times 2i_m$ identity matrix.
Here we used the fact that the columns of
$\begin{psmallmatrix} U & V \\ -V & U \end{psmallmatrix}$ are orthonormal.
The above relation implies that the columns of $\widetilde{M}\begin{psmallmatrix} U & V \\ -V & U \end{psmallmatrix}$
also correspond to the symplectic eigenvalue $d_m(A).$
Therefore we have $MH \in Sp(2n, 2m, A)$ for all $H \in \Delta_m(A),$
and hence
$$\partial \sigma_m (A) \supseteq \text {\rm conv} \left\lbrace -M HH^TM^T: H \in \Delta_m(A) \right\rbrace.$$
This completes the proof.
\end{proof}
In the next theorem we derive the directional derivative of $\sigma_m$ using the convexity and the Fenchel subdifferential of $\sigma_m.$
\begin{thm} \label{2thm4}
Let $A$ be an element of $\mathbb{P}(2n)$ and $M$ be an element of $Sp(2n, 2n, A).$
Let $I$ denote the $2n \times 2n$ identity matrix and $I=\overline{I} \diamond \widetilde{I} \diamond \widehat{I}$
be the symplectic column partition of $I$ of order $(m-i_m, r_m, n-m-j_m).$
Let $\overline{M}=M\overline{I},$ $\widetilde{M}=M \widetilde{I}$ and
$\widehat{M}=M \widehat{I}.$
Define $\overline{B}=-\overline{M}^TB \overline{M}$ and $\widetilde{B}=-\widetilde{M}^TB \widetilde{M}$ for every $B$ in $\mathbb{S}(2n).$
Let us consider the block matrix form of $\widetilde{B},$
\[\widetilde{B}=\begin{pmatrix}
\tilde{B}_{11} & \tilde{B}_{12} \\
\tilde{B}_{12}^T & \tilde{B}_{22}
\end{pmatrix}, \]
where each block has order $r_m \times r_m.$
Denote by $\widetilde{\widetilde{B}}$ the Hermitian matrix $\tilde{B}_{11} + \tilde{B}_{22} + \iota (\tilde{B}_{12} - \tilde{B}_{12}^{T} ).$
The directional derivative of $\sigma_m$ at $A$ is given by
\begin{align*}
\sigma_m^{\prime}(A;B) &= \text {\rm tr} \overline{B} + \sum_{j=1}^{i_m} \lambda_{j}^{\downarrow}(\widetilde{\widetilde{B}})
\end{align*}
for all $B \in \mathbb{S}(2n).$
Here $ \lambda_{j}^{\downarrow}(\widetilde{\widetilde{B}})$ denotes the $j\text{th}$ largest eigenvalue of the Hermitian matrix $\widetilde{\widetilde{B}}.$
\end{thm}
\begin{proof}
By the {\it max formula} \cite[Theorem 3.1.8]{borwein} we have
$$\sigma_m^{\prime}(A; B) = \max \{\langle C, B \rangle: C \in \partial \sigma_m(A)\}$$
for all $B \in \mathbb{S}(2n).$
By $\text{Theorem } \ref{2mainthm1}$ we have
\begin{equation}
\sigma_m^{\prime}(A; B) = \max \{\langle -MHH^T M^T,B \rangle : H \in \Delta_{m}(A) \}. \label{2eqn7}
\end{equation}
Every element of $\Delta_m(A)$ is of the form $\overline{I} \diamond \widetilde{I} \overline{R}$ where $\overline{R}$ is given by $(\ref{2eqn4}).$
Let $H = \overline{I} \diamond \widetilde{I} \overline{R}$ be an arbitrary element of $\Delta_m(A).$
This gives
\begin{align} \label{2eqn19}
MHH^T M^T &= (M ( \overline{I} \diamond \widetilde{I} \overline{R})) ( M ( \overline{I} \diamond \widetilde{I} \overline{R}))^T \nonumber \\
&= (M\overline{I} \diamond M \widetilde{I} \overline{R}) ( M \overline{I} \diamond M \widetilde{I} \overline{R})^T \nonumber \\
&= (\overline{M} \diamond \widetilde{M} \overline{R} ) ( \overline{M} \diamond \widetilde{M} \overline{R} )^T \nonumber \\
&= \overline{M} \overline{M}^{T} + \widetilde{M} \overline{R} \overline{R}^{T} \widetilde{M}^{T}.
\end{align}
The second and the last equalities follow from $\text{Proposition } \ref{2prop4}.$
This implies
\begin{align} \label{2eqn8}
\langle - MHH^TM^T, B \rangle &= \text {\rm tr}(-MHH^T M^TB) \nonumber \\
&= \text {\rm tr} (-\overline{M} \overline{M}^{T} B) + \text {\rm tr}( -\widetilde{M} \overline{R} \overline{R}^{T} \widetilde{M}^{T} B) \nonumber \\
&= \text {\rm tr} (-\overline{M} \overline{M}^{T} B) + \text {\rm tr}( - \overline{R}^{T} \widetilde{M}^{T} B \widetilde{M}\overline{R}) \nonumber \\
&= \text {\rm tr} (- \overline{M}^{T} B \overline{M}) + \text {\rm tr}(\overline{R}^{T} \widetilde{B}\overline{R}) \nonumber \\
&= \text {\rm tr} \overline{B} + \text {\rm tr} (U^T \tilde{B}_{11} U+ V^T \tilde{B}_{22} V-2U^T \tilde{B}_{12}V) \nonumber \\
& \ \ \ \ + \text {\rm tr} (V^T \tilde{B}_{11} V+ U^T \tilde{B}_{22} U+2U^T \tilde{B}_{12}^{T}V) \nonumber \\
&= \text {\rm tr} \overline{B} + \text {\rm tr} (U+\iota V)^{\ast} (\tilde{B}_{11}+\tilde{B}_{22}+ \iota (\tilde{B}_{12}- \tilde{B}_{12}^{T}))(U+\iota V) \nonumber \\
&= \text {\rm tr} \overline{B} + \text {\rm tr} (U+\iota V)^{\ast} \widetilde{\widetilde{B}} (U+\iota V).
\end{align}
Therefore by $(\ref{2eqn7})$ and $(\ref{2eqn8})$ we get
\begin{align*}
\sigma_m^{\prime}(A; B) &= \text {\rm tr} \overline{B} + \max_{U+ \iota V} \text {\rm tr} (U+\iota V)^{\ast} \widetilde{\widetilde{B}} (U+\iota V),
\end{align*}
where the maximum is taken over $r_m \times i_m$ unitary matrices $U+ \iota V.$
By Ky Fan's extremal characterisation \cite[Theorem 1]{fan} we have
$$ \max_{U+ \iota V} \text {\rm tr} (U+\iota V)^{\ast} \widetilde{\widetilde{B}} (U+\iota V) = \sum_{j=1}^{i_m}\lambda_{j}^{\downarrow}(\widetilde{\widetilde{B}}).$$
This completes the proof.
\end{proof}
\begin{definition}
Let $\mathcal{O}$ be an open subset of $\mathbb{S}(n).$
A function $f: \mathcal{O} \to \mathbb{R}$ is said to be G\^ateaux differentiable
at $A \in \mathcal{O}$ if $f$ is directionally differentiable at $A$ and the directional derivative
is a linear map from $\mathbb{S}(n)$ to $\mathbb{R}.$
The linear map is denoted by $\nabla f(A)$ and called the gradient of $f$ at $A.$
\end{definition}
The following is an easy corollary of the above theorem.
\begin{cor} \label{2cor1}
Let $A$ be an element of $\mathbb{P}(2n)$ and $M$ be an element of $Sp(2n, 2n, A).$
If $d_{m}(A) < d_{m+1}(A)$ then $\sigma_m$ is G\^ateaux differentiable at $A$ with the gradient
\begin{align*}
\nabla \sigma_m(A)= -(\overline{M} \diamond \widetilde{M} )(\overline{M} \diamond \widetilde{M} )^T.
\end{align*}
Here we assume $d_{m+1}(A)=\infty$ for $m=n.$
\end{cor}
\begin{proof}
If $d_m(A) < d_{m+1}(A)$ then $j_m=0$ and $i_m=r_m.$
Therefore $\overline{R}$ is a $2r_m \times 2r_m$ orthosymplectic matrix in the proof of $\text{Theorem } \ref{2thm4}.$
By $(\ref{2eqn19})$ we have
$$MHH^TM^T = \overline{M} \overline{M}^T+ \widetilde{M} \widetilde{M}^T$$
for all $H \in \Delta_m(A).$
By $\text{Theorem } \ref{2mainthm1}$ we get
$$\sigma_m^{\prime}(A; B) = \langle -\overline{M} \overline{M}^T - \widetilde{M} \widetilde{M}^T, B \rangle.$$
By $\text{Proposition } \ref{2prop4}$ we have
$$-\overline{M} \overline{M}^T - \widetilde{M} \widetilde{M}^T=-(\overline{M} \diamond \widetilde{M} )(\overline{M} \diamond \widetilde{M} )^T.$$
Therefore $\sigma_m$ is G\^ateaux differentiable with $\nabla \sigma_m(A)= -(\overline{M} \diamond \widetilde{M} )(\overline{M} \diamond \widetilde{M} )^T.$
\end{proof}
We have the relation $2d_m = \sigma_{m-1} - \sigma_{m}$ whenever $m \geq 2,$ and $2d_1= -\sigma_1.$
Denote by $\sigma_0: \mathbb{S}(2n) \to \mathbb{R}$ the zero map so that $2d_1 = \sigma_0 - \sigma_1.$
Therefore we have
$$2d_m=\sigma_{m-1}-\sigma_{m},$$
for all positive integers $m \leq n.$
By the definition of directional derivative we have
$$2 d_m^{\prime}(A; B)= \sigma_{m-1}^{\prime}(A;B)-\sigma_m^{\prime}(A;B)$$
for all $B \in \mathbb{S}(2n).$
By this relation, $d_m$ is directionally differentiable, and we can find the expression of its directional derivative.
The following is the main theorem of this section.
\begin{thm} \label{2thm1}
Let $A$ be an element of $\mathbb{P}(2n)$ and $M$ be an element of $Sp(2n, 2n, A).$
Let $I$ denote the $2n \times 2n$ identity matrix and $I=\overline{I} \diamond \widetilde{I} \diamond \widehat{I}$
be the symplectic column partition of $I$ of order $(m-i_m, r_m, n-m-j_m).$
Let $\overline{M}=M\overline{I},$ $\widetilde{M}=M \widetilde{I}$ and
$\widehat{M}=M \widehat{I}.$
Define $\overline{B}=-\overline{M}^TB \overline{M}$ and $\widetilde{B}=-\widetilde{M}^TB \widetilde{M}$ for every $B$ in $\mathbb{S}(2n).$
Let us consider the block matrix form of $\widetilde{B},$
\[\widetilde{B}=\begin{pmatrix}
\tilde{B}_{11} & \tilde{B}_{12} \\
\tilde{B}_{12}^T & \tilde{B}_{22}
\end{pmatrix}, \]
where each block has order $r_m \times r_m.$
Denote by $\widetilde{\widetilde{B}}$ the Hermitian matrix $\tilde{B}_{11} + \tilde{B}_{22} + \iota (\tilde{B}_{12} - \tilde{B}_{12}^{T} ).$
The directional derivative of $d_m$ at $A$ is given by
\begin{equation} \label{2eqn9}
d_m^{\prime}(A;B) = -\frac{1}{2} \lambda_{i_m}^{\downarrow} (\widetilde{\widetilde{B}}),
\end{equation}
for all $B \in \mathbb{S}(2n).$
\end{thm}
\begin{proof}
By definition we have $i_m \geq 1.$
We deal with the following two possible cases separately. \\
\textbf{Case: $i_m \geq 2$} \\
This is the case when $d_m(A)=d_{m-1}(A).$
This implies
\begin{equation*}
i_{m-1}=i_m-1, j_{m-1}=j_m+1, r_{m-1}=r_m.
\end{equation*}
Therefore we have $m-i_m = (m-1)-i_{m-1}.$
From $\text{Theorem } \ref{2thm4}$ we get,
\begin{align*}
d_m^{\prime}(A; B) &= \frac{1}{2} \sigma_{m-1}^{\prime}(A;B)- \frac{1}{2} \sigma_m^{\prime}(A;B) \\
&=\frac{1}{2} (\text {\rm tr} \overline{B} + \sum_{j=1}^{i_m-1} \lambda_{j}^{\downarrow}(\widetilde{\widetilde{B}}) ) -
\frac{1}{2} (\text {\rm tr} \overline{B} + \sum_{j=1}^{i_m} \lambda_{j}^{\downarrow}(\widetilde{\widetilde{B}}) ) \\
&= -\frac{1}{2} \lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{B}}).
\end{align*}
\textbf{Case: $i_m = 1$} \\
In this case we have $d_{m-1}(A) < d_m(A).$
By $\text{Corollary } \ref{2cor1}$ the map $\sigma_{m-1}$ is G\^ateaux
differentiable at $A$ and we have
$$\nabla \sigma_{m-1}(A) = - SS^T,$$
where $S$ is the submatrix of $M$ consisting of the columns $u_j, v_j$ with $j = 1, \ldots, (m-1)+j_{m-1}.$
But here we have $j_{m-1}=0$ which means that $(m-1)+j_{m-1} = m-i_m.$
In other words, we have $S= \overline{M}.$
This gives
\begin{align*}
\sigma_{m-1}^{\prime}(A;B) &= \nabla \sigma_{m-1}(A)(B) \\
&=\langle -\overline{M}\overline{M}^T, B \rangle \\
&= \text {\rm tr} (-\overline{M}\overline{M}^T B) \\
&= \text {\rm tr}(- \overline{M}^TB\overline{M}) \\
&= \text {\rm tr} \overline{B}.
\end{align*}
Therefore by $\text{Theorem } \ref{2thm4}$ we have
\begin{align*}
\sigma_m^{\prime}(A;B) = \sigma_{m-1}^{\prime}(A;B) + \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}}).
\end{align*}
This gives
\begin{equation*}
2 d_m^{\prime}(A;B) = - \lambda_{1}^{\downarrow} (\widetilde{\widetilde{B}})
\end{equation*}
which is the same as $(\ref{2eqn9})$ for $i_m=1.$
\end{proof}
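The following simple example illustrates $\text{Theorem } \ref{2thm1};$ the answer can also be verified directly.
\begin{example}
Let $n=m=1$ and $A = dI_2$ for some $d>0,$ so that $d_1(A)=d$ and $M=I_2$ is an element of $Sp(2,2,A).$ Here $i_1=r_1=1$ and $\widetilde{M}=I_2.$ For $B=(b_{jk})$ in $\mathbb{S}(2)$ we get $\widetilde{B}=-B,$ and hence $\widetilde{\widetilde{B}}=-(b_{11}+b_{22})=-\text {\rm tr} B.$ $\text{Theorem } \ref{2thm1}$ then gives $d_1^{\prime}(A;B)=\frac{1}{2}\text {\rm tr} B,$ which agrees with differentiating $d_1(A+tB)=\sqrt{\det(A+tB)}$ at $t=0.$
\end{example}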
\section{Clarke and Michel-Penot subdifferentials} \label{2mpsub}
Let us denote by $S_m(A)$ the set of normalised symplectic eigenvector pairs $(u,v)$ of $A$ corresponding to the symplectic eigenvalue $d_m(A).$
Let $\widehat{m} = m - i_m + 1$ be the smallest index $j$ such that $d_{j}(A) = d_m(A).$
In other words, $d_{j}(A)= d_{m}(A)$ implies $j \geq \widehat{m}.$
\begin{prop} \label{2prop1}
Let $A$ be an element of $ \mathbb{P}(2n)$ and $M$ in $Sp(2n,2n, A)$ be fixed.
The function $-d^{\prime}_{\widehat{m}}(A; \cdot)$ is sublinear and its Fenchel subdifferential at zero is given by
\begin{equation*}
\partial (-d^{\prime}_{\widehat{m}}(A; \cdot))(0)= \text {\rm conv} \{-\frac{1}{2} (xx^T+yy^T): (x,y) \in S_m(A)\}.
\end{equation*}
\end{prop}
\begin{proof}
By definition we have $i_{\widehat{m}}=1.$
Therefore by $\text{Theorem } \ref{2thm1}$ we have
$$ -d_{\widehat{m}}^{\prime}(A;B) = \frac{1}{2} \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}})$$
for all $B \in \mathbb{S}(2n).$
The map $B \mapsto \widetilde{\widetilde{B}}$ is a linear map from $\mathbb{S}(2n)$ to the space of $r_m \times r_m$ Hermitian matrices,
and the largest eigenvalue map $\lambda_{1}^{\downarrow}$ on the space of $r_m \times r_m$ Hermitian matrices is sublinear.
Therefore $-d^{\prime}_{\widehat{m}}(A; \cdot)$ is a sublinear map.
It suffices \cite[Remark 1.2.3, p.~168]{hl} to show that
\begin{equation*}
-d^{\prime}_{\widehat{m}}(A; B) = \max \{-\frac{1}{2} \langle xx^T+yy^T, B \rangle: (x,y) \in S_m(A) \}
\end{equation*}
for all $B \in \mathbb{S}(2n).$
Let $(x,y) \in S_m(A)$ be arbitrary.
By $\text{Corollary } \ref{2cor2}$ extend $[x, y]$ to $S$ in $Sp(2n, 2r_m)$ with columns consisting of symplectic eigenvector pairs of $A$ corresponding to $d_m(A).$
By $\text{Corollary } 5.3$ of \cite{jm} we get a $2r_m \times 2r_m$ orthosymplectic matrix $Q$ such that $S=\widetilde{M}Q.$
We know that $Q$ is of the form
$$ \begin{pmatrix}
U & V \\
-V & U
\end{pmatrix},$$
where $U,V$ are $r_m \times r_m$ real matrices such that $U + \iota V$ is unitary. Let $u,v$ be the first columns of $U$ and $V$ respectively.
This implies
\begin{equation} \label{2eqn29}
[x, y]= \widetilde{M} \begin{pmatrix}
u & v \\
-v & u
\end{pmatrix}.
\end{equation}
Conversely, if $u+\iota v$ is a unit vector in $\mathbb{C}^{r_m}$ and $x, y \in \mathbb{R}^{2n}$ satisfy the above relation $(\ref{2eqn29}),$ then $(x,y) \in S_m(A).$
Therefore $(\ref{2eqn29})$ gives a one-to-one correspondence $(x, y) \mapsto u+ \iota v$ between $S_m(A)$ and the set of unit vectors in $\mathbb{C}^{r_m}.$
We consider $\mathbb{C}^{r_m}$ equipped with the usual inner product $\langle z, w \rangle = z^{\ast}w$
for all $z,w \in \mathbb{C}^{r_m}.$
For simplicity, we use the same notation for the different inner products discussed here.
Their use will be clear from the context.
We have
\begin{align*}
-\frac{1}{2} \langle xx^T+yy^T, B \rangle &= -\frac{1}{2} \langle [x, y] [x, y]^T, B \rangle \\
&= -\frac{1}{2} \text {\rm tr} [x, y]^T B [x, y] \\
&=-\frac{1}{2} \text {\rm tr} \begin{pmatrix}
u & v \\
-v & u
\end{pmatrix}^T \widetilde{M}^T B \widetilde{M} \begin{pmatrix}
u & v \\
-v & u
\end{pmatrix} \\
&= \frac{1}{2} \text {\rm tr} \begin{pmatrix}
u & v \\
-v & u
\end{pmatrix}^T \widetilde{B} \begin{pmatrix}
u & v \\
-v & u
\end{pmatrix} \\
&= \frac{1}{2} (u+ \iota v)^{\ast} \widetilde{\widetilde{B}} (u+ \iota v) \\
&= \frac{1}{2} \langle u+ \iota v, \widetilde{\widetilde{B}} (u+ \iota v) \rangle.
\end{align*}
Therefore we get
\begin{align*}
-d^{\prime}_{\widehat{m}}(A; B) &= \frac{1}{2} \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}}) \\
&= \frac{1}{2} \max \{ \langle u+ \iota v, \widetilde{\widetilde{B}} (u+ \iota v) \rangle: \|u+ \iota v\|=1 \} \\
&= \max \{-\frac{1}{2} \langle xx^T+yy^T, B \rangle: (x,y) \in S_m(A) \}.
\end{align*}
The last equality follows from the above observation that $(\ref{2eqn29})$ is a one-to-one correspondence between $S_m(A)$ and the set of unit vectors in $\mathbb{C}^{r_m}.$
This completes the proof.
\end{proof}
\begin{thm} \label{2thm2}
Let $A$ be an element of $\mathbb{P}(2n).$
The Michel-Penot subdifferentials of $-d_m$ coincide at $A$ for all the choices of $m$ corresponding to the equal symplectic eigenvalues of $A$ and are given by
\begin{equation*}
\partial^{\diamond} (-d_m)(A) = \partial (-d^{\prime}_{\widehat{m}}(A; \cdot))(0).
\end{equation*}
\end{thm}
\begin{proof}
We saw that $-d^{\prime}_{\widehat{m}}(A; \cdot)$ is sublinear (hence convex) and takes the value zero at zero.
By $\text{Proposition } 3.1.6$ of \cite{borwein} we have
\begin{equation*}
\partial (-d^{\prime}_{\widehat{m}}(A; \cdot))(0)= \text {\rm conv} \{B \in \mathbb{S}(2n): \langle B, H \rangle \leq -d^{\prime}_{\widehat{m}}(A; H) \ \ \forall H \in \mathbb{S}(2n)\}.
\end{equation*}
By the definition of Michel-Penot subdifferential it therefore suffices to show that $(-d_{m})^{\diamond}(A; B)= -d^{\prime}_{\widehat{m}}(A; B)$ for all $B$ in $\mathbb{S}(2n).$
By $(\ref{2eqn22})$ it is equivalent to showing
\begin{equation} \label{2eqn11}
\sup_{H \in \mathbb{S}(2n)} \{-d^{\prime}_{m}(A; B+H)+d^{\prime}_{m}(A; H)\} = -d^{\prime}_{\widehat{m}}(A; B).
\end{equation}
Let $M \in Sp(2n, 2n, A)$ be fixed
and $M = \overline{M} \diamond \widetilde{M} \diamond \widehat{M}$
be the symplectic column partition of $M$ of order $(m-i_m, r_m, n-m-j_m).$
Let $B, H$ be elements of $ \mathbb{S}(2n).$
We recall the meaning of $\widetilde{\widetilde{B}}.$
Let us write the block matrix form of $\widetilde{B} = - \widetilde{M}^TB \widetilde{M}$ as
\[\widetilde{B}=\begin{pmatrix}
\tilde{B}_{11} & \tilde{B}_{12} \\
\tilde{B}_{12}^T & \tilde{B}_{22}
\end{pmatrix}, \]
where each block is of order $r_m \times r_m.$
The matrix $\widetilde{\widetilde{B}}$ is the $r_m \times r_m$ Hermitian matrix given by
$\tilde{B}_{11} + \tilde{B}_{22} + \iota (\tilde{B}_{12} - \tilde{B}_{12}^{T} ).$
Similarly we have $\widetilde{\widetilde{H}}.$
It is easy to see that
$$\widetilde{ \widetilde{B+H}} = \widetilde{\widetilde{B}} + \widetilde{ \widetilde{H}}.$$
By $\text{Theorem } \ref{2thm1}$ we get
\begin{align*}
-d^{\prime}_{m}(A; B+H)+d^{\prime}_{m}(A; H) &= \frac{1}{2} \lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{B+H}}) - \frac{1}{2} \lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{H}}) \\
&= \frac{1}{2} \lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{B}}+\widetilde{\widetilde{H}}) - \frac{1}{2} \lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{H}}).
\end{align*}
It is clear that $H \mapsto \widetilde{\widetilde{H}}$ is an onto map from $\mathbb{S}(2n)$ to the space of $r_m \times r_m$ Hermitian matrices.
Therefore by $(\ref{2eqn11})$ we need to show that
\begin{equation} \label{2eqn14}
\frac{1}{2} \sup_{C} \{ \lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{B}}+ C) - \lambda_{i_m}^{\downarrow}(C)\} = -d^{\prime}_{\widehat{m}}(A; B),
\end{equation}
where $C$ varies over the space of $r_m \times r_m$ Hermitian matrices.
By an inequality due to Weyl \cite[Corollary III.2.2]{rbh}, we have
\begin{equation} \label{2eqn12}
\lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{B}}+ C) \leq \lambda_{i_m}^{\downarrow}(C) + \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}})
\end{equation}
for all Hermitian matrices $C.$
We can construct a Hermitian matrix $C $ for which equality holds in $(\ref{2eqn12}).$
See the proof of $\text{Theorem } 4.2$ in \cite{hiriart1999}.
This gives
\begin{equation*}
\sup_{C} \{ \lambda_{i_m}^{\downarrow}(\widetilde{\widetilde{B}}+ C) - \lambda_{i_m}^{\downarrow}(C) \} = \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}}),
\end{equation*}
where $C$ varies over the space of $r_m \times r_m$ Hermitian matrices.
But we know by $\text{Theorem } \ref{2thm1}$ that $-d_{\widehat{m}}^{\prime}(A; B)= \frac{1}{2} \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}}).$
This implies that $(\ref{2eqn14})$ holds.
This completes the proof.
\end{proof}
We now give the main result of this section which states that the Clarke and Michel-Penot subdifferentials of $-d_m$ are equal.
\begin{thm} \label{2thm3}
Let $A$ be an element of $\mathbb{P}(2n).$
The Clarke and Michel-Penot subdifferentials of $-d_m$ are equal at $A$ and they are given by
\begin{equation*}
\partial^{\circ} (-d_m)(A)=\partial^{\diamond} (-d_m)(A) = \text {\rm conv} \{-\frac{1}{2} (xx^T+yy^T): (x,y) \in S_m(A)\}.
\end{equation*}
In particular, the subdifferentials are independent of the choice of $m$ corresponding to equal symplectic eigenvalues of $A.$
\end{thm}
\begin{proof}
By $\text{Proposition }\ref{2prop1}$ and $\text{Theorem } \ref{2thm2}$ we have
$$\partial^{\diamond} (-d_m)(A) = \text {\rm conv} \{-\frac{1}{2} (xx^T+yy^T): (x,y) \in S_m(A)\}.$$
By $\text{Corollary } 6.1.2$ of \cite{borwein} we have $\partial^{\diamond} (-d_m)(A) \subseteq \partial^{\circ} (-d_m)(A).$
Therefore it only remains to prove that $\partial^{\circ} (-d_m)(A) \subseteq \partial^{\diamond} (-d_m)(A).$
Let $B$ in $ \mathbb{S}(2n)$ be arbitrary.
By the relation $(\ref{2eqn21})$ we get a sequence $A_{(p)} \in \mathbb{P}(2n)$ for $p \in \mathbb{N}$ such that $ \lim_{p \to \infty} A_{(p)}=A$ and
\begin{equation} \label{2eqn17}
(-d_m)^{\circ}(A; B) = - \lim_{p \to \infty} d_{m}^{\prime}( A_{(p)};B).
\end{equation}
Let $\mathcal{I}_p=\{i: d_i( A_{(p)})= d_m( A_{(p)})\}$ for every $p \in \mathbb{N}.$
There are only finitely many choices for $\mathcal{I}_p$ for each $p.$
Therefore we can get a subsequence of $( A_{(p)})_{p \in \mathbb{N}}$ such that $\mathcal{I}_p$ is independent of $p.$
Let us denote the subsequence by the same sequence $( A_{(p)})_{p \in \mathbb{N}}$ for convenience
and let $\mathcal{I}$ denote the common index set $\mathcal{I}_p.$
Let $M_{(p)}$ be an element of $ Sp(2n, 2n, A_{(p)})$ for all $p \in\mathbb{N}.$
If $(u,v)$ is a pair of normalized symplectic eigenvectors of $ A_{(p)}$ corresponding to a symplectic eigenvalue $d,$
we get
\begin{align*}
\|u\|^2 + \|v\|^2 & \leq \| A_{(p)}^{-1} \| (\|A_{(p)}^{1/2} u \|^2 + \|A_{(p)}^{1/2} v \|^2 ) \\
& = \| A_{(p)}^{-1} \| \cdot \|A_{(p)}^{1/2} u - \iota A_{(p)}^{1/2} v \|^2 \\
& = 2 d \langle u, J v \rangle \| A_{(p)}^{-1} \| \\
&= 2 d \| A_{(p)}^{-1} \| \\
& \leq 2 \| A_{(p)} \| \cdot \| A_{(p)}^{-1} \| \\
& = 2 \kappa(A_{(p)}),
\end{align*}
where $\| A_{(p)} \|$ and $ \| A_{(p)}^{-1} \|$ represent the operator norms of $A_{(p)}$ and $A_{(p)}^{-1},$
and $\kappa(A_{(p)})$ is the condition number of $A_{(p)}.$
The second equality follows from $\text{Proposition } 2.3$ of \cite{jm}, and
the second inequality follows from the fact that $d \leq \| A_{(p)} \|.$
Therefore we have
\begin{equation} \label{2eqn26}
\| M_{(p)} \|_{F}^{2} \leq 2n \kappa(A_{(p)}),
\end{equation}
where $\|M_{(p)}\|_{F}$ represents the Frobenius norm of $M_{(p)}$
for all $p \in \mathbb{N}.$
We know that $\kappa$ is a continuous function and the sequence $(A_{(p)})_{p \in \mathbb{N}}$ is convergent.
Therefore the sequence $(\kappa(A_{(p)}))_{p \in \mathbb{N}}$ is also convergent, and hence bounded.
By $(\ref{2eqn26})$ the sequence $(M_{(p)})_{p \in \mathbb{N}}$ of $2n \times 2n$ real matrices is bounded as well.
By taking a subsequence we can assume that $(M_{(p)})_{p \in \mathbb{N}}$ converges to some $2n \times 2n$ real matrix $M.$
We know that $Sp(2n)$ is a closed set and therefore $M \in Sp(2n).$
By continuity of the symplectic eigenvalue maps we also have $M \in Sp(2n, 2n, A).$
Let $m_1= \min \mathcal{I}$ and $m_2= \max \mathcal{I}.$
Let $M_{(p)}= \overline{M}_{(p)} \diamond \widetilde{M}_{(p)} \diamond \widehat{M}_{(p)}$ be the symplectic column partition of $M_{(p)}$ of order $(m_1-1, m_2-m_1+1, n-m_2).$
Let
\begin{equation*}
\widetilde{B}_{(p)} = -\widetilde{M}_{(p)}^{T} B \widetilde{M}_{(p)} ,
\end{equation*}
\begin{equation*}
\widetilde{M}_{(0)} = \lim_{p \to \infty} \widetilde{M}_{(p)}
\end{equation*}
and
\begin{equation} \label{2eqn28}
\widetilde{B}_{(0)} = \lim_{p \to \infty} \widetilde{B}_{(p)} = -\widetilde{M}_{(0)}^{T} B \widetilde{M}_{(0)}.
\end{equation}
Consider the block matrix form of $\widetilde{B}_{(p)}$ given by
\[\widetilde{B}_{(p)}=\begin{pmatrix}
(\widetilde{B}_{(p)})_{11} & (\widetilde{B}_{(p)})_{12}\\
(\widetilde{B}_{(p)})_{12}^{T} & (\widetilde{B}_{(p)})_{22}
\end{pmatrix}, \]
where each block has size $m_2-m_1+1.$
Let
\begin{equation}
\widetilde{\widetilde{B}}_{(p)} = (\widetilde{B}_{(p)})_{11}+ (\widetilde{B}_{(p)})_{22} + \iota ((\widetilde{B}_{(p)})_{12} -(\widetilde{B}_{(p)})_{12}^{T} )
\end{equation}
be the Hermitian matrix associated with $ \widetilde{B}_{(p)}.$
Let $$\widetilde{\widetilde{B}}_{(0)} = \lim_{p \to \infty} \widetilde{\widetilde{B}}_{(p)}.$$
Let $M= \overline{M} \diamond \widetilde{M} \diamond \widehat{M}$ be the symplectic column partition of $M$ of order $(m-i_m, r_m, n-m-j_m).$
Let $\widetilde{B} = -\widetilde{M}^{T} B \widetilde{M}$ and write $\widetilde{B}$ in the block matrix form
\[\widetilde{B}=\begin{pmatrix}
\tilde{B}_{11} & \tilde{B}_{12} \\
\tilde{B}_{12}^T & \tilde{B}_{22}
\end{pmatrix}, \]
where each block has order $r_m \times r_m.$
Denote by $\widetilde{\widetilde{B}}$ the Hermitian matrix $\tilde{B}_{11} + \tilde{B}_{22} + \iota (\tilde{B}_{12} - \tilde{B}_{12}^{T} ).$
The matrix $ \widetilde{M}_{(0)}$ is the submatrix of $M$ consisting of the $i\text{th}$ and $(n+i)\text{th}$ columns of $M$ for all $i \in \mathcal{I}.$
By continuity of the symplectic eigenvalues we have $\mathcal{I} \subseteq \{m-i_m+1, \ldots, m+j_m\}.$
Therefore $ \widetilde{M}_{(0)}$ is also a submatrix of $\widetilde{M}.$
It thus follows from relation $(\ref{2eqn28})$ that each block of $\widetilde{B}_{(0)}$ is obtained by removing the $i\text{th}$ row and the $i\text{th}$ column of the corresponding block of $\widetilde{B}$ for all $i$ not in $\mathcal{I}.$
Therefore $\widetilde{\widetilde{B}}_{(0)}$ is a compression of $\widetilde{\widetilde{B}}.$
By Cauchy interlacing principle we have
$$ \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}}_{(0)}) \leq \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}}).$$
Using equation $(\ref{2eqn17})$ we get
\begin{align*}
(-d_m)^{\circ}(A; B) &= - \lim_{p \to \infty} d_{m}^{\prime}(A_{(p)};B) \\
&\leq \frac{1}{2} \lim_{p \to \infty} \lambda_{1}^{\downarrow}(\widetilde{\widetilde{B}}_{(p)}) \\
&= \frac{1}{2} \lambda_{1}^{\downarrow}(\lim_{p \to \infty} \widetilde{\widetilde{B}}_{(p)}) \\
&= \frac{1}{2} \lambda_{1}^{\downarrow}( \widetilde{\widetilde{B}}_{(0)}) \\
& \leq \frac{1}{2} \lambda_{1}^{\downarrow}( \widetilde{\widetilde{B}}) \\
&= -d_{\widehat{m}}^{\prime}(A; B).
\end{align*}
Thus we have proved that $(-d_m)^{\circ}(A; B) \leq -d_{\widehat{m}}^{\prime}(A; B)$ for all $B$ in $ \mathbb{S}(2n).$
This implies $\partial^{\circ} (-d_m)(A) \subseteq \partial (-d_{\widehat{m}}^{\prime}(A;\cdot))(0)$ by definition.
By $\text{Theorem } \ref{2thm2}$ we know that $ \partial^{\diamond} (-d_m)(A)=\partial (-d_{\widehat{m}}^{\prime}(A;\cdot))(0).$
We have thus proved that $\partial^{\circ} (-d_m)(A) \subseteq \partial^{\diamond} (-d_m)(A).$
\end{proof}
The following is a well known result.
A proof of this result using matrix inequalities can be found in \cite[Theorem 8.15]{degosson}.
We give an alternate proof of this result using the Michel-Penot and Clarke subdifferential of $-d_m.$
\begin{cor} \label{2cor3}
For every $A, B$ in $\mathbb{P}(2n),$ we have $d_j(A) \leq d_j(B)$ for all $j=1,\ldots, n,$ whenever $A \leq B.$
\end{cor}
\begin{proof}
By $\text{Theorem } 3.1$ of \cite{idel} we know that $-d_m$ is a locally Lipschitz function.
Let $A, B$ be elements of $\mathbb{P}(2n).$
By Lebourg mean value theorem \cite[Theorem 1.7]{lebourg}, there exist $P$ in $ \mathbb{P}(2n)$ and $C$ in $ \partial^{\circ} (-d_m)(P)$ such that
\begin{equation*}
(-d_m)(A)- (-d_m)(B) = \langle C, A-B \rangle.
\end{equation*}
But we know by $\text{Theorem } \ref{2thm3}$ that
\begin{equation*}
\partial^{\circ} (-d_m)(P) = \text {\rm conv} \{-\frac{1}{2} (xx^T+yy^T): (x,y) \in S_m(P)\}.
\end{equation*}
Therefore we have
\begin{equation} \label{2eqn30}
d_m(B) - d_m(A) \in \text {\rm conv} \{\frac{1}{2} \langle xx^T+yy^T, B-A \rangle: (x,y) \in S_m(P)\}.
\end{equation}
Thus, $A \leq B$ implies
$$\text {\rm conv} \{\frac{1}{2} \langle xx^T+yy^T, B-A \rangle : (x,y) \in S_m(P)\} \subseteq [0, \infty).$$
By $(\ref{2eqn30})$ we conclude $ d_m(B) \geq d_m(A).$
\end{proof}
\vskip.2in
{\bf\it{Acknowledgement}}: This work has been done under the guidance of Prof. Tanvi Jain, supervisor of my doctoral studies.
\vskip0.2in
\section{Introduction}
Estimating depth and egomotion from a monocular camera is a fundamental and valuable task in computer vision, with wide applications in augmented reality~\cite{dtam}, robot navigation~\cite{desouza2002vision} and autonomous driving~\cite{menze2015object}.
Though monocular cameras are cheap and lightweight, the task is hard for conventional SfM/SLAM algorithms~\cite{dso,mur2015orb,pire2017s} and remains challenging for deep learning based approaches~\cite{struct2depth,undeepvo,vomonodepth,sc-sfmlearner,sfmlearner}.
Deep learning for depth and egomotion estimation can be broadly categorized into supervised and self-supervised learning.
For depth estimation, supervised learning takes images paired with depth maps as input~\cite{Eigen,dorn,bts}, where the depth maps are either sparsely collected by expensive LiDAR sensors~\cite{kitti} or densely rendered by simulation engines~\cite{mayer2016large}; supervision from LiDAR limits the generalization to new cameras, while supervision from rendering limits the generalization to real scenes.
For egomotion estimation, supervised signals come from trajectories computed by classical methods with high-precision sensors such as IMU and GPS, which are also costly and cannot guarantee absolute accuracy.
Self-supervised learning unifies these two tasks in one framework that uses only monocular videos as input, with supervision coming from view synthesis~\cite{sfmlearner,GeoNet,vid2deep,monodepth,monodepth2}.
The setup is simpler and generalizes easily across cameras.
However, self-supervised approaches still trail supervised ones by large margins on standard benchmarks.
The main problem lies in the weak supervision provided by the photometric loss, which is defined as the photometric difference between a pixel warped from the source view using the estimated depth and pose and the corresponding pixel captured in the target view.
Nevertheless, a small photometric loss does not necessarily guarantee accurate depth and pose, especially for pixels in textureless regions.
The problem can be partially alleviated by adding a smoothness loss on the depth map, which encourages first-order smoothness~\cite{struct2depth,monodepth,monodepth2} or second-order smoothness~\cite{lego,dnc,epc,epc++} and forces depth to propagate from discriminative regions to textureless regions.
However, such propagation has limited range and tends to produce over-smoothed results around boundaries.
Since this basic limitation stems from the representation itself, we propose a feature-metric loss defined on a learned feature representation of each pixel, which is explicitly constrained to be discriminative even in textureless regions. For learning the feature representation, a single-view reconstruction pathway is added as an auto-encoder network. To ensure that the loss landscape defined on the learned features has the desired shape, two additional regularizing losses are added to the auto-encoder loss, \emph{i.e.}, a discriminative loss and a convergent loss. The discriminative loss encourages feature differences across pixels, modeled by first-order gradients, while the convergent loss ensures a wide convergence basin by penalizing the variance of feature gradients across pixels.
In total, our network architecture contains three sub-networks, \emph{i.e.}, DepthNet and PoseNet for cross-view reconstruction, and FeatureNet for single-view reconstruction, where features generated by FeatureNet are used to define the feature-metric loss for DepthNet and PoseNet.
In experiments, the feature-metric loss outperforms the widely used first-order and second-order smoothness losses, and improves state-of-the-art depth estimation from 0.885 to 0.925 measured by $\delta_1$ on the KITTI dataset.
In addition, our method produces better egomotion estimates and results in more accurate visual odometry.
In general, our contributions are summarized as three-fold:
\begin{itemize}
\item Feature-metric loss is proposed for self-supervised depth and egomotion estimation.
\item FeatureNet is proposed to learn feature representations tailored to depth and egomotion estimation.
\item State-of-the-art performances on depth and egomotion estimation are achieved on KITTI dataset.
\end{itemize}
\section{Related Work}
\label{rw}
In this section, we review related work on self-supervised learning for two tasks, \emph{i.e.}, monocular depth and egomotion estimation, as well as visual representation learning.
\noindent
\textbf{Monocular depth and egomotion estimation:}
SfMLearner is a pioneering work~\cite{sfmlearner} for this task, where the geometry estimates from DepthNet and PoseNet are supervised by the photometric loss. To tackle moving objects that break the static-scene assumption, optical flow is estimated to compensate for moving pixels~\cite{GeoNet,epc,epc++,dfnet}, and segmentation masks provided by pre-trained segmentation models are also used to handle potential moving objects separately~\cite{struct2depth,signet,learnk}.
More geometric priors are also used to strengthen self-supervised learning. A depth-normal consistency loss is proposed as an extra constraint~\cite{lego,dnc}. 3D consistency between point clouds backprojected from adjacent views is considered in~\cite{vid2deep,glnet,sc-sfmlearner}. In addition, binocular videos are used for training to resolve both scale ambiguity and scene dynamics~\cite{undeepvo,monodepth2,epc,epc++}, while inference can still be carried out on monocular video.
In contrast to all the above methods, which focus on the geometry part of the task, deep feature reconstruction~\cite{dfr} proposed to use deep features from pre-trained models to define the reconstruction loss. Our method shares the same spirit, but goes a step further to explicitly learn deep features for the geometry problem under the same self-supervised learning framework.
\noindent
\textbf{Visual representation learning:}
Self-supervised visual representation learning for downstream tasks is of great interest. Without explicitly provided labels, the losses are defined by manipulating the data itself in different ways, such as reconstructing the input data~\cite{stacked,denoise,afl,avb}, predicting spatial transformations~\cite{sp1,sp2,sp3,sp4}, or colorizing grayscale input images~\cite{colorization1,colorization2,colorization3,colorization4}. Our work belongs to the category of reconstructing the input through an auto-encoder network. Different from previous works that mainly aim at learning better features for recognition tasks, our method is designed to learn better features for the geometry task.
\section{Method}
\label{method}
In this section, we first introduce the geometry models with the required notation, then define two reconstruction losses, one for depth and ego-motion learning and the other for feature representation learning.
Finally, we present our overall pipeline and implementation details about loss settings and network architectures.
\subsection{Geometry models}\label{sec41}
\textbf{Camera model and depth.}
The camera operator $\pi: \mathbb{R}^3 \rightarrow \mathbb{R}^2$ projects a 3D point $P=(X,Y,Z)$ to a 2D pixel $p=(u,v)$ by:
\begin{equation}\label{31}
\pi(P) = (f_x \frac{X}{Z}+c_x, f_y \frac{Y}{Z} + c_y)
\end{equation}
where $(f_x, f_y, c_x, c_y)$ are the camera intrinsic parameters. Similarly, a pixel $p$ is backprojected to a 3D point $P$ given its depth $D(p)$, i.e., the backprojection $\pi^{-1}: \mathbb{R}^2 \times \mathbb{R} \rightarrow \mathbb{R}^3$:
\begin{equation}\label{32}
\pi^{-1}\big(p, D(p)\big) = D(p)\Big(\frac{u-c_x}{f_x}, \frac{v-c_y}{f_y},1\Big)^\top
\end{equation}
\textbf{Ego-motion.} Ego-motion is modeled by a transformation $G\in \mathbb{SE}(3)$; together with $\pi$ and $\pi^{-1}$, it defines a projective warping function $\omega: \mathbb{R}^2 \times \mathbb{R} \times \mathbb{SE}(3) \rightarrow \mathbb{R}^2$, which maps a pixel $p$ in one frame to the other frame transformed by $G$:
\begin{equation}\label{warp}
\widehat{p}=\omega\big(p,D(p),G\big)=\pi\Big(G\cdot \pi^{-1}\big(p,D(p)\big)\Big)
\end{equation}
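The following PyTorch sketch summarizes the three operators $\pi$, $\pi^{-1}$ and $\omega$; the tensor shapes and the intrinsics tuple $(f_x, f_y, c_x, c_y)$ are our own assumptions for illustration:
\begin{verbatim}
import torch

def backproject(p, depth, K):      # pi^{-1}: pixels -> 3D points
    fx, fy, cx, cy = K
    X = depth * (p[:, 0] - cx) / fx
    Y = depth * (p[:, 1] - cy) / fy
    return torch.stack([X, Y, depth], dim=1)

def project(P, K):                 # pi: 3D points -> pixels
    fx, fy, cx, cy = K
    return torch.stack([fx * P[:, 0] / P[:, 2] + cx,
                        fy * P[:, 1] / P[:, 2] + cy], dim=1)

def warp(p, depth, G, K):          # omega: projective warping
    P = backproject(p, depth, K)   # (N, 3)
    P_h = torch.cat([P, torch.ones(P.shape[0], 1)], dim=1)
    return project((P_h @ G.T)[:, :3], K)
\end{verbatim}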
\subsection{Cross-view reconstruction}
With the above geometry models, the target frame $I_t$ can be reconstructed from the source frame $I_s$ via
\begin{equation}\label{sample1}
\widehat{I}_{s \rightarrow t}(p) = I_s(\widehat{p})
\end{equation}
where $\widehat{p}$ is defined in Eq.~\ref{warp} and depends on both depth and ego-motion.
$I_t(p)$ and $I_s(\widehat{p})$ should be similar under a set of assumptions: both depth and ego-motion are correct, and the corresponding 3D point is static, Lambertian, and not occluded in either view.
Then, a cross-view reconstruction loss can be defined for learning depth and motion, i.e.,
\begin{equation}\label{sample2}
\mathcal{L}_{s \rightarrow t} = \sum_p \ell\big(I_s(\widehat{p}), I_t(p)\big),
\end{equation}
where $\ell(\cdot,\cdot)$ is the per-pixel loss measuring the photometric difference, i.e., the photometric loss.
Though this loss works, it is fundamentally problematic, since correct depth and pose are sufficient but not necessary for a small photometric error; e.g., pixels in a textureless region with similar photometric values can produce small photometric losses even when the depth and pose are wrongly estimated.
The problem can be formally analyzed from the optimization perspective by deriving the gradients with respect to both depth $D(p)$ and egomotion $G$,
\begin{equation}\label{derivative1}
\frac{\partial \mathcal{L}_{s \rightarrow t}}{\partial D(p)} = \frac{\partial \ell\big(I_s(\widehat{p}), I_t(p)\big)}{\partial I_s(\widehat{p})} \cdot \frac{\partial I_s(\widehat{p})}{\partial \widehat{p}} \cdot \frac{\partial \widehat{p}}{\partial D(p)},
\end{equation}
\begin{equation}\label{derivative2}
\frac{\partial \mathcal{L}_{s \rightarrow t}}{\partial G} = \sum_p \frac{\partial \ell\big(I_s(\widehat{p}), I_t(p)\big)}{\partial I_s(\widehat{p})} \cdot \frac{\partial I_s(\widehat{p})}{\partial \widehat{p}} \cdot \frac{\partial \widehat{p}}{\partial G},
\end{equation}
where both gradients depend on the image gradient $\frac{\partial I_s(\widehat{p})}{\partial \widehat{p}}$. For textureless regions, the image gradients are close to zero, which causes zero gradients in Eq.~\ref{derivative1} and contributes zero to Eq.~\ref{derivative2} for egomotion estimation. In addition, locally non-smooth gradient directions also hamper convergence due to inconsistent update directions towards the minima.
Therefore, we propose to learn a feature representation $\phi_s(p)$ with better gradients $\frac{\partial \phi_s(\widehat{p})}{\partial \widehat{p}}$ to overcome the above problems, and accordingly generalize the photometric loss to the feature-metric loss,
\begin{equation}\label{feature_metric}
\mathcal{L}_{s \rightarrow t} = \sum_p \ell\big(\phi_s(\widehat{p}), \phi_t(p)\big).
\end{equation}
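Taking $\ell$ to be the per-pixel $L_1$ difference, this loss can be computed with bilinear sampling; a minimal PyTorch sketch (tensor shapes are our assumptions) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def feature_metric_loss(feat_s, feat_t, p_hat):
    # feat_s, feat_t: (1, C, H, W); p_hat: (1, H, W, 2) warped (u, v)
    _, _, H, W = feat_s.shape
    grid = torch.empty_like(p_hat)
    grid[..., 0] = 2.0 * p_hat[..., 0] / (W - 1) - 1.0  # to [-1, 1]
    grid[..., 1] = 2.0 * p_hat[..., 1] / (H - 1) - 1.0
    warped = F.grid_sample(feat_s, grid, mode='bilinear',
                           padding_mode='border', align_corners=True)
    return (warped - feat_t).abs().mean()
\end{verbatim}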
\subsection{Single-view reconstruction}
The feature representation $\phi(p)$ is also learned in a self-supervised manner via single-view reconstruction through an auto-encoder network, which contains an encoder that extracts deep features from an image and a decoder that reconstructs the input image from those features. The deep features are learned to encode large patterns in an image, with redundancies and noise removed. To ensure that the learned representation has good properties for optimizing Eq.~\ref{feature_metric}, we add two extra regularizers $\mathcal{L}_{dis}$ and $\mathcal{L}_{cvt}$ to the image reconstruction loss $\mathcal{L}_{rec}$, i.e.,
\begin{equation}\label{t}
\mathcal{L}_{s}=\mathcal{L}_{rec}+\alpha \mathcal{L}_{dis}+\beta \mathcal{L}_{cvt}
\end{equation}
where $\alpha$ and $\beta$ are set to $10^{-3}$ via cross-validation. These three loss terms are described in detail below.
For simplicity, we denote the first-order and second-order derivatives with respect to the image coordinates by $\nabla^1$ and $\nabla^2$, which equal $\partial_x+\partial_y$ and $\partial_{xx}+2\partial_{xy}+\partial_{yy}$, respectively.
\textbf{Image reconstruction loss} $\mathcal{L}_{rec}$ is the standard loss function for an auto-encoder network, which requires that the encoded features can be used to reconstruct the input, i.e.,
\begin{equation}\label{rec}
\mathcal{L}_{rec}= \sum_p |I(p) - I_{rec}(p)|_1
\end{equation}
where $I(p)$ is the input image, and $I_{rec}(p)$ is the image reconstructed from the auto-encoder network.
\textbf{Discriminative loss} $\mathcal{L}_{dis}$ is defined to ensure that the learned features have large gradients $\frac{\partial \phi(\widehat{p})}{\partial \widehat{p}}$ by explicitly encouraging large first-order feature gradients, i.e.,
\begin{equation}\label{dis1}
\mathcal{L}_{dis}=-\sum_p |\nabla^1 \phi(p)|_1
\end{equation}
Furthermore, image gradients are used to emphasize low-texture regions,
\begin{equation}\label{dis2}
\mathcal{L}_{dis}=-\sum_p e^{-|\nabla^1 I(p)|_1} |\nabla^1 \phi(p)|_1
\end{equation}
where low-texture regions receive larger weights.
\textbf{Convergent loss} $\mathcal{L}_{cvt}$ is defined to encourage smoothness of the feature gradients, which ensures consistent gradients during optimization and, accordingly, a large convergence radius. The loss penalizes the second-order gradients, i.e.,
\begin{equation}\label{cvt1}
\mathcal{L}_{cvt}=\sum_p |\nabla^2 \phi(p)|_1
\end{equation}
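A compact PyTorch sketch of the three single-view losses, using finite differences for $\nabla^1$ and $\nabla^2$ (the pure cross term of $\nabla^2$ is omitted for brevity; shapes and reductions are our assumptions):
\begin{verbatim}
import torch

def g1(x):  # first-order finite differences along x and y
    return (x[..., :, 1:] - x[..., :, :-1],
            x[..., 1:, :] - x[..., :-1, :])

def single_view_loss(img, img_rec, feat, alpha=1e-3, beta=1e-3):
    l_rec = (img - img_rec).abs().mean()       # reconstruction
    igx, igy = g1(img)
    fgx, fgy = g1(feat)
    wx = torch.exp(-igx.abs().mean(1, keepdim=True))
    wy = torch.exp(-igy.abs().mean(1, keepdim=True))
    l_dis = -((wx * fgx.abs()).mean()          # discriminative, with
              + (wy * fgy.abs()).mean())       # low texture weighted up
    l_cvt = (g1(fgx)[0].abs().mean()           # convergent: penalize
             + g1(fgy)[1].abs().mean())        # second-order gradients
    return l_rec + alpha * l_dis + beta * l_cvt
\end{verbatim}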
\begin{figure*}[!tp]
\centering
\includegraphics[width=12cm]{fig/pipeline.png}
\caption{
An illustration of the overall framework, which contains DepthNet, PoseNet and FeatureNet for depth map prediction, egomotion prediction and feature learning, respectively.
FeatureNet uses $\mathcal{L}_{s}$ to learn the required visual representation, and the encoder from FeatureNet is used to extract features for the cross-view reconstruction loss $\mathcal{L}_{s \rightarrow t}$.
}
\label{fig3}
\end{figure*}
\subsection{Overall pipeline}\label{sec34}
Single-view reconstruction and cross-view reconstruction are unified to form the final framework, as illustrated in Fig.~\ref{fig3}. DepthNet is a monocular depth estimator that takes the target frame as input and outputs a depth map. PoseNet is an egomotion estimator that takes two frames, from the source and target views, and outputs the relative pose between them. DepthNet and PoseNet provide the geometric information needed to establish point-to-point correspondences for cross-view reconstruction. FeatureNet performs feature representation learning; it follows the auto-encoder architecture and is supervised by the single-view reconstruction loss. Features from FeatureNet are used to define the cross-view reconstruction loss.
Therefore, the total loss for the whole architecture contains two parts, where $\mathcal{L}_s$ constrains the quality of learned features through single-view reconstruction, whilst $\mathcal{L}_{s \rightarrow t}$ penalizes the discrepancy from cross-view reconstruction, i.e.,
\begin{equation}
\mathcal{L}_{total} = \mathcal{L}_s + \mathcal{L}_{s \rightarrow t}
\end{equation}
Toward better performance, the proposed feature-metric loss is combined with the commonly used photometric loss, i.e.,
\begin{equation}\label{fmph}
\begin{aligned}
\mathcal{L}_{s \rightarrow t} = &\sum_p \mathcal{L}_{fm}\big(\phi_s(\widehat{p}),\phi_t(p)\big)\\
+&\sum_p \mathcal{L}_{ph}(I_s(\widehat{p}),I_t(p))
\end{aligned}
\end{equation}
where $\mathcal{L}_{fm}$ and $\mathcal{L}_{ph}$ are the feature-metric loss and the photometric loss, respectively. Specifically, the feature-metric loss is defined by
\begin{equation}
\mathcal{L}_{fm} = |\phi_s(\widehat{p})- \phi_t(p)|_1,
\end{equation}
and the photometric loss is defined following~\cite{monodepth} using a combination of $L_1$ and SSIM
losses, i.e.,
\begin{equation}\label{ph}
\begin{aligned}
\mathcal{L}_{ph} =0.15 &\sum_p |I_s(\widehat{p})-I_t(p)|_1+\\
0.85 &\frac{1-\text{SSIM}(I_s(\widehat{p}),I_t(p))}{2}
\end{aligned}
\end{equation}
Furthermore, we resolve the occlusion problem following the practices in~\cite{monodepth2,ddvo,noise,dfr}, where multiple source views are used to define the cross-view reconstruction loss,
\begin{equation}\label{occ}
\begin{aligned}
\mathcal{L}_{s \rightarrow t}' = &\sum_p \underset{s \in V}{\min} \; \mathcal{L}_{s \rightarrow t}\big(\phi_s(\widehat{p}),\phi_t(p)\big)
\end{aligned}
\end{equation}
where $V$ is the set of source frames.
When trained on monocular videos, $V$ contains the frames immediately before and after the current target frame; when trained on calibrated binocular videos, an extra frame from the opposite stereo camera is added.
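The per-pixel minimum over the source set $V$ can be implemented as below (the per-source losses are assumed to be per-pixel maps):
\begin{verbatim}
import torch

def min_reprojection(per_source_losses):
    # list of (B, 1, H, W) per-pixel losses, one per s in V
    stacked = torch.stack(per_source_losses, dim=0)
    return stacked.min(dim=0).values.mean()
\end{verbatim}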
\begin{figure*}[!tp]
\centering
\includegraphics[width=12cm]{fig/qualitative_study-crop.pdf}
\caption{
Qualitative comparison between Monodepth2 \cite{monodepth2} (second row) and our method (last row).
Our method performs better on low-texture regions such as walls and billboards, and preserves finer details such as the silhouettes of humans and poles.
}
\label{qualitative}
\end{figure*}
\subsection{Implementation details}\label{sec45}
For FeatureNet, ResNet-50~\cite{resnet} with the fully-connected layer removed is used as the encoder, where the deepest feature map goes through 5 downsampling stages and is reduced to 1/32 of the input resolution. The decoder contains five $3\times 3$ convolutional layers, each followed by a bilinear upsampling layer. Multi-scale feature maps from the convolutional layers of the decoder are used to generate multi-scale reconstructed images, where the feature map at each scale further goes through a $3\times 3$ convolution with a sigmoid function for image reconstruction. The largest feature map from the encoder, with 64 channels, is regularized by $\mathcal{L}_{dis}$ and $\mathcal{L}_{cvt}$ and is used for the feature-metric loss.
DepthNet also adopts an encoder-decoder structure, where ResNet-50 without the fully-connected layer is used as the encoder and multi-scale feature maps are output. The depth decoder is implemented in a cascaded refinement manner, which decodes depth maps in a top-down pathway. Specifically, the multi-scale features from the encoder are used to predict maps of corresponding sizes via a $3\times 3$ convolution followed by a sigmoid, and these maps are refined in a coarse-to-fine manner towards the final depth map. Both FeatureNet and DepthNet take images of size $320 \times 1024$ as input.
PoseNet is a pose estimator built on ResNet-18 \cite{resnet}, which is modified to receive a concatenated image pair and predict the relative pose between the two frames. The axis-angle representation is chosen for the 3D rotation.
The input resolution is $192 \times 640$. Compared with FeatureNet and DepthNet, PoseNet uses a lower input resolution and a more lightweight backbone; we observe that this has no obvious influence on pose accuracy but significantly saves memory and computation.
We adopt the data preprocessing settings of~\cite{monodepth2}.
Our models are implemented in PyTorch \cite{pytorch} with distributed computing, and trained for 40 epochs using the Adam~\cite{adam} optimizer with a batch size of 2 on 8 GTX 1080Ti GPUs.
The learning rate is gradually warmed up to $10^{-4}$ in 3 steps, where each step increases the learning rate by $10^{-4}/3$ over 500 iterations. After the warm-up, a learning rate of $10^{-4}$ is used for the first 20 epochs and is then halved twice, at the 20th and 30th epochs.
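A sketch of this schedule as we read it (the iteration counts are an assumption):
\begin{verbatim}
def lr_at(iteration, epoch, base=1e-4):
    # warm-up: 3 steps of 500 iterations in the first epoch
    if epoch == 0 and iteration < 1500:
        return base / 3 * (iteration // 500 + 1)
    if epoch < 20:
        return base
    return base / 2 if epoch < 30 else base / 4
\end{verbatim}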
As for the online refinement technique used during testing, we follow the practice proposed in \cite{glnet,struct2depth}:
we keep the model training while performing inference.
The batch size is set to 1, and each batch consists of the test image and its two adjacent frames.
Online refinement is performed for 20 iterations on each test sample with the same settings introduced before.
No data augmentation is used in the inference phase.
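A minimal sketch of the online refinement loop; \texttt{total\_loss} and \texttt{predict\_depth} are assumed interfaces of our model, not a fixed API:
\begin{verbatim}
import torch

def online_refine(model, optimizer, triplet, iters=20):
    # triplet: (I_{t-1}, I_t, I_{t+1}); keep training at test time
    model.train()
    for _ in range(iters):
        loss = model.total_loss(triplet)   # L_s + L_{s->t}
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        return model.predict_depth(triplet[1])
\end{verbatim}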
\section{Experiments}
\label{exp}
In this section we present extensive experiments evaluating our approach. We make a fair comparison on the KITTI 2015 dataset~\cite{kitti} with prior art on both single-view depth and visual odometry estimation,
and perform detailed ablation studies to show the effectiveness of the \textbf{feature-metric loss}.
The KITTI 2015 dataset contains videos of 200 street scenes captured by RGB cameras, with sparse depth ground truth captured by a Velodyne laser scanner.
We follow \cite{sfmlearner} in removing static frames as a pre-processing step.
We use the Eigen split of \cite{Eigen} to divide the KITTI raw data, resulting in 39,810 monocular triplets for training, 4,424 for validation and 697 for testing.
For depth evaluation, we test our depth model on the 697 KITTI test images.
For odometry evaluation, we test our system on the official KITTI odometry split, which contains 11 driving sequences with ground truth odometry obtained from IMU and GPS readings.
Following previous works \cite{dfr,sc-sfmlearner,sfmlearner}, we train our model on sequences 00-08 and use sequences 09-10 for testing.
\subsection{Depth evaluation}
\begin{table}[!tp]
\begin{center}
\begin{tabular}{ll}
\hline
$\textbf{Abs Rel}:\frac{1}{|D|}\sum_{d \in D}|d^*-d|/d^*$
&$\textbf{RMSE}:\sqrt{\frac{1}{|D|}\sum_{d \in D}||d^*-d||^2}$\\
$\textbf{Sq Rel}:\frac{1}{|D|}\sum_{d \in D}||d^*-d||^2/d^*$
&$\textbf{RMSE log}:\sqrt{\frac{1}{|D|}\sum_{d \in D}||logd^*-logd||^2}$\\
\multicolumn{2}{l}{$\mathbf{\delta}_\mathbf{t}:\frac{1}{|D|}|\{d \in D| \: max(\frac{d^*}{d},\frac{d}{d^*}) \: < 1.25^t\}|\times 100\%$}
\\
\hline
\end{tabular}
\caption{Performance metrics for depth evaluation.
$d$ and $d^*$ denote the predicted and ground truth depth respectively, $D$ denotes the set of all predicted depth values of an image, and $|\cdot|$ returns the number of elements in the input set.
}
\label{tab1}
\end{center}
\end{table}
\textbf{Performance metrics.}
Standard metrics are used for depth evaluation, as shown in Tab.~\ref{tab1}.
During evaluation, depth is capped at 80m.
For methods trained on monocular videos, depth is defined only up to a scale factor \cite{sfmlearner}, which is computed by
\begin{equation}\label{scale}
scale = median(D_{gt})/median(D_{pred})
\end{equation}
For evaluation, the predicted depth maps are multiplied by the computed $scale$ to match the median of the ground truth; this step is called \textbf{median scaling}.
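The metrics in Tab.~\ref{tab1} together with median scaling can be computed as follows (a NumPy sketch over valid pixels):
\begin{verbatim}
import numpy as np

def depth_metrics(pred, gt):
    pred = pred * np.median(gt) / np.median(pred)  # median scaling
    pred = np.clip(pred, 1e-3, 80.0)               # cap at 80 m
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean((gt - pred) ** 2 / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    ratio = np.maximum(gt / pred, pred / gt)
    d1, d2, d3 = [np.mean(ratio < 1.25 ** t) for t in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, d1, d2, d3
\end{verbatim}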
\begin{table}[!t]
\begin{center}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{l|l|cccc|ccc}
\hline
\multirow{2}*{Method} &\multirow{2}*{Train} &\multicolumn{4}{c|}{The lower the better} &\multicolumn{3}{c}{The higher the better}\\
~ &~ &Abs Rel &Sq Rel &RMSE &RMSE log &$\delta_1$ &$\delta_2$ &$\delta_3$\\
\hline
\hline
SfMLearner~\cite{sfmlearner} &M &0.208 &1.768 &6.958 &0.283 &0.678 &0.885 & 0.957\\
DNC~\cite{dnc} &M &0.182 &1.481 &6.501 &0.267 &0.725 &0.906 &0.963\\
Vid2Depth~\cite{vid2deep} &M &0.163 &1.240 &6.220 &0.250 &0.762 &0.916 &0.968\\
LEGO~\cite{lego} &M &0.162 &1.352 &6.276 &0.252 &0.783 &0.921 &0.969\\
GeoNet~\cite{GeoNet} &M &0.155 &1.296 &5.857 &0.233 &0.793 &0.931 &0.973\\
DF-Net~\cite{dfnet} &M &0.150 &1.124 &5.507 &0.223 &0.806 &0.933 &0.973\\
DDVO~\cite{ddvo} &M &0.151 &1.257 &5.583 &0.228 &0.810 &0.936 &0.974\\
EPC++~\cite{epc++} &M &0.141 &1.029 &5.350 &0.216 &0.816 &0.941 &0.976\\
Struct2Depth~\cite{struct2depth} &M &0.141 &1.036 &5.291 &0.215 &0.816 &0.945 &0.979\\
SIGNet~\cite{signet} &M &0.133 &0.905 &5.181 &0.208 &0.825 &0.947 &0.981\\
CC~\cite{cc} &M &0.140 &1.070 &5.326 &0.217 &0.826 &0.941 &0.975\\
LearnK~\cite{learnk} &M &0.128 &0.959 &5.230 &0.212 &0.845 &0.947 &0.976\\
DualNet~\cite{dualnet} &M
&0.121 &\underline{0.837} &4.945 &0.197 &0.853 &0.955 &\underline{0.982}\\
SuperDepth~\cite{superdepth} &M &0.116 &1.055 &- &0.209 &0.853 &0.948 &0.977\\
Monodepth2~\cite{monodepth2} &M &\underline{0.115} &0.882 &\underline{4.701} &\underline{0.190} &\underline{0.879} &\underline{0.961} &\underline{0.982}\\
\hline
Ours &M &\textbf{0.104} &\textbf{0.729} &\textbf{4.481} &\textbf{0.179} &\textbf{0.893} &\textbf{0.965} &\textbf{0.984}\\
\hline
\hline
Struct2Depth~\cite{struct2depth} &M${^*}$ &0.109 &0.825 &4.750 &0.187 &0.874 &\underline{0.958} &\textbf{0.983}\\
GLNet~\cite{glnet} &M$^{*}$ &\underline{0.099} &\underline{0.796} &\underline{4.743} &\underline{0.186} &\underline{0.884} &0.955 &0.979\\
\hline
Ours &M$^{*}$ &\textbf{0.088} &\textbf{0.712} &\textbf{4.137} &\textbf{0.169} &\textbf{0.915} &\textbf{0.965} &\underline{0.982}\\
\hline
\hline
Dorn~\cite{dorn} &Sup &0.099 &0.593 &3.714 &0.161 &0.897 &0.966 &0.986\\
BTS~\cite{bts} &Sup &0.091 &0.555 &4.033 &0.174 &0.904 &0.967 &0.984\\
\hline
\hline
MonoDepth~\cite{monodepth} &S &0.133 &1.142 &5.533 &0.230 &0.830 &0.936 &0.970\\
MonoDispNet~\cite{monodispnet} &S &0.126 &0.832 &\textbf{4.172} &0.217 &0.840 &0.941 &0.973\\
MonoResMatch~\cite{monoresmatch} &S &0.111 &0.867 &4.714 &0.199 &0.864 &0.954 &\underline{0.979}\\
MonoDepth2~\cite{monodepth2} &S &0.107 &0.849 &4.764 &0.201 &0.874 &0.953 &0.977\\
RefineDistill~\cite{refinedistill} &S &\textbf{0.098} &0.831 &4.656 &0.202 &0.882 &0.948 &0.973\\
UnDeepVO~\cite{undeepvo} &MS &0.183 &1.730 &6.570 &0.268 &- &- &-\\
DFR~\cite{dfr} &MS &0.135 &1.132 &5.585 &0.229 &0.820 &0.933 &0.971\\
EPC++~\cite{epc++} &MS &0.128 &0.935 &5.011 &0.209 &0.831 &0.945 &\underline{0.979}\\
MonoDepth2~\cite{monodepth2} &MS &0.106 &0.818 &4.750 &0.196 &0.874 &0.957 &\underline{0.979}\\
DepthHint~\cite{depthhint} &MS$^\dagger$ &0.100 &\underline{0.728} &4.469 &\underline{0.185} &\underline{0.885} &\underline{0.962} &\textbf{0.982}\\
\hline
Ours &MS &\underline{0.099} &\textbf{0.697} &\underline{4.427} &\textbf{0.184} &\textbf{0.889} &\textbf{0.963} &\textbf{0.982}\\
\hline
\hline
Ours &MS$^{*}$ &0.079 &0.666 &3.922 &0.163 &0.925 &0.970 &0.984\\
\hline
\end{tabular}
}
\caption{
Performance comparison on the KITTI dataset.
Best results are in bold, second best are underlined.
M: trained on monocular videos.
S: trained on stereo pairs.
MS: trained on calibrated binocular videos.
Sup: trained on labelled single images.
$*$: using the online refinement technique \cite{struct2depth}, which advocated keeping the model training while performing inference.
$\dagger$: using post-processing steps.
}
\label{tab2}
\end{center}
\end{table}
\vspace{2pt}
\textbf{Comparison with state-of-the-art.}
Tab.~\ref{tab2} shows the performance of current state-of-the-art approaches for monocular depth estimation.
They are trained on different kinds of data --- monocular videos (M), stereo pairs (S), binocular videos (MS) and labelled single images (Sup) --- while all of them are tested with a single image as input.
We achieve the best performance among all self-supervised methods, no matter which training data is used.
Our method achieves a particularly significant improvement in the Sq Rel metric. According to Tab.~\ref{tab1}, this metric penalizes large errors at short range, where more textureless regions exist since nearby objects appear large in images; our method handles these regions well.
The closest results among self-supervised methods come from DepthHint~\cite{depthhint}, which uses the same input size but adds an extra post-processing step.
It utilizes a traditional stereo matching method, SGM~\cite{sgm}, to provide extra supervisory signals for training, since SGM is less likely to be trapped in local minima.
However, the objective function of SGM is still the photometric loss, so the drawbacks of the photometric loss remain.
In contrast, the proposed feature-metric loss largely avoids the interference of local minima.
Moreover, compared with state-of-the-art \textbf{supervised} methods \cite{dorn,bts}, which achieve top performances on the KITTI depth prediction benchmark, our model with online refinement even exceeds them on many metrics.
Our advantage over supervised methods is that, since a gap between the distributions of training and testing data does exist, we can make full use of the online refinement technique.
Furthermore, as shown in Sec.~\ref{sec43}, the feature-metric loss obtains a larger performance gain from online refinement.
Fig.~\ref{qualitative} shows qualitative results.
Compared with the state-of-the-art method MonoDepth2~\cite{monodepth2}, we achieve better performance on low-texture regions and finer details, e.g., walls, billboards, and the silhouettes of humans and poles.
MonoDepth2 is built on the photometric loss, which is easily trapped in local minima, especially on low-texture regions such as walls and billboards.
In contrast, the feature-metric loss helps the network jump out of local minima, since our features are designed to form a loss landscape that is easier to optimize.
\begin{table}[!t]
\begin{center}
\begin{tabular}{l|cc|cc}
\hline
\multirow{2}*{Method} &\multicolumn{2}{c|}{Seq. 09} &\multicolumn{2}{c}{Seq. 10}\\
~ &$t_{err}$ &$r_{err}$ &$t_{err}$ &$r_{err}$\\
\hline
ORB-SLAM \cite{orbslam} &15.30 &0.26 &3.68 &0.48 \\
\hline
SfMLearner \cite{sfmlearner} &17.84 & 6.78 &37.91 &17.78 \\
DFR \cite{dfr} &11.93 &3.91 &12.45 &3.46 \\
MonoDepth2 \cite{monodepth2} &10.85 &2.86 &11.60 &5.72 \\
NeuralBundler \cite{neuralbundler} &\textbf{8.10} &2.81 &12.90 &\textbf{3.17} \\
SC-SfMlearner \cite{sc-sfmlearner} &8.24 &2.19 &10.70 &4.58 \\
\hline
Ours &8.75 &\textbf{2.11} &\textbf{10.67} &4.91\\
\hline
\end{tabular}
\caption{Performance comparison on the KITTI odometry dataset \cite{kitti}.
Best results are in bold.}
\label{tab4}
\end{center}
\end{table}
\subsection{Odometry evaluation}\label{sec53}
\textbf{Performance metric.}
Average translational root mean square error drift ($t_{err}$) and average rotational root mean square error drift ($r_{err}$) over trajectory lengths of 100m--800m are adopted for evaluation.
For methods that suffer from scale ambiguity, a single global scale that best aligns the whole sequence is used.
\textbf{Comparison with state-of-the-art.}
As shown in Tab.~\ref{tab4}, we report the performance of ORB-SLAM \cite{orbslam} as a reference and compare with recent deep methods.
Our method obtains the top performance on two metrics and comparable performance on the remaining metrics among deep learning methods.
Compared to the traditional SLAM method \cite{orbslam}, our translation performance is comparable, while in rotation estimation we still fall short, like other deep learning methods.
We believe this is because the bundle adjustment in traditional SLAM can optimize subtle rotation errors over a long sequence, which cannot be observed in the short snippets used by current deep learning based methods.
Moreover, the current reconstruction process may not be sensitive to variations in rotation \cite{bian2020depth}.
\begin{table*}[!tp]
\begin{center}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{l|c|cccc|ccc}
\multirow{2}*{Method} &\multirow{2}*{OR} &\multicolumn{4}{c|}{The lower the better} &\multicolumn{3}{c}{The higher the better}\\
~ &~ &Abs Rel &Sq Rel &RMSE &RMSE log &$\delta_1$ &$\delta_2$ &$\delta_3$\\
\hline
$\mathcal{L}_{ph}+\mathcal{L}_{ds}^1$ &$\times$
&0.105 &0.748 &4.835 &0.191 &0.878 &0.956 &0.979\\
$\mathcal{L}_{ph}+\mathcal{L}_{ds}^{2}$ &$\times$
&0.103 &0.740 &4.754 &0.187 &0.881 &0.959 &0.981\\
$\mathcal{L}_{ph}+\mathcal{L}_{ds}^{1}+\mathcal{L}_{ds}^{2}$ &$\times$
&0.103 &0.735 &4.554 &0.187 &0.883 &0.961 &0.981\\
$\mathcal{L}_{ph}+\mathcal{L}_{ds}^{1}+\mathcal{L}_{ds}^{2}$ &$\checkmark$
&0.088 &0.712 &4.237 &0.175 &0.905 &0.965 &0.982\\
\hline
$\mathcal{L}_{ph}+\mathcal{L}_{fm}$ &$\times$
&0.099 &0.697 &4.427 &0.184 &0.889 &0.963 &0.982\\
$\mathcal{L}_{ph}+\mathcal{L}_{fm}$ &$\checkmark$
&\textbf{0.079} &\textbf{0.666} &\textbf{3.922} &\textbf{0.163} &\textbf{0.925} &\textbf{0.970} &\textbf{0.984}\\
\end{tabular}
}
\caption*{
(a)
\textbf{Different loss combinations in} $\mathcal{L}_{s \rightarrow t}$ (Eq. \ref{feature_metric}); the term 'OR' denotes whether online refinement \cite{struct2depth} is used.
}
\resizebox{0.98\textwidth}{!}{
\begin{tabular}{l|cccc|ccc|cc|cc}
\multirow{2}*{Loss} &\multicolumn{4}{c|}{The lower the better} &\multicolumn{3}{c|}{The higher the better} &\multicolumn{2}{c|}{Seq. 09} &\multicolumn{2}{c}{Seq. 10}\\
~ &Abs Rel &Sq Rel &RMSE &RMSE log &$\delta_1$ &$\delta_2$ &$\delta_3$ &$t_{err}$ &$r_{err}$ &$t_{err}$ &$r_{err}$\\
\hline
$\mathcal{L}_{rec}$
&0.105 &0.739 &4.585 &0.191 &0.883 &0.961 &\textbf{0.982} &4.30 &1.18 &8.50 &4.06\\
$\mathcal{L}_{rec}+\mathcal{L}_{dis}$
&0.103 &0.723 &4.535 &0.187 &0.884 &0.961 &\textbf{0.982}
&4.10 &1.07 &8.03 &3.94\\
$\mathcal{L}_{rec}+\mathcal{L}_{cvt}$
&0.100 &0.721 &4.474
&0.187
&0.885
&0.962
&\textbf{0.982}
&3.29
&1.16
&5.91
&3.48\\
$\mathcal{L}_{rec}+\mathcal{L}_{dis}+\mathcal{L}_{cvt}$
&\textbf{0.099} &\textbf{0.697} &\textbf{4.427} &\textbf{0.184} &\textbf{0.889} &\textbf{0.963} &\textbf{0.982} &\textbf{3.07} &\textbf{0.89} &\textbf{3.83} &\textbf{1.78}\\
\end{tabular}
}
\caption*{
(b)
\textbf{Different loss combinations in $\mathcal{L}_{s}$} (Eq. \ref{t}).
}
\caption{
The ablation study of different loss settings of our work.
}
\label{tab3}
\end{center}
\end{table*}
\subsection{Ablation study}\label{sec43}
To better understand the contributions of the proposed losses --- the feature-metric loss, the discriminative loss and the convergent loss --- to the overall performance, we perform an ablation study in Tab.~\ref{tab3}.
\textbf{The losses for cross-view reconstruction.}
In Tab.~\ref{tab3}a, different components of $\mathcal{L}_{s \rightarrow t}$ are compared.
The widely used smoothness losses are adopted as baselines:
\begin{equation}
\mathcal{L}_{ds}^i=
\sum_p e^{-|\nabla^i I(p)|_1} |\nabla^i \widehat{D}(p)|_1
\end{equation}
where $\widehat{D}(p) = D(p)/\bar{D}$ is the mean-normalized depth, a technique advocated by \cite{ddvo}, and $i$ denotes the order of the derivatives.
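For reference, the first-order baseline ($i=1$) can be sketched as follows (the second-order version is analogous; shapes are our assumptions):
\begin{verbatim}
import torch

def smoothness_loss(depth, img):
    d = depth / depth.mean()   # mean normalization
    dgx = (d[..., :, 1:] - d[..., :, :-1]).abs()
    dgy = (d[..., 1:, :] - d[..., :-1, :]).abs()
    wx = torch.exp(-(img[..., :, 1:]
                     - img[..., :, :-1]).abs().mean(1, keepdim=True))
    wy = torch.exp(-(img[..., 1:, :]
                     - img[..., :-1, :]).abs().mean(1, keepdim=True))
    return (wx * dgx).mean() + (wy * dgy).mean()
\end{verbatim}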
These smoothness losses serve as baselines to verify the effectiveness of the feature-metric loss.
Compared with the smoothness losses, the feature-metric loss leads to a much better effect: the biggest performance boost is gained by introducing the feature-metric loss.
As discussed before, the propagation range of smoothness losses is limited; in contrast, the feature-metric loss enables long-range propagation, since it has a large convergence radius.
We also observe that the feature-metric loss benefits more from online refinement than the other loss combinations.
This higher performance gain is attributed to the better supervisory signal provided by the feature-metric loss during the online refinement phase, where incorrect depth values are appropriately penalized with larger losses based on more discriminative features.
\begin{figure}[!t]
\centering
\includegraphics[width=12cm]{fig/feature.png}
\caption{
A visualization of the learned visual representation, obtained by selecting one principal channel via PCA decomposition and showing the feature map as a heat map, where hotter colors indicate higher feature values.
The first row shows a typical image full of textureless regions such as walls and shadows.
Visualizations of the corresponding feature maps are shown in the second to fourth rows.
The feature maps are learned with different loss combinations, sequentially corresponding to the settings in the first three rows of Tab.~\ref{tab3}b.
For a better view, we crop three typical textureless regions, shown in (a-c); the cropped feature maps are visualized according to the dynamic range after cropping.
}
\label{feature}
\end{figure}
\textbf{The losses for single-view reconstruction.}
Tab.~\ref{tab3}b shows that the model without any of our contributions performs the worst.
When combined together, all our components lead to a significant improvement.
As shown in the right part of Tab.~\ref{tab3}b, although small deviations are less obvious in some depth evaluation metrics, small errors are magnified through accumulation and propagation during trajectory prediction, so large differences appear in the odometry evaluation.
Note that, different from the previous odometry evaluation, we directly apply the model trained on the KITTI raw data to sequences 09-10 to obtain $t_{err}$ and $r_{err}$.
Merely using $\mathcal{L}_{rec}$ yields performance similar to merely using the photometric loss (the third row in Tab.~\ref{tab3}a), since it plays a similar role to the photometric loss at textureless regions.
Results improve when equipped with $\mathcal{L}_{dis}$, since the discrimination at low-texture regions is improved.
The best performance is achieved when $\mathcal{L}_{cvt}$ is added, which means that discrimination alone is not enough; a correct optimization direction is also important.
\textbf{Visualization analysis.}
In order to see whether the learned visual representations have the promised properties, we visualize them in Fig.~\ref{feature}.
The feature maps learned with the different loss combinations $\mathcal{L}_{rec}$, $\mathcal{L}_{rec}+\mathcal{L}_{dis}$ and $\mathcal{L}_{rec}+\mathcal{L}_{dis}+\mathcal{L}_{cvt}$ are shown sequentially from the second to the fourth row.
Although we require our features to be discriminative, this effect is hard to observe in a global view, since the gap between features of different categories is much larger than that between spatially adjacent features.
Therefore, we crop three typical textureless regions and visualize them according to the dynamic range after cropping.
We can see that merely using $\mathcal{L}_{rec}$ yields small variations at textureless regions.
The close-ups of the original images look similar to the feature maps trained only with $\mathcal{L}_{rec}$, which verifies the role of the proposed losses in improving feature representations.
The feature map learned with $\mathcal{L}_{rec}+\mathcal{L}_{dis}$ is unsmooth and disordered: since $\mathcal{L}_{dis}$ overemphasizes the discrepancy between adjacent features, the network degenerates to form a landscape of a zigzag shape.
This phenomenon is confirmed by the results in the second row of Tab.~\ref{tab3}b, which are only slightly better than merely using $\mathcal{L}_{rec}$.
A desired landscape for feature maps is a smooth slope; in this way, the feature-metric loss can form a basin-like landscape.
The feature map learned with all the proposed losses approximates this ideal landscape: from the zoom-in views we can see a clear and smooth transition along a certain direction.
On such a landscape, gradient descent approaches can move smoothly toward optimal solutions.
\section{Conclusion}
\label{con}
In this work, the feature-metric loss is proposed for self-supervised learning of depth and egomotion, where a feature representation is additionally learned with two extra regularizers to ensure convergence towards the correct depth and pose. The whole framework is end-to-end trainable in the self-supervised setting, and achieves state-of-the-art depth estimation that is even comparable to supervised methods. Furthermore, visual odometry based on the estimated egomotion also significantly outperforms previous state-of-the-art methods.
\\
\noindent
\textbf{Acknowledgements} This research is supported by Beijing Science and Technology Project (No. Z181100008918018).
\section{Introduction}
\IEEEPARstart{I}{mages} captured by wide-angle cameras usually suffer from strong distortion, which degrades important scene perception tasks such as object detection \cite{ref49, ref50} and semantic segmentation \cite{ref51, ref52}. Distortion rectification tries to recover the real geometric attributes of distorted scenes. It is a fundamental and indispensable part of image processing, with a research history extending back 60 years. Recently, distortion rectification through deep learning has attracted increasing attention \cite{ref10, ref11, ref47, ref44, ref46, ref48, ref55}.
Accurately estimating the distortion parameters of a specific camera is a crucial step in the field of distortion rectification. However, two main limitations make the learning of distortion parameters challenging. (i) The distortion parameters, such as the principal point and the distortion coefficients, are not directly observable and are hard to learn from a single distorted image. Compared with intuitive targets such as object classification and bounding box detection studied in other areas, the distortion parameters have a more complicated and implicit relationship with image features. As a result, neural networks obtain an ambiguous and insufficient distortion perception, which leads to inaccurate estimation and poor rectification performance. (ii) The different components of the distortion parameters have different magnitudes and value ranges, and thus show different effects on the global distortion distribution of an image. Such a heterogeneous representation confuses the distortion cognition of neural networks and causes a severe imbalance problem during training.
To overcome the above limitations of distortion parameter estimation, previous methods exploit extra guided features such as semantic information and distorted lines \cite{ref11, ref47}, or introduce a pixel-wise reconstruction loss \cite{ref44, ref46, ref48}. However, the extra features and supervisions impose increased memory/computation cost. In this work, we instead shift attention from the traditional calibration objective to a learning-friendly perceptual target: unifying the implicit and heterogeneous parameters into an intermediate representation, thus bridging the gap between image features and distortion estimation in the field of distortion rectification.
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. A comparison of previous methods and the proposed approach is illustrated in Fig.~\ref{Fig:1}. Our key insight is that distortion rectification can be cast as a problem of learning an \textit{ordinal distortion} from a single distorted image. The ordinal distortion indicates the distortion levels of a series of pixels that extend outward from the principal point. To predict the ordinal distortion, we design a local-global associated estimation network optimized with an ordinal distortion loss function, in which a distortion-aware perception layer is exploited to boost the feature extraction at different degrees of distortion.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{teaser.jpg}
\caption{Method comparisons. (a) Previous learning methods; (b) our proposed approach. Our aim is to transform the traditional calibration objective into a learning-friendly representation. Previous methods feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneous distortion parameters. In contrast, our approach only requires a part of a distorted image (a distortion element) and estimates the ordinal distortion. Due to its explicit description and homogeneity, we obtain more accurate distortion estimation and thus better rectification results.}
\label{Fig:1}
\end{figure*}
The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, and it poses a simpler estimation problem than implicit metric regression. As we can observe, the farther a pixel is from the principal point, the larger its distortion degree, and vice versa. This prior knowledge enables neural networks to build a clear cognition of the distortion distribution. Thus, the learning model gains a more sufficient distortion perception of image features and shows faster convergence, without any extra features or pixel-wise supervision.
Second, the ordinal distortion is homogeneous, as all its elements share a similar magnitude and description. Therefore, the imbalanced optimization problem no longer exists during training, and we no longer need to focus on the cumbersome factor-balancing task. Compared to the distortion parameters with different types of components, our learning model only needs to consider one optimization objective, thus achieving more accurate estimation and more realistic rectification results.
Third, the ordinal distortion can be estimated using only a part of a distorted image. Different from semantic information, distortion information is redundant in images; it shows central symmetry and mirror symmetry with respect to the principal point. Consequently, the efficiency of rectification algorithms can be significantly improved by taking ordinal distortion estimation as the learning target. More importantly, the ordinal relationships are invariant to monotonic transformations of distorted images, which increases the robustness of the rectification algorithm.
Extensive experimental results verify that the proposed ordinal distortion is more suitable than the distortion parameters as a learning representation for deep distortion rectification. The results also show that our approach outperforms state-of-the-art methods by a large margin, with approximately 23\% improvement on the quantitative evaluation while using fewer input images, demonstrating its efficiency on distortion rectification.
The rest of this paper is organized as follows. We first introduce the related work in Section \ref{sec2}. We then present our approach in Section \ref{sec3}. The experiments are provided in Section \ref{sec4}. Finally, we conclude this paper in Section \ref{sec5}.
\section{Related Work}
\label{sec2}
In this section, we briefly review previous distortion rectification methods, grouping them into traditional vision-based methods and deep learning methods.
\subsection{Traditional Distortion Rectification}
There is a rich history of exploration in the field of distortion rectification. The most common methods are based on a specific physical model. \cite{ref21, ref22, ref23} utilized a camera to capture several views of a 2D calibration pattern covering points, corners, or other features, and then computed the distortion parameters of the camera. However, these methods cannot handle images captured by other cameras and are thus restricted in their application scenarios. Self-calibration was leveraged for distortion parameter estimation in \cite{ref4, ref5, ref6}; however, these methods fail to recover the geometry using only a single image. To overcome the above limitations and achieve automatic distortion rectification, Bukhari et al. \cite{ref7} employed a one-parameter camera model \cite{ref8} and estimated the distortion parameters using detected circular arcs. Similarly, \cite{ref9, ref45} also utilized the simplified camera model to correct radial distortion in images. However, these methods perform poorly on scenes lacking enough hand-crafted features. Thus, traditional methods struggle to rectify a single distorted image in various scenes.
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{11.jpg}
\caption{Attributes of the proposed ordinal distortion. (a) Explicitness. The ordinal distortion is observable in an image and explicit to image features; it describes a series of distortion levels from small to large (top), and always equals one in an undistorted image (bottom). (b) Homogeneity. Compared with the heterogeneous distortion parameters $\mathcal{K} = [k_1 \ \ k_2 \ \ k_3 \ \ k_4]$, the ordinal distortion $\mathcal{D} = [\delta_1 \ \ \delta_2 \ \ \delta_3 \ \ \delta_4]$ is homogeneous, with all elements representing the same concept of distortion distribution. (c) Redundancy. After different flip operations, although the semantic features of the four patches bear no relevance to each other (top), the ordinal distortion of the four patches keeps the same distribution (bottom).}
\label{Fig:2}
\end{center}
\end{figure*}
\subsection{Deep Distortion Rectification}
In contrast to the long history of traditional distortion rectification, learning-based methods have only begun to address distortion rectification in the last few years. Rong et al. \cite{ref10} quantized the values of the distortion parameter into 401 categories based on the one-parameter camera model \cite{ref8} and then trained a network to classify distorted images. This method achieved deep distortion rectification for the first time, but the coarse parameter values and the simplified camera model severely limited its generalization ability. To expand the application scope, Yin et al. \cite{ref11} rectified the distortion of fisheye images using a multi-context collaborative deep network. However, their correction results heavily rely on semantic segmentation results, leading to a strong cascading effect. Xue et al. \cite{ref47} improved the performance of distortion parameter estimation using distorted lines. As with the traditional methods \cite{ref7, ref9, ref45}, the extra hand-crafted features limit the robustness of this algorithm and decrease the efficiency of the rectification. Note that the above methods directly estimate distortion parameters from a single distorted image; such an implicit and heterogeneous calibration objective hinders sufficient learning of the distortion information. To solve the imbalance problem in the estimation of distortion parameters, recent works \cite{ref44, ref46, ref48} optimized an image reconstruction loss rather than a parameter regression loss for rectification. However, these models are parameter-free and cannot estimate the distortion parameters, which are important for structure from motion and camera calibration. Manuel et al. \cite{ref55} proposed a parameterization scheme for the extrinsic and intrinsic camera parameters, but they only considered one distortion coefficient, and their algorithm cannot be applied to more complicated camera models.
Different from previous methods, owing to the proposed learning-friendly representation, i.e., the ordinal distortion, our approach not only boosts the efficient learning of neural networks and eliminates the imbalance problem, but also obtains accurate parameters with better rectification performance.
\section{Approach}
\label{sec3}
In this section, we describe how to learn the ordinal distortion given a single distorted image. We first define the proposed learning objective in Section \ref{s31}. Next, we introduce the network architecture and training loss in Section \ref{s32}. Finally, Section \ref{s33} describes the transformation between the ordinal distortion and the distortion parameters.
\subsection{Problem Definition}
\label{s31}
\subsubsection{Parameterized Camera Model}
We assume that a point in the distorted image is expressed as $\mathbf{P} = [x, y]^{\rm T} \in {\mathbb{R}}^{2}$ and a corresponding point in the corrected image is expressed as $\mathbf{P'} = [x', y']^{\rm T} \in {\mathbb{R}}^{2}$. The polynomial camera model can be described as
\begin{equation}\label{eq1}
\begin{split}
&x' = x(1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots) \\
&y' = y(1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots), \\
\end{split}
\end{equation}
where $[k_1\ \ k_2 \ \ k_3 \ \ k_4 \ \ \cdots]$ are the distortion coefficients, and $r$ is the Euclidean distance between the point $\mathbf{P}$ and the principal point $\mathbf{C} = [x_c, y_c]^{\rm T}$ in the distorted image, expressed as
\begin{equation}\label{eq2}
r = \sqrt{(x - x_c)^2 + (y - y_c)^2}.
\end{equation}
This polynomial camera model fits well for small distortions but requires many distortion parameters for severe distortions. As an alternative, the division model is given by:
\begin{equation}\label{eq3}
\begin{split}
&x' = \frac{x}{1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots} \\
&y' = \frac{y}{1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots}.\\
\end{split}
\end{equation}
Compared with the polynomial camera model, the division model requires fewer parameters to express strong distortion and is thus more suitable for approximating wide-angle cameras.
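For concreteness, the following Python sketch evaluates both camera models for a single point. The coefficient values and the sample point are illustrative assumptions, and coordinates are taken relative to the principal point so that $r$ is simply the norm of the point.
\begin{verbatim}
import numpy as np

# Sketch of the polynomial and division camera models above. Coordinates
# are taken relative to the principal point, so r is the norm of the
# point; the coefficients and sample point are illustrative assumptions.

def radial_factor(r, ks):
    # 1 + k1*r^2 + k2*r^4 + ... shared by both models
    return 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks))

def polynomial_model(p, ks):
    # corrected point P' = P * (1 + k1*r^2 + ...)
    return p * radial_factor(np.linalg.norm(p), ks)

def division_model(p, ks):
    # corrected point P' = P / (1 + k1*r^2 + ...)
    return p / radial_factor(np.linalg.norm(p), ks)

p = np.array([120.0, 80.0])  # distorted point, relative to principal point
ks = [1e-6, -1e-12]          # assumed distortion coefficients
print(polynomial_model(p, ks), division_model(p, ks))
\end{verbatim}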
\subsubsection{Ordinal Distortion}
\label{sec3.2}
As mentioned above, most previous learning methods correct the distorted image based on distortion parameter estimation. However, due to this implicit and heterogeneous representation, the neural network suffers from insufficient learning and imbalanced regression problems, which seriously limit its learning ability and cause inferior distortion rectification results. To address these problems, we propose a novel concept, the ordinal distortion, as follows. Fig. \ref{Fig:2} illustrates the attributes of the proposed ordinal distortion.
The ordinal distortion represents the image feature in terms of the distortion distribution, which is jointly determined by the global distortion parameters and the local location information. Assuming the division camera model, the ordinal distortion $\mathcal{D}$ can be defined as
\begin{equation}\label{eqd}
\begin{split}
\mathcal{D} &= [\delta(r_1) \ \ \delta(r_2) \ \ \delta(r_3) \ \ \cdots \ \ \delta(r_n)], \\
&0 \leq r_1 < r_2 < r_3 < \cdots < r_n \leq R,\\
\end{split}
\end{equation}
where $R$ is the maximum distance between a point and the principal point, and $\delta(\cdot)$ indicates the distortion level of a point $\mathbf{P}_i$ in the distorted image:
\begin{equation}\label{eq5}
\delta(r_i) = \frac{x_i}{x'_i} = \frac{y_i}{y'_i} = 1 + {k_1}{{r_i}^2} + {k_2}{{r_i}^4} + {k_3}{{r_i}^6} + {k_4}{{r_i}^8} + \cdots.
\end{equation}
Intuitively, the distortion level expresses the ratio between the coordinates of $\mathbf{P}$ and $\mathbf{P'}$. The larger the distortion level, the stronger the distortion of a pixel, and vice versa. For an undistorted or ideally rectified image, $\delta(\cdot)$ always equals 1. Therefore, the ordinal distortion represents the distortion levels of pixels in a distorted image, which increase sequentially outward from the principal point.
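As a small illustration, the ordinal distortion vector can be computed directly from the distortion level in Eq. \ref{eq5}; the coefficients and sampling radii below are assumptions.
\begin{verbatim}
import numpy as np

# Compute the ordinal distortion D = [delta(r_1), ..., delta(r_n)] from
# assumed division-model coefficients, following the definition above.

def distortion_level(r, ks):
    return 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks))

def ordinal_distortion(radii, ks):
    return np.array([distortion_level(r, ks) for r in sorted(radii)])

ks = [1e-6, 2e-12, 0.0, 0.0]        # assumed coefficients k1..k4
radii = [40.0, 80.0, 120.0, 160.0]  # assumed radii r1 < ... < r4
print(ordinal_distortion(radii, ks))  # increasing for positive ks
\end{verbatim}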
We assume the width and height of a distorted image are $W$ and $H$, respectively. Since the distortion level of a point depends only on its distance to the principal point $[x_c, y_c]^{\rm T}$, reflecting the point about the vertical or horizontal axis through the principal point leaves its distortion level unchanged:
\begin{equation}\label{eq7}
\begin{split}
\delta(x_i, y_i) &= \delta(2x_c - x_i, y_i) = \delta(x_i, 2y_c - y_i) \\
&= \delta(2x_c - x_i, 2y_c - y_i).\\
\end{split}
\end{equation}
Thus, the ordinal distortion exhibits mirror symmetry and central symmetry about the principal point in a distorted image. This prior knowledge reduces the amount of data required in the distortion parameter estimation process.
\subsection{Network}
\label{s32}
\subsubsection{Network Input}
Considering that the principal point deviates only slightly from the image center, we first cut the distorted image into four patches along the image center, obtaining the distortion elements $\Pi = [\pi_1 \ \ \pi_2 \ \ \pi_3 \ \ \pi_4]$ with size $\frac{H}{2}\times\frac{W}{2}\times3$. Although most of the distortion information is covered by any single patch, the distortion distribution of each patch is different. To normalize this diversity, we flip three of the four elements so that their distortion distributions resemble that of the selected one. As shown in Fig. \ref{Fig:3} and Fig. \ref{Fig:2} (c), the top-left, top-right, and bottom-left distortion elements are handled with diagonal, vertical, and horizontal flip operations, respectively.
To calculate the ordinal distortion, we further crop each distortion element into distortion blocks $\Theta = [\theta_1 \ \ \theta_2 \ \ \theta_3 \ \ \cdots \ \ \theta_n]$ with size $\frac{H}{8}\times\frac{W}{8}\times3$ around the centers $\Omega = [\omega_1 \ \ \omega_2 \ \ \omega_3 \ \ \cdots \ \ \omega_n]$. To enable neural networks to explicitly learn the local distortion features, we construct region-aware masks consisting of the bounding boxes and Gaussian blobs of the distortion blocks. Therefore, the network input includes two components. The first is the global distortion context, which provides the distortion elements with the overall distortion information and the regions of interest (ROI) in which the blocks $\Theta$ reside. The second is the local distortion context, which provides the distortion blocks and the ROI in which the centers $\Omega$ reside.
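The following PyTorch-style sketch illustrates this input preparation under an assumed image size and assumed block centers; the region-aware masks are omitted for brevity.
\begin{verbatim}
import torch

# Sketch of the network input preparation: cut the image into four
# distortion elements, flip three so all share a similar distortion
# distribution, then crop distortion blocks around assumed centers.

def distortion_elements(img):                 # img: (3, H, W)
    H, W = img.shape[1:]
    tl, tr = img[:, :H // 2, :W // 2], img[:, :H // 2, W // 2:]
    bl, br = img[:, H // 2:, :W // 2], img[:, H // 2:, W // 2:]
    tl = torch.flip(tl, dims=[1, 2])          # diagonal flip
    tr = torch.flip(tr, dims=[1])             # vertical flip
    bl = torch.flip(bl, dims=[2])             # horizontal flip
    return [tl, tr, bl, br]                   # each (3, H/2, W/2)

def distortion_blocks(elem, centers, size):
    # crop size x size blocks around (row, col) centers, assumed in bounds
    h = size // 2
    return [elem[:, y - h:y + h, x - h:x + h] for (y, x) in centers]

img = torch.rand(3, 256, 256)                 # toy distorted image
elems = distortion_elements(img)
blocks = distortion_blocks(elems[0], [(32, 32), (64, 64), (96, 96)], 32)
\end{verbatim}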
\subsubsection{Network Architecture}
To jointly deduce different scales of distortion data, we design a local-global associate estimation network. As shown in Fig. \ref{Fig:3}, the network consists of two parts, a global perception module $M_{gp}$ and a local Siamese module $M_{ls}$, which take the global distortion context and local distortion context as inputs, respectively.
For the global perception module, the architecture can be divided into two sub-networks, a backbone network and a header network. Specifically, the general representation of the global distortion context is extracted by the backbone network, composed of convolutional layers, which captures high-level information including semantic features. Any prevalent network such as VGG16 \cite{ref30}, ResNet \cite{ref31}, or InceptionV3 \cite{ref53} (without fully connected layers) can be plugged in as the backbone. We pretrain the backbone network on ImageNet \cite{ref33} and fine-tune it on our synthesized distorted image dataset. The header network, containing three fully connected layers, aggregates the general representation of the input and further abstracts the high-level information into a feature vector. The numbers of units in these layers are 4096, 2048, and 1024, and all fully connected layers use ReLU activations. The extracted features of the global distortion context are then combined with the features of the local distortion context, which are derived from the local Siamese module.
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{network.jpg}
\caption{Architecture of the local-global ordinal distortion estimation network. This network consists of a global perception module $M_{gp}$ and a local Siamese module $M_{ls}$, jointly considering the multiple scales of distortion information given a distorted image.}
\label{Fig:3}
\end{center}
\end{figure*}
The local Siamese module consists of $n$ components, each of which can also be divided into a backbone network and a header network. In detail, we first use two convolutional layers to extract low-level features of size $\frac{H}{32}\times\frac{W}{32}\times256$ from the input local distortion context. Then, we feed the feature maps into a pyramid residual module consisting of five residual blocks and obtain high-level features of size $\frac{H}{32}\times\frac{W}{32}\times512$. The pyramid residual module shares its weights across components. Subsequently, a header network with three fully connected layers aggregates the general representation of the high-level features. To comprehensively analyze the distortion information, we combine each local distortion feature with the global distortion feature and fuse these features using two fully connected layers. Finally, a fully connected layer with $n$ units and a linear activation function predicts the ordinal distortion $\mathcal{D} = [\delta(r_1) \ \ \delta(r_2) \ \ \delta(r_3) \ \ \cdots \ \ \delta(r_n)]$ of a distorted image.
In contrast to an undistorted image, a distorted image suffers from different geometric distortions at different locations. However, previous distortion rectification methods use filters of the same size to learn the overall distortion information. As a result, the learning model cannot explicitly perceive the different degrees of distortion in each distorted image and thus generates ambiguous distortion features. To enable explicit extraction of distortion features, we design a distortion-aware perception layer. In general, the degree of distortion increases with the distance between a pixel and the principal point, and we introduce this key prior knowledge into our learning model. Concretely, the distortion-aware perception layer is applied before feeding the input contexts to all modules. For the global distortion context, the layer leverages filters of intermediate size $W_g\times H_g$ to learn its distortion information; for the local distortion context, the distortion blocks $\Theta = [\theta_1 \ \ \theta_2 \ \ \cdots \ \ \theta_n]$ are processed by filters of sizes $W_{l1}\times H_{l1}, W_{l2}\times H_{l2}, \cdots, W_{ln}\times H_{ln}$, from small to large. All filter sizes satisfy the relationship $W_{l1}\times H_{l1} < \cdots < W_g\times H_g < \cdots < W_{ln}\times H_{ln}$. Therefore, we leverage filters of different sizes to reason about region features with different degrees of distortion. As a benefit of the distortion-aware perception layer, our model gains improvements in distortion learning. The relevant experimental results are described in Section \ref{sec43}.
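A minimal sketch of this idea is given below; the kernel sizes and channel counts are illustrative assumptions rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

# Sketch of a distortion-aware perception layer: blocks farther from the
# principal point are processed with larger filters. Kernel sizes and
# channel counts here are assumptions for illustration.

class DistortionAwarePerception(nn.Module):
    def __init__(self, in_ch=3, out_ch=64, kernel_sizes=(3, 5, 7, 9)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, blocks):
        # blocks: list of (B, C, h, w) tensors, ordered by distance
        # from the principal point (near to far)
        return [conv(b) for conv, b in zip(self.convs, blocks)]

layer = DistortionAwarePerception()
feats = layer([torch.rand(1, 3, 32, 32) for _ in range(4)])
\end{verbatim}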
\subsubsection{Training Loss}
After predicting the distortion labels of a distorted image, it is straightforward to use a distance metric loss such as the $\mathcal{L}_1$ or $\mathcal{L}_2$ loss to learn the network parameters. However, such loss functions cannot measure the ordered relationships between the distortion labels, whereas the proposed ordinal distortion possesses a strong ordinal correlation in its distortion distribution. We therefore cast the distortion estimation problem as an ordinal distortion regression problem and design an ordinal distortion loss to train our learning model.
Suppose that the ground truth ordinal distortion $\mathcal{D} = [\delta(r_1) \ \ \delta(r_2) \ \ \delta(r_3) \ \ \cdots \ \ \delta(r_n)]$ is an increasing vector, i.e., $\delta(r_1) < \delta(r_2) < \delta(r_3) < \cdots < \delta(r_n)$. Let $\mathcal{F}_g = \varphi(I_g, \Phi)$ denote the feature maps given a global distortion context $I_g$, where $\Phi$ contains the parameters of the backbone network of the global perception module. Let $\mathcal{F}_l = \{\psi_1(I_{l}^1, \Psi_1), \psi_2(I_{l}^2, \Psi_2), \cdots, \psi_n(I_{l}^n, \Psi_n)\}$ denote the feature maps given the $n$ local distortion contexts $\{I_{l}^1, I_{l}^2, \cdots, I_{l}^n\}$, where $\{\Psi_1, \Psi_2, \cdots, \Psi_n\}$ are the parameters of the backbone networks of the local Siamese module. Then, $\chi = \eta(\mathcal{F}_g, \mathcal{F}_l, \xi)$, of size $n$, denotes the estimated ordinal distortion given a distorted image $I$, where $\xi = \{\xi_1, \xi_2, \cdots, \xi_n\}$ contains the weights of the fully connected layers of our network. The ordinal distortion loss $\mathcal{L}(\mathcal{F}_g, \mathcal{F}_l, \xi)$ is the average of the per-level losses $\mathcal{L}_d(i, \mathcal{F}_g, \mathcal{F}_l, \xi)$ over the entire sequence:
\begin{equation}\label{eq13}
\begin{split}
&\mathcal{L}(\mathcal{F}_g, \mathcal{F}_l, \xi) = -\frac{1}{n}\sum_{i=0}^{n-1}{\mathcal{L}_d(i, \mathcal{F}_g, \mathcal{F}_l, \xi)},\\
&\mathcal{L}_d(i, \mathcal{F}_g, \mathcal{F}_l, \xi) = \sum_{k=0}^{i-1}{\log(\mathcal{P}_i^k)} + \sum_{k=i}^{n-1}{\log(1 - \mathcal{P}_i^k)},\\
\end{split}
\end{equation}
where $\mathcal{P}_i^k = P(\delta(r_i) > \delta(r_k))$ indicates the probability that $\delta(r_i)$ is larger than $\delta(r_k)$.
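For illustration, this loss can be implemented as a pairwise cross-entropy; we assume here that the network produces pairwise logits whose sigmoid models $\mathcal{P}_i^k$, with target 1 for $k < i$ and 0 for $k \geq i$ since the ground truth sequence is increasing.
\begin{verbatim}
import torch

# Sketch of the ordinal distortion loss above. pair_logits[i, k] scores
# P_i^k = P(delta(r_i) > delta(r_k)); since the ground truth sequence is
# increasing, the target is 1 for k < i and 0 for k >= i.

def ordinal_distortion_loss(pair_logits):
    n = pair_logits.shape[0]
    p = torch.sigmoid(pair_logits).clamp(1e-7, 1 - 1e-7)
    i_idx = torch.arange(n).unsqueeze(1)   # row index i
    k_idx = torch.arange(n).unsqueeze(0)   # column index k
    target = (k_idx < i_idx).float()
    log_lik = target * torch.log(p) + (1 - target) * torch.log(1 - p)
    return -log_lik.sum(dim=1).mean()      # minus the average of L_d(i)

loss = ordinal_distortion_loss(torch.randn(4, 4))
\end{verbatim}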
\subsection{Ordinal Distortion to Distortion Parameter}
\label{s33}
Once the ordinal distortion is estimated by the neural network, the distortion coefficients $\mathcal{K} = [k_1 \ \ k_2 \ \ \cdots \ \ k_n]$ of a distorted image can be easily obtained by
\begin{equation}\label{eq8}
\begin{bmatrix}
k_1 \ \ k_2 \ \ \cdots \ \ k_n
\end{bmatrix} =
{\begin{bmatrix}
\delta(r_1) - 1\\
\delta(r_2) - 1\\
\vdots \\
\delta(r_n) - 1\\
\end{bmatrix}}^{\rm T}
{\begin{bmatrix}
r_1^2 & r_2^2 & \cdots &r_n^2\\
r_1^4 & r_2^4 & \cdots &r_n^4\\
\vdots & \vdots & \ddots & \vdots \\
r_1^{2n} & r_2^{2n} & \cdots &r_n^{2n}\\
\end{bmatrix}}^{-1}.
\end{equation}
For clarity, we rewrite Eq. \ref{eq8} as follows:
\begin{equation}\label{eq9}
\mathcal{K} = \mathcal{D}^{*} \cdot \mathcal{R}^{-1},
\end{equation}
where $\mathcal{D}^{*} = \tilde{\mathcal{D}} - [\underbrace{1 \ \ 1 \ \ \cdots \ \ 1}_{n}]$, $\tilde{\mathcal{D}}$ denotes the estimated ordinal distortion, and $\mathcal{R}$ collects the radius terms with different powers.
When the principal point is not fixed at the image center, we can still calculate the full parameter set $[x_c \ \ y_c \ \ k_1 \ \ k_2 \ \ \cdots \ \ k_n]$ from Eq. \ref{eq8} by using two additional distortion levels $[\delta(r_1) \ \ \delta(r_2) \ \ \cdots \ \ \delta(r_n) \ \ \delta(r_{n+1}) \ \ \delta(r_{n+2})]$, since the two extra unknowns $x_c$ and $y_c$ require two extra constraints.
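As a numerical sanity check, Eq. \ref{eq9} can be evaluated by solving the corresponding linear system; in this sketch the radii are normalized to $[0, 1]$ for conditioning, and the "true" coefficients are assumptions.
\begin{verbatim}
import numpy as np

# Recover the coefficients K from the ordinal distortion D by solving
# the linear system D* = K R above; radii normalized to [0, 1] for
# conditioning, and the reference coefficients below are assumptions.

def ordinal_to_coefficients(deltas, radii):
    n = len(deltas)
    d_star = np.asarray(deltas) - 1.0                 # D* = D~ - [1 ... 1]
    R = np.array([[r ** (2 * (j + 1)) for r in radii]  # R[j,i] = r_i^(2(j+1))
                  for j in range(n)])
    return np.linalg.solve(R.T, d_star)               # K = D* R^{-1}

ks_true = np.array([0.1, -0.01, 0.005, -0.001])
radii = np.array([0.25, 0.5, 0.75, 1.0])
deltas = np.array([1.0 + sum(k * r ** (2 * (j + 1))
                   for j, k in enumerate(ks_true)) for r in radii])
print(ordinal_to_coefficients(deltas, radii))          # ~ ks_true
\end{verbatim}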
In summary, the presented distortion rectification framework offers the following advantages.
1. The proposed ordinal distortion is a learning-friendly representation for neural networks, which is explicit and homogeneous compared with the implicit and heterogeneous distortion parameters. Thus, our learning model gains sufficient distortion perception of features and shows faster convergence. Moreover, this representation enables more efficient learning with less data required.
2. The local-global associate ordinal distortion estimation network considers different scales of distortion features, jointly reasoning about the local distortion context and the global distortion context. In addition, the devised distortion-aware perception layer boosts the extraction of features with different degrees of distortion.
3. Our ordinal distortion loss fully measures the strong ordinal correlation in the proposed representation, facilitating the accurate approximation of distortion distribution.
4. We can easily calculate the distortion parameters from the estimated ordinal distortion given the camera model. In contrast to previous methods, our method can handle various camera models and different types of distortion thanks to the unified learning representation.
\section{Experiments}
\label{sec4}
In this section, we first describe the synthetic distorted image dataset and the training details of our learning model. Subsequently, we analyze the learning representation for distortion estimation. To demonstrate the effectiveness of each module in our framework, we then conduct an ablation study. Finally, we compare our approach with the state-of-the-art methods in both quantitative measurement and visual qualitative appearance.
\subsection{Implementation Settings}
\label{sec41}
\noindent \textbf{Dataset} We construct a standard distorted image dataset based on the division model discussed in Section \ref{s31}. Following the implementations of previous literature \cite{ref11, ref48, ref36}, we adopt a $4^{th}$ order version of Eq. \ref{eq3}, which is able to approximate most projection models with high accuracy. All distortion coefficients are randomly generated from their corresponding ranges. Our dataset contains 20,000 training images, 2,000 test images, and 2,000 validation images.\\
\textbf{Training/Testing Setting}
We train our learning model on the constructed synthetic distorted images. We set the learning rate to $5\times10^{-4}$ and reduce it by a factor of 10 every 200K iterations, with Adam \cite{ref34} as the optimizer. In the training stage, we crop each distorted image into four distortion elements and learn the network parameters using all of them. In the test stage, we only need one distortion element, i.e., 1/4 of an image, to estimate the ordinal distortion.\\
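Under these settings, a typical configuration might look as follows; this is an assumed PyTorch setup, with the network itself replaced by a placeholder.
\begin{verbatim}
import torch

# Assumed PyTorch training configuration matching the settings above:
# Adam, initial learning rate 5e-4, decayed by 10x every 200K iterations.

model = torch.nn.Linear(10, 4)   # placeholder for the estimation network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=200_000, gamma=0.1)

for step in range(600_000):      # illustrative number of iterations
    # ... forward pass, ordinal distortion loss, loss.backward() ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()             # per-iteration decay per the 200K schedule
    break                        # placeholder body for this sketch
\end{verbatim}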
\textbf{Evaluation Metrics} Evaluating the performance of different methods with reasonable metrics is crucial for experimental comparisons. In the distortion rectification problem, the corrected image can be evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the evaluation of the estimated distortion label, it is straightforward to employ the root mean square error (RMSE) between the estimated parameters $\tilde{\mathcal{K}}$ and the ground truth parameters $\mathcal{K}$:
\begin{equation}\label{eq12}
{\rm RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^N{(\tilde{\mathcal{K}}_i - \mathcal{K}_i)}^2},
\end{equation}
where $N$ is the number of estimated distortion parameters. However, we found that different groups of distortion parameters may display similar distortion distributions in images. To evaluate the estimated distortion labels more reasonably, we propose a new metric based on the reprojection error, the mean distortion level deviation (MDLD):
\begin{equation}\label{eq_mdld}
{\rm MDLD} = \frac{1}{WH}\sum_{i=1}^W\sum_{j=1}^H{|\tilde{\delta}(i, j) - \delta(i, j)|},
\end{equation}
where $W$ and $H$ are the width and height of a distorted image, respectively. The ground truth distortion level $\delta(i, j)$ of each pixel can be obtained using Eq. \ref{eq5}.
In contrast to RMSE, MDLD is more suitable for parameter evaluation because the distortion distribution it measures is unique. Moreover, RMSE cannot compare estimates with different numbers and types of parameters across different camera models. Thanks to its objective description of the distortion, MDLD can evaluate distortion estimation methods that use different camera models.
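A small sketch of the metric is given below; the coefficients and image size are assumptions.
\begin{verbatim}
import numpy as np

# Sketch of MDLD: the mean absolute deviation between estimated and
# ground truth distortion levels over all pixels, using the distortion
# level definition above. Coefficients and image size are assumptions.

def distortion_level_map(ks, W, H, xc, yc):
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    r = np.sqrt((xs - xc) ** 2 + (ys - yc) ** 2)
    return 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks))

def mdld(ks_est, ks_gt, W, H, xc, yc):
    est = distortion_level_map(ks_est, W, H, xc, yc)
    gt = distortion_level_map(ks_gt, W, H, xc, yc)
    return np.abs(est - gt).mean()

print(mdld([1.1e-6], [1.0e-6], W=256, H=256, xc=128, yc=128))
\end{verbatim}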
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{dl_dp.jpg}
\caption{Comparison of two learning representations for distortion estimation, distortion parameter (left) and ordinal distortion (right). In contrast to the ambiguous relationship between the distortion distribution and distortion parameter, the proposed ordinal distortion displays a very clear positive correlation to the distortion reprojection error.}
\label{Fig:dl_dp}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{error_conv.jpg}
\caption{Analysis of the two learning representations in terms of error and convergence. We show the histogram of errors (top) and the convergence (bottom) of the two learning representations using three backbone networks: VGG16, ResNet50, and InceptionV3. Compared with the distortion parameter estimation task, our proposed ordinal distortion estimation task achieves lower errors and faster convergence on all backbone networks.}
\label{Fig:error_conv}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{loss.jpg}
\caption{Analysis of the two learning representations in terms of the training and validation loss curves. We show the learning performance of the distortion parameter estimation without (top) and with (middle) magnitude normalization, and of the ordinal distortion estimation (bottom). Our proposed ordinal distortion estimation task displays fast convergence and a stable trend on both the training and validation sets.}
\label{Fig:loss}
\end{center}
\end{figure*}
\subsection{Analysis of Learning Representation}
\label{sec42}
Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and results in insufficient distortion perception. To bridge the gap between the image feature and the calibration objective, we present a novel intermediate representation, i.e., the ordinal distortion, which is learning-friendly for neural networks. For an intuitive and comprehensive analysis, we compare these two representations from the following three aspects.
\noindent \textbf{Relationship to Distortion Distribution} We first examine the relationship between the two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortion separately, and relate the errors of the estimated results to the distortion reprojection error. As shown in Fig. \ref{Fig:dl_dp}, we visualize the scatter diagrams of the two learning representations using 1,000 test distorted images. For the distortion parameters, the relationship to the distortion distribution is ambiguous: similar parameter errors correspond to quite different reprojection errors, which indicates that optimizing the parameter error would confuse the learning of neural networks. In contrast, the ordinal distortion error displays a clear positive correlation with the distortion distribution error; thus the learning model gains an intuitive distortion perception, and the proposed representation significantly decreases the error of distortion estimation.
\noindent \textbf{Distortion Learning Evaluation} We then introduce three key elements for evaluating a learning representation: training data, convergence, and error. Assuming that settings such as the network architecture and optimizer are the same, a better learning representation requires less training data, converges faster, and attains a lower error. By analogy, a student who achieves the highest test grade (the lowest error) with the fastest learning speed and the least homework has grasped the best learning strategy among their peers. In these terms, we evaluate the distortion parameter and ordinal distortion representations as shown in Fig. \ref{Fig:error_conv} and Fig. \ref{Fig:loss}.
To comprehensively exhibit the performance, we employ three common network architectures, VGG16, ResNet50, and InceptionV3, as the backbone networks of the learning model. The proposed MDLD metric is used to express the error of distortion estimation due to its unique and fair measurement of the distortion distribution. To be specific, we visualize in Fig. \ref{Fig:error_conv} the error and convergence epoch when estimating the two representations under the same amount of training data, sampled at 20\%, 40\%, 60\%, 80\%, and 100\% of the entire training set. In addition, the training and validation loss curves of the two learning representations are shown in Fig. \ref{Fig:loss}, in which the distortion parameters are processed without (top) and with (middle) magnitude normalization. From these learning evaluations, we can observe the following:
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is only 20\% of that used for the parameter estimation model. Note that we only use 1/4 of the distorted image to predict the ordinal distortion. As pointed out earlier, the proposed ordinal distortion is explicit to the image features and observable from a distorted image, and thus it boosts the learning ability of neural networks. On the other hand, the performance of the distortion parameter estimation drops as the amount of training data decreases, whereas our ordinal distortion estimation performs more consistently due to the homogeneity of the learning representation.
(2) The layer depths of VGG16, InceptionV3, and ResNet50 are 23, 159, and 168, respectively; these architectures represent different abilities to extract image features. As illustrated in Fig. \ref{Fig:error_conv}, the distortion parameter estimation achieves its lowest error (0.15) using InceptionV3 as the backbone with 80\% of the training data, which indicates that its performance requires the more complicated, high-level features extracted by deep networks. With its explicit relationship to image features, the ordinal distortion estimation achieves its lowest error (0.07) using VGG16 as the backbone with 100\% of the training data. This promising performance indicates that the ordinal distortion is a learning-friendly representation, easy to learn even with a relatively shallow network.
(3) From the loss curves in Fig. \ref{Fig:loss}, the ordinal distortion estimation achieves the fastest convergence and the best performance on the validation dataset. It is also worth noting that the ordinal distortion estimation already performs well on the validation set within the first five epochs, which verifies that this learning representation generalizes favorably for neural networks. In contrast, suffering from the heterogeneous representation, the learning process of the distortion parameter estimation converges more slowly. Moreover, the training and validation loss curves descend unstably when the distortion parameters are handled without magnitude normalization, which demonstrates that the distortion parameter estimation is very sensitive to label balancing.
We further present a \textit{learning-friendly rate} ($\Gamma_{lr}$) to quantitatively evaluate the effectiveness of a learning representation or strategy. To our knowledge, this is the first evaluation metric describing the effectiveness of a learning representation for neural networks. As mentioned above, the required training data, the convergence, and the error jointly describe a learning representation, and thus we formulate the learning-friendly rate as follows:
\begin{equation}\label{eq_lr}
\Gamma_{lr} = \frac{1}{N}\sum_{i=1}^N{\frac{D_i}{D}(\frac{1}{E_i}\log(2 - \frac{C_i}{C}))},
\end{equation}
where $N$ is the number of split groups, and $E_i$, $D_i$, and $C_i$ indicate the error, the amount of training data, and the convergence epoch of the $i$-th group, respectively. $D$ and $C$ indicate the total amount of training data and the total number of training epochs for the learning model. We compute the learning-friendly rates of the two learning representations and list the quantitative results in Table \ref{tab:1}. The results show that our scheme outperforms the distortion parameter estimation with all backbone settings, and thus the proposed ordinal distortion is much more suitable as a learning representation for neural networks.
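For reference, the metric can be computed as in the following sketch, where the group statistics are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Sketch of the learning-friendly rate above: each of the N groups
# contributes its data fraction D_i/D weighted by the inverse error
# 1/E_i and the convergence term log(2 - C_i/C). Values are assumptions.

def learning_friendly_rate(errors, data, conv, D_total, C_total):
    E, D, C = map(np.asarray, (errors, data, conv))
    terms = (D / D_total) * (1.0 / E) * np.log(2.0 - C / C_total)
    return terms.mean()                      # the 1/N average over groups

gamma = learning_friendly_rate(
    errors=[0.12, 0.10, 0.09, 0.08, 0.07],   # MDLD per group (assumed)
    data=[4000, 8000, 12000, 16000, 20000],  # 20%..100% of training data
    conv=[40, 35, 30, 28, 25],               # convergence epochs (assumed)
    D_total=20000, C_total=100)
print(gamma)
\end{verbatim}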
\begin{table}
\caption{The learning-friendly rates of the two learning representations evaluated with three backbone networks.}
\label{tab:1}
\centering
\begin{tabular}{p{2.6cm}<{\centering}p{1.3cm}<{\centering}p{1.3cm}<{\centering}p{1.5cm}<{\centering}}
\toprule
Learning Representation & VGG16 & ResNet50 & InceptionV3 \\
\midrule
Distortion Parameter & 0.50 & 0.60 & 0.59\\
Ordinal Distortion & \textbf{2.23} & 1.43 & 1.50\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{DDM.jpg}
\caption{Qualitative comparison of two learning representations. For each comparison, we show the distorted image, the ground truth 3D DDM, the 3D DDM constructed by the estimated distortion parameter, and ordinal distortion, from left to right.}
\label{Fig:ddm}
\end{center}
\end{figure*}
\noindent \textbf{Qualitative Comparison} To qualitatively compare the two learning representations, we visualize the 3D distortion distribution maps (3D DDM) derived from the ground truth and the two schemes in Fig. \ref{Fig:ddm}, in which each pixel value of the distortion distribution map indicates the distortion level. Since the ordinal distortion estimation pays more attention to realistic distortion perception and a reasonable learning strategy, our scheme achieves results much closer to the ground truth 3D DDM. Due to its implicit learning, the distortion parameter estimation generates inferior reconstructions, such as under-fitting (left) and over-fitting (right) of the global distribution approximation, as shown in Fig. \ref{Fig:ddm}.
\subsection{Ablation Study}
\label{sec43}
To validate the effectiveness of each component in our approach, we conduct an ablation study evaluating the error of distortion estimation, as shown in Fig. \ref{Fig:ab}. Concretely, we use the VGG16 network without fully connected layers as the backbone of the ordinal distortion estimation network, based on the analysis of the learning representation in Section \ref{sec42}. Subsequently, we implement the learning model without the flip operation (FO) on the global distortion context, ordinal supervision (OS), region-aware mask (RM), and distortion-aware perception layer (DL) as the baseline (BS), and then gradually add these removed components to show the resulting performance. In addition, we employ two loss functions, $\mathcal{L}_2$ and $\mathcal{L}_{sm}$, to optimize the baseline model, in which $\mathcal{L}_{sm}$ is the smooth $\mathcal{L}_1$ loss function \cite{ref54} that combines the attributes of $\mathcal{L}_1$ and $\mathcal{L}_2$. We name these two baseline models BS-1 and BS-2.
Overall, the complete framework achieves the lowest distortion estimation error, as shown in Fig. \ref{Fig:ab}, verifying the effectiveness of our proposed approach. For the optimization strategy, BS-2, trained with $\mathcal{L}_{sm}$, performs much better than BS-1, trained with $\mathcal{L}_{2}$, since the $\mathcal{L}_{sm}$ loss function yields a more stable training process. Due to the effective normalization of the distortion distribution, the network gains explicit spatial guidance from the flip operation on the global distortion context. We also show the training loss of the first 30 epochs for BS-2 and BS-2 + FO in Fig. \ref{Fig:ab_loss}, where we can observe that the distribution normalization significantly accelerates the convergence of the training process. By contrast, BS-2 without the flip operation suffers from a \textit{confused learning period}, especially in the first 10 epochs, which indicates that the neural network is unsure how to find a direct optimization path from the distribution difference. Moreover, the ordinal supervision fully measures the strong ordinal correlation in the proposed representation and thus facilitates an accurate approximation of the distortion distribution. With the special attention mechanism and distortion feature extraction, our learning model gains further improvements from the region-aware mask and the distortion-aware perception layer.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{ablation.jpg}
\caption{Ablation study of the proposed ordinal distortion estimation approach.}
\label{Fig:ab}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{ablation_loss.jpg}
\caption{Training loss of the first 30 epochs for BS-2 and BS-2 + FO. The flip operation, which normalizes the distortion distribution of the inputs, significantly accelerates the convergence of the learning process.}
\label{Fig:ab_loss}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{cp1.jpg}
\caption{Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alem{\'a}n-Flores \cite{ref9}, Santana-Cedr{\'e}s \cite{ref45}, Rong \cite{ref10}, Li \cite{ref44}, and Liao \cite{ref46}, and rectified results of our proposed approach, from left to right.}
\label{Fig:cp1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{cp2.jpg}
\caption{Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alem{\'a}n-Flores \cite{ref9}, Santana-Cedr{\'e}s \cite{ref45}, Rong \cite{ref10}, Li \cite{ref44}, and Liao \cite{ref46}, and rectified results of our proposed approach, from left to right.}
\label{Fig:cp2}
\end{center}
\end{figure*}
\subsection{Comparison Results}
\label{sec44}
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations; the compared methods can be classified into traditional methods \cite{ref9, ref45} and learning methods \cite{ref10, ref44, ref46}. Note that our approach only requires 1/4 of a distorted image to estimate the distortion label, which is then employed for the subsequent image rectification.
\begin{table}
\begin{center}
\caption{Quantitative evaluation of the rectified results obtained by different methods.}
\label{table:2}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
Comparison Methods & PSNR $\uparrow$ & SSIM $\uparrow$ & MDLD $\downarrow$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Traditional Methods & & &\\
Alem{\'a}n-Flores \cite{ref9} & 9.47 & 0.31 & 0.26 \\
Santana-Cedr{\'e}s \cite{ref45} & 7.90 & 0.25 & 1.18 \\
\hline
Learning Methods & & & \\
Rong \cite{ref10} & 10.37 & 0.29 & 0.23\\
Li \cite{ref44} & 13.87 & 0.64 & - \\
Liao \cite{ref46} & 20.28 & 0.72 & - \\
Ours & \textbf{24.82} & \textbf{0.84} & \textbf{0.04} \\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.6pt}
\noindent \textbf{Quantitative Evaluation}
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images using PSNR, SSIM, and the proposed MDLD. As listed in Table \ref{table:2}, our approach significantly outperforms the compared approaches on all metrics, achieving the highest PSNR and SSIM as well as the lowest MDLD. Specifically, compared with the traditional methods \cite{ref9, ref45} based on hand-crafted features, our approach overcomes the scene limitation and the simple camera model assumption, showing more promising generality and flexibility. Compared with the learning-based distortion rectification methods \cite{ref10, ref44, ref46}, which ignore the prior knowledge of the distortion, our approach transforms the heterogeneous estimation problem into a homogeneous one and replaces the implicit relationship between image features and predicted values with a more explicit expression. Benefiting from the effective ordinal supervision and the guidance of distortion information during the learning process, our approach outperforms Liao \cite{ref46} by a significant margin, with approximately 23\% improvement in PSNR and 17\% improvement in SSIM.\\
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{cp3.jpg}
\caption{Qualitative evaluations of the rectified distorted images on real scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alem{\'a}n-Flores \cite{ref9}, Santana-Cedr{\'e}s \cite{ref45}, Rong \cite{ref10}, Li \cite{ref44}, and Liao \cite{ref46}, and rectified results of our proposed approach, from left to right.}
\label{Fig:cp3}
\end{center}
\end{figure*}\noindent \textbf{Qualitative Evaluation}
We visually compare the corrected results of our approach with those of the state-of-the-art methods on our synthetic test set and on real distorted images. To show the rectification performance comprehensively, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scenes. The indoor and outdoor scenes are shown in Fig. \ref{Fig:cp1}, and the people and challenging scenes are shown in Fig. \ref{Fig:cp2}. Our approach performs well in all scenes, while the traditional methods \cite{ref9, ref45} show inferior corrected results in scenes lacking sufficient hand-crafted features, especially the people and challenging scenes. On the other hand, the learning methods \cite{ref10, ref44, ref46} lack sufficient distortion perception and cannot easily adapt to scenes with strong geometric distortion. For example, the results obtained by Rong \cite{ref10} show coarse rectified structures, induced by the implicit learning of distortion and the simple model assumption. Li \cite{ref44} leveraged an estimated distortion flow to generate the rectified images; however, the accuracy of the pixel-wise reconstruction relies heavily on the performance of scene analysis, leaving stronger residual distortion in complex scenes. Although Liao \cite{ref46} generated better rectified images than the above learning methods in terms of the global distribution, the results display unpleasant local blur due to the adversarial learning scheme. In contrast, our results achieve the best performance in both global distribution and local appearance, benefiting from the proposed learning-friendly representation and the effective learning model.
The comparison results on real distorted images are shown in Fig. \ref{Fig:cp3}. We collect the real distorted images from YouTube videos captured by popular fisheye lenses, such as the SAMSUNG 10mm F3, Rokinon 8mm Cine Lens, Opteka 6.5mm Lens, and GoPro. As illustrated in Fig. \ref{Fig:cp3}, our approach generates the best rectification results among the compared methods, showing an appealing generalization ability for blind distortion rectification. Specifically, salient objects such as buildings, streetlights, and roads are recovered to their original straight structures by our approach, exhibiting a more realistic geometric appearance than the results of other methods. Since our approach mainly focuses on the design of the learning representation for distortion estimation, the neural network gains a more powerful learning ability with respect to distortion perception and achieves more accurate estimation results.
\section{Conclusion}
\label{sec5}
In this paper, we presented a novel learning representation for deep distortion rectification, bridging the gap between the image feature and the calibration objective. Compared with the implicit and heterogeneous distortion parameters, the proposed ordinal distortion offers three unique advantages, namely explicitness, homogeneity, and redundancy, which enable more sufficient and efficient learning of the distortion. To learn this representation, we designed a local-global associate estimation network optimized with an ordinal distortion loss function, in which a distortion-aware perception layer boosts the extraction of features with different degrees of distortion. Benefiting from the proposed learning representation and learning model, our approach outperforms the state-of-the-art methods by a remarkable margin while leveraging only 1/4 of the image data for distortion estimation. In future work, we plan to address other challenging computer vision tasks with new, learning-friendly representations.
\normalem
\bibliographystyle{ieeetr}
\section{Introduction}
\IEEEPARstart {I}mages captured by wide-angle camera usually suffer from a strong distortion, which influences the important scene perception tasks such as the object detection \cite{ref49, ref50} and semantic segmentation \cite{ref51, ref52}. The distortion rectification tries to recover the real geometric attributes from distorted scenes. It is a fundamental and indispensable part of image processing, which has a long research history extending back 60 years. In recent, distortion rectification through deep learning has attracted increasing attention\cite{ref10, ref11, ref47, ref44, ref46, ref48, ref55}.
Accurately estimating the distortion parameters derived from a specific camera, is a crucial step in the field of distortion rectification. However, there are two main limitations that make the distortion parameters learning challenging. (i) The distortion parameters are not observable and hard to learn from a single distorted image, such as the principal point and distortion coefficients. Compared with the intuitive targets, such as the object classification and bounding box detection studied in other regions, the distortion parameters have more complicated and implicit relationship with image features. As a result, the neural networks obtain an ambiguous and insufficient distortion perception, which leads to inaccurate estimation and poor rectification performance. (ii) The different components of distortion parameters have different magnitudes and ranges of values, showing various effects on the global distortion distribution of an image. Such a heterogeneous representation confuses the distortion cognition of neural networks and causes a heavy imbalance problem during the training process.
To overcome the above limitations of distortion parameters estimation, previous methods exploit more guided features such as the semantic information and distorted lines \cite{ref11, ref47}, or introduce the pixel-wise reconstruction loss \cite{ref44, ref46, ref48}. However, the extra features and supervisions impose increased memory/computation cost. In this work, we would like to draw attention from the traditional calibration objective to a learning-friendly perceptual target. The target is to unify the implicit and heterogeneous parameters into an intermediate representation, thus bridging the gap between image feature and distortion estimation in the field of distortion rectification.
In particular, we redesign the whole pipeline of deep distortion rectification and present an intermediate representation based on the distortion parameters. The comparison of the previous methods and the proposed approach is illustrated in Fig. \ref{Fig:1}. Our key insight is that distortion rectification can be cast as a problem of learning an \textit{ordinal distortion} from a distorted image. The ordinal distortion indicates the distortion levels of a series of pixels, which extend outward from the principal point. To predict the ordinal distortion, we design a local-global associated estimation network that is optimized with an ordinal distortion loss function, in which a distortion-aware perception layer is exploited to boost the features extraction of different degrees of distortion.
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{teaser.jpg}
\caption{Method Comparisons. (a) Previous learning methods, (b) Our proposed approach. Our aim is to transfer the traditional calibration objective into a learning-friendly representation. Previous methods roughly feed the whole distorted image into their learning models and directly estimate the implicit and heterogeneous distortion parameters. In contrast, our proposed approach only requires a part of a distorted image (distortion element) and estimates the ordinal distortion. Due to its explicit description and homogeneity, we can obtain more accurate distortion estimation and thus achieve better corrected results.}
\label{Fig:1}
\end{figure*}
The proposed learning representation offers three unique advantages. First, the ordinal distortion is directly perceivable from a distorted image, it solves a simpler estimation problem than the implicit metric regression. As we can observe, the farther the pixel is away from the principal point, the larger the distortion degree is, and vice versa. This prior knowledge enables the neural networks to build a clear cognition with respect to the distortion distribution. Thus, the learning model gains more sufficient distortion perception of image features and shows faster convergence, without any extra features and pixel-wise supervisions.
Second, the ordinal distortion is homogeneous as its all elements share a similar magnitude and description. Therefore, the imbalanced optimization problem no longer exists during the training process, and we do not need to focus on the cumbersome factor-balancing task any more. Compared to the distortion parameters with different types of components, our learning model only needs to consider one optimization objective, thus achieving more accurate estimation and more realistic rectification results.
Third, the ordinal distortion can be estimated using only a part of a distorted image. Different from the semantic information, the distortion information is redundant in images, which shows the central symmetry and mirror symmetry to the principal point. Consequently, the efficiency of rectification algorithms can be significantly improved when taking the ordinal distortion estimation as a learning target. More importantly, the ordinal relationships are invariant to monotonic transformations of distorted images, thereby increasing the robustness of the rectification algorithm.
With lots of experimental results, we verify that the proposed ordinal distortion is more suitable than the distortion parameters as a learning representation for deep distortion rectification. The experimental results also show that our approach outperforms the state-of-the-art methods with a large margin, approximately 23\% improvement on the quantitative evaluation while using fewer input images, demonstrating its efficiency on distortion rectification.
The rest of this paper is organized as follows. We first introduce the related work in Section \ref{sec2}. We then present our approach in Section \ref{sec3}. The experiments are provided in Section \ref{sec4}. Finally, we conclude this paper in Section \ref{sec5}.
\section{Related Work}
\label{sec2}
In this section, we briefly review the previous distortion rectification methods and classify these methods into two groups, which are the traditional vision-based one and the deep learning one.
\subsection{Traditional Distortion Rectification}
There is a rich history of exploration in the field of distortion rectification. The most common method is based on a specific physical model. \cite{ref21, ref22, ref23} utilized a camera to capture several views of a 2D calibration pattern that covered points, corners, or other features, and then computed the distortion parameters of the camera. However, these methods cannot handle images captured by other cameras and thus are restricted to the application scenario. Self-calibration was leveraged for distortion parameter estimation in \cite{ref4, ref5, ref6}; however, the authors failed in the geometry recovery using only a single image. To overcome the above limitations and achieve automatic distortion rectification, Bukhari et al. \cite{ref7} employed a one-parameter camera model \cite{ref8} and estimated distortion parameters using the detected circular arcs. Similarly, \cite{ref9, ref45} also utilized the simplified camera model to correct the radial distortion in images. However, these methods perform poorly on scenes that are lacking of enough hand-crafted features. Thus, the above traditional methods are difficult to handle on the single distorted image rectification in various scenes.
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{11.jpg}
\caption{Attributes of the proposed ordinal distortion. (a) Explicitness. The ordinal distortion is observable in an image and explicit to image features, which describes a series of distortion levels from small to large (top); the ordinal distortion always equals one in an undistorted image (bottom). (b) Homogeneity. Compared with the heterogeneous distortion parameters $\mathcal{K} = [k_1 \ \ k_2 \ \ k_3 \ \ k_4]$, the ordinal distortion $\mathcal{D} = [\delta_1 \ \ \delta_2 \ \ \delta_3 \ \ \delta_4]$ is homogeneous, representing the same concept of distortion distribution. (c) Redundancy. After different flip operations, although the semantic features of four patches have not any relevance (top), the ordinal distortion of four patches keeps the same in distribution with each other (bottom).}
\label{Fig:2}
\end{center}
\end{figure*}
\subsection{Deep Distortion Rectification}
In contrast to the long history of traditional distortion rectification, learning methods began to study the distortion rectification in the last few years. Rong et al. \cite{ref10} quantized the values of the distortion parameter to 401 categories based on the one-parameter camera model \cite{ref8} and then trained a network to classify the distorted image. This method achieved the deep distortion rectification for the first time, while the coarse values of parameters and the simplified camera model severely influenced its generalization ability. To expand the application, Yin et al. \cite{ref11} rectified the distortion in terms of the fisheye camera model using a multi-context collaborative deep network. However, their correction results heavily rely on the semantic segmentation results, leading to a strong cascading effect. Xue et al. \cite{ref47} improved the performance of distortion parameter estimation by distorted lines. In analogy to traditional methods \cite{ref7, ref9, ref45}, the extra introduced hand-crafted features limit the robustness of this algorithm and decrease the efficiency of the rectification. Note that the above methods directly estimates distortion parameters from a single distorted image, such an implicit and heterogeneous calibration objective hinders the sufficient learning with respect to the distortion information. To solve the imbalance problem in the estimation of distortion parameters, recent works \cite{ref44, ref46, ref48} optimized the image reconstruction loss rather than the parameters regression loss for rectification. However, their models are based on the parameter-free mechanism and cannot estimate the distortion parameters, which are important for the structure from motion and camera calibration. Manuel et al. \cite{ref55} proposed a parameterization scheme for the extrinsic and intrinsic camera parameters, but they only considered one distortion coefficient for the rectification and cannot apply the algorithm into more complicated camera models.
Different from previous methods, due to the proposed learning-friendly representation, i.e., ordinal distortion, our approach can not only boost the efficient learning of neural networks and eliminate the imbalance problem, but also obtain the accurate parameters with better rectification performance.
\section{Approach}
\label{sec3}
In this section, we describe how to learn the ordinal distortion given a single distorted image. We first define the proposed objective in Section \ref{s31}. Next, we introduce the network architecture and training loss in Section \ref{s32}. Finally, Section \ref{s33} describes the transformation between the ordinal distortion and distortion parameter.
\subsection{Problem Definition}
\label{s31}
\subsubsection{Parameterized Camera Model}
We assume that a point in the distorted image is expressed as $\mathbf{P} = [x, y]^{\rm T} \in {\mathbb{R}}^{2}$ and a corresponding point in the corrected image is expressed as $\mathbf{P'} = [x', y']^{\rm T} \in {\mathbb{R}}^{2}$. The polynomial camera model can be described as
\begin{equation}\label{eq1}
\begin{split}
&x' = x(1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots) \\
&y' = y(1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots), \\
\end{split}
\end{equation}
where $[k_1\ \ k_2 \ \ k_3 \ \ k_4 \ \ \cdots]$ are the distortion coefficients, $r$ is the Euclidean distance between the point $\mathbf{P}$ and the principal point $\mathbf{C} = [x_c, y_c]^{\rm T}$ in the distorted image, which can be expressed as
\begin{equation}\label{eq2}
r = \sqrt{(x - x_c)^2 + (y - y_c)^2}.
\end{equation}
This polynomial camera model fits well for small distortions but requires more distortion parameters for severe distortions. As an alternative camera model, the division model is formed by:
\begin{equation}\label{eq3}
\begin{split}
&x' = \frac{x}{1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots} \\
&y' = \frac{y}{1 + {k_1}{{r}^2} + {k_2}{{r}^4} + {k_3}{{r}^6} + {k_4}{{r}^8} + \cdots}.\\
\end{split}
\end{equation}
Compared with the polynomial camera model, the division model requires fewer parameters in terms of the strong distortion and thus is more suitable for the approximation of wide-angle cameras.
\subsubsection{Ordinal Distortion}
\label{sec3.2}
As mentioned above, most previous learning methods correct the distorted image based on the distortion parameters estimation. However, due to the implicit and heterogeneous representation, the neural network suffers from the insufficient learning problem and imbalance regression problem. These problems seriously limit the learning ability of neural networks and cause inferior distortion rectification results. To address the above problems, we propose a fully novel concept, i.e., ordinal distortion as follows. Fig. \ref{Fig:2} illustrates the attributes of the proposed ordinal distortion.
The ordinal distortion represents the image feature in terms of the distortion distribution, which is jointly determined by the global distortion parameters and local location information. We assume that the camera model is the division model, and the ordinal distortion $\mathcal{D}$ can be defined as
\begin{equation}\label{eqd}
\begin{split}
\mathcal{D} &= [\delta(r_1) \ \ \delta(r_2) \ \ \delta(r_3) \ \ \cdots \ \ \delta(r_n)], \\
&0 \leq r_1 < r_2 < r_3 < \cdots < r_n \leq R,\\
\end{split}
\end{equation}
where $R$ is the maximum distance between a point and the principal point, and $\delta(\cdot)$ indicates the distortion level of a point $\mathbf{P}_i$ in the distorted image:
\begin{equation}\label{eq5}
\delta(r_i) = \frac{x_i}{x'_i} = \frac{y_i}{y'_i} = 1 + {k_1}{{r_i}^2} + {k_2}{{r_i}^4} + {k_3}{{r_i}^6} + {k_4}{{r_i}^8} + \cdots.
\end{equation}
Intuitively, the distortion level expresses the ratio between the coordinates of $\mathbf{P}$ and $\mathbf{P'}$. The larger the distortion level is, the stronger the distortion of a pixel is, and vice versa. For an undistorted or ideally rectified image, $\delta(\cdot)$ always equals 1. Therefore, the ordinal distortion represents the distortion levels of pixels in a distorted image, which increases outward from the principal point sequentially.
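To make the definition concrete, the following minimal Python sketch (ours, not from the paper's implementation) evaluates the distortion level of Eq. \ref{eq5} for a truncated division model and uses it to rectify a point; treating coordinates relative to the principal point inside the function is our convention.
\begin{verbatim}
import numpy as np

def distortion_level(r, ks):
    # delta(r) = 1 + k1*r^2 + k2*r^4 + ...  (Eq. eq5, truncated)
    return 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks))

def rectify_point(p, c, ks):
    # Division model: centre on the principal point c, divide by the
    # distortion level, and shift back (our coordinate convention).
    p, c = np.asarray(p, float), np.asarray(c, float)
    r = np.linalg.norm(p - c)
    return (p - c) / distortion_level(r, ks) + c
\end{verbatim}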
We assume the width and height of a distorted image are $W$ and $H$, respectively. Then, the distortion level satisfies the following equation:
\begin{equation}\label{eq7}
\begin{split}
\delta(x_i, y_i) &= \delta(2x_c - x_i, y_i) = \delta(x_i, 2y_c - y_i) \\
&= \delta(2x_c - x_i, 2y_c - y_i).\\
\end{split}
\end{equation}
Thus, the ordinal distortion exhibits mirror symmetry and central symmetry about the principal point of a distorted image. This prior knowledge reduces the amount of data required for distortion parameter estimation.
\subsection{Network}
\label{s32}
\subsubsection{Network Input}
Considering that the principal point deviates only slightly from the image center, we first cut the distorted image into four patches at the image center, obtaining the distortion elements $\Pi = [\pi_1 \ \ \pi_2 \ \ \pi_3 \ \ \pi_4]$, each of size $\frac{H}{2}\times\frac{W}{2}\times3$. Although a single patch covers most of the distortion information, the distortion distribution of each patch is different. To normalize this diversity, we flip three of the four elements so that their distortion distributions match that of the selected one. As shown in Fig. \ref{Fig:3} and Fig. \ref{Fig:2} (c), the top-left, top-right, and bottom-left distortion elements are handled with diagonal, vertical, and horizontal flip operations, respectively.
To calculate the ordinal distortion, we further crop each distortion element into distortion blocks $\Theta = [\theta_1 \ \ \theta_2 \ \ \theta_3 \ \ \cdots \ \ \theta_n]$ of size $\frac{H}{8}\times\frac{W}{8}\times3$ around the centers $\Omega = [\omega_1 \ \ \omega_2 \ \ \omega_3 \ \ \cdots \ \ \omega_n]$. To enable neural networks to explicitly learn local distortion features, we construct region-aware masks consisting of the bounding boxes and Gaussian blobs of the distortion blocks. Therefore, the network input includes two components: the global distortion context, which provides the distortion elements with the overall distortion information and the regions of interest (ROI) in which the blocks $\Theta$ reside, and the local distortion context, which provides the distortion blocks and the ROI in which the centers $\Omega$ reside.
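As an illustration of this input preparation, the sketch below (in NumPy; the bottom-right patch is assumed as the reference element, which is our choice) cuts an image into the four distortion elements and applies the flips described above; the region-aware mask construction is omitted.
\begin{verbatim}
import numpy as np

def distortion_elements(img):
    # Cut an H x W x 3 image into four patches at the image centre and
    # flip three of them so that all four share the distortion layout
    # of the bottom-right patch (assumed reference).
    H, W = img.shape[:2]
    tl, tr = img[:H // 2, :W // 2], img[:H // 2, W // 2:]
    bl, br = img[H // 2:, :W // 2], img[H // 2:, W // 2:]
    return [br,
            tr[::-1, :],      # top-right: vertical (up-down) flip
            bl[:, ::-1],      # bottom-left: horizontal flip
            tl[::-1, ::-1]]   # top-left: diagonal flip (both axes)
\end{verbatim}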
\subsubsection{Network Architecture}
To jointly deduce different scales of distortion data, we design a local-global associate estimation network. As shown in Fig. \ref{Fig:3}, the network consists of two parts, a global perception module $M_{gp}$ and a local Siamese module $M_{ls}$, which take the global distortion context and local distortion context as inputs, respectively.
For the global perception module, its architecture can be divided into two sub-networks, a backbone network and a header network. Specifically, the general representation of the global distortion context is extracted by the backbone network, composed of convolutional layers, which captures high-level information including semantic features. Any prevalent network such as VGG16 \cite{ref30}, ResNet \cite{ref31}, or InceptionV3 \cite{ref53} (without fully connected layers) can be plugged in as the backbone. We pretrain the backbone network on ImageNet \cite{ref33} and fine-tune it on our synthesized distorted image dataset. The header network, consisting of three fully connected layers, aggregates the general representation of the input and further abstracts the high-level information into a feature vector. The numbers of units for these layers are 4096, 2048, and 1024, and all of them use ReLU activations. The extracted features of the global distortion context are combined with the features of the local distortion context, which are derived from the local Siamese module.
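A minimal PyTorch sketch of the global perception module is given below; the flattened feature dimension (here $512 \times 7 \times 7$, valid for a VGG16 backbone on $224 \times 224$ inputs) is an assumption of this sketch, and the pretraining argument follows the classic torchvision API, which may vary across versions.
\begin{verbatim}
import torch.nn as nn
from torchvision.models import vgg16

class GlobalPerception(nn.Module):
    def __init__(self, feat_dim=512 * 7 * 7):
        super().__init__()
        # Backbone: VGG16 convolutional layers (ImageNet-pretrained).
        self.backbone = vgg16(pretrained=True).features
        # Header: three fully connected layers with ReLU activations.
        self.header = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat_dim, 4096), nn.ReLU(),
            nn.Linear(4096, 2048), nn.ReLU(),
            nn.Linear(2048, 1024), nn.ReLU())

    def forward(self, x):
        return self.header(self.backbone(x))
\end{verbatim}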
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{network.jpg}
\caption{Architecture of the local-global ordinal distortion estimation network. This network consists of a global perception module $M_{gp}$ and a local Siamese module $M_{ls}$, jointly considering the multiple scales of distortion information given a distorted image.}
\label{Fig:3}
\end{center}
\end{figure*}
The local Siamese module consists of $n$ components, each of which can also be divided into a backbone network and a header network. In detail, we first use two convolutional layers to extract low-level features of size $\frac{H}{32}\times\frac{W}{32}\times256$ from the input local distortion context. Then, we feed the feature maps into a pyramid residual module consisting of five residual blocks and obtain high-level features of size $\frac{H}{32}\times\frac{W}{32}\times512$. The pyramid residual module shares its weights across components. Subsequently, a header network with three fully connected layers aggregates the general representation of the high-level features. To comprehensively analyze the distortion information, we combine each local distortion feature with the global distortion feature and fuse these features using two fully connected layers. Finally, a fully connected layer with $n$ units and a linear activation function predicts the ordinal distortion $\mathcal{D} = [\delta(r_1) \ \ \delta(r_2) \ \ \delta(r_3) \ \ \cdots \ \ \delta(r_n)]$ of a distorted image.
In contrast to an undistorted image, a distorted image suffers from different geometric distortion at different locations. However, previous distortion rectification methods use filters of the same size to learn the overall distortion information. As a result, the learning model cannot explicitly perceive the different degrees of distortion in each distorted image and thus generates ambiguous distortion features. To enable explicit extraction of distortion features, we design a distortion-aware perception layer. In general, the degree of distortion increases with the distance between a pixel and the principal point, and we introduce this key prior knowledge into our learning model. Concretely, the distortion-aware perception layer is applied before feeding the input contexts to all modules. For the global distortion context, the distortion-aware perception layer leverages medium-sized filters of $W_g\times H_g$ to learn its distortion information; for the local distortion context, the distortion blocks $\Theta = [\theta_1 \ \ \theta_2 \ \ \cdots \ \ \theta_n]$ are processed using filters of sizes $W_{l1}\times H_{l1}, W_{l2}\times H_{l2}, \cdots, W_{ln}\times H_{ln}$, from small to large. All filter sizes satisfy the following relationship: $W_{l1}\times H_{l1} < \cdots < W_g\times H_g < \cdots < W_{ln}\times H_{ln}$. Therefore, we leverage different sizes of filters to reason about region features with different degrees of distortion. As a benefit of the distortion-aware perception layer, our model gains improvements with regard to distortion learning. The relevant experimental results are described in Section \ref{sec43}.
\subsubsection{Training Loss}
After predicting the distortion labels of a distorted image, it is straightforward to use a distance metric loss such as the $\mathcal{L}_1$ or $\mathcal{L}_2$ loss to learn our network parameters. However, such loss functions cannot measure the ordered relationship between the distortion labels, while the proposed ordinal distortion possesses a strong ordinal correlation in terms of the distortion distribution. To this end, we cast the distortion estimation problem as an ordinal distortion regression problem and design an ordinal distortion loss to train our learning model.
Suppose that the ground truth ordinal distortion $\mathcal{D} = [\delta(r_1) \ \ \delta(r_2) \ \ \delta(r_3) \ \ \cdots \ \ \delta(r_n)]$ is an increasing vector, which means $\delta(r_1) < \delta(r_2) < \delta(r_3) < \cdots < \delta(r_n)$. Let $\mathcal{F}_g = \varphi(I_g, \Phi)$ denote the feature maps given a global distortion context $I_g$, where $\Phi$ are the parameters of the backbone network of the global perception module. Let $\mathcal{F}_l = \{\psi_1(I_{l}^1, \Psi_1), \psi_2(I_{l}^2, \Psi_2), \cdots, \psi_n(I_{l}^n, \Psi_n)\}$ denote the feature maps given the $n$ local distortion contexts $\{I_{l}^1, I_{l}^2, \cdots, I_{l}^n\}$, where $\{\Psi_1, \Psi_2, \cdots, \Psi_n\}$ are the parameters of the backbone networks of the local Siamese module. Then, $\chi = \eta(\mathcal{F}_g, \mathcal{F}_l, \xi)$, of size $n$, denotes the estimated ordinal distortion given a distorted image $I$, where $\xi = \{\xi_1, \xi_2, \cdots, \xi_n\}$ contains the weights of the fully connected layer of our network. The ordinal distortion loss $\mathcal{L}(\mathcal{F}_g, \mathcal{F}_l, \xi)$ is the average of the per-level losses $\mathcal{L}_d(i, \mathcal{F}_g, \mathcal{F}_l, \xi)$ over the entire sequence:
\begin{equation}\label{eq13}
\begin{split}
&\mathcal{L}(\mathcal{F}_g, \mathcal{F}_l, \xi) = -\frac{1}{n}\sum_{i=0}^{n-1}{\mathcal{L}_d(i, \mathcal{F}_g, \mathcal{F}_l, \xi)},\\
&\mathcal{L}_d(i, \mathcal{F}_g, \mathcal{F}_l, \xi) = \sum_{k=0}^{i-1}{\log(\mathcal{P}_i^k)} + \sum_{k=i}^{n-1}{\log(1 - \mathcal{P}_i^k)},\\
\end{split}
\end{equation}
where $\mathcal{P}_i^k = P(\delta(r_i) > \delta(r_k))$ indicates the probability that $\delta(r_i)$ is larger than $\delta(r_k)$.
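A possible PyTorch realisation of this loss is sketched below; the paper specifies only the loss form, so obtaining the pairwise probabilities $\mathcal{P}_i^k$ from a sigmoid over score differences is our assumption.
\begin{verbatim}
import torch

def ordinal_distortion_loss(scores):
    # scores: (batch, n) network outputs, one score per distortion level.
    # P[:, i, k] ~ P(delta(r_i) > delta(r_k)), modelled here by a
    # sigmoid over pairwise score differences (our assumption).
    b, n = scores.shape
    diff = scores.unsqueeze(2) - scores.unsqueeze(1)    # (b, n, n)
    P = torch.sigmoid(diff).clamp(1e-6, 1 - 1e-6)
    loss = scores.new_zeros(b)
    for i in range(n):
        loss = loss + P[:, i, :i].log().sum(dim=1) \
                    + (1 - P[:, i, i:]).log().sum(dim=1)
    return -(loss / n).mean()
\end{verbatim}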
\subsection{Ordinal Distortion to Distortion Parameter}
\label{s33}
Once the ordinal distortion is estimated by neural networks, the distortion coefficients $\mathcal{K} = [k_1 \ \ k_2 \ \ \cdots \ \ k_n]$ of a distorted image can be easily obtained by
\begin{equation}\label{eq8}
\begin{bmatrix}
k_1 \ \ k_2 \ \ \cdots \ \ k_n
\end{bmatrix} ={\begin{bmatrix}
\delta(r_1) - 1\\\\
\delta(r_2) - 1\\\\
\vdots \\ \\
\delta(r_n) - 1\\
\end{bmatrix}}^{\rm T} {\begin{bmatrix}
r_1^2 & r_2^2 & \cdots\ &r_n^2\\ \\
r_1^4 & r_2^4 & \cdots\ &r_n^4\\ \\
\vdots & \vdots & \ddots & \vdots \\ \\
r_1^{2n} & r_2^{2n} & \cdots\ &r_n^{2n}\\
\end{bmatrix}}^{-1}.
\end{equation}
For clarity, we rewrite Eq. \ref{eq8} as follows:
\begin{equation}\label{eq9}
\mathcal{K} = \mathcal{D}^{*} \cdot \mathcal{R}^{-1},
\end{equation}
where $\mathcal{D}^{*} = \tilde{\mathcal{D}} - [\underbrace{1 \ \ 1 \ \ \cdots \ \ 1}_{n}]$, $\tilde{\mathcal{D}}$ denotes the estimated ordinal distortion, and $\mathcal{R}$ collects the location information raised to different powers.
When the principal point is not fixed at the center of the image, we can also calculate all distortion parameters $[x_c \ \ y_c \ \ k_1 \ \ k_2 \ \ \cdots \ \ k_n]$ using two additional distortion levels, i.e., $[\delta(r_1) \ \ \delta(r_2) \ \ \cdots \ \ \delta(r_n) \ \ \delta(r_{n+1}) \ \ \delta(r_{n+2})]$, based on Eq. \ref{eq8}.
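The conversion of Eq. \ref{eq9} amounts to a small linear solve, as in the following NumPy sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def coefficients_from_ordinal(deltas, rs):
    # K = (D - 1) R^{-1}, with R[i, j] = r_j^(2(i+1))  (Eq. eq8/eq9).
    deltas, rs = np.asarray(deltas, float), np.asarray(rs, float)
    R = np.array([[r ** (2 * (i + 1)) for r in rs]
                  for i in range(len(rs))])
    return (deltas - 1.0) @ np.linalg.inv(R)

# Illustrative round trip with known coefficients:
# ks = [-0.2, 0.05]; rs = [0.4, 0.8]
# deltas = [1 + ks[0]*r**2 + ks[1]*r**4 for r in rs]
# coefficients_from_ordinal(deltas, rs)  ->  approximately ks
\end{verbatim}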
In summary, the proposed distortion rectification framework offers the following advantages.
1. The proposed ordinal distortion is a learning-friendly representation for neural networks, which is explicit and homogeneous compared with the implicit and heterogeneous distortion parameters. Thus, our learning model gains a sufficient perception of distortion features and shows faster convergence. Moreover, this representation enables more efficient learning with less training data.
2. The local-global associate ordinal distortion estimation network considers different scales of distortion features, jointly reasoning about the local and global distortion contexts. In addition, the devised distortion-aware perception layers boost the extraction of features with different degrees of distortion.
3. Our ordinal distortion loss fully measures the strong ordinal correlation in the proposed representation, facilitating the accurate approximation of distortion distribution.
4. We can easily calculate the distortion parameters with the estimated ordinal distortion in terms of the camera model. In contrast to previous methods, our method is able to handle various camera models and different types of distortion due to the unified learning representation.
\section{Experiments}
\label{sec4}
In this section, we first state the details of the synthetic distorted image dataset and the training process of our learning model. Subsequently, we analyse the learning representation for distortion estimation. To demonstrate the effectiveness of each module in our framework, we conduct an ablation study. Finally, we compare our approach with the state-of-the-art methods, in both quantitative measurement and visual qualitative appearance.
\subsection{Implementation Settings}
\label{sec41}
\noindent \textbf{Dataset} We construct a standard distorted image dataset in terms of the division model discussed in Section \ref{s31}. Following previous literature \cite{ref11, ref48, ref36}, we use a $4^{th}$-order version of Eq. \ref{eq3}, which is able to approximate most projection models with high accuracy. All of the distortion coefficients are randomly generated from their corresponding ranges. Our dataset contains 20,000 training images, 2,000 test images, and 2,000 validation images.\\
\textbf{Training/Testing Setting}
We train our learning model using the constructed synthetic distorted images. We set the learning rate to $5\times10^{-4}$ and reduce it by a factor of 10 every 200K iterations. Adam \cite{ref34} is chosen as the optimizer. In the training stage, we crop each distorted image into four distortion elements and learn the parameters of the neural network using all data. In the test stage, we only need one distortion element, i.e., 1/4 of an image, to estimate the ordinal distortion.\\
\textbf{Evaluation Metrics} Evaluating the performance of different methods with appropriate metrics is crucial for fair experimental comparison. In the distortion rectification problem, the corrected image can be evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the evaluation of the estimated distortion label, it is straightforward to employ the root mean square error (RMSE) between the estimated parameters $\tilde{\mathcal{K}}$ and ground truth parameters $\mathcal{K}$:
\begin{equation}\label{eq12}
RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^N{(\tilde{\mathcal{K}}_i - \mathcal{K}_i)}^2},
\end{equation}
where $N$ is the number of estimated distortion parameters. However, we found that different groups of distortion parameters may display similar distortion distributions in images. To more reasonably evaluate the estimated distortion labels, we propose a new metric based on the reprojection error, mean distortion level deviation (MDLD):
\begin{equation}\label{eq14}
MDLD = \frac{1}{WH}\sum_{i=1}^W\sum_{j=1}^H{|\tilde{\delta}(i, j) - \delta(i, j)|},
\end{equation}
where $W$ and $H$ are the width and height of a distorted image, respectively. The ground truth distortion level $\delta(i, j)$ of each pixel can be obtained using Eq. \ref{eq5}.
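For reference, MDLD can be implemented in a few lines of NumPy, as sketched below under the distortion level of Eq. \ref{eq5}; precomputing the per-pixel radius grid from the principal point is our convention.
\begin{verbatim}
import numpy as np

def mdld(k_est, k_true, r_grid):
    # r_grid: (H, W) distances from each pixel to the principal point.
    def level(ks, r):
        return 1.0 + sum(k * r ** (2 * (i + 1))
                         for i, k in enumerate(ks))
    return np.mean(np.abs(level(k_est, r_grid) - level(k_true, r_grid)))
\end{verbatim}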
In contrast to RMSE, MDLD is more suitable for parameter evaluation due to the uniqueness of the distortion distribution. Moreover, RMSE cannot compare estimates that differ in the number and attributes of parameters across camera models. Thanks to its objective description of the distortion, MDLD is capable of evaluating distortion estimation methods that use different camera models.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{dl_dp.jpg}
\caption{Comparison of two learning representations for distortion estimation, distortion parameter (left) and ordinal distortion (right). In contrast to the ambiguous relationship between the distortion distribution and distortion parameter, the proposed ordinal distortion displays a very clear positive correlation to the distortion reprojection error.}
\label{Fig:dl_dp}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{error_conv.jpg}
\caption{Analysis of two learning representations in terms of the error and convergence. We show the histogram of error (top) and convergence (bottom) of two learning representations using three backbone networks, VGG16, ResNet50, and InceptionV3. Compared with the distortion parameter estimation task, our proposed ordinal distortion estimation task achieves lower errors and faster convergence on all backbone networks.}
\label{Fig:error_conv}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{loss.jpg}
\caption{Analysis of two learning representations in terms of the training and validation loss curves. We show the learning performance of the distortion parameter estimation without (top) and with (middle) the normalization of magnitude, and of the ordinal distortion estimation (bottom). Our proposed ordinal distortion estimation task displays fast convergence and a stable trend on both training and validation sets.}
\label{Fig:loss}
\end{center}
\end{figure*}
\subsection{Analysis of Learning Representation}
\label{sec42}
Previous learning methods directly regress the distortion parameters from a distorted image. However, such an implicit and heterogeneous representation confuses the distortion learning of neural networks and causes insufficient perception of distortion. To bridge the gap between the image feature and the calibration objective, we present a novel intermediate representation, i.e., the ordinal distortion, which is learning-friendly for neural networks. For an intuitive and comprehensive analysis, we compare these two representations from the following three aspects.
\noindent \textbf{Relationship to Distortion Distribution} We first examine the relationship between the two learning representations and the realistic distortion distribution of a distorted image. In detail, we train a learning model to estimate the distortion parameters and the ordinal distortion separately, and we relate the errors of the estimated results to the distortion reprojection error. As shown in Fig. \ref{Fig:dl_dp}, we visualize the scatter diagram of the two learning representations using 1,000 test distorted images. For the distortion parameter, the relationship to the distortion distribution is ambiguous, and similar parameter errors correspond to quite different reprojection errors, which indicates that optimizing the parameter error confuses the learning of neural networks. In contrast, the ordinal distortion error displays a very clear positive correlation to the distortion distribution error; the learning model thus gains an intuitive distortion perception, and the proposed representation significantly decreases the error of distortion estimation.
\noindent \textbf{Distortion Learning Evaluation} Then, we introduce three key elements for evaluating a learning representation: training data, convergence, and error. Supposing that settings such as the network architecture and optimizer are the same, a better learning representation requires less training data, converges faster, and achieves a lower error. For example, a student who achieves the highest test grade (the lowest error) with the fastest learning speed and the least homework has grasped the best learning strategy among the students. In these terms, we evaluate the distortion parameter and the ordinal distortion as shown in Fig. \ref{Fig:error_conv} and Fig. \ref{Fig:loss}.
To comprehensively exhibit the performance, we employ three common network architectures, VGG16, ResNet50, and InceptionV3, as the backbone networks of the learning model. The proposed MDLD metric is used to express the error of distortion estimation due to its unique and fair measurement of the distortion distribution. To be specific, we visualize in Fig. \ref{Fig:error_conv} the error and convergence epoch when estimating the two representations under the same amount of training data, sampled at 20\%, 40\%, 60\%, 80\%, and 100\% of the entire training set. In addition, the training and validation loss curves of the two learning representations are shown in Fig. \ref{Fig:loss}, in which the distortion parameters are processed without (top) and with (middle) the normalization of magnitude. From these learning evaluations, we can observe:
(1) Overall, the ordinal distortion estimation significantly outperforms the distortion parameter estimation in both convergence and accuracy, even when the amount of training data is only 20\% of the full set. Note that we use only 1/4 of a distorted image to predict the ordinal distortion. As we pointed out earlier, the proposed ordinal distortion is explicit to the image feature and is observable from a distorted image; it thus boosts the learning ability of neural networks. On the other hand, the performance of the distortion parameter estimation drops as the amount of training data decreases. In contrast, our ordinal distortion estimation performs more consistently due to the homogeneity of the learning representation.
(2) For each backbone network, the layer depths of VGG16, InceptionV3, and ResNet50 are 23, 159, and 168, respectively. These architectures represent different abilities to extract image features. As illustrated in Fig. \ref{Fig:error_conv}, the distortion parameter estimation achieves its lowest error (0.15) using InceptionV3 as the backbone under 80\% training data, which indicates that its performance requires more complicated, high-level features extracted by deep networks. With its explicit relationship to image features, the ordinal distortion estimation achieves its lowest error (0.07) using VGG16 as the backbone under 100\% training data. This promising performance indicates that the ordinal distortion is a learning-friendly representation, easy to learn even with a very shallow network.
(3) From the loss curves in Fig. \ref{Fig:loss}, the ordinal distortion estimation achieves the fastest convergence and the best performance on the validation dataset. It is also worth noting that the ordinal distortion estimation already performs well on the validation set within the first five epochs, which verifies that this learning representation yields favorable generalization for neural networks. In contrast, suffering from the heterogeneous representation, the learning process of the distortion parameter estimation displays slower convergence. Moreover, the training and validation loss curves show unstable descent when the distortion parameters are handled without the normalization of magnitude, which demonstrates that distortion parameter estimation is very sensitive to label balancing.
We further present a \textit{learning-friendly rate} ($\Gamma_{lr}$) to quantitatively evaluate the effectiveness of a learning representation or strategy. To our knowledge, this is the first evaluation metric to describe the effectiveness of a learning representation for neural networks. As mentioned above, the required training data, convergence, and error jointly describe a learning representation, and thus we formulate the learning-friendly rate as follows
\begin{equation}\label{eq_lr}
\Gamma_{lr} = \frac{1}{N}\sum_{i=1}^N{\frac{D_i}{D}(\frac{1}{E_i}\log(2 - \frac{C_i}{C}))},
\end{equation}
where $N$ is the number of split groups, and $E_i$, $D_i$, and $C_i$ indicate the error, the number of training data, and the epoch of convergence of the $i$-th group, respectively. $D$ and $C$ indicate the total number of training data and the total training epochs for the learning model. We compute the learning-friendly rates of the two learning representations and list the quantitative results in Table \ref{tab:1}. The results show that our scheme outperforms the distortion parameter estimation on all backbone settings, and thus the proposed ordinal distortion is much more suitable for neural networks as a learning representation.
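Computing $\Gamma_{lr}$ from the measurements of the $N$ data-size groups is straightforward; a NumPy sketch mirroring Eq. \ref{eq_lr} follows (the helper name is ours).
\begin{verbatim}
import numpy as np

def learning_friendly_rate(errors, data_sizes, conv_epochs, D, C):
    # Gamma_lr = (1/N) * sum_i (D_i/D) * (1/E_i) * log(2 - C_i/C)
    E, Di, Ci = map(np.asarray, (errors, data_sizes, conv_epochs))
    return np.mean((Di / D) * (1.0 / E) * np.log(2.0 - Ci / C))
\end{verbatim}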
\begin{table}
\caption{The learning-friendly rates of two learning representations evaluated with three backbone networks.}
\label{tab:1}
\centering
\begin{tabular}{p{2.6cm}<{\centering}p{1.3cm}<{\centering}p{1.3cm}<{\centering}p{1.5cm}<{\centering}}
\toprule
Learning Representation & VGG16 & ResNet50 & InceptionV3 \\
\midrule
Distortion Parameter & 0.50 & 0.60 & 0.59\\
Ordinal Distortion & \textbf{2.23} & 1.43 & 1.50\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{DDM.jpg}
\caption{Qualitative comparison of two learning representations. For each comparison, we show the distorted image, the ground truth 3D DDM, the 3D DDM constructed by the estimated distortion parameter, and ordinal distortion, from left to right.}
\label{Fig:ddm}
\end{center}
\end{figure*}
\noindent \textbf{Qualitative Comparison} To qualitatively show the performance of the different learning representations, we visualize in Fig. \ref{Fig:ddm} the 3D distortion distribution maps (3D DDM) derived from the ground truth and the two schemes, in which each pixel value of the distortion distribution map indicates the distortion level. Since the ordinal distortion estimation pays more attention to realistic distortion perception and a reasonable learning strategy, our scheme achieves results much closer to the ground truth 3D DDM. Due to its implicit learning, the distortion parameter estimation generates inferior reconstructed results, such as under-fitting (left) and over-fitting (right) of the global distribution approximation as shown in Fig. \ref{Fig:ddm}.
\subsection{Ablation Study}
\label{sec43}
To validate the effectiveness of each component in our approach, we conduct an ablation study and evaluate the error of distortion estimation as shown in Fig. \ref{Fig:ab}. Concretely, we first use the VGG16 network without the fully connected layers as the backbone of the ordinal distortion estimation network, based on the analysis of the learning representation in Section \ref{sec42}. Subsequently, we implement the learning model without the flip operation (FO) on the global distortion context, the ordinal supervision (OS), the region-aware mask (RM), and the distortion-aware perception layer (DL) as the baseline (BS), and then gradually add these removed components to show their effect on estimation performance. In addition, we evaluate two loss functions, $\mathcal{L}_2$ and $\mathcal{L}_{sm}$, for optimizing the baseline model, in which $\mathcal{L}_{sm}$ is the smooth $\mathcal{L}_1$ loss function \cite{ref54} that combines the attributes of $\mathcal{L}_1$ and $\mathcal{L}_2$. We name these two baseline models BS-1 and BS-2.
Overall, the complete framework achieves the lowest error of distortion estimation as shown in Fig. \ref{Fig:ab}, verifying the effectiveness of our proposed approach. For the optimization strategy, BS-2, which uses $\mathcal{L}_{sm}$, performs much better than BS-1, which uses $\mathcal{L}_{2}$, since the $\mathcal{L}_{sm}$ loss function induces a more stable training process. Due to the effective normalization of the distortion distribution, the network gains explicit spatial guidance from the flip operation on the global distortion context. We also show the training loss of the first 30 epochs for BS-2 and BS-2 + FO in Fig. \ref{Fig:ab_loss}, where we can observe that the distribution normalization significantly accelerates the convergence of the training process. By contrast, BS-2 without the flip operation suffers from a \textit{confused learning period}, especially in the first 10 epochs, which indicates that the neural network struggles to find a direct optimization path from the distribution difference. Moreover, the ordinal supervision fully measures the strong ordinal correlation in the proposed representation and thus facilitates the accurate approximation of the distortion distribution. With the special attention mechanism and distortion feature extraction, our learning model gains further improvements from the region-aware mask and the distortion-aware perception layer.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{ablation.jpg}
\caption{Ablation study of the proposed ordinal distortion estimation approach.}
\label{Fig:ab}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{ablation_loss.jpg}
\caption{Training loss of the first 30 epochs for BS-2 and BS-2 + FO. The flip operation that normalizes the distortion distribution of the inputs significantly accelerates the convergence of the learning process.}
\label{Fig:ab_loss}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{cp1.jpg}
\caption{Qualitative evaluations of the rectified distorted images on indoor (left) and outdoor (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alem{\'a}n-Flores \cite{ref9}, Santana-Cedr{\'e}s \cite{ref45}, Rong \cite{ref10}, Li \cite{ref44}, and Liao \cite{ref46}, and rectified results of our proposed approach, from left to right.}
\label{Fig:cp1}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{cp2.jpg}
\caption{Qualitative evaluations of the rectified distorted images on people (left) and challenging (right) scenes. For each evaluation, we show the distorted image, ground truth, and corrected results of the compared methods: Alem{\'a}n-Flores \cite{ref9}, Santana-Cedr{\'e}s \cite{ref45}, Rong \cite{ref10}, Li \cite{ref44}, and Liao \cite{ref46}, and rectified results of our proposed approach, from left to right.}
\label{Fig:cp2}
\end{center}
\end{figure*}
\subsection{Comparison Results}
\label{sec44}
In this part, we compare our approach with the state-of-the-art methods in both quantitative and qualitative evaluations, in which the compared methods can be classified into traditional methods \cite{ref9, ref45} and learning methods \cite{ref10, ref44, ref46}. Note that our approach only requires 1/4 of a whole distorted image to estimate the distortion label, which is then employed for the subsequent image rectification.
\begin{table}
\begin{center}
\caption{Quantitative evaluation of the rectified results obtained by different methods.}
\label{table:2}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
Comparison Methods & PSNR $\uparrow$ & SSIM $\uparrow$ & MDLD $\downarrow$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Traditional Methods & & &\\
Alem{\'a}n-Flores \cite{ref9} & 9.47 & 0.31 & 0.26 \\
Santana-Cedr{\'e}s \cite{ref45} & 7.90 & 0.25 & 1.18 \\
\hline
Learning Methods & & & \\
Rong \cite{ref10} & 10.37 & 0.29 & 0.23\\
Li \cite{ref44} & 13.87 & 0.64 & - \\
Liao \cite{ref46} & 20.28 & 0.72 & - \\
Ours & \textbf{24.82} & \textbf{0.84} & \textbf{0.04} \\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.6pt}
\noindent \textbf{Quantitative Evaluation}
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images using PSNR, SSIM, and the proposed MDLD. As listed in Table \ref{table:2}, our approach significantly outperforms the compared approaches on all metrics, achieving the highest PSNR and SSIM as well as the lowest MDLD. Specifically, compared with the traditional methods \cite{ref9, ref45} based on hand-crafted features, our approach overcomes the scene limitation and the simple camera model assumption, showing more promising generality and flexibility. Compared with the learning-based distortion rectification methods \cite{ref10, ref44, ref46}, which ignore the prior knowledge of the distortion, our approach transforms the heterogeneous estimation problem into a homogeneous one and replaces the implicit relationship between image features and predicted values with a more explicit one. Benefiting from the effective ordinal supervision and the guidance of distortion information during the learning process, our approach outperforms Liao \cite{ref46} by a significant margin, with approximately 23\% improvement in PSNR and 17\% improvement in SSIM.\\
\begin{figure*}
\begin{center}
\includegraphics[width=1\linewidth]{cp3.jpg}
\caption{Qualitative evaluations of the rectified distorted images on real scenes. For each evaluation, we show the distorted image and corrected results of the compared methods: Alem{\'a}n-Flores \cite{ref9}, Santana-Cedr{\'e}s \cite{ref45}, Rong \cite{ref10}, Li \cite{ref44}, and Liao \cite{ref46}, and rectified results of our proposed approach, from left to right.}
\label{Fig:cp3}
\end{center}
\end{figure*}\noindent \textbf{Qualitative Evaluation}
We visually compare the corrected results from our approach with those of the state-of-the-art methods using our synthetic test set and real distorted images. To show the comprehensive rectification performance under different scenes, we split the test set into four types of scenes: indoor, outdoor, people, and challenging scenes. The indoor and outdoor scenes are shown in Fig. \ref{Fig:cp1}, and the people and challenging scenes are shown in Fig. \ref{Fig:cp2}. Our approach performs well on all scenes, while the traditional methods \cite{ref9, ref45} show inferior corrected results on scenes that lack sufficient hand-crafted features, especially the people and challenging scenes. On the other hand, the learning methods \cite{ref10, ref44, ref46} lag behind in distortion perception and cannot easily adapt to scenes with strong geometric distortion. For example, the results obtained by Rong \cite{ref10} show coarse rectified structures, which are induced by the implicit learning of distortion and the simple model assumption. Li \cite{ref44} leveraged the estimated distortion flow to generate the rectified images. However, the accuracy of the pixel-wise reconstruction relies heavily on the performance of scene analysis, leading to results with stronger residual distortion in complex scenes. Although Liao \cite{ref46} generated better rectified images than the above learning methods in terms of the global distribution, the results display unpleasant local blur due to the adversarial learning scheme. In contrast, our results achieve the best performance in both global distribution and local appearance, benefiting from the proposed learning-friendly representation and the effective learning model.
The comparison results on real distorted images are shown in Fig. \ref{Fig:cp3}. We collect the real distorted images from YouTube videos captured by popular fisheye lenses, such as the SAMSUNG 10mm F3, Rokinon 8mm Cine Lens, Opteka 6.5mm Lens, and GoPro. As illustrated in Fig. \ref{Fig:cp3}, our approach generates the best rectification results compared with the state-of-the-art methods, showing an appealing generalization ability for blind distortion rectification. To be specific, salient objects such as buildings, streetlights, and roads are recovered to their original straight structures by our approach, exhibiting a more realistic geometric appearance than the results of other methods. Since our approach mainly focuses on the design of the learning representation for distortion estimation, the neural network gains a more powerful learning ability with respect to distortion perception and achieves more accurate estimation results.
\section{Conclusion}
\label{sec5}
In this paper, we present a novel learning representation for deep distortion rectification, bridging the gap between the image feature and the calibration objective. Compared with the implicit and heterogeneous distortion parameters, the proposed ordinal distortion offers three unique advantages, namely explicitness, homogeneity, and redundancy, which enable more sufficient and efficient learning of the distortion. To learn this representation, we design a local-global associate estimation network that is optimized with an ordinal distortion loss function, and a distortion-aware perception layer is used to boost the extraction of features with different degrees of distortion. Benefiting from the proposed learning representation and learning model, our approach outperforms the state-of-the-art methods by a remarkable margin while leveraging only 1/4 of the image data for distortion estimation. In future work, we plan to address other challenging computer vision tasks with new, learning-friendly representations.
\normalem
\bibliographystyle{ieeetr}
\section*{Acknowledgment}
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694277), and NSERC of Canada under the Discovery and CRC programs.
\section{Approach}\label{sec:approach}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\columnwidth]{figures/process.pdf}
\caption{An overview of our \underline{S}afe WCET \underline{A}nalysis method \underline{F}or real-time task sch\underline{E}dulability (SAFE).}
\label{fig:over}
\end{center}
\end{figure}
Figure~\ref{fig:over} shows an overview of our \underline{S}afe WCET \underline{A}nalysis method \underline{F}or real-time task sch\underline{E}dulability (SAFE). Phase 1 of SAFE aims at searching worst-case task-arrival sequences. A task-arrival sequence is worst-case if deadline misses are maximised or, when this is not possible, tasks complete their executions as close to their deadlines as possible. Building on existing work, we identify worst-case task-arrival sequences using a search-based approach relying on genetic algorithms. Phase 2 of SAFE, which is the main contribution of this paper, aims at computing safe WCET ranges under which tasks are likely to be schedulable. To do so, relying on logistic regression and an effective sampling strategy, we augment the worst-case task-arrival sequences generated in Phase 1 to compute safe WCET ranges with a certain deadline miss probability, indicating a degree of risk. We describe in detail \hbox{these two phases next.}
\subsection{Phase 1: Worst-case task arrivals}
\label{subsec:phase1}
The first phase of SAFE finds worst-case sequences in the space of possible sequences of task arrivals, defined by their inter-arrival time characteristics. As SAFE aims to provide conservative, safe WCET ranges, we optimise task arrivals to maximise task completion times and deadline misses, and indirectly minimise safe WCET ranges (see the safe area visually presented in Figure~\ref{fig:over}).
We address this optimisation problem using a single-objective search algorithm. Following standard practice~\cite{Ferrucci:13}, we describe our search-based approach for identifying worst-case task arrivals by defining the solution representation, the scheduler, the fitness function, and the computational search algorithm. We then describe the dataset of sequences generated by the search and later used for training our logistic regression model to compute safe WCET ranges in the second phase of SAFE.
Our approach in Phase 1 is based on past work~\cite{Briand:05}, where a specific genetic algorithm configuration was proposed to find worst-case task arrival sequences. One important modification though is that we account for uncertainty in WCET values through simulations for evaluating the magnitude of deadline misses.
\textbf{Representation.} Given a set $J$ of tasks to be scheduled, a feasible solution is a set $A$ of tuples $(j, \mathit{at}_k(j))$ where $j \in J$ and $\mathit{at}_k(j)$ is the $k$th arrival time of a task $j$. Thus, a solution $A$ represents a valid sequence of task arrivals of $J$ (see valid $\mathit{at}_k(j)$ computation in Section~\ref{sec:problem}). Let $\mathbb{T} = [0, \mathbf{T}]$ be the time period during which a scheduler receives task arrivals. The size of $A$ is equal to the number of task arrivals over the $\mathbb{T}$ time period. Due to the varying inter-arrival times of aperiodic tasks (Section~\ref{sec:problem}), the size of $A$ will vary across different solutions.
\textbf{Scheduler.} SAFE uses a simulation technique for analysing the schedulability of tasks, both to account for the uncertainty in WCET values and to address scalability issues.
For instance, the inter-arrival time of a software update task in a satellite system can be up to approximately three months. In such cases, conducting an analysis based on an actual scheduler is prohibitively expensive. Instead, SAFE uses a real-time task scheduling simulator, named SafeScheduler, which samples WCET values from their ranges for simulating task executions and applies a scheduling policy, e.g., rate monotonic~\cite{Liu:73}, based on discrete logical time events.
SafeScheduler takes a feasible solution $A$ for scheduling a set $J$ of tasks as an input. It then outputs a schedule scenario as a set $S$ of tuples $(j,\mathit{at}_k(j),\mathit{et}_k(j))$ where $\mathit{at}_k(j)$ and $\mathit{et}_k(j)$ are the $k$th arrival and end time values of a task $j$, respectively (see Section~\ref{sec:problem}). For each task $j$, SafeScheduler computes $\mathit{et}_k(j)$ based on its scheduling policy and a selected WCET value for $j$. Recall from Section~\ref{sec:problem}, $\mathit{wcet}(j)$ is a range. Hence, each run of SafeScheduler for the same input solution $A$ can and will likely produce a different schedule scenario.
SafeScheduler implements an extended rate monotonic policy which allows assigning explicit priority $\mathit{pr}(j)$ and deadline $\mathit{dl}(j)$ to a task $j$. The extended policy follows the same assumptions of the standard rate monotonic policy, except for explicit priorities and deadlines of tasks, such as the no resource sharing and free context switching assumptions~\cite{Liu:73}. Note that the assumptions are practically valid and useful at an early development step in the context of real-time analysis. For instance, our collaborating partner accounts for the waiting time of tasks due to resource sharing and context switching between tasks through adding some extra time to WCET ranges at the task design stage. We chose the extended rate monotonic policy for SafeScheduler because our case study system relies on this policy. Note that SAFE can be applied with any scheduling policy, including those that account for resource sharing and context switching time, as implemented by SafeScheduler.
\textbf{Fitness.} Given a feasible solution $A$ for a set $J$ of tasks, we formulate a fitness function, $f(A,J_t,n)$, to quantify the degree of deadline misses regarding a set $J_t \subseteq J$ of target tasks, where $n$ is the number of SafeScheduler runs used to account for the uncertainty in WCET. SAFE provides the capability of selecting target tasks $J_t$ as practitioners often need to focus on the most critical tasks. We denote by $\mathit{dist}_k(j)$ the distance between the end time and the deadline of the $k$th arrival of task $j$ and define $\mathit{dist}_k(j) = \mathit{et}_k(j) - (\mathit{at}_k(j) + \mathit{dl}(j))$ (see Section~\ref{sec:problem} for the notation end time $\mathit{et}_k(j)$, arrival time $\mathit{at}_k(j)$, and deadline $\mathit{dl}(j)$).
To compute the $f(A,J_t,n)$ fitness value, SAFE runs SafeScheduler $n$ times for $A$ and obtains $n$ schedule scenarios $S_1, S_2, \ldots, S_n$. For each schedule scenario $S_i$, we denote by $\mathit{dist}_k^i(j)$ the distance between the end and deadline time values corresponding to the $k$th arrival of the $j$ task observed in $S_i$. We denote by $\mathit{lk}(j)$ the last arrival index of a task $j$ in $A$. SAFE aims to maximise the $f(A,J_t,n)$ fitness function defined as follows:
\begin{equation*}
f(A,J_t,n) = \frac{1}{n}\sum_{i=1}^n\max_{j \in J_t,\, k \in [1,\mathit{lk}(j)]}\mathit{dist}_k^i(j)
\end{equation*}
We note that soft deadline tasks must also execute within reasonable execution time ranges. Hence, engineers may estimate more relaxed WCET ranges for soft deadline tasks than for hard deadline tasks. SAFE uses the above fitness function for both soft and hard deadline tasks.
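A sketch of the fitness computation is shown below; the scheduler interface (a \texttt{run} method returning $(j, \mathit{at}, \mathit{et})$ tuples, with a \texttt{deadline} attribute on task objects) is hypothetical and stands in for SafeScheduler.
\begin{verbatim}
def fitness(A, targets, n, scheduler):
    # f(A, J_t, n): average over n simulator runs of the maximum
    # dist_k(j) = et_k(j) - (at_k(j) + dl(j)) over target-task arrivals.
    # Assumes each scenario contains at least one target-task arrival.
    total = 0.0
    for _ in range(n):
        scenario = scheduler.run(A)  # samples WCETs from their ranges
        total += max(et - (at + j.deadline)
                     for (j, at, et) in scenario if j in targets)
    return total / n
\end{verbatim}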
\textbf{Computational search.} SAFE employs a steady-state genetic algorithm~\cite{Luke:13}. The algorithm breeds a new population for the next generation after computing the fitness of a population. The breeding for generating the next population is done by using the following genetic operators: (1)~\emph{Selection.} SAFE selects candidate solutions using a tournament selection technique, with the tournament size equal to two which is the most common setting~\cite{Gendreau:10}. (2)~\emph{Crossover.} Selected candidate solutions serve as parents to create offspring using a crossover operation. (3)~\emph{Mutation.} The offspring are then mutated. Below, we describe our crossover and mutation operators.
\emph{Crossover.} A crossover operator is used to produce offspring by mixing traits of parent solutions. SAFE modifies the standard one-point crossover operator~\cite{Luke:13} as two parent solutions $A_p$ and $A_q$ may have different sizes, i.e., $|A_p| \neq |A_q|$. Let $J = \{j_1, j_2, \ldots, j_m\}$ be a set of tasks to be scheduled. Our crossover operator, named SafeCrossover, first randomly selects an aperiodic task $j_r \in J$.
For all $i \in [1,r]$ and $j_i \in J$, SafeCrossover then swaps all $j_i$ arrivals between two solutions $A_p$ and $A_q$. As the size of $J$ is fixed for all solutions, SafeCrossover can cross over two solutions that may have different sizes.
\emph{Mutation.} SAFE uses a heuristic mutation algorithm, named SafeMutation. For a solution $A$, SafeMutation mutates the $k$th task arrival time $\mathit{at}_k(j)$ of an aperiodic task $j$ with a mutation probability.
SafeMutation chooses a new arrival time value of $\mathit{at}_k(j)$ based on the $[\mathit{pmin}(j), \mathit{pmax}(j)]$ inter-arrival time range of $j$. If such a mutation of the $k$th arrival time of $j$ does not affect the validity of the $k{+}1$th arrival time of $j$, the mutation operation ends. Specifically, let $d$ be a mutated value of $\mathit{at}_k(j)$. In case $\mathit{at}_{k+1}(j) \in [d + \mathit{pmin}(j), d + \mathit{pmax}(j)]$, \hbox{SafeMutation returns the mutated $A$ solution.}
After mutating the $k$th arrival time $\mathit{at}_k(j)$ of a task $j$ in a solution $A$, if the $k{+}1$th arrival becomes invalid, SafeMutation corrects the remaining arrivals of $j$. Let $o$ and $d$ be, respectively, the original and mutated $k$th arrival time of $j$. For all the arrivals of $j$ after $d$, SafeMutation first updates their original arrival time values by adding the difference $d-o$. Let $\mathbb{T} = [0,\mathbf{T}]$ be the scheduling period. SafeMutation then removes some arrivals of $j$ if they are mutated to arrive after $\mathbf{T}$ or adds new arrivals \hbox{if SafeScheduler can handle them.}
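The sketch below illustrates SafeMutation for the arrivals of a single aperiodic task; appending new arrivals when room opens up before $\mathbf{T}$ is omitted, and the mutation probability is a placeholder of ours.
\begin{verbatim}
import random

def mutate_task_arrivals(arrivals, pmin, pmax, T, p_mut=0.1):
    # Mutate each arrival with probability p_mut, shift all later
    # arrivals by the same offset, and drop arrivals beyond T.
    out = list(arrivals)
    for k in range(len(out)):
        if random.random() >= p_mut:
            continue
        prev = out[k - 1] if k > 0 else 0.0
        d = prev + random.uniform(pmin, pmax)  # new valid k-th arrival
        offset = d - out[k]
        for m in range(k, len(out)):
            out[m] += offset
    return [t for t in out if t <= T]
\end{verbatim}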
We note that when a system is only composed of periodic tasks, SAFE will skip searching for worst-case arrival sequences as arrivals of periodic tasks are deterministic (see Section~\ref{sec:problem}), but will nevertheless generate the labelled dataset described below. When needed, SAFE can be easily extended to manipulate offset (and period) values for periodic tasks, in a way identical to how we currently handle inter-arrival times.
\textbf{Labelled dataset.} SAFE infers safe WCET ranges using a supervised learning technique~\cite{Russell:10}, namely logistic regression, which requires a labelled dataset. Recall from the fitness computation described above that SAFE runs SafeScheduler $n$ times to obtain schedule scenarios $S {=} {\{S_1, S_2,\ldots,S_n\}}$, and then computes a fitness value of a solution $A$ based on $S$. We denote by $W_i$ a set of tuples $(j,w)$ representing that a task $j$ has the $w$ WCET value in the $S_i$ schedule scenario. Let $\vv{D}$ be the labelled dataset to be created by the first phase of SAFE. We denote by $b_i$ a label indicating whether or not a schedule scenario $S_i$ has any deadline miss for any of the target tasks in $J_t$, i.e., $b_i$ is either $\mathit{safe}$ or $\mathit{unsafe}$, denoting no deadline miss or a deadline miss, respectively. For each fitness computation, SAFE adds $n$ tuples $(W_i,b_i)$ to $\vv{D}$. Specifically, for a schedule scenario $S_i$, SAFE adds $(W_i,\mathit{unsafe})$ to $\vv{D}$ if there are $j {\in} J_t$ and $k {\in} [1, \mathit{lk}(j)]$ such that $\mathit{dist}_k^i(j) {>} 0$; otherwise SAFE \hbox{adds $(W_i, \mathit{safe})$ to $\vv{D}$.}
\subsection{Phase 2: Safe ranges of WCET}
\label{subsec:phase2}
\begingroup
\begin{algorithm}[t]
\parbox{0.95\columnwidth}{\caption{SafeRefinement. An algorithm for computing safe WCET ranges under which target tasks are schedulable. The algorithm consists of three steps: ``reduce complexity'', ``handle imbalanced dataset'', and ``refine model''.}\label{alg:safewcet}}
\SetKwFunction{ReduceDimension}{ReduceDimension}
\SetKwFunction{LearnRegressionModel}{LearnRegressionModel}
\SetKwFunction{StepwiseRegression}{StepwiseRegression}
\SetKwFunction{Regression}{Regression}
\SetKwFunction{Probability}{Probability}
\SetKwFunction{HandleImbalance}{HandleImbalance}
\SetKwFunction{RunSafeScheduler}{RunSafeScheduler}
\SetKwFunction{Add}{Add}
\SetKwFunction{Sample}{SampleWCET}
\SetKwFunction{Precision}{PrecisionByCrossValidation}
\begin{algorithmic}[1]
\INPUT- $\vv{D}$: Labelled dataset obtained from the SAFE search
\HINPUT- $P$: Worst solutions obtained from the SAFE search
\HINPUT- $\mathit{ns}$: Number of WCET samples per solution
\HINPUT- $\mathit{nl}$: Number of logistic regression models
\HINPUT- $\mathit{pt}$: Precision threshold
\OUTPUT- $m$: Safe WCET model
\HOUTPUT- $p$: Probability of deadline misses
\BlankLine
\STATE \COMMENT {step 1. reduce complexity}
\STATE $\vv{D}^r$ $\leftarrow$ \ReduceDimension {$\vv{D}$} \COMMENT {feature reduction}
\STATE $m$ $\leftarrow$ \StepwiseRegression {$\vv{D}^r$} \COMMENT {term selection}
\STATE $p$ $\leftarrow$ \Probability {$m$, $\vv{D}^r$}
\STATE \COMMENT {step 2. handle imbalanced dataset}
\STATE $\vv{D}^b$ $\leftarrow$ \HandleImbalance {$\vv{D}^r$, $m$}
\STATE \COMMENT {step 3. refine model}
\FOR {$\mathit{nl}$ times}
\STATE \COMMENT {step 3.1. add new data instances}
\FOR {each solution $A \in P$}
\STATE $\{S_1, S_2, \ldots, S_{\mathit{ns}}\}$ $\leftarrow$ \RunSafeScheduler {$A$, $m$, $p$, $\mathit{ns}$}
\FOR {each scenario $S_i \in \{S_1, S_2, \ldots, S_{\mathit{ns}}\}$}
\IF {$S_i$ has any deadline miss}
\STATE $\vv{D}^b$ $\leftarrow$ \Add {$\vv{D}^b, (\mathit{WCET}(S_i),\mathit{unsafe})$}
\ELSE
\STATE $\vv{D}^b$ $\leftarrow$ \Add {$\vv{D}^b, (\mathit{WCET}(S_i),\mathit{safe})$}
\ENDIF
\ENDFOR
\ENDFOR
\STATE \COMMENT {step 3.2. learn regression model}
\STATE $m$ $\leftarrow$ \Regression {$m$, $\vv{D}^b$}
\STATE $p$ $\leftarrow$ \Probability {$m$, $\vv{D}^b$}
\IF {\Precision {$m$,$\vv{D}^b$} $> \mathit{pt}$}
\STATE \algorithmicbreak
\ENDIF
\ENDFOR
\RETURN $m$, $p$
\end{algorithmic}
\end{algorithm}
\endgroup
In Phase 2, SAFE computes safe ranges of WCET values under which target tasks are likely to be schedulable. To do so, SAFE applies a supervised machine learning technique to the labelled dataset generated by Phase 1 (Section~\ref{subsec:phase1}). Specifically, Phase 2 executes SafeRefinement (Algorithm~\ref{alg:safewcet}) which has following steps: complexity reduction, imbalance handling and model refinement.
\textbf{Complexity reduction.} The ``reduce complexity'' step in Algorithm~\ref{alg:safewcet} reduces the dimensionality of the labelled dataset $\vv{D}$ obtained from the first phase of SAFE (line 2) and predicts initial safe WCET ranges based on the WCET variables of the tasks in $J$ that have the most significant effect on deadline misses of the target tasks (line 3). The labelled dataset $\vv{D}$ obtained from the first phase of SAFE contains tuples $(W,b)$ where $W$ is a set of WCET values for tasks in $J$ and $b$ is a label of $W$ indicating either no deadline miss (safe) or a deadline miss (unsafe) (Section~\ref{subsec:phase1}). Note that some WCET values in $W$ may not be relevant to determining $b$. Hence, $\vv{D}$ may contain variables irrelevant to predicting $b$. To decrease the computational complexity of the remaining steps, SafeRefinement creates a reduced dataset $\vv{D}^r$ which contains the same number of data instances (tuples) as $\vv{D}$ while including only WCET values with a significant effect on $b$. To that end, SafeRefinement employs a standard feature reduction technique: random forest feature reduction~\cite{Breiman:01}.
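A possible realisation of this step with scikit-learn is sketched below; the cumulative-importance threshold is our assumption, as the selection criterion is not fixed here.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def reduce_dimension(W, b, keep=0.9):
    # W: (instances, tasks) matrix of sampled WCETs; b: safe/unsafe
    # labels. Keep the smallest prefix of WCET variables, ranked by
    # random forest importance, whose cumulative importance >= keep.
    forest = RandomForestClassifier(n_estimators=100).fit(W, b)
    order = np.argsort(forest.feature_importances_)[::-1]
    cum = np.cumsum(forest.feature_importances_[order])
    idx = int(np.searchsorted(cum, keep)) + 1
    return W[:, order[:idx]], order[:idx]
\end{verbatim}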
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{figures/wcet_space.pdf}
\caption{A safe border line of WCET values for the $j_1$ and $j_2$ tasks. The safe border is determined by a deadline miss probability of 0.01. $w_1$ and $w_2$ determine safe WCET ranges of $j_1$ and $j_2$ under which they likely satisfy their deadlines.}
\label{fig:wspace}
\end{center}
\end{figure}
After reducing the dimensionality of the input dataset $\vv{D}$ in Algorithm~\ref{alg:safewcet}, resulting in the reduced dataset $\vv{D}^r$, SafeRefinement learns an initial model to predict safe WCET ranges. SafeRefinement uses logistic regression~\cite{Hosmer:13} because it enables a probabilistic interpretation of safe WCET ranges and the investigation of relationships among different tasks' WCETs. For example, Figure~\ref{fig:wspace} shows a \emph{safe border} determined by an inferred logistic regression model $m$ with a probability $p$ of deadline misses. Note that a safe range, e.g., $[\mathit{wmin}(j_1), w_1]$ of task $j_1$ in Figure~\ref{fig:wspace}, is determined by a point on the safe border in a multidimensional WCET space. A safe border distinguishes safe and unsafe areas in the WCET space. After inferring a logistic regression model $m$ from the dataset, SafeRefinement selects a probability $p$ maximising the safe area under the safe border determined by $m$ and $p$ while ensuring that all the data instances, i.e., sets of WCET values, classified as safe using the safe border are actually observed to be safe in the input dataset, i.e., no false positives (lines 3--4).
SafeRefinement uses a second-order polynomial response surface model (RSM)~\cite{Munda:08} to build a logistic regression model. RSM is known to be useful when the relationship between several explanatory variables (e.g., WCET variables) and one or more response variables (e.g., safe or unsafe label) needs to be investigated~\cite{Munda:08,Muhuri:18}. RSM contains linear terms, quadratic terms, and 2-way interactions between linear terms. Let $V$ be a set of WCET variables $v_i$ in $\vv{D}^r$. Then, the logistic regression model of SafeRefinement is defined as follows:
\begin{equation*}
\log \frac{p}{1-p} = c_0 + \sum_{i=1}^{|V|}{c_iv_i} + \sum_{i=1}^{|V|}{c_{ii}v_i^2} + \sum_{j>i}{c_{ij}v_iv_j}
\end{equation*}
As shown in the above equation, the RSM equation, i.e., the right-hand side, built on the reduced dataset $\vv{D}^r$ has a higher number of dimensions, i.e., coefficients to be inferred, than $|V|$, as RSM additionally accounts for quadratic terms ($v_i^2$) and 2-way interactions ($v_iv_j$) between linear terms. Hence, SafeRefinement employs a stepwise regression technique (line 3), e.g., stepwise AIC (Akaike Information Criterion)~\cite{Yamashita:07}, in order to select significant explanatory terms from the RSM equation. This allows the remaining ``refine model'' step of SafeRefinement to execute efficiently, as it requires running SafeScheduler and logistic regression multiple times within a time budget (line 8), both of which are computationally expensive operations.
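In scikit-learn, the RSM terms can be generated with a degree-2 polynomial expansion before fitting the logistic regression, as sketched below; stepwise AIC term selection is not available in scikit-learn and is omitted from this sketch (it could be performed, e.g., with statsmodels).
\begin{verbatim}
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def fit_rsm_logistic(W, b):
    # Degree-2 expansion yields linear terms, quadratic terms, and
    # 2-way interactions, matching the RSM equation above.
    return make_pipeline(
        PolynomialFeatures(degree=2, include_bias=False),
        LogisticRegression(max_iter=1000)).fit(W, b)
\end{verbatim}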
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{figures/pruning.pdf}
\caption{Handling imbalanced dataset by excluding unsafe WCET values based on logistic regression intercepts.}
\label{fig:imbalance}
\end{center}
\end{figure}
\begin{sloppypar}
\textbf{Imbalance handling.} Recall from Section~\ref{subsec:phase1} that SAFE searches for worst-case sequences of task arrivals and is guided by maximising the magnitude of deadline misses, when they are possible. Therefore, the major portion of $\vv{D}$, the dataset produced by the first phase of SAFE, is a set of task arrival sequences leading to deadline misses. Supervised machine learning techniques (including logistic regression) typically produce unsatisfactory results when faced with highly imbalanced datasets~\cite{Batista:04}. SafeRefinement addresses this problem with the ``handle imbalanced dataset'' step in Algorithm~\ref{alg:safewcet} (lines 5--6) before refining safe WCET ranges. SafeRefinement aims to identify WCET ranges under which tasks are likely to be schedulable. This entails that WCET ranges under which tasks are highly unlikely to be schedulable can be safely excluded from the remaining analysis. Specifically, SafeRefinement prunes out WCET ranges whose deadline miss probability exceeds a high threshold $p_u$ and thus creates a more balanced dataset $\vv{D}^b$ compared to the original imbalanced dataset $\vv{D}^r$ (line 6). SafeRefinement automatically finds a minimum probability $p_u$ which leads to a safe border classifying no false unsafe (negative) instances in $\vv{D}^r$. SafeRefinement then updates the maximum WCET $\mathit{wmax}(j)$ of a task $j$ based on the intercept of the logistic regression model $m$ (with a probability of $p_u$) on the WCET axis for $j$, as sketched after this paragraph. Figure~\ref{fig:imbalance} shows an example dataset $\vv{D}^r$ with a safe border characterised by a high deadline miss probability, i.e., $p_u = 0.99$, to create a more balanced dataset $\vv{D}^b$ within the restricted ranges $[\mathit{wmin}(j_1), \mathit{intercept}(j_1)]$ and $[\mathit{wmin}(j_2), \mathit{intercept}(j_2)]$.
\end{sloppypar}
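The intercept-based pruning can be sketched as follows, under the simplifying assumptions of a linear logistic model and of holding the remaining WCET variables at their minimum values; the coefficients below are illustrative:
\begin{verbatim}
import numpy as np

def axis_intercept(c0, coeffs, wmin, j, p_u=0.99):
    """Solve logit(p_u) = c0 + sum_i c_i*w_i with w_i = wmin_i for i != j."""
    logit = np.log(p_u / (1.0 - p_u))
    rest = sum(c * w for i, (c, w) in enumerate(zip(coeffs, wmin)) if i != j)
    return (logit - c0 - rest) / coeffs[j]

# Illustrative linear model over two WCET variables.
c0, coeffs, wmin = -12.0, [1.5, 0.8], [0.1, 0.1]
for j in range(2):
    print(f"new wmax for variable {j}:",
          round(axis_intercept(c0, coeffs, wmin, j), 2))
\end{verbatim}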
\textbf{Model refinement.} The ``refine model'' step in Algorithm~\ref{alg:safewcet} refines an inferred logistic regression model by sampling additional schedule scenarios selected according to a strategy that is expected to improve the model. As described in Section~\ref{subsec:phase1}, the SAFE search produces a set $P$ (population) of worst-case arrival sequences of tasks $J$ which likely violate deadline constraints of target tasks $J_t \subseteq J$. For each arrival sequence $A$ in $P$, SafeRefinement executes SafeScheduler $\mathit{ns}$ times to add $\mathit{ns}$ new data instances to the dataset $\vv{D}^b$ based on the generated schedule scenarios and their schedulability results (lines 9--19). After adding $\mathit{ns} \cdot |P|$ new data instances to $\vv{D}^b$, SafeRefinement runs logistic regression again to infer a refined logistic regression model $m$ and computes a probability $p$ that ensures no false safe instances (positives) in $\vv{D}^b$ and maximises the safe area under the safe border defined by $m$ and $p$ (lines 20--25).
In the second phase of SAFE, SafeScheduler selects WCET values for tasks in $J$ to compute a schedule scenario based on a distance-based random number generator, which extends the standard uniform random number generator. The distance-based WCET value sampling aims at minimising the Euclidean distance between the sampled WCET points and the safe border defined by the inferred model $m$ and the selected probability $p$. SafeScheduler iteratively computes new WCET values using the following distance-based sampling procedure: (1)~generating $r$ random samples in the WCET space, (2)~computing their distance values from the safe border, and (3)~selecting the closest point to the safe border.
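A minimal sketch of this sampling loop is given below. For brevity, it measures closeness to the safe border by the gap between the predicted deadline miss probability and $p$, which only approximates the Euclidean distance used by SAFE; names and data are illustrative:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def sample_near_border(model, lo, hi, p, r=100, rng=None):
    """Draw r uniform candidates in the WCET box; keep the candidate
    whose predicted miss probability is closest to the border level p."""
    rng = rng or np.random.default_rng()
    cand = rng.uniform(lo, hi, size=(r, len(lo)))
    p_miss = model.predict_proba(cand)[:, 1]
    return cand[np.argmin(np.abs(p_miss - p))]

rng = np.random.default_rng(3)
W = rng.uniform(0.0, 10.0, size=(200, 2))
b = (W.sum(axis=1) > 10).astype(int)
model = LogisticRegression().fit(W, b)
print(sample_near_border(model, [0.1, 0.1], [10.0, 10.0], p=0.05, rng=rng))
\end{verbatim}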
SafeRefinement stops model refinements either upon reaching an allotted analysis budget (line 8 of Algorithm~\ref{alg:safewcet}) or when precision reaches an acceptable level $\mathit{pt}$, e.g., 0.99 (lines 23--25). SafeRefinement uses the standard precision metric~\cite{Witten:11} as described in Section~\ref{subsec:metrics}. In our context, practitioners need to identify safe WCET ranges at a high level of precision to ensure that the identified safe WCET ranges can be trusted. To compute a precision value, SafeRefinement uses standard $k$-fold cross-validation~\cite{Witten:11}. In $k$-fold cross-validation, $\vv{D}^b$ is partitioned into $k$ equal-size splits. One split is retained as a test dataset, and the remaining $k-1$ splits are used as training datasets. The cross-validation process is repeated $k$ times to compute the precision of the inferred safe borders determined by $m$ and $p$ at each validation.
\textbf{Selecting WCET ranges.} A safe border defined by an inferred logistic regression model and a deadline miss probability of $p$ represents a (possibly infinite) set of points, corresponding to safe WCET ranges of tasks, e.g., $[\mathit{wmin}(j_1), w_1]$ and $[\mathit{wmin}(j_2), w_2]$ in Figure~\ref{fig:wspace}. In practice, however, engineers need to choose a specific WCET range for each task to conduct further analysis and development. How to choose optimal WCET ranges depends on the system context. At early stages, however, such contextual information may not be available. Hence, SAFE proposes a \emph{best-size point}, i.e., WCET ranges, on a safe border which maximises the volume of the hyperbox the point defines. In general, the larger the hyperbox, the greater the flexibility engineers have in selecting appropriate WCET values. Choosing the point with the largest volume is helpful when no domain-specific information is available to define other selection criteria. In general, the inferred safe border enables engineers to investigate trade-offs among different tasks' WCET values.
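Assuming that proximity to the safe border is enforced by a penalty term, a best-size point can be approximated with the Nelder-Mead method as sketched below; the model, data, and penalty weight are illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
W = rng.uniform(0.0, 10.0, size=(300, 2))
b = (W.sum(axis=1) > 10).astype(int)
model = LogisticRegression().fit(W, b)
wmin, p = np.array([0.1, 0.1]), 0.05

def neg_volume(w):
    """Negative hyperbox volume plus a penalty keeping w on the border."""
    miss = model.predict_proba(w.reshape(1, -1))[0, 1]
    return -np.prod(np.maximum(w - wmin, 0.0)) + 1e3 * (miss - p) ** 2

res = minimize(neg_volume, x0=np.array([5.0, 5.0]), method="Nelder-Mead")
print(res.x)                                  # candidate best-size point
\end{verbatim}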
\section{Evaluation}
\label{sec:eval}
We evaluate SAFE using an industrial case study from the satellite domain. Our full evaluation package is available online~\cite{Artifacts}.
\subsection{Research Questions}
\label{subsec:rq}
\begin{sloppypar}
\noindent\textbf{RQ1 (effectiveness of distance-based sampling):} \textit{How does SAFE, based on distance-based sampling, perform compared with random sampling?} We compare our distance-based sampling procedure described in Section~\ref{subsec:phase2} and used in the second phase of SAFE with a naive random sampling. Our conjecture is that distance-based sampling, although expensive, is needed to improve the quality of the training data used for logistic regression. RQ1 assesses this conjecture by comparing distance-based and random sampling.
\end{sloppypar}
\noindent\textbf{RQ2 (usefulness):} \textit{Can SAFE identify WCET ranges within which tasks are highly likely to satisfy their deadline constraints?} In RQ2, we investigate whether SAFE identifies acceptably safe WCET ranges in practical time. We further discuss our insights regarding the usefulness of SAFE from the feedback obtained from engineers in LuxSpace.
\subsection{Industrial Study Subject}
\label{subsec:casestudy}
We evaluate SAFE by applying it to the satellite attitude determination and control system (ADCS) described in Section~\ref{sec:motivation}. Our evaluation relies on real task characteristics defined by our partner, LuxSpace, at an early development stage of the system. LuxSpace is a leading system integrator of micro satellites and aerospace systems. Our case study includes a set of 15 periodic and 19 aperiodic tasks. Eight tasks out of the 19 aperiodic tasks are constrained by their hard deadlines, i.e., sporadic tasks. Out of the 34 tasks, engineers provide single WCET values for eight tasks. For the remaining 26 tasks, engineers estimate WCET ranges due to uncertain decisions, e.g., implementation choices and hardware specifications, made at later development stages (see Section~\ref{sec:motivation}). The differences between the estimated WCET maximum and minimum values across the 26 tasks vary from 0.1ms to 20000ms. The full task descriptions are available online~\cite{Artifacts}.
The problem of schedulability analysis of real-time tasks has been widely studied~\cite{Alesio:15,Altenbernd:16,Hardy:16,Bonenfant:17,Bruggen:18}. As discussed in Section~\ref{sec:relatedwork}, however, none of the prior work addresses the same problem (see Section~\ref{sec:problem}) as that addressed by SAFE. Hence, the public study subjects in the literature do not fit our study's requirements. In particular, none of the public real-time system case studies~\cite{Alesio:15} contain estimated WCET ranges in their task descriptions. These ranges, however, are necessary to apply SAFE and to evaluate its effectiveness. Estimating (practically valid) WCET ranges requires significant domain expertise and we cannot construct such data for public domain case studies independently from their contexts and without having any access to the engineers who have developed those systems. Therefore, we choose to perform our experiments using the industrial case study, i.e., ADCS, in collaboration with LuxSpace. Our study subject not only provides all the task description information available in the public case studies~\cite{Alesio:15}, but it also includes estimates for WCETs provided by the engineers. Further, our collaboration context enables us to discuss SAFE results with engineers to draw important qualitative conclusions and to assess the benefits of SAFE (see Section~\ref{subsec:res}). In addition, we note that the size of our study subject is larger than that of any other reported system. For example, the Herschel-Planck satellite system~\cite{Mikucionis:10} -- the largest system among the five reported systems~\cite{Alesio:15} -- contains 23 periodic and 9 aperiodic tasks; in contrast, our system consists of 15 periodic and 19 aperiodic tasks.
\subsection{Experimental Setup}
\label{subsec:setup}
To answer the RQs described in Section~\ref{subsec:rq}, we used the case study data provided by LuxSpace and considered all 34 tasks for analysis. We conducted two experiments, EXP1 and EXP2, as described below.
\noindent\textbf{EXP1.} To answer RQ1, EXP1 compares our distance-based WCET sampling technique (described in Section~\ref{subsec:phase2}) with the naive random WCET sampling technique, for the second phase of SAFE. To this end, EXP1 first creates an initial training dataset by running the first phase of SAFE. EXP1 then relies on this initial training data for model refinement (Section~\ref{subsec:phase2}) by using both distance-based and naive random sampling. For comparison, EXP1 creates a test dataset by randomly sampling WCET values, created independently of the second phase of SAFE, and then compares the accuracy of the two sampling approaches in identifying safe WCET ranges for the test dataset.
\noindent\textbf{EXP2.} To answer RQ2, EXP2 monitors precision values of SAFE, obtained from 10-fold cross-validation (see Section~\ref{subsec:phase2}), over each model refinement. In our study context, i.e., developing safety-critical systems, engineers require very high precision, ideally with no false positives (see Section~\ref{sec:approach}). Hence, EXP2 measures precision over model refinements to align with such practice. EXP2 then measures whether SAFE can compute safe WCET ranges within practical execution time and at an acceptable level of precision.
\subsection{Metrics}
\label{subsec:metrics}
We use the standard precision and recall metrics~\cite{Witten:11} to measure the accuracy in our experiments. To compute precision and recall in our context, for EXP1, we created a synthetic test dataset containing tuples of WCET values and a flag indicating the presence or absence of a deadline miss obtained from running SafeScheduler. Note that creating a test dataset by running an actual satellite system with varying WCETs of 34 tasks is prohibitively expensive. We therefore used a set of task arrival sequences obtained from the first phase of SAFE as we aim at testing sequences of task arrivals which are more likely to violate their deadlines. We then ran SafeScheduler to simulate task executions for the set of task arrival sequences with randomly sampled WCET values. We note that WCET values were sampled within the restricted WCET ranges after the ``handle imbalanced dataset'' step in Algorithm~\ref{alg:safewcet}. Parts of the WCET ranges under which tasks are unlikely to be schedulable are therefore not considered when sampling.
For EXP2, we used 10-fold cross-validation based on the training dataset at \hbox{each model refinement step (phase 2).}
We define the precision and recall metrics as follows: (1)~precision $P = \mathit{TP} / (\mathit{TP} + \mathit{FP})$ and (2)~recall $R = \mathit{TP} / (\mathit{TP} + \mathit{FN})$, where $\mathit{TP}$, $\mathit{FP}$, and $\mathit{FN}$ denote the number of true positives, false positives, and false negatives, respectively. A true positive is a test instance (a set of WCET values) labelled as safe and correctly classified as such. A false positive is a test instance labelled as unsafe but incorrectly classified as safe. A false negative is a test instance labelled as safe but incorrectly classified as unsafe. We prioritise precision over recall as practitioners require (ideally) no false positives, i.e., no unsafe instances with deadline misses incorrectly classified as safe, in the context of mission-critical, real-time satellite systems. For EXP1, precision and recall values are measured based on a synthetic test dataset. For EXP2, precision values are computed using collective sets of true positives and false positives obtained from 10-fold cross-validation at each model refinement.
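These metrics translate directly into code; in the sketch below, label 1 denotes safe (positive) and 0 denotes unsafe, matching the definitions above:
\begin{verbatim}
def precision_recall(y_true, y_pred):
    """Compute (precision, recall) for 0/1 labels, 1 = safe (positive)."""
    tp = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 1)
    fp = sum(1 for t, q in zip(y_true, y_pred) if t == 0 and q == 1)
    fn = sum(1 for t, q in zip(y_true, y_pred) if t == 1 and q == 0)
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))  # (2/3, 2/3)
\end{verbatim}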
Due to the randomness of SAFE, we repeat our experiments 50 times. To statistically compare our results, we use the non-parametric Mann-Whitney U-test~\cite{Mann:47}. We set the level of significance, $\alpha$, to 0.05.
\subsection{Implementation and Parameter Tuning}
\label{subsec:impl}
To implement the feature reduction step of Algorithm~\ref{alg:safewcet}, we used the random forest feature reduction~\cite{Breiman:01} as it has been successfully applied to high-dimensional data~\cite{Nguyen:15,Hideko:12}. For the stepwise regression step of Algorithm~\ref{alg:safewcet}, we used the stepwise AIC regression technique~\cite{Yamashita:07} which has been used in many applications~\cite{Zhang:16,May:16}. Recall from Section~\ref{subsec:phase2} that our distance-based sampling and best-size region recommendation require a numerical optimisation technique to find the nearest WCET sample and a maximum safe region size based on an inferred safe border. For such optimisations, we applied a standard numerical optimisation method, i.e., the Nelder-Mead method~\cite{Nelder:65}.
To compute the GA fitness, we set the number of SafeScheduler runs (Section~\ref{subsec:phase1}) for each solution ($A$ in Section~\ref{subsec:phase1}) to 20. This number was chosen based on our initial experiments. We observed that 20 runs of SafeScheduler per solution $A$ keeps execution time under a reasonable threshold, i.e., less than one minute, and is sufficient to compute the fitness of SAFE. SafeScheduler schedules the 34 tasks in our case study data for 1820s, during which SafeScheduler advances its simulation clock in 0.1ms steps, for adequate precision.
We chose the time period to ensure that all the 34 tasks can be executed at least once.
For the GA search parameters, we set the population size to 10, the crossover rate to 0.7, and the mutation rate to 0.2, which are consistent with existing guidelines~\cite{Haupt:88}. We ran GA for 1000 iterations after which we observed that fitness reached a plateau in our initial experiments.
Regarding the feature reduction step of Algorithm~\ref{alg:safewcet}, we set the random forest parameters as follows: (1)~the tree depth parameter is set to $\sqrt{|F|}$, where $|F|$ denotes the number of features, i.e., 26 WCET ranges in our case study data, based on guidelines~\cite{Hastie:09}. (2)~The number of trees is set to 100 based on our initial experiments. We observed that learning more than 100 trees does not provide additional gains in terms of reducing the number of features.
Note that all the parameters mentioned above can probably be further tuned to improve the performance of SAFE. However, since with our current setting, we were able to convincingly and clearly support our conclusions, we do not report further experiments on tuning those parameters.
We ran our experiments over the high-performance computing cluster~\cite{Varrette:14} at the University of Luxembourg. To account for randomness, we repeated each run of SAFE 50 times for all the experiments. Each run of SAFE was executed on a different node of the cluster. It took around 35h for us to create a synthetic test dataset with 50,000 instances. When we set 1000 GA iterations for the first phase of SAFE and 10,000 new WCET samples (100 refinements $\times$ 100 new WCET samples per refinement) for the second step of SAFE, each run of SAFE took about 19.1h -- phase 1: 10.6h and phase 2: 8.5h. The running time is acceptable as SAFE can be executed offline in practice.
\subsection{Experiment Results}
\label{subsec:res}
\noindent\textbf{Sanity check.} Recall from Section~\ref{sec:approach} that the first phase of SAFE uses GA. Based on existing guidelines~\cite{Arcuri:14,Harman:12}, a search-based solution should be compared with at least a naive random search (RS). Such a sanity check aims to ensure that a proposed search-based solution is not effective merely because the search problem is simple. To do so, we compared two versions of SAFE, which use either GA or RS. We compared fitness values, which SAFE aims to maximise (see Section~\ref{subsec:phase1}), obtained by GA and RS over 1000 iterations. Our results show that, on average, fitness values obtained by GA are always higher than those obtained by RS over 1000 iterations. Based on our statistical comparisons, the differences in fitness values between GA and RS become statistically significant ($p$-value $<$ 0.05) after 939 iterations. Hence, we conclude that SAFE with GA finds worst-case sequences of task arrivals (which maximise deadline misses) more effectively than SAFE with RS. Since this aspect of our experiments is not the main focus here, full results are available online~\cite{Artifacts}.
\begin{figure}[t]
\begin{center}
\subfloat[Precision]{
\includegraphics[width=0.4\columnwidth]{figures/rq2-precision.pdf}
\label{fig:rq2 precision}
}%
\quad\quad
\subfloat[Recall]{
\includegraphics[width=0.4\columnwidth]{figures/rq2-recall.pdf}
\label{fig:rq2 recall}
}%
\caption{Distributions of precision (a) and recall (b) over 100 model refinements when SAFE employs either our distance-based sampling (D) or random sampling (R). The boxplots (25\%-50\%-75\%) show precision (a) and recall (b) values obtained from 50 runs of SAFE with each algorithm. The lines represent average trends.}
\label{fig:rq2}
\end{center}
\end{figure}
\noindent\textbf{RQ1.} Figure~\ref{fig:rq2} depicts distributions of precision (Figure~\ref{fig:rq2 precision}) and recall (Figure~\ref{fig:rq2 recall}) obtained from EXP1. The boxplots in Figure~\ref{fig:rq2 precision} (resp. Figure~\ref{fig:rq2 recall}) show distributions (25\%-50\%-75\%) of precision (resp. recall) values obtained from 50 executions of SAFE with either distance-based sampling (D) or simple random sampling (R). The solid lines represent the average trends of precision and recall value changes over 100 regression model refinements.
As shown in Figure~\ref{fig:rq2 precision}, across 100 model refinements, SAFE with D achieves higher precision values than those obtained with R. Also, Figure~\ref{fig:rq2 precision} shows that the variance of precision with D tends to be smaller than that with R. On average, D's precision converges toward 1 with model refinements; in contrast, precision with R shows a markedly different trend without convergence to 1, an important property in our context. Based on our statistical comparisons, the difference in precision values between D and R becomes statistically significant after only 2 model refinements.
Regarding recall comparisons between D and R, as shown in Figure~\ref{fig:rq2 recall}, D produces higher recall values over 100 model refinements than those of R. The difference in recall values between D and R becomes statistically significant after only 3 model refinements. For 100 model refinements, SAFE took, on average, 8.5h and 5.7h with D and R, respectively.
\begin{mdframed}[style=RQFrame]
\emph{The answer to {\bf RQ1} is that} SAFE with distance-based sampling significantly outperforms SAFE with random sampling in achieving higher precision and recall. Only distance-based sampling can achieve a precision close to 1 within practical time, an important requirement in our context.
\end{mdframed}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/kfold.pdf}
\caption{Precision values computed from 10-fold cross validation at each model refinement. The boxplots (25\%-50\%-75\%) show precision values obtained from 50 runs of SAFE. The line represents an average trend.}
\label{fig:kfold}
\end{center}
\end{figure}
\noindent\textbf{RQ2.} Figure~\ref{fig:kfold} shows precision values obtained from 10-fold cross-validation at each model refinement. Recall from Section~\ref{subsec:phase2} that SAFE stops model refinements once a precision value reaches a desired value. As shown in Figure~\ref{fig:kfold}, precision values tend to increase with additional WCET samples. Hence, practitioners are able to stop the model refinement procedure once precision reaches an acceptable level, e.g., $>$0.999. After 100 model refinements, SAFE reaches, on average, a precision of 0.99986. For EXP2, SAFE took, on average, 10.6h for phase 1 and 8.5h for phase 2.
As described in Section~\ref{subsec:phase2}, SAFE reduces the dimensionality of the WCET space through a feature reduction technique based on random forest. The computed importance scores of each task's WCET in our dataset are as follows: 0.773 for T30, 0.093 for T33, 0.016 for T23, and $\leq$0.005 for the remaining 31 tasks. Based on a standard feature selection guideline~\cite{Hastie:09}, only the WCET values of two tasks, i.e., T30 and T33, are deemed to be important enough to retain as their score is higher than the average importance, i.e., 0.0385. Hence, SAFE computes safe WCET ranges of these two tasks in the next steps described in Algorithm~\ref{alg:safewcet}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/bestsize.pdf}
\caption{An inferred safe border and best-size WCET regions for tasks T30 and T33. The safe border determines WCET ranges within which tasks are likely to be schedulable with a deadline miss probability of 1.97\%.}
\label{fig:bestsize}
\end{center}
\end{figure}
Figure~\ref{fig:bestsize} shows the inferred safe border which identifies safe WCET ranges within which all 34 tasks are schedulable with an estimated deadline miss probability of 1.97\%.
Given the safe border, we found a best-size point which restricts the WCET ranges of T30 and T33 as follows: T30 [0.1ms, 458.0ms] and T33 [0.1ms, 2138.1ms]. We note that the initial estimated WCET ranges of the two tasks are as follows: T30 [0.1ms, 900.0ms] and T33 [0.1ms, 20000.0ms]. SAFE therefore resulted in safe WCET ranges representing a significant decrease of 49.11\% and 89.31\% of initial maximum WCET estimates, respectively. This information is therefore highly important and can be used to guide design and development.
\begin{mdframed}[style=RQFrame]
\emph{The answer to {\bf RQ2} is that} SAFE helps compute safe WCET ranges that have a much lower maximum than practitioners' initial WCET estimates. Our case study showed that SAFE determined safe maximum WCET values that were only 51\% or less of the original estimates. Further, these safe WCET ranges have a deadline miss probability of 1.97\% based on the inferred logistic regression model. More restricted ranges can be selected to reduce this probability. SAFE took, on average, 19.1h to compute such safe WCET regions, which is acceptable for \hbox{offline analysis in practice.}
\end{mdframed}
\noindent\textbf{Benefits from a practitioner's perspective.} Investigating practitioners' perceptions of the benefits of SAFE is necessary to adopt SAFE in practice. To do so, we draw on the qualitative reflections of three software engineers at LuxSpace, with whom we have been collaborating on this research. The reflections are based on the observations that the engineers made throughout their interactions with the researchers.
SAFE produces a set of worst-case sequences of task arrivals (see Section~\ref{subsec:phase1}). Engineers deemed them to be useful for further examinations by experts. The current practice is to use an analytical schedulability test~\cite{Liu:73} which proves whether or not a set of tasks are schedulable. Such an analytical technique typically does not provide additional information regarding possible deadline misses. In contrast, worst-case task arrivals and safe WCET ranges produced by SAFE offer insights to engineers regarding deadline miss scenarios and the conditions under which they happen.
Engineers noted that some tasks' WCETs are inherently uncertain and that such uncertainty is hard to estimate based on expertise. Hence, their initial WCET estimates were very rough and conservative. Further, estimating which WCET sub-ranges are safe is even more difficult. Since SAFE estimates safe WCET ranges systematically with a probabilistic guarantee, the engineers deem SAFE to improve over existing practice. Also, SAFE allows engineers to choose system-specific safe WCET ranges from the (infinite) WCET ranges modeled by the safe border, rather than simply selecting the best-size WCET range automatically suggested by SAFE (Figure~\ref{fig:bestsize}). This flexibility allows engineers to perform domain-specific trade-off analysis among possible WCET ranges and is useful in practice to support decision making with respect to their task design.
Given the fact that we have not yet undertaken rigorous user studies, the benefits highlighted above are only suggestive but not conclusive. We believe the positive feedback obtained from LuxSpace and our industrial case study shows that SAFE is promising and worthy of further empirical research with human subjects.
\subsection{Threats to Validity}
\label{subsec:threats}
\textbf{Internal validity.} To ensure that our promising results cannot be attributed to the problem merely being simple, we compared SAFE with an alternative baseline using random search under identical parameter settings (see the sanity check results in Section~\ref{subsec:res}). Phase 1 of SAFE can indeed be replaced with a random search, as we did, or even an exhaustive technique if the targeted system is small. However, there are no alternatives for Phase 2 -- our main contribution -- which infers safe WCET ranges and enables trade-off analysis. We present all the underlying parameters and provide our full evaluation package~\cite{Artifacts} to facilitate reproducibility. We mitigate potential biases and errors in our experiments by drawing on an industrial case study in collaboration with engineers at LuxSpace.
\textbf{External validity.} The main threat to external validity is that our results may not generalize to other contexts. We evaluated SAFE using early-stage WCET ranges estimated by practitioners at LuxSpace. However, SAFE can be applied at later development stages as well (1)~to test the schedulability of the underlying set of tasks of a system and (2)~to develop tasks under more precise constraints regarding safe WCETs. Future case studies covering the entire development process remain necessary for a more conclusive evaluation of SAFE. In addition, while motivated by ADCS (see Section~\ref{subsec:casestudy}) in the satellite domain, SAFE is designed to be generally applicable to other contexts. However, the general usefulness of SAFE needs to be assessed in other contexts and domains.
\section{Conclusion}
\label{sec:conclusion}
We developed SAFE, a two-phase approach applicable in early design stages, to precisely estimate safe WCET ranges within which real-time tasks are likely to meet their deadlines with a high level of confidence. SAFE uses a meta-heuristic search algorithm to generate worst-case sequences of task arrivals that maximise the magnitude of deadline misses, when they are possible. Based on the search results, SAFE uses a logistic regression model to infer safe WCET ranges within which tasks are highly likely to meet their deadlines, given a selected probability. SAFE is developed to be scalable by using a combination of techniques such as a genetic algorithm and simulation for the SAFE search (phase 1) and feature reduction, an effective sampling strategy, and polynomial logistic regression for the SAFE model refinement (phase 2). We evaluated SAFE on a mission-critical, real-time satellite system. The results indicate that SAFE is able to precisely compute safe WCET ranges for which deadline misses are highly unlikely, these ranges being much smaller than the WCET ranges initially estimated by engineers.
For future work, we plan to extend SAFE in the following directions: (1)~developing a real-time task modelling language to describe dependencies, constraints, behaviours of real-time tasks and to facilitate schedulability analysis and (2)~building a decision support system to recommend a schedulable solution if a set of tasks are not schedulable, e.g., priority re-assignments. In the long term, we would like to more conclusively validate the usefulness of SAFE by applying it to other case studies in different domains.
\section{Introduction}
\label{sec:intro}
Safety-critical systems, e.g., those used in the aerospace, automotive and healthcare domains, often consist of many software tasks that run in parallel. The correctness of safety-critical systems depends not only on the system outputs but also on the time the system takes to generate its outputs. For instance, the Anti-lock Braking System (ABS) of a vehicle has to activate within milliseconds after the driver brakes, as failures to do so may result in a vehicle skid due to the wheels locking up. Systems that have to perform their operations in a timely manner are known as real-time systems (RTS)~\cite{Burns:09}. In order to ensure safe and correct operation of RTS, the executions of their concurrent software tasks are expected to satisfy a number of real-time constraints. RTS typically execute within a real-time operating system (RTOS)~\cite{Wang:17} where a scheduler is used to coordinate the execution of parallel tasks based on a standard scheduling policy~\cite{Liu:00}. To ensure that an RTOS can handle the arrival, execution, preemption and completion of RTS tasks in a safe and timely manner, we need to perform \emph{schedulability analysis} at early design stages. The goal of schedulability analysis is to determine if a given set of RTS tasks are \emph{schedulable}, i.e., their executions always complete before their specified deadlines~\cite{Liu:00}.
The inputs to schedulability analysis are a set of task parameters, in particular, task priorities, deadlines, inter-arrival times and worst-case execution times (WCET). Some of these parameters can be specified or estimated with a high degree of precision at early development stages even when tasks are not yet fully implemented. For example, task priorities are typically determined by the selected scheduling policy, e.g., rate monotonic~\cite{Liu:73}, or based on the task criticality levels (i.e., more critical tasks are prioritised over the less critical ones). Task deadlines are typically decided by system requirements. Task inter-arrival times, i.e., the time interval between consecutive task executions, usually depend on system environment events triggering task executions. However, among task parameters, tasks' WCET values are typically difficult to accurately estimate at early development stages. For some tasks, WCET values may depend on various factors such as implementation decisions, task workloads, RTOS properties and hardware components. These factors may not be fully known at early stages of development, making it difficult to precisely estimate WCET values for some tasks~\cite{Gustafsson:09,Altenbernd:16,Bonenfant:17}. As a result, engineers tend to provide ranges for WCET values instead of point estimates.
Schedulability analysis is, in general, a hard problem because the space of all possible task schedules, i.e., all possible ways in which tasks can be executed according to an underlying scheduling policy, is very large. The problem becomes computationally more expensive when WCET values are uncertain and are specified as value ranges instead of single values. Specifically, provided with WCET value ranges, engineers need ways to determine for which WCET values within the given ranges the system is likely to miss or satisfy its deadline constraints. Such results greatly support engineers during development as they provide targets driving design and implementation decisions. If engineers know that deadline constraints are likely met for all or most of the expected WCET range, they can consider a wider range of design and implementation options, e.g., using a relational database instead of an in-memory data storage. Otherwise, in situations where only tight WCET sub-ranges seem acceptable, developers may have to consider more expensive hardware, decreased functionality or performance, or more restricted design and implementation choices.
The problem of schedulability analysis of real-time tasks has been extensively studied in the past. Using real-time schedulability theory~\cite{Liu:73}, engineers are able to determine if tasks are schedulable when exact WCET values are provided~\cite{Sha:90,Liu:00}. In addition to requiring exact WCET values, real-time schedulability theory often relies on implicit assumptions which may not hold in practice, e.g., treating aperiodic tasks with irregular arrival times as periodic tasks with regular arrival times~\cite{Sprunt:89}. As a result, approaches based on schedulability theory may be inaccurate when their underlying assumptions do not hold. In contrast to real-time schedulability theory, some model-based approaches~\cite{Alur:90,Alesio:12,Kwiatkowska:11} try to solve the schedulability problem exhaustively by applying a model checker to a real-time model of the system under analysis. Such approaches tend to suffer from the state-space explosion problem~\cite{Clarke:12} as the number of software tasks and their different states increases. More recently, stress testing and simulation-based approaches~\cite{Briand:05,Alesio:13} have been proposed to stress RTS and generate test scenarios where their deadline constraints are violated. Such approaches cast the schedulability test problem as an optimisation problem to find worst-case task execution scenarios exhibiting deadline misses. However, none of the existing simulation-based approaches account for uncertainties in WCET values and therefore \hbox{do not handle WCET value ranges.}
In this paper, we propose a \underline{S}afe WCET \underline{A}nalysis method \underline{F}or real-time task sch\underline{E}dulability (SAFE) (1)~to test schedulability of a set of tasks while accounting for uncertain WCET values, i.e., ranges, and (2)~to estimate WCET ranges under which tasks are likely to be schedulable. Our approach is based on a stress testing approach~\cite{Briand:05} using meta-heuristic search~\cite{Luke:13} in combination with polynomial logistic regression models. Specifically, we use a genetic algorithm~\cite{Luke:13} to search for sequences of task arrivals which likely lead to deadline misses. Afterwards, logistic regression~\cite{David:86}, a statistical classification technique, is applied to infer a \emph{safe WCET border} in the multidimensional WCET space. This border aims to partition the given WCET ranges into \emph{safe} and \emph{unsafe} sub-ranges for a selected deadline miss probability, and thus enables engineers to investigate trade-offs among different tasks' WCET values.
We evaluated our approach by applying it to a complex, industrial satellite system developed by our industry partner, LuxSpace. Results show that our approach can efficiently and accurately compute safe WCET ranges.
To our knowledge, SAFE is the first attempt to estimate safe WCET ranges within which real-time tasks are likely to meet their deadlines for a given level of confidence, while enabling engineers to explore trade-offs among tasks' WCET values. Our full evaluation package is available online~\cite{Artifacts}.
The remainder of this paper is structured as follows: Section~\ref{sec:motivation} motivates our work. Section~\ref{sec:problem} defines our specific schedulability analysis problem in practical terms. Section~\ref{sec:approach} describes SAFE. Section~\ref{sec:eval} evaluates SAFE. Sections~\ref{sec:relatedwork} compares SAFE with related work. \hbox{Section~\ref{sec:conclusion} concludes this paper.}
\section{Motivating case study}
\label{sec:motivation}
We motivate our work with a mission-critical real-time satellite system, named Attitude Determination and Control System (ADCS), which LuxSpace, a leading system integrator for microsatellites and aerospace systems, has been developing over the years. ADCS determines the satellite's attitude and controls its movements~\cite{Eickhoff:11}. ADCS controls a satellite in either autonomous or passive mode. In the autonomous mode, ADCS must orient a satellite in proper position on time to ensure that the satellite provides normal service correctly. In the passive mode, operators are able to not only control satellite positions but also maintain the satellite, e.g., upgrading software. Such a maintenance operation does not necessarily need to be completed within a fixed hard deadline; instead, it should be completed within a reasonable amount of time, i.e., soft deadlines. Hence, ADCS is composed of a set of tasks having real-time constraints with hard and soft deadlines.
Engineers at LuxSpace conduct real-time schedulability analysis across different development stages for ADCS. At an early development stage, practitioners use a theoretical schedulability analysis technique~\cite{Liu:73} which determines that a set of tasks is schedulable if the CPU utilisation of the task set is less than a threshold, e.g., 69\%. As mentioned earlier, at an early development stage, practitioners estimate task WCETs as ranges and often assign large values to the upper limits of WCET ranges. To be on the safe side, practitioners tend to estimate large WCET values to avoid overly optimistic results, thus aiming at conservatively schedulable tasks.
Engineers, however, are still faced with the following issues: (1)~An analytical schedulability analysis technique, e.g., utilisation-based schedulability analysis~\cite{Liu:73}, typically indicates only whether or not tasks are schedulable. However, practitioners need additional information to understand how tasks miss their deadlines. For instance, a set of tasks may not be schedulable only for a few specific sequences of task arrivals. (2)~Practitioners estimate WCETs without any systematic support; instead, they often rely on their experience developing tasks with similar functionality. This practice typically results in imprecise estimates of WCET ranges, which may cause serious problems, e.g., significantly changing tasks at later development stages. For these reasons, LuxSpace is interested in SAFE as a way to address these issues in analysing schedulability.
\section{Problem description}
\label{sec:problem}
This section first formalises task, scheduler and schedulability concepts. We then describe the problem of identifying safe WCET ranges under which tasks likely meet their deadline constraints, at a certain level of confidence.
\textbf{Task.} We denote by $j$ a real-time task that should complete its execution within a specified deadline after it is activated (or arrived). Every real-time task $j$ has the following properties: priority denoted by $\mathit{pr}(j)$, deadline denoted by $\mathit{dl}(j)$, and worst-case execution time (WCET) denoted by $\mathit{wcet}(j)$. Task priority $\mathit{pr}$ determines if an execution of a task is preempted by another task. Typically, a task $j$ preempts the execution of a task $j^\prime$ if the priority of $j$ is higher than the priority of $j^\prime$, i.e., $\mathit{pr}(j) > \mathit{pr}(j^\prime)$.
The $\mathit{dl}(j)$ function determines the deadline of a task $j$ relative to its arrival time. A task deadline can be either \emph{hard} or \emph{soft}. A hard deadline of a task $j$ requires that $j$ \emph{must} complete its execution within a deadline $\mathit{dl}(j)$ after $j$ is activated. While violations of hard deadlines are not acceptable, depending on the operating context of a system, violating soft deadlines may be tolerated to some extent. Note that, for notational simplicity, we do not introduce new notations to distinguish between hard and soft deadlines. In this paper, we refer to a hard deadline as a deadline. Section~\ref{sec:approach} further discusses how our approach manages hard and soft deadlines.
The $\mathit{wcet}(j)$ function denotes a range of WCET values of a task $j$. We denote by $\mathit{wmin}(j)$ and $\mathit{wmax}(j)$, respectively, the minimum and the maximum WCET values of $j$. As discussed in the introduction, at an early development stage, it is difficult to provide exact WCET values of real-time tasks. Hence, we assume that engineers specify WCETs using a range of values, instead of single values, by indicating estimated minimum and maximum values that they think each task's WCET can realistically take.
Real-time tasks are either \emph{periodic} or \emph{aperiodic}. Periodic tasks, which are typically triggered by timed events, are invoked at regular intervals specified by their \emph{period}. We denote by $\mathit{pd}(j)$ the period of a periodic task $j$, i.e., a fixed time interval between subsequent activations (or arrivals) of $j$. Any task that is not periodic is called aperiodic. Aperiodic tasks have irregular arrival times and are activated by external stimuli which occur irregularly, and hence, in general, there is no limit on the arrival times of an aperiodic task. However, in real-time analysis, we typically specify a minimum inter-arrival time denoted by $\mathit{pmin}(j)$ and a maximum inter-arrival time denoted by $\mathit{pmax}(j)$ indicating the minimum and maximum time intervals between two consecutive arrivals of an aperiodic task $j$. In real-time analysis, sporadic tasks are often separately defined as having irregular arrival intervals and hard deadlines~\cite{Liu:00}. In our conceptual definitions, however, we do not introduce new notations for sporadic tasks because the deadline and period concepts defined above sufficiently characterise sporadic tasks. Note that for periodic tasks $j$, we have $\mathit{pmin}(j) = \mathit{pmax}(j) = \mathit{pd}(j)$. Otherwise, for aperiodic tasks $j$, we have $\mathit{pmax}(j) > \mathit{pmin}(j)$.
\textbf{Scheduler.} Let $J$ be a set of tasks to be scheduled by a real-time scheduler. A scheduler then dynamically schedules executions of tasks in $J$ according to the tasks' arrivals and the scheduler's scheduling policy over the scheduling period $\mathbb{T} = [0,\mathbf{T}]$. We denote by $\mathit{at}_k(j)$ the $k$th arrival time of a task $j \in J$. The first arrival of a periodic task $j$ does not always occur immediately at the system start time $0$. Such offset time from the system start time $0$ to the first arrival time $\mathit{at}_1(j)$ of $j$ is denoted by $\mathit{offset}(j)$.
For a periodic task $j$, the $k$th arrival of $j$ within $\mathbb{T}$ is $\mathit{at}_k(j) \leq \mathbf{T}$ and is computed by $\mathit{at}_k(j) = \mathit{offset}(j) + (k-1) \cdot \mathit{pd}(j)$. For an aperiodic task $j^\prime$, $\mathit{at}_k(j^\prime)$ is determined based on the $k{-}1$th arrival time of $j^\prime$ and its minimum and maximum arrival times. Specifically, for $k > 1$, $\mathit{at}_k(j^\prime) \in [\mathit{at}_{k-1}(j^\prime)+\mathit{pmin}(j^\prime), \mathit{at}_{k-1}(j^\prime)+\mathit{pmax}(j^\prime)]$ and, for $k = 1$, $\mathit{at}_1(j^\prime) \in [\mathit{pmin}(j^\prime), \mathit{pmax}(j^\prime)]$ where $\mathit{at}_k(j^\prime) < \mathbf{T}$.
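The arrival-time formulas above can be transcribed as follows; drawing aperiodic inter-arrival times uniformly from $[\mathit{pmin}, \mathit{pmax}]$ is an assumption made only for illustration:
\begin{verbatim}
import random

def periodic_arrivals(offset, pd, T):
    """at_k = offset + (k-1)*pd for all arrivals within [0, T]."""
    arrivals, k = [], 1
    while offset + (k - 1) * pd <= T:
        arrivals.append(offset + (k - 1) * pd)
        k += 1
    return arrivals

def aperiodic_arrivals(pmin, pmax, T, rng=None):
    """Each inter-arrival gap is drawn from [pmin, pmax] (here uniformly)."""
    rng = rng or random.Random(0)
    arrivals, t = [], rng.uniform(pmin, pmax)   # first arrival
    while t < T:
        arrivals.append(t)
        t += rng.uniform(pmin, pmax)
    return arrivals

print(periodic_arrivals(offset=2, pd=8, T=23))  # [2, 10, 18]
print(aperiodic_arrivals(pmin=5, pmax=10, T=23))
\end{verbatim}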
A scheduler reacts to a task arrival at $\mathit{at}_k(j)$ to schedule the execution of $j$. Depending on the scheduling policy (e.g., rate monotonic~\cite{Liu:73}), an arrived task $j$ may not start its execution at the same time as it arrives when a higher priority task is executing. Also, task executions may be interrupted due to preemption. We denote by $\mathit{et}_k(j)$ the end execution time for the $k$th arrival of a task $j$. Depending on the actual worst-case execution time of a task $j$, denoted by $w(j)$, within its WCET range $[\mathit{wmin}(j), \mathit{wmax}(j)]$, the end execution time $\mathit{et}_k(j)$ of $j$ satisfies the following: $\mathit{et}_k(j) \ge \mathit{at}_k(j) + w(j)$.
During the system operation, a scheduler generates a \emph{schedule scenario} which describes a sequence of task arrivals and their end time values. We define a schedule scenario as a set $S$ of tuples $(j, \mathit{at}_k(j), \mathit{et}_k(j))$ indicating that a task $j$ has arrived at $\mathit{at}_k(j)$ and completed its execution at $\mathit{et}_k(j)$. Due to the randomness of task execution times and aperiodic task arrivals, a scheduler may generate a different schedule scenario in different runs of a system.
\begin{figure}[t]
\begin{center}
\subfloat[Deadline miss]
{\parbox{1\columnwidth}{\centering
\includegraphics[width=0.9\columnwidth]{figures/scheduling1}\label{fig:dm}
}}%
\subfloat[No deadline miss]
{\parbox{1\columnwidth}{\centering
\includegraphics[width=0.9\columnwidth]{figures/scheduling2}\label{fig:nodm}
}}%
\caption{Example schedule scenarios of three tasks: $j_1$, $j_2$, and $j_3$. (a)~$j_3$ is not schedulable, i.e., $\mathit{et}_2(j_3) > \mathit{at}_2(j_3) + \mathit{dl}(j_3)$. (b)~All three tasks are schedulable. When the WCET of $j_2$ is 3 time units, it causes a deadline miss of $j_3$; when its WCET is reduced to 2, the three tasks are schedulable even for the same sequence of task arrivals.}
\label{fig:scheduling}
\end{center}
\end{figure}
Figure~\ref{fig:scheduling} shows two schedule scenarios produced by a scheduler over the $[0,23]$ time period of a system run. Both Figure~\ref{fig:dm} and Figure~\ref{fig:nodm} describe executions of three tasks, $j_1$, $j_2$, and $j_3$, arriving at the same time stamps (see $at_i$ in the figures). In both scenarios, the aperiodic task $j_1$ is characterised by: $\mathit{pmin}(j_1) = 5$, $\mathit{pmax}(j_1) = 10$, $\mathit{dl}(j_1) = 4$, and $\mathit{wmin}(j_1) = \mathit{wmax}(j_1) = 2$. The periodic task $j_2$ is characterised by: $\mathit{pd}(j_2) = 8$ and $\mathit{dl}(j_2) = 6$. The aperiodic task $j_3$ is characterised by: $\mathit{pmin}(j_3) = 3$, $\mathit{pmax}(j_3) = 20$, $\mathit{dl}(j_3) = 3$, and $\mathit{wmin}(j_3) = \mathit{wmax}(j_3) = 1$. The priorities of the three tasks satisfy the following: $pr(j_1) > pr(j_2) > pr(j_3)$. In both scenarios, task executions can be preempted depending on their priorities. We note that the WCET range of the $j_2$ task is set to $\mathit{wmin}(j_2) = 1$ and $\mathit{wmax}(j_2) = 3$ in Figure~\ref{fig:dm}, and $\mathit{wmin}(j_2) = 1$ and $\mathit{wmax}(j_2) = 2$ in Figure~\ref{fig:nodm}. Then, Figure~\ref{fig:dm} can be described by the schedule scenario $S_a = \{(j_1, 5, 7)$, $\ldots$, $(j_2, 0, 3)$, $\ldots$, $(j_3, 9, 14)$, $(j_3, 14, 15)\}$; and Figure~\ref{fig:nodm} by $S_b = \{(j_1, 5, 7)$, $\ldots$, $(j_2, 0, 2)$, $\ldots$, $(j_3, 9, 11)$, $(j_3, 14, 15)\}$.
\textbf{Schedulability.} Given a schedule scenario $S$, a task $j$ is \emph{schedulable} if $j$ completes its execution before its deadline, i.e., for all $\mathit{et}_k(j)$ observed in $S$, $\mathit{et}_k(j) \le \mathit{at}_k(j) + \mathit{dl}(j)$. Let $J$ be a set of tasks to be scheduled by a scheduler. A set $J$ of tasks is then schedulable if for every schedule $S$ of $J$, we have no task $j \in J$ that misses its deadline.
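This check is straightforward to transcribe; the scenario below encodes a fragment of $S_a$ from Figure~\ref{fig:dm}:
\begin{verbatim}
def schedulable(scenario, dl):
    """scenario: tuples (task, arrival, end); dl: relative deadlines."""
    return all(et <= at + dl[j] for (j, at, et) in scenario)

S_a = [("j1", 5, 7), ("j2", 0, 3), ("j3", 9, 14), ("j3", 14, 15)]
dl = {"j1": 4, "j2": 6, "j3": 3}
print(schedulable(S_a, dl))   # False: j3 arriving at 9 ends at 14 > 12
\end{verbatim}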
As shown in Figure~\ref{fig:dm}, a deadline miss occurs after the second arrival of $j_3$, i.e., $\mathit{et}_2(j_3) > \mathit{at}_2(j_3) + \mathit{dl}(j_3)$. During $[\mathit{at}_2(j_3), \mathit{at}_2(j_3) + \mathit{dl}(j_3)]$ period, the $j_3$ task cannot execute because the other $j_1$ and $j_2$ tasks with higher priorities are executing. Thus, $j_3$ is not schedulable in the schedule scenario $S_a$ of Figure~\ref{fig:dm}. This scheduling problem can be solved by restricting tasks' WCET ranges as discussed below.
\textbf{Problem.} Uncertainty in task WCET values at an early development stage is a critical issue preventing the effective design and assessment of mission-critical real-time systems. Upper bounds of WCETs correspond to worst-case WCET values and have a direct impact on deadline misses as larger WCET values increase their probability. Lower bounds of WCETs are estimates of tasks' best-case WCET values, below which task implementations are likely not feasible. Our approach aims to determine the maximum upper bounds for WCET under which tasks are likely to be schedulable, at a given level of risk, and thus provides an objective to engineers implementing the tasks. Specifically, for every task $j \in J$ to be analysed, our approach computes a new upper bound value for the WCET range of $j$ (denoted by $\mathit{newwmax}(j)$) such that $\mathit{newwmax}(j) \leq \mathit{wmax}(j)$ and by restricting the WCET range of $j$ to $\mathit{newwmax}(j)$ we should, at a certain level of confidence, no longer have deadline misses. That is, tasks $J$ become schedulable, with a certain probability, after restricting the maximum WCET value of $j$ to $\mathit{newwmax}(j)$. For instance, as shown in Figure~\ref{fig:nodm}, restricting the maximum WCET of $j_2$ from $\mathit{wmax}(j_2) = 3$ to $\mathit{newwmax}(j_2) = 2$ enables all the three tasks to be schedulable.
We note that, in our context, both arrival time ranges for aperiodic tasks and WCET ranges for all tasks are represented as continuous intervals. Since our approach works based on sampling values from these continuous ranges, our approach cannot be exhaustive and cannot provide a guarantee that the tasks can always be schedulable after restricting their WCET ranges. Our approach instead relies on sampling values within the WCET and arrival time ranges, simulating the scheduler behaviour using the sampled values and observing whether, or not, a deadline miss occurs. In lieu of exhaustiveness, we rely on statistical and machine learning techniques to provide probabilistic estimates indicating how confident we are that a given set of tasks are schedulable.
\section{Related Work}
\label{sec:relatedwork}
This section discusses and compares SAFE with related work in the areas of schedulability analysis, as well as testing and verification of real-time systems.
\noindent\textbf{Schedulability analysis} has been widely studied for real-time systems~\cite{Bini:08,Xian:07,Axelsson:05,Muhuri:09,Bruggen:18,Maxim:13,Manolache:04,Hansen:09,Bernat:02,Hardy:16,Gustafsson:09,Altenbernd:16,Bonenfant:17}. Among them, the most related research strands study uncertain execution times~\cite{Bini:08,Xian:07,Axelsson:05,Muhuri:09}, probability of deadline misses~\cite{Bruggen:18,Maxim:13,Manolache:04}, and WCET estimations~\cite{Hansen:09,Bernat:02,Hardy:16,Gustafsson:09,Altenbernd:16,Bonenfant:17} in the context of real-time task analysis.
Bini et al.~\cite{Bini:08} propose a theoretical sensitivity analysis method for real-time systems accounting for a set of periodic tasks and their uncertain execution times. Br{\"u}ggen et al.~\cite{Bruggen:18} present an analytical method to analyse a deadline miss probability of real-time tasks using probability density functions of approximated task execution times. In contrast to SAFE, most of these analytical approaches do not directly account for aperiodic tasks having variable arrival intervals; instead, they treat aperiodic tasks as periodic tasks using their minimum inter-arrival times as periods~\cite{Davis:11}. However, SAFE takes various task parameters, including irregular arrival times, into account without any unwarranted assumption. Also, our simulation-based approach enables engineers to explore different scheduling policies provided by real RTOS; however, these analytical methods are typically only valid for \hbox{a specific conceptual scheduling policy model.}
\begin{sloppypar}
Hansen et al.~\cite{Hansen:09} present a measurement-based approach to estimate WCET and a probability of estimation failure. The measurement-based WCET estimation technique collects actual execution time samples and estimates WCETs using linear regression and a proposed analytical model. To our knowledge, most of the research strands regarding WCET estimation are developed for later development stages at which task implementations are available. Note that relatively few prior works aim at estimating WCET at an early design stage; however, these work strands still require access to source code, hardware, compilers, and program behaviour specifications~\cite{Gustafsson:09,Altenbernd:16,Bonenfant:17}. In contrast, SAFE uses as input estimated WCET ranges and then precisely restricts the WCET ranges within which tasks are schedulable with a selected deadline miss probability, by relying on a tailored genetic algorithm, simulation, feature reduction, a dedicated sampling strategy, and logistic regression.
\end{sloppypar}
\noindent\textbf{Testing and verification} are important to successfully develop safety-critical real-time systems~\cite{Mikucionis:04,Zander:08,Alesio:18,Alesio:12,Kwiatkowska:11,Briand:05}. Some prior studies employ model-based testing to generate and execute tests for real-time systems~\cite{Mikucionis:04,Zander:08,Alesio:18}. SAFE complements these prior studies by providing safe WCETs as objectives to engineers implementing and testing real-time tasks. Constraint programming and model checking have been applied to ensure that a system satisfies its time constraints~\cite{Alesio:12,Kwiatkowska:11}. These techniques may be useful to conclusively verify whether or not a WCET value is safe. However, such exhaustive techniques are not amenable to address the analysis problem addressed in this paper, which requires the inference of safe WCET ranges. To our knowledge, SAFE is the first attempt to accurately estimate safe WCET ranges to prevent deadline misses with a given level of confidence and offer ways to achieve different trade-offs among tasks' WCET values.
\section{Introduction}
The Sobolev classes $W^{1,p}(\Omega)$, with $\Omega$ a Euclidean domain, are
associated with (potentially degenerate)
elliptic partial differential equations related to a strongly local Dirichlet form.
In considering Dirichlet boundary value problems on a Euclidean domain in $\mathbb{R}^n$ related to elliptic differential operators, solutions are
known to exist if the prescribed boundary datum, namely a function that is given on the boundary of the domain,
arises as the trace of a Sobolev function that is defined on the domain. The works of Jonsson and Wallin~\cite{JW, JW2} identify
certain Besov spaces of functions on a compact $d$-set as traces on that set of Sobolev functions on $\mathbb{R}^n$. Here, a
set $F$ is a $d$-set if it is Ahlfors $d$-regular, namely, $\mathcal{H}^d(B(x,r)\cap F)\simeq r^d$ whenever $x\in F$ and
$r>0$ is no larger than the diameter of that set. If $\Omega$ is a bounded Lipschitz domain in $\mathbb{R}^n$, then its boundary
is an $(n-1)$-set. It was shown in~\cite[Page~228]{JW} that the trace of $W^{1,p}(\mathbb{R}^n)$ in a $d$-set $E\subset \mathbb{R}^n$ is
the Besov space $B^{1-(n-d)/p}_{p,p}(E)$. Thus for each $p>1$ a specific value of $\theta=1-\tfrac{n-d}{p}$ was chosen
amongst all possible values of $0<\theta<1$ for which the Besov space $B^\theta_{p,p}(E)$ is identified as the trace space of the Sobolev
space $W^{1,p}(\mathbb{R}^n)$.
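For the reader's orientation we recall that, up to comparable norms, the Besov space of~\cite{JW} on a $d$-set $E$ can be described via
\[
\|u\|_{B^{\theta}_{p,p}(E)}^p
 \simeq \int_E |u|^p\, d\mathcal{H}^d
 + \int_E\int_{\{y\in E\,:\,|x-y|<1\}} \frac{|u(x)-u(y)|^p}{|x-y|^{d+\theta p}}\, d\mathcal{H}^d(y)\, d\mathcal{H}^d(x);
\]
the precise normalisation plays no role in the discussion below.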
On the other hand,~\cite[Theorem~1.1]{B2S} tells us that each compact doubling metric measure space $Z$ is the boundary
of a uniform space (a locally compact non-complete metric space that is a uniform domain in its completion) $X_\varepsilon$ and
that for each choice of $p>1$ and $0<\theta<1$ there is a choice of a doubling measure $\mu_\beta$,
$\beta=\beta(\theta)$, on the uniform space such that $B^\theta_{p,p}(Z)$ is the trace space of the
Newton-Sobolev class $N^{1,p}(X_\varepsilon,\mu_\beta)$. The measures $\mu_\beta$ are obtained as weighted measures,
that is, $d\mu_\beta(x)=\omega_\beta(x) \, d\mu(x)$ for some underlying measure $\mu$ on $X_\varepsilon$. With this perspective
in mind, it is then natural to ask whether, given $p>1$ and $0<\theta<1$, there is a weighted measure $\mu_\alpha$
on $\mathbb{R}^n$, with $\alpha$ depending perhaps on $\theta$ and $p$, such that $B^\theta_{p,p}(E,\nu)$ is the trace space
of the weighted Sobolev class $W^{1,p}(\mathbb{R}^n, \mu_\alpha)$. Here $\nu$ is the Hausdorff measure on $E$
of dimension $\text{dim}_H(E)$.
There is evidence that this is plausible; see for example~\cite{Maly}, where $E$ was taken to be the boundary
of a uniform domain.
The paper~\cite[Theorem~1.5]{Barton} identified each $B^\theta_{p,p}(\partial \Omega)$, $1<p<\infty$, $0<\theta<1$, with certain
weighted Sobolev classes of functions on the Lipschitz domain $\Omega\subset\mathbb{R}^n$, while
the papers~\cite{DFM1, DFM2}
consider the case of $p=2$ with $\mathbb{R}^n\setminus E$ a domain satisfying a one-sided NTA domain condition.
In this note we do not assume that $\mathbb{R}^n\setminus E$ is a domain (for example, when $E$ is a Sierpi\'nski
carpet, its complement in $\mathbb{R}^2$ is not a domain), nor do we restrict ourselves to the case of $p=2$.
Indeed, when $n\ge 3$, in
considering $\mathbb{R}^n$, the choice of $p=n$ is related to the quasiconformal geometry of $\mathbb{R}^n\setminus E$.
In many cases, including the three examples considered here as well as the setting considered in
\cite{Maly, DFM1, DFM2}, the weight $\omega_\beta(x)\simeq\dist(x,E)^\beta$.
In this paper we expand on this question and study the following setting.
With $0<Q<n$, let $E\subset \mathbb{R}^n$ be an Ahlfors $Q$-regular compact set
with $\diam(E)\le 1$, and $B$ be a ball in $\mathbb{R}^n$ such that $E\subset\tfrac12 B$. We also assume that
for each $\alpha\le 0$ there is a measure $\mu_\alpha$ on $B$ such that whenever $\alpha+n-Q>0$
and $x\in B$, and $0<r<2$ such that $r>\dist(x,E)/9$, the comparison $\mu_\alpha(B(x,r))\simeq r^{n+\alpha}$
holds. We also assume that the ball $B$, equipped with the Euclidean metric $d$ and the measure $\mu_\alpha$,
is doubling. Furthermore, we assume that for each $p>1$ there exists such $\alpha$ so that $\mu_\alpha$
supports a $p$-Poincar\'e inequality. These assumptions are not as restrictive as one might think. From
Theorem~1.1 of~\cite{DILTV}, we know that the measure $\mu_\alpha$ given by $d\mu_\alpha=\dist(x,E)^\alpha dm(x)$
with $m$ the $n$-dimensional Lebesgue measure on $\mathbb{R}^n$ is a Muckenhoupt $\mathcal{A}_p$-weight and hence
is doubling and supports a $p$-Poincar\'e inequality for all $1<p<\infty$. Moreover, from the proof of
Theorem~3.4 of~\cite{DILTV} we also automatically have $\mu_\alpha(B(x,r))\lesssim r^{n+\alpha}$; thus
the only requirement we add to this discussion is that $r^{n+\alpha}\lesssim\mu_\alpha(B(x,r))$.
We prove the following two theorems in this paper.
In what follows, $\nu=\mathcal{H}^Q\vert_E$ and for each real number $\alpha$ there is a Borel regular measure
$\mu_\alpha$ on $\mathbb{R}^n$ that is absolutely continuous with respect to the Lebesgue measure $m$ and satisfying
$\mu_\alpha(B(x,r))\simeq r^{n+\alpha}$ when $x\in E$ and $0<r\le 10$.
\begin{thm}\label{thm:main-trace}
Let $p>1$ and $0<\theta<1$ be such that $p\theta<1$.
Let $\alpha\le 0$ be such that $\alpha+n-Q>0$ and
$\theta<1-\tfrac{\alpha+n-Q}{p}$, and suppose that $\mu_\alpha$ is doubling and supports a $p$-Poincar\'e inequality.
Then there is a bounded linear trace operator
\[
T:N^{1,p}(B,\mu_\alpha)\to B^\theta_{p,p}(E,\nu)
\]
such that $Tu=u\vert_E$ when $u$ is a Lipschitz function on $B$.
\end{thm}
\begin{thm}\label{thm:main-extend}
Let $p>1$ and $0<\theta<1$, and let $\alpha\le 0$ be such that
$\alpha+n-Q>0$ and $\theta\ge 1-\tfrac{\alpha+n-Q}{p}$. Then there is a bounded
linear extension operator
\[
S:B^\theta_{p,p}(E,\nu)\to N^{1,p}(B,\mu_\alpha)
\]
such that if $u$ is a Lipschitz function on $E$, then $Su$ is a Lipschitz function on $B$ with
$u=Su\vert_E$.
\end{thm}
As one might note above, there is a lack of sharpness in the range of allowable $\theta$ in Theorem~\ref{thm:main-trace},
which then prevents us from identifying $B^\theta_{p,p}(E,\nu)$ as the trace space of $N^{1,p}(B,\mu_\alpha)$
when $\theta=1-\tfrac{\alpha+n-Q}{p}$. This hurdle is overcome if $E$ is the boundary of a uniform domain,
as shown by Mal\'y in~\cite{Maly}, see also~\cite{DFM1}.
We do not know whether we can take $\theta=1-\tfrac{\alpha+n-Q}{p}$ in
Theorem~\ref{thm:main-trace}.
In this note we also show that fractals such as the
standard Sierpi\'nski carpet, the standard Sierpi\'nski gasket, and the von Koch snowflake
curve in $\mathbb{R}^2$
satisfy the conditions imposed on the set $E$ above, with $n=2$. For such sets, the measure $\mu_\alpha$ is
a weighted Lebesgue measure given in~\eqref{eq:def-mu-alpha} below.
The trace and extension theorems for the example of the von Koch snowflake curve follow from the
work of~\cite{Maly} once a family of measures $\mu_\alpha$, $0\ge \alpha>-\beta_0$ for a suitable $\beta_0$, is
constructed and shown to satisfy the hypotheses of~\cite[Theorem~1.1]{Maly}. We carry out this construction in
Section~\ref{VonKoch}.
We show also that for the Sierpi\'nski
carpet and the Sierpi\'nski gasket, this weight
$\omega(x)=\dist(x,E)^\alpha$ is a Muckenhoupt $\mathcal{A}_q$-weight for each $q>1$ when $\alpha+2>Q$, where
$Q$ is the Hausdorff dimension of the fractal set.
As mentioned above, the Muckenhoupt $\mathcal{A}_p$ criterion follows from~\cite[Theorem~1.1]{DILTV}, but we
give a constructive proof for these fractals and in addition provide a proof of the co-dimensionality condition.
Observe that if a weight is an $\mathcal{A}_q$-weight, then
the associated weighted measure is doubling and supports a $q$-Poincar\'e inequality. However, not all
weights that give a doubling measure supporting a $q$-Poincar\'e inequality are $\mathcal{A}_q$-weights, see
the discussion in~\cite{HKM}.
Another interesting nonlocal space is the so-called Haj\l asz space, see for example~\cite{HM, HKSTbook}.
In~\cite[Theorem~9]{HM} it is shown that if $\Omega\subset \mathbb{R}^n$ satisfies an $A(c)$-condition (a porosity condition at the boundary),
then the traces of the Sobolev spaces $W^{1,p}(\Omega)$
are Haj\l asz spaces of functions on $E$ when $E$ is equipped with an appropriately snowflaked metric and
a doubling Borel measure.
We refer the interested reader to \cite{JW,JW2, Besov, Maz, KSW, Barton, HM}
for more on Sobolev spaces and Besov spaces of functions
on Euclidean domains and sets, to \cite{GKS, GKZ, Maly} for more on Besov spaces of functions on
subsets of certain metric measure spaces, but this is not an exhaustive list of papers on these topics in the current literature.
\section{Background}
In this section we describe the notions used throughout this note.
With $0<Q<n$, let $E\subset \mathbb{R}^n$
be an Ahlfors $Q$-regular compact set
with $\diam(E)\le 1$ and let $B$ be a fixed ball in $\mathbb{R}^n$ such that $E\subset\tfrac12 B$.
We set $\nu=\mathcal{H}^Q\vert_E$. We also consider the measure $\mu_\alpha$, obtained as a weighted measure with
respect to the Lebesgue measure on $\mathbb{R}^n$ as in~\cite{HKM}, namely $d\mu_\alpha(x)=\omega(x)\, dm(x)$ with
$m$ the canonical Lebesgue measure on $\mathbb{R}^n$.
There are two basic function spaces under consideration
here: the weighted Sobolev space $W^{1,p}(B,\mu_\alpha)$ and the Besov space $B^\theta_{p,p}(E,\nu)$ with $1<p<\infty$.
Recall from~\cite{HKM} that a function $f\in L^p(B,\mu_\alpha)$ is in the weighted Sobolev space $W^{1,p}(B,\mu_\alpha)$
if $f$ has a weak derivative $\nabla f:B\to\mathbb{R}^n$ such that $|\nabla f|\in L^p(B,\mu_\alpha)$. It was shown in~\cite{HKM}
that when $\mu_\alpha$ is doubling and satisfies a $p$-Poincar\'e inequality
on $B$, then $W^{1,p}(B,\mu_\alpha)$ is a Banach space. Here, $W^{1,p}(B,\mu_\alpha)$ is equipped with the norm
\[
\Vert f\Vert_{N^{1,p}(B,\mu_\alpha)}:=\Vert f\Vert_{L^p(B,\mu_\alpha)}+\Vert |\nabla f|\Vert_{L^p(B,\mu_\alpha)}.
\]
Recall that $\mu_\alpha$ is doubling on $B$ if there is a constant $C\ge 1$ such that whenever $x\in B$ and $0<r\le 10$,
\[
\mu_\alpha(B(x,2r))\le C\, \mu_\alpha(B(x,r)),
\]
and we say that $\mu_\alpha$ supports a $p$-Poincar\'e inequality on $B$ if there is a constant $C>1$ such that
whenever $f$ is in $W^{1,p}(B,\mu_\alpha)$ and
$B_0$ is a ball with $B_0\cap B\ne \emptyset$ and $\text{rad}(B_0)\le 10$, then
\[
\vint_{B_0}|f-f_{B_0}|\, d\mu_\alpha\le C \text{rad}(B_0)\, \left(\vint_{B_0}|\nabla f|^p\, d\mu_\alpha\right)^{1/p}.
\]
Here
\[
f_{B_0}:=\vint_{B_0}f\, d\mu_\alpha=\frac{1}{\mu_\alpha(B_0)}\int_{B_0} f\, d\mu_\alpha.
\]
A function $u\in L^p(E,\nu)$ is in the Besov space $B^\theta_{p,p}(E,\nu)$ for a fixed $0<\theta<1$ if
\[
\Vert u\Vert_{B^\theta_{p,p}(E,\nu)}^p
:=\int_E\int_E \frac{|u(y)-u(x)|^p}{d(x,y)^{\theta p}\nu(B(x,d(x,y)))}\, d\nu(y)\, d\nu(x)
\]
is finite.
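Since $E$ is Ahlfors $Q$-regular, we have $\nu(B(x,d(x,y)))\simeq d(x,y)^Q$, so the norm above is comparable to
\[
\int_E\int_E \frac{|u(y)-u(x)|^p}{d(x,y)^{\theta p+Q}}\, d\nu(y)\, d\nu(x),
\]
a form that we use repeatedly in the computations below.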
The next two notions are related to the specific examples considered in this paper.
Information about Muckenhoupt weights can be found for example in~\cite{MW1, MW2, HKM},
while information about uniform domains can be found for example in~\cite{MS, HrK, BS}; these example
references barely scratch the surface of the current literature on these topics and therefore should not be
considered to be even an almost exhaustive list.
\begin{defn}
For $p > 1$, a weight $\omega \colon \mathbb{R}^n \to [0,\infty)$ is said to be a
\emph{Muckenhoupt $\mathcal{A}_p$-weight near $B$} if
$\omega$ is a locally integrable function such that
\[
\sup_{B_0} \biggl( \frac{1}{m(B_0)} \int_{B_0} \omega dm \biggr)
\biggl(\frac{1}{m(B_0)} \int_{B_0} \omega^{-\tfrac{q}{p}} dm \biggr)^{\tfrac{p}{q}} < \infty
\]
where $\tfrac{1}{p} + \tfrac{1}{q} = 1$, and $B_0$ ranges over balls in
$\mathbb{R}^n$ intersecting $B$ with $\text{rad}(B_0)\le 10$. In this case we write $\omega \in \mathcal{A}_p$.
\end{defn}
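The model example on $\mathbb{R}^n$ is the power weight $\omega(x) = |x|^\gamma$, which is a (global) Muckenhoupt $\mathcal{A}_p$-weight precisely when $-n < \gamma < n(p-1)$; the weights $\dist(\cdot,E)^\alpha$ studied below play an analogous role, with the point $\{0\}$ replaced by the set $E$.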
\begin{defn}
A domain $\Omega\subset\mathbb{R}^n$ is said to be a uniform domain if $\Omega\ne\mathbb{R}^n$ and there is a constant
$A\ge 1$ such that for each distinct pair of points $x,y\in\Omega$ there is a curve $\gamma$ in $\Omega$
with end points $x,y$ with length $\ell(\gamma)\le A\, d(x,y)$ and
$\min\{\ell(\gamma_{z,x}),\ell(\gamma_{z,y})\}\le A\, \dist(z,\partial\Omega)$ whenever $z$ is a point on $\gamma$.
Here $\gamma_{z,y}$ denotes each subcurve of $\gamma$ with end points $z,y$.
\end{defn}
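For example, balls and, more generally, bounded convex domains are uniform domains; so is the von Koch snowflake domain considered in Section~\ref{VonKoch}.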
\section{The Carpet and Gasket examples}
In this section we first consider the weighted measure
corresponding to the weight $\omega_\alpha(x)=\dist(x,\mathbb{S})^\alpha$,
with $\mathbb{S}$ the standard Sierpi\'nski carpet with outer perimeter $\partial [0,1]^2$. We will show that
when $\alpha>Q-2=\tfrac{\log(8)}{\log(3)}-2$, the weighted measure is doubling. We also show that
if in addition $p>\tfrac{\alpha+2-Q}{2-Q}$, then the weight is a Muckenhoupt $\mathcal{A}_p$-weight.
For $\alpha \in \mathbb{R}$, we define a Borel measure $\mu_\alpha$ with density $\omega_\alpha(x) = \dist(x, \mathbb{S})^\alpha$
outside of $\mathbb{S}$. That is, for Borel sets $A\subset \mathbb{R}^2$ we have
\begin{equation}\label{eq:def-mu-alpha}
\mu_\alpha(A) = \int_A \dist(x,\mathbb{S})^\alpha dx.
\end{equation}
Note that $\mathbb{S}$ has Lebesgue measure zero. We now investigate for which values of $\alpha$ the measure $\mu_\alpha$ is doubling.
\begin{notn}
The carpet $\mathbb{S}$ can be written as $[0,1]^2 \setminus \bigcup_i H_i$ where the collection $\bigcup_i H_i$ consists of
pairwise disjoint open squares $H_i\subset[0,1]^2$. We call the open squares $H_i$ holes. Each hole has
sidelength $3^{-k}$ for some $k \in \mathbb{N}$. If the sidelength of a hole $H$ is $3^{-k}$, then we say $H$
belongs to generation $k$. For each $k\in\mathbb{N}$ there are $8^{k-1}$ holes in generation $k$.
\end{notn}
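As a quick consistency check, the holes exhaust the unit square in Lebesgue measure:
\[
\sum_{k=1}^\infty 8^{k-1}\, 9^{-k} = \frac{1}{9} \sum_{k=1}^\infty \left(\frac{8}{9}\right)^{k-1} = 1,
\]
confirming the observation above that $m(\mathbb{S})=0$.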
\begin{lemma}\label{hole lemma}
Let $H$ be a hole in generation $k$. If $\alpha > -1$, then $\mu_\alpha(H) = c_\alpha 3^{-k(\alpha+2)}$ where
$c_\alpha = \frac{8}{(\alpha+1)(\alpha+2)} 2^{-(\alpha + 2)}$ . Otherwise, $\mu_\alpha(H) = \infty$.
\end{lemma}
\begin{proof}
By symmetry, we have
\[
\mu_\alpha(H) = 8 \int_0^{2^{-1}3^{-k}} \int_0^x y^\alpha dy dx.
\]
If $\alpha \leq -1$, then $\mu_\alpha(H)$ is infinite.
Otherwise, we have $\alpha + 1 > 0$ and so
\[
\mu_\alpha(H) = \frac{8}{\alpha+1} \int_0^{2^{-1}3^{-k}} x^{\alpha + 1} dx
= \frac{8\cdot\, 2^{-(\alpha+2)}}{(\alpha+1)(\alpha+2)} (3^{-k})^{\alpha + 2} = c_\alpha 3^{-k(\alpha+2)}.
\]
\end{proof}
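As a sanity check, for $\alpha=0$ we get $c_0 = \frac{8}{1\cdot 2}\, 2^{-2} = 1$, so $\mu_0(H) = 9^{-k}$, which is precisely the Lebesgue area of a square of sidelength $3^{-k}$.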
For our analysis we use squares instead of Euclidean balls.
For $x \in \mathbb{R}^2$ and $s>0$, we set
\[
S(x, s) = \{y \in \mathbb{R}^2 : \norm{x - y}_\infty < \tfrac{s}{2} \},
\]
the open square in $\mathbb{R}^2$ centered at $x$ with sidelength $s$. For $\tau > 0$, we set $\tau\, S(x,s) = S(x,\tau s)$
and estimate $\mu_\alpha(S(x,s))$.
To do this, we introduce families of squares that have easy to compute $\mu_\alpha$ mass. For $k \in \mathbb{N}$, let
$\mathcal{S}^k$ be the set of (open) squares of the form $S(x, 3^{-k})$ with $x$ of the form
$((m+ \tfrac{1}{2})3^{-k}, (n+ \tfrac{1}{2})3^{-k})$ with $m, n \in \mathbb{Z}$.
\begin{lemma}\label{sk square lemma}
Let $S = S(x, s) \in \mathcal{S}^k$ and $\alpha > \tfrac{\log(8)}{\log(3)} - 2$. If
$\overline{9S} \cap \mathbb{S} \neq \emptyset$, then $\mu_\alpha(S) \simeq s^{\alpha + 2}$.
Otherwise, $\mu_\alpha(S) \simeq s^2 d(x, \mathbb{S})^\alpha$. In particular, if $c > 9$ is the
smallest integer such that $\overline{cS} \cap \mathbb{S} \neq \emptyset$, then
$\mu_\alpha(S) \simeq c^\alpha s^{\alpha + 2}$.
\end{lemma}
Observe that when $\alpha> \tfrac{\log(8)}{\log(3)} - 2$, we automatically have $\alpha>-1$.
\begin{proof}
For $\alpha = 0$ the claim is clear, so we assume that $\alpha \neq 0$ for the remainder of the proof.
Note that $s = 3^{-k}$ as $S \in \mathcal{S}^k$. First suppose that $\overline{9S} \cap \mathbb{S} \neq \emptyset$.
We examine three cases: (i) $3^k (\overline{S}\cap \mathbb{S})$ is isometric to the carpet, (ii) $S$ is a hole as above, or (iii)
neither of these cases.
\textbf{Case (i):} Using Lemma \ref{hole lemma}, we compute $\mu_\alpha(S)$ exactly. For each
$j \in \mathbb{N}_0$, we see that $S$ contains $8^j$ holes of generation $k+j+1$. By assumption,
$3^{\alpha + 2} > 8$.
As $S$ is a scaled copy of the carpet, it follows from Lemma \ref{hole lemma} that
\begin{equation*}
\begin{split}
\mu_\alpha(S) = \sum_{j=0}^{\infty} 8^j c_\alpha (3^{\alpha + 2})^{-k-j-1} &
= c_\alpha (3^{\alpha + 2})^{-k-1} \sum_{j=0}^\infty 8^j 3^{-j(\alpha+2)} \\
&= c_\alpha 3^{-k(\alpha + 2)}\frac{1}{3^{\alpha + 2} - 8}
= \biggl(\frac{c_\alpha}{3^{\alpha+2} - 8}\biggr) s^{\alpha + 2}.
\end{split}
\end{equation*}
\textbf{Case (ii):} In this case $S$ is a hole in generation $k$, so by Lemma \ref{hole lemma} we have
that $\mu_\alpha(S) = c_\alpha s^{\alpha + 2}$.
\textbf{Case (iii):} First assume that $\alpha < 0$. From our choice of $\mathcal{S}^k$, in this case we must have
that $S \cap \mathbb{S} = \emptyset$. It is clear that if $H$ is a hole of generation $k$ and $\iota \colon S \to H$
is an isometry given by translation, then for all $y \in S$ we have $d(y, \mathbb{S}) \geq d(\iota(y), \mathbb{S})$. As
$\alpha < 0$, it follows that $d(y, \mathbb{S})^\alpha \leq d(\iota(y), \mathbb{S})^\alpha$ and so
$\mu_\alpha(S) \leq \mu_\alpha(H) \simeq s^{\alpha + 2}$. On the other hand,
$\overline{9S} \cap \mathbb{S} \neq \emptyset$, so $d(y, \mathbb{S}) \leq 11 \cdot 3^{-k} = 11s$ for
all $y \in S$. Hence, as $\alpha < 0$ we have $d(y,\mathbb{S})^\alpha \geq (11s)^\alpha$. It follows that
\[
\mu_\alpha(S) \geq \int_S (11s)^\alpha\, dm = 11^\alpha s^{\alpha + 2}.
\]
If $\alpha > 0$ instead, then the lower and upper bounds above are reversed but the conclusion is the same.
Now suppose that $\overline{9S} \cap \mathbb{S} = \emptyset$. It follows that $d(x, \mathbb{S}) \geq 3s$. Then, if $y \in S$, we have
\[
\tfrac{1}{3} d(x, \mathbb{S}) \leq d(x, \mathbb{S}) - s \leq d(y, \mathbb{S}) \leq d(x, \mathbb{S}) + s \leq 2 d(x, \mathbb{S}).
\]
The result follows immediately.
For the last part of the lemma, if $c > 9$ is the smallest integer such that $\overline{cS} \cap \mathbb{S} \neq \emptyset$, then $d(x, \mathbb{S}) \simeq cs$ and so $\mu_\alpha(S) \simeq c^\alpha s^{\alpha + 2}$.
\end{proof}
We now use Lemma \ref{sk square lemma} to prove the same result for general squares.
\begin{lemma}\label{gen square lemma}
Let $S = S(x, s)$ with $s \le 9$. Let $\alpha > \tfrac{\log(8)}{\log(3)} - 2$. If $\overline{9S} \cap \mathbb{S} \neq \emptyset$, then $\mu_\alpha(S) \simeq s^{\alpha + 2}$. Otherwise, $\mu_\alpha(S) \simeq s^2 d(x, \mathbb{S})^\alpha$. In particular, if $c > 9$ is the smallest integer such that $\overline{cS} \cap \mathbb{S} \neq \emptyset$, then $\mu_\alpha(S) \simeq c^\alpha s^{\alpha + 2}$.
\end{lemma}
\begin{proof}
The proof that if $\overline{9S} \cap \mathbb{S} = \emptyset$, then $\mu_\alpha(S) \simeq s^2 d(x, \mathbb{S})^\alpha$ is the same as in Lemma \ref{sk square lemma}. For the first part of the statement of the lemma, suppose that $\overline{9S} \cap \mathbb{S} \neq \emptyset$. Let $k \in \mathbb{N}$ be the smallest integer with $3^{-k} < s$. As $s\le 9$ we have $\tfrac{s}{3^{-k}}\le 3$.
It follows that there is a subset $\{S_i\}_{i \in I} \subseteq \mathcal{S}^k$ with $S \subseteq \cup_{i \in I} S_i$ and
$|I| \le 25$. Write $S_i = S(x_i, s_k)$ with $s_k = 3^{-k}$. For each $S_i$ we have
\[
d(x_i, \mathbb{S}) \leq s_k + s + d(x, \mathbb{S})
\]
and, as $s_k \simeq s$ and $d(x,\mathbb{S}) \lesssim s$, there is a constant $a > 0$ independent of $i$ and $S$
such that $\overline{aS_i} \cap \mathbb{S} \neq \emptyset$ for each $i \in I$. Hence, we may apply
Lemma~\ref{sk square lemma} to each $S_i$ (with different squares $S_i$ potentially falling into
different cases of Lemma~\ref{sk square lemma}) and conclude that $\mu_\alpha(S_i) \simeq s^{\alpha + 2}$.
Hence, as $|I| \le 25$ we have $\mu_\alpha(S) \lesssim s^{\alpha + 2}$.
For the lower bound, we again choose the smallest integer $k$ with $3^{-k} < s$ and then note that
there is a square $S' \in \mathcal{S}^{k+2}$ with $S' \subseteq S$. An argument similar to the above
shows us that $\mu_\alpha(S') \simeq s^{\alpha + 2}$, and so $\mu_\alpha(S) \gtrsim s^{\alpha + 2}$.
\end{proof}
We now show that $\mu_\alpha$ is doubling for small squares.
\begin{lemma}[Doubling]
Let $S = S(x, s)$ be a square such that $s \le 9$ and let $\alpha > \tfrac{\log(8)}{\log(3)} - 2$. Then, $\mu_\alpha(3S) \lesssim \mu_\alpha(S)$.
\end{lemma}
\begin{proof}
First suppose that $\overline{27S} \cap \mathbb{S} = \emptyset$. Then, by Lemma \ref{gen square lemma}
we know that $\mu_\alpha(3S) \simeq (3s)^2 d(x, \mathbb{S})^\alpha$ and $\mu_\alpha(S) \simeq s^2 d(x, \mathbb{S})^\alpha$.
Now, suppose that $\overline{27S} \cap \mathbb{S} \neq \emptyset$. Then, by Lemma \ref{gen square lemma},
we know that $\mu_\alpha(3S) \simeq (3s)^{\alpha + 2}$. We also see from Lemma \ref{gen square lemma}
that $\mu_\alpha(S) \simeq s^{\alpha + 2}$ (either $\overline{3S} \cap \mathbb{S} \neq \emptyset$, or we have $c \leq 27$
in the last part of the statement of Lemma \ref{gen square lemma}).
\end{proof}
Recall that $q = \frac{p}{p-1}$ denotes the conjugate exponent of $p$. We investigate conditions that guarantee that the weight $\omega_\alpha$ given by
$\omega_\alpha(x) = \dist(x, \mathbb{S})^\alpha$ is in the Muckenhoupt class $\mathcal{A}_p$, see also~\cite[Theorem~1.1]{DILTV}.
It is clear that in
the definition of $\mathcal{A}_p$ we may replace the use of balls $B$ with that of squares $S$.
\begin{lemma}[$\mathcal{A}_p$ weights]\label{lem-ap-weights}
The function $\omega_\alpha(x) = \dist(x, \mathbb{S})^\alpha$ is a Muckenhoupt $\mathcal{A}_p$-weight near $B$ when
$\tfrac{\log(8)}{\log(3)} - 2 < \alpha < (p-1)(2 - \tfrac{\log(8)}{\log(3)})$.
\end{lemma}
\begin{proof}
Let $\tfrac{\log(8)}{\log(3)} - 2 < \alpha < (p-1)(2 - \tfrac{\log(8)}{\log(3)})$. Let $S = S(x,s)$ be a square with
$s < 9$. First, assume that $\overline{9S} \cap \mathbb{S} \neq \emptyset$. Then, by Lemma \ref{gen square lemma},
\[
\frac{1}{m(S)} \int_S \omega_\alpha\, dm \lesssim \frac{1}{s^2} s^{\alpha + 2} = s^\alpha.
\]
We see $\tfrac{q}{p} =\tfrac{1}{p-1}$. As $-\tfrac{\alpha}{p-1} > \tfrac{\log(8)}{\log(3)} - 2$, by Lemma \ref{gen square lemma} we have
\begin{equation*}
\begin{split}
\biggl(\frac{1}{m(S)} \int_S \omega_\alpha^{-\tfrac{q}{p}} dm \biggr)^{\tfrac{p}{q}} &= \biggl(\frac{1}{m(S)} \int_S \dist(y,\mathbb{S})^{-\tfrac{\alpha}{p-1}} dy \biggr)^{p-1} \\
&\lesssim \biggl(\frac{1}{s^2} s^{-\tfrac{\alpha}{p-1} + 2}\biggr)^{p-1} = s^{-\alpha}.
\end{split}
\end{equation*}
It follows that the $\mathcal{A}_p$ bound holds for these squares for $\alpha$ in the above range.
If, instead, we have $\overline{9S} \cap \mathbb{S} = \emptyset$, then with $\alpha$ in the same range we have
\[
\frac{1}{m(S)} \int_S \omega_\alpha\, dm \lesssim \frac{1}{s^2} s^{2}\dist(x,\mathbb{S})^\alpha =\dist(x,\mathbb{S})^\alpha
\]
and
\begin{equation*}
\begin{split}
\biggl(\frac{1}{m(S)} \int_S \omega_\alpha^{-\tfrac{q}{p}} dm \biggr)^{\tfrac{p}{q}} &= \biggl(\frac{1}{m(S)} \int_S \dist(y,\mathbb{S})^{-\tfrac{\alpha}{p-1}} dy \biggr)^{p-1} \\
&\lesssim \biggl(\frac{1}{s^2} s^{2} \dist(x, \mathbb{S})^{-\tfrac{\alpha}{p-1}}\biggr)^{p-1} = \dist(x,\mathbb{S})^{-\alpha}.
\end{split}
\end{equation*}
It again follows that for $\tfrac{\log(8)}{\log(3)} - 2 < \alpha < (p-1)(2 - \tfrac{\log(8)}{\log(3)})$ the $\mathcal{A}_p$ bound holds.
\end{proof}
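To make the admissible range concrete: the carpet has $Q = \log 8/\log 3 \approx 1.893$, so $2-Q \approx 0.107$ and Lemma~\ref{lem-ap-weights} allows approximately $-0.107 < \alpha < 0.107\,(p-1)$; for $p=2$ this is the narrow symmetric window $|\alpha| < 0.107$.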
Now, let $\mathbb{G}$ denote the Sierpi\'nski gasket for which the points $(0,0), (1,0)$, and $(\tfrac{1}{2}, \tfrac{\sqrt{3}}{2})$ are the vertices of its boundary triangle. Consider the measures
\[
\mu'_\alpha(A) = \int_A \dist(x,\mathbb{G})^\alpha dx.
\]
As for the carpet, for suitable values of $\alpha$ the measures $\mu'_\alpha$ are doubling and the functions $\dist(x, \mathbb{G})^\alpha$ are $\mathcal{A}_p$-weights. The argument is similar, and we summarize the differences below.
First, squares are no longer the natural objects for integration. Instead, it is easier to work with equilateral triangles. The grids $\mathcal{S}^k$ are replaced by grids of equilateral triangles with side lengths $2^{-k}$. If $T = T(x, s)$ is such a triangle (centered at $x$ with side length $s$), then for $\alpha > \tfrac{\log(3)}{\log(2)} - 2$ we can estimate $\mu'_\alpha(T)$ as in Lemma \ref{sk square lemma}. For grid triangles far from the gasket relative to their side lengths, the estimate $\mu'_\alpha(T) \simeq s^2 \dist(x, \mathbb{G})^\alpha$ still holds. For grid triangles near the gasket relative to their side length but which are neither holes nor scaled versions of the gasket, the estimate $\mu'_\alpha(T) \simeq s^{2+\alpha}$ still holds by comparison with $\mu'_\alpha(H)$ for hole-triangles $H$. For a single hole triangle $T$ with side length $s = 2^{-k}$, we have
\[
\mu'_\alpha(T) = 6 \int_0^{s/2} \int_0^{x / \sqrt{3}} y^\alpha dy dx \simeq s^{\alpha + 2}.
\]
For triangles $T = T(x, s)$ where $T \cap \mathbb{G}$ is a scaled copy of the gasket, we see that if $s = 2^{-k}$ then for each $j \in \mathbb{N}_0$, the triangle $T$ contains $3^j$ hole triangles of side length $2^{-k-j-1}$. Hence, in this case
\[
\mu'_\alpha(T) \simeq \sum_{j=0}^\infty 3^j 2^{(-k-j-1)(\alpha + 2)} \simeq s^{\alpha + 2}
\]
where the series is finite as $2^{\alpha+2} > 3$.
As in Lemma \ref{gen square lemma}, one again estimates the $\mu'_\alpha$ measure of arbitrary triangles from the grid triangles. Once this is done, the doubling property and $\mathcal{A}_p$ weight condition are as before. For the function $\dist(x, \mathbb{G})^\alpha$ to be an $\mathcal{A}_p$ weight, we require $\tfrac{\log(3)}{\log(2)} - 2 < \alpha < (p-1)(2 - \tfrac{\log(3)}{\log(2)})$.
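Numerically, the gasket has $Q = \log 3/\log 2 \approx 1.585$, so $2-Q \approx 0.415$ and the admissible window is approximately $-0.415 < \alpha < 0.415\,(p-1)$, considerably wider than for the carpet.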
\section{The von Koch snowflake curve}\label{VonKoch}
In this section we consider the von Koch snowflake curve $K$
(with an equilateral triangle with side-lengths $1$ as a ``zeroth" iteration)
that is the boundary of the von Koch snowflake domain $\Omega$. We show that $\Omega$ satisfies the hypotheses
of our paper, with $K$ playing the role of $E$ and the ball $B$ replaced by $\Omega$. However, in this case
we obtain a sharper result by combining the results of~\cite{BS} with~\cite[Theorem~1.1]{Maly} to obtain the full range
$0<\theta\le 1-\tfrac{\alpha+n-Q}{p}$ in Theorem~\ref{thm:main-trace}.
From~\cite[Theorem~1]{Ahl} we know that the snowflake curve is a quasicircle. Therefore,
from~\cite[Theorem~2.15]{MS} or~\cite[Theorem~1.2]{Hr} it follows that the von Koch snowflake domain is a
uniform domain; this sets us up to use the results of~\cite{BS}.
\begin{defn}[Definition 2.6 in \cite{BS}]
Let $\Omega\subset\mathbb{R}^n$ be a domain and
$\beta > 0$. We say that $\Omega$ satisfies a local $\beta$-shell condition if there is a constant
$C > 0$ such that for all $x \in \overline{\Omega}$ and $0 < \rho \leq r \leq \diam(\Omega)$ we have
\begin{equation}\label{beta shell cond}
m(\{y \in B(x,r) \cap \Omega : \delta_\Omega(y) \leq \rho\}) \leq C \biggl( \frac{\rho}{r}\biggr)^\beta m(B(x,r) \cap \Omega)
\end{equation}
where $\delta_\Omega(y) = \dist(y, \partial\Omega)$.
Recall here that $m$ is the $n$-dimensional Lebesgue measure on $\mathbb{R}^n$.
\end{defn}
\begin{defn}
We say that $\Omega$ satisfies a strong local $\beta$-shell condition if it satisfies a local $\beta$-shell condition
and in addition,
\begin{equation}\label{strong beta shell cond}
m(\{y \in B(x,r) \cap \Omega : \delta_\Omega(y) \leq \rho\}) \simeq \biggl( \frac{\rho}{r}\biggr)^\beta m(B(x,r) \cap \Omega)
\end{equation}
whenever $x\in \partial\Omega$ and $0 < \rho \leq r \leq \diam(\Omega)$.
\end{defn}
By \cite[Lemma 2.7]{BS}, if $\Omega\subset\mathbb{R}^n$ is a bounded domain
that satisfies the above local $\beta$-shell condition for
some $\beta>0$, then for all
$\alpha > -\beta$ the measure $d\mu_\alpha(y)=\delta_\Omega(y)^\alpha dm(y)$ is doubling on $\Omega$. Combining this
with~\cite[Theorem~4.4]{BS} and noting that the Newton-Sobolev space discussed there is the same as the
standard Sobolev space $W^{1,p}(\mathbb{R}^2)$ in our setting (see for example~\cite{HKSTbook}) tells us that
when $\Omega$ is the von Koch snowflake domain, the metric measure space
$(\Omega, d, \mu_\alpha)$ is doubling and supports a $1$-Poincar\'e inequality. Here $d$ is the Euclidean metric. Moreover,
with $\nu=\mathcal{H}^Q\vert_K$ where $Q$ is the Hausdorff dimension of $K$, we would also
have that $\nu(B(x,r))\simeq r^Q$ whenever $x\in K$ and $0<r\le 10$.
\begin{lemma}\label{lem:vonKoch-codim}
Suppose that $\Omega\subset\mathbb{R}^n$ is a bounded domain that
satisfies a strong local $\beta$-shell condition for some $\beta>0$ and that
$K=\partial\Omega$ is Ahlfors $Q$-regular for some $n>Q>0$. For $-\beta<\alpha\le 0$ we set
$\mu_\alpha$ to be the measure on $\Omega$ given by $d\mu_\alpha(y)=\delta_\Omega(y)^\alpha\, dm(y)$.
Moreover, assume that for each $x\in K$ and $0<r\le 10$ we have $m(B(x,r)\cap\Omega)\simeq r^n$.
Then whenever $0<r\le 10$ and $x\in K$, we have
\[
\mu_\alpha(B(x,r)\cap\Omega)\simeq r^{n+\alpha-Q}\, \mathcal{H}^Q(B(x,r)\cap K).
\]
\end{lemma}
\begin{proof}
Fix $x\in K$ and $0<r\le 10$, and without loss of generality assume that $\alpha<0$. Then by the Cavalieri principle,
\begin{align*}
\mu_\alpha(B(x,r)\cap\Omega)&=\int_{B(x,r)\cap\Omega}\frac{1}{\delta_\Omega(y)^{|\alpha|}}\, dm(y)\\
&=\int_0^\infty m(\{y\in B(x,r)\cap\Omega\, :\, \delta_\Omega(y)^{-|\alpha|}>t\})\, dt\\
&\simeq \int_0^M Cr^n\, dt+\int_M^\infty m(\{y\in B(x,r)\cap\Omega\, :\, \delta_\Omega(y)<t^{-1/|\alpha|}\})\, dt
\end{align*}
where $M=r^{-|\alpha|}>0$.
Note that as $-\beta<\alpha<0$, we have $\beta/|\alpha|>1$.
Using $M=r^{-|\alpha|}$ and the local shell property, we obtain
\begin{align*}
\mu_\alpha(B(x,r)\cap\Omega)&\simeq r^{n-|\alpha|}+\int_M^\infty \left(\frac{1}{t^{1/|\alpha|}r}\right)^\beta r^n\, dt\\
&\simeq r^{n+\alpha}+r^{n-\beta}M^{1-\beta/|\alpha|}
\simeq r^{n+\alpha}.
\end{align*}
Since $\mathcal{H}^Q(B(x,r)\cap K)\simeq r^Q$, the conclusion follows.
\end{proof}
Hence if the von Koch snowflake domain satisfies a strong local $\beta$-shell condition, then $(\Omega, \mu_\alpha)$
is doubling and supports a $1$-Poincar\'e inequality when $\alpha>-\beta$, and in addition from Lemma~\ref{lem:vonKoch-codim}
we know that $\nu=\mathcal{H}^Q\vert_K$ is $2+\alpha-Q$-codimension regular with respect to $\mu_\alpha$, and
so by~\cite[Theorem~1.1]{Maly} the conclusions of Theorem~\ref{thm:main-trace} and Theorem~\ref{thm:main-extend}
hold for the von Koch domain and its boundary.
In light of the above discussion,
it only remains to verify the strong local $\beta$-shell condition for $\Omega$ for the choice of
$0<\beta=\beta_0:= 2 - \tfrac{\log(4)}{\log(3)}=2-Q$.
For each non-negative integer $n$, let $K_n$ denote the $n$-th
iteration of the von Koch snowflake, so $K_n$ consists of $3 \cdot 4^n$ line segments of
length $3^{-n}$. Let $x \in \overline{\Omega}$, $0<r<\tfrac{1}{2}$, and choose a non-negative integer $k$
such that $3^{-k-1} \leq 2r < 3^{-k}$. For $0<\rho<r$ choose a non-negative integer $j$ such that
$3^{-k-j-1} \leq \rho < 3^{-k-j}$, and so $\rho \simeq 3^{-j} r$.
There is a constant $M$, independent of $x$ and $r$, such that the number of the line
segments in $K_{k+1}$ intersecting $B(x, 2r)$ is at most $M$, since the set
of endpoints of the segments of $K_{k+1}$ is $3^{-k-1}$-separated and $r \simeq 3^{-k-1}$.
For $m \in \mathbb{N}$, let $A_m$ be the bounded component of $\mathbb{R}^2 \setminus K_m$.
Clearly $A_m\subset A_{m+1}$. Moreover,
\[
m(\{y\in\Omega\, :\, \delta_\Omega(y)\le \rho\}\cap B(x,r))
\simeq m(\{y\in A_{k+j+1}\, :\, \delta_\Omega(y)\le \rho\}\cap B(x,r)).
\]
Also, each line segment $L$ of length $3^{-k-1}$
that makes up the construction of $K_{k+1}$ is modified
$j$ times to obtain the set $K_{k+j+1}$ by replacing $L$ with $4^j$ line segments,
each of length $3^{-j-k-1}$. If $\ell$ is one of these line segments, then
\[
m(\{y\in A_{k+j+1}\, :\, \dist(y,\ell)\le \delta_\Omega(y)\le \rho\})\simeq\rho^2,
\]
and therefore
\[
m(\{y\in A_{k+j+1}\, :\, \delta_\Omega(y)\le \rho\}\cap B(x,r))\lesssim M\times 4^j\times \rho^2,
\]
with $\lesssim$ actually being $\simeq$ if $x\in K$. From the fact that $\rho\simeq 3^{-j}r$, it follows that
\[
m(\{y\in\Omega\, :\, \delta_\Omega(y)\le \rho\}\cap B(x,r))\lesssim \left(\frac{4}{9}\right)^j r^2
\simeq \left(\frac{4}{9}\right)^j m(B(x,r)\cap\Omega),
\]
again with $\lesssim$ actually being $\simeq$ if $x\in K$.
Set $\beta_0=2-\tfrac{\log(4)}{\log(3)}=2-Q$ and observe that $\rho/r\simeq 3^{-j}$.
Then we have that
\[
\left(\frac{4}{9}\right)^j=\left(3^{-j}\right)^{\beta_0}\simeq \left(\frac{\rho}{r}\right)^{\beta_0}
\]
as desired, proving that the snowflake domain satisfies the strong $\beta_0$-shell condition.
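For reference, $Q = \log 4/\log 3 \approx 1.262$ for the snowflake curve, so $\beta_0 = 2-Q \approx 0.738$, and the admissible weights $\dist(\cdot,K)^\alpha$ here correspond (approximately) to $-0.738 < \alpha \le 0$.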
\section{Trace of weighted Sobolev functions are in Besov spaces, or how the surrounding leaves a mark on subsets}
The goal of this section is to study trace of $N^{1,p}(B,\mu_\alpha)$ on the set $E$ and relate it to the
Besov classes $B^\theta_{p,p}(E,\nu)$ and prove Theorem~\ref{thm:main-trace}.
We recall the setting considered here (see the introduction for more on this).
With $0<Q<n$, let $E\subset \mathbb{R}^n$ be an Ahlfors $Q$-regular compact set
with $\diam(E)\le 1$. Let $B$ be a ball in $\mathbb{R}^n$ such that $E\subset\tfrac12 B$. We also assume that
for each $\alpha\le 0$ there is a measure $\mu_\alpha$ on $B$ such that whenever $\alpha+n-Q>0$
and $x\in B$, and $0<r<2$ such that $r>\dist(x,E)/9$, the comparison $\mu_\alpha(B(x,r))\simeq r^{n+\alpha}$
holds. We also assume that the
ball $B$, equipped with the Euclidean metric $d$ and the measure $\mu_\alpha$,
is doubling and supports a $q$-Poincar\'e inequality for each $q>1$.
\begin{lemma}\label{lem:HLmaximal1}
Suppose that $B$ is a ball in $\mathbb{R}^n$ such that $E\subset \tfrac12B$, where $E$ is compact and is
Ahlfors $Q$-regular. Let $\mu_\alpha$ be as in~\eqref{eq:def-mu-alpha} such that $\nu=\mathcal{H}^Q\vert_E$ is
$\alpha+n-Q$-codimensional with respect to $\mu_\alpha$.
Suppose that $\gamma>0$ is such that $\gamma\ge\alpha+n-Q$.
Then there is a constant $C>0$ such that
whenever $h\in L^1_{loc}(\mathbb{R}^n,\mu_\alpha)$, we have for all $t>0$,
\[
\nu(\{M_\gamma h>t\})\le \frac{C}{t}\int_{B}h\, d\mu_\alpha,
\]
where $M_\gamma$ is the fractional Hardy-Littlewood maximal function operator given by
\[
M_\gamma h(x)= \sup_{x\in B',\, \text{rad}(B')\le 1} \text{rad}(B')^\gamma \vint_{B'} h\, d\mu_\alpha,
\]
the supremum being over all balls $B'$ containing $x$.
\end{lemma}
\begin{proof}
This is a variant of the standard proof, the variant being that the measure with respect to which the maximal
operator functions, $\mu_\alpha$, is not the same as the measure $\nu$ with respect to which the superlevel
sets are measured. For this reason we give the complete proof here.
Let $t>0$ and set $E_t:=\{x\in E\, :\, M_\gamma h(x)>t\}$. Then for each $x\in E_t$ there is a ball
$B_x$ of radius $r_x>0$ such that $x\in B_x$ and
\[
r_x^\gamma \vint_{B_x} h\, d\mu_\alpha>t.
\]
It follows that
\[
\nu(E\cap B_x)\simeq r_x^Q<\frac{r_x^{\gamma+Q-(\alpha+n)}}{t}\int_{B_x}h\, d\mu_\alpha.
\]
Recalling that $\alpha+n>Q$, we set $\eta=\gamma-(\alpha+n-Q)$. Then $\eta\ge 0$.
The balls $B_x$, $x\in E_t$, cover $E_t$. Therefore, by the $5$-covering theorem (which is
applicable here to $E$ as $\nu$ is doubling on $E$), we obtain a pairwise disjoint countable
subfamily of balls $B_i$ with radii $r_i$ such that $E_t\subset \bigcup_i 5B_i$. Then
\begin{align*}
\nu(E_t)\le \sum_i \nu(5B_i)\le C \sum_i r_i^Q \le \frac{C}{t} \sum_i r_i^\eta \int_{B_i}h\, d\mu_\alpha
&\le \frac{C}{t} \sum_i \int_{B_i}h\, d\mu_\alpha\\
&\le \frac{C}{t} \int_B h\, d\mu_\alpha,
\end{align*}
where we have used the facts that $r_i\le 1$, $\eta\ge 0$, and that the balls $B_i$ are pairwise disjoint.
\end{proof}
\begin{lemma}\label{lem:HLmaximal2}
Suppose that $0\le g\in L^p(B,\mu_\alpha)$ where $B$ is the ball as in Lemma~\ref{lem:HLmaximal1} and
$1<p<\infty$. Fix $1\le q<p$. Then
\[
\int_E (M_\gamma(g^q))^{p/q}\, d\nu\le C\int_B g^p\, d\mu_\alpha.
\]
\end{lemma}
\begin{proof}
Recall from the Cavalieri principle that
\[
\int_E (M_\gamma(g^q))^{p/q}\, d\nu=\frac{p}{q} \int_0^\infty t^{\tfrac{p}{q}-1}\nu(\{M_\gamma g^q>t\})\, dt.
\]
For $t>0$, we can write $g^q=G_1+G_2$, where
\[
G_1=g^q\chi_{\{g^q\le t/2\}},\qquad
G_2=g^q \chi_{\{g^q>t/2\}}.
\]
Then
\[
M_\gamma g^q\le M_\gamma G_1+M_\gamma G_2\le \frac{t}{2}+M_\gamma G_2.
\]
Hence if $M_\gamma g^q(z)>t$, then $M_\gamma G_2(z)>t/2$; that is, $\{M_\gamma g^q>t\}\subset \{M_\gamma G_2>t/2\}$.
Hence by Lemma~\ref{lem:HLmaximal1},
\begin{align*}
\int_E(M_\gamma g^q)^{p/q}\, d\nu &\le\frac{p}{q}\int_0^\infty t^{\tfrac{p}{q}-1}\, \nu(\{M_\gamma G_2>t/2\})\, dt\\
&\le C\int_0^\infty t^{\tfrac{p}{q}-2} \int_B G_2\, d\mu_\alpha \, dt\\
& = C \int_0^\infty t^{\tfrac{p}{q}-2} \int_{B\cap\{g^q>t/2\}}g^q\, d\mu_\alpha\, dt\\
&= C\int_0^\infty t^{\tfrac{p}{q}-2} \left[ \frac{t}{2} \mu_\alpha(B\cap\{g^q>t/2\})
+\int_{t/2}^\infty \mu_\alpha(B\cap\{g^q>s\})\, ds\right]\, dt\\
&=C_1\int_0^\infty (t/2)^{\tfrac{p}{q}-1}\mu_\alpha(B\cap \{g^q>t/2\})\, dt\\
&\qquad\qquad +C\int_0^\infty\int_0^\infty t^{\tfrac{p}{q}-2}\chi_{(t/2,\infty)}(s)\mu_\alpha(B\cap\{g^q>s\})\, ds\, dt\\
&=C_2\int_B g^p\, d\mu_\alpha\\
&\qquad\qquad
+C\int_0^\infty\left(\int_0^\infty t^{\tfrac{p}{q}-2}\chi_{(0,2s)}(t)\, dt\right)\mu_\alpha(B\cap\{g^q>s\})\, ds\\
&=C_2\int_Bg^p\, d\mu_\alpha+C_3\int_0^\infty s^{\tfrac{p}{q}-1} \mu_\alpha(B\cap\{g^q>s\})\, ds,
\end{align*}
where we also used the Cavalieri principle and Tonelli's theorem in obtaining the last few lines above. By
the Cavalieri principle again, we obtain the desired result.
\end{proof}
Now we are ready to prove Theorem~\ref{thm:main-trace}. For the convenience of the reader, we state an expanded
version of this theorem now.
\begin{thm}\label{thm:Trace}
Let $E$ be an Ahlfors $Q$-regular compact subset of $\tfrac12B$ where $B$ is a ball in $\mathbb{R}^n$. Let $p>1$ and
$0<\theta<1$ be such that $p\theta<1$. Let $\alpha\le 0$ be such that $\alpha+n-Q>0$ and
$\theta<1-\tfrac{\alpha+n-Q}{p}$.
Then there exists $C\ge 1$ and a linear trace operator
\[
T:N^{1,p}(B,\mu_\alpha)\to B^\theta_{p,p}(E,\nu)
\]
with
\[
\Vert Tu\Vert_{B^\theta_{p,p}(E,\nu)}\le C\, \Vert |\nabla u| \Vert_{L^p(B,\mu_\alpha)}
\]
and
\[
\Vert Tu\Vert_{L^p(E,\nu)}^p\le C \Vert u\Vert_{N^{1,p}(B,\mu_\alpha)}.
\]
Moreover, if $u\in N^{1,p}(B,\mu_\alpha)$ is Lipschitz continuous in a neighborhood of $E$, then
$Tu=u\vert_E$.
\end{thm}
Note that if $p\theta<1$, then we can always choose $\alpha\le 0$ satisfying the hypotheses of the above theorem.
Moreover, if we only know that there is a fixed $\alpha>Q-n$ such that $\mu_\alpha$ is doubling and supports
a $p$-Poincar\'e inequality for some $p>1$, then the conclusion of the above theorem holds true as long
as there exists $1\le q<p$ such that $\mu_\alpha$ supports a $q$-Poincar\'e inequality and
$\alpha+n-Q<q(1-\theta)$. The support of a $q$-Poincar\'e inequality for some $1\le q<p$ is guaranteed
by the self-improvement property of the Poincar\'e inequality; see~\cite{KZ, HKSTbook}.
\begin{proof}
We first prove the above claim for Lipschitz functions in $N^{1,p}(B,\mu_\alpha)$.
As $\mu_\alpha$ is doubling and supports a $p$-Poincar\'e inequality, we know that Lipschitz functions
are dense in $N^{1,p}(B,\mu_\alpha)$, and hence we get the estimates for all functions in
$N^{1,p}(B,\mu_\alpha)$. So, in the following we will assume that $u$ is Lipschitz continuous. It follows
that every point in $B$ is a $\mu_\alpha$-Lebesgue point of $u$. We denote $g_u:=|\nabla u|$.
Let $x,y\in E$ and set $B_0=B(x,2d(x,y))$, and for
positive integers $k$ we set $B_k=B(x,2^{1-k} d(x,y))$ and $B_{-k}=B(y,2^{1-k}d(x,y))$.
We also set $r_k=\text{rad}(B_k)$ for $k\in\mathbb{Z}$. Then
by the following standard telescoping argument and by the $q$-Poincar\'e inequality, we obtain
\begin{align*}
|u(y)-u(x)| &\le \sum_{k\in\mathbb{Z}}|u_{B_k}-u_{B_{k+1}}|\\
&\le C \sum_{k\in\mathbb{Z}} \vint_{2B_k}|u-u_{2B_k}|\, d\mu_\alpha\\
&\le C \sum_{k\in\mathbb{Z}} r_k\left(\vint_{2B_k} g_u^q\, d\mu_\alpha\right)^{1/q}\\
&= C \sum_{k\in\mathbb{Z}} r_k^{1-\gamma/q}\left( r_k^\gamma\vint_{2B_k} g_u^q\, d\mu_\alpha\right)^{1/q}\\
&= C\, d(x,y)^{1-\gamma/q}
\sum_{k\in\mathbb{Z}} 2^{-|k|(1-\gamma/q)}\left( r_k^\gamma\vint_{2B_k} g_u^q\, d\mu_\alpha\right)^{1/q}.
\end{align*}
By assumption, we have $p\theta<1$. Hence we can choose $\alpha$ with $Q-n<\alpha\le 0$ such that
$p\theta<p-(\alpha+n-Q)$.
Since $\alpha+n-Q>0$, we can choose $\gamma=\alpha+n-Q$ in the above. Then the condition on
$\alpha$ as described above reads as $p\theta<p-\gamma$, and so $0<\theta<1-\gamma/p$. Hence we can
choose $1<q<p$ such that $\theta<1-\gamma/q$, whence by this choice of $q$ we have that
$1-\gamma/q>0$. It follows that $\sum_{k\in\mathbb{Z}}2^{-|k|(1-\gamma/q)}<\infty$, and so
\[
|u(y)-u(x)|\le C\, d(x,y)^{1-\gamma/q}\left[M_\gamma g_u^q(x)^{1/q}+M_\gamma g_u^q(y)^{1/q}\right].
\]
Hence, for $0<\theta<1$, we obtain
\begin{align*}
\frac{|u(y)-u(x)|^p}{d(x,y)^{\theta p}\nu(B(x,d(x,y)))}
&\simeq \frac{|u(y)-u(x)|^p}{d(x,y)^{Q+\theta p}}\\
&\le C\, d(x,y)^{p-\tfrac{\gamma}{q}p-\theta p-Q}\, \left[M_\gamma g_u^q(x)^{p/q}+M_\gamma g_u^q(y)^{p/q}\right].
\end{align*}
Therefore,
\begin{align*}
\Vert u\Vert_{B^\theta_{p,p}(E,\nu)}^p
&\le C \int_E\int_E d(x,y)^{p-\tfrac{\gamma}{q}p-\theta p-Q}\, \left[M_\gamma g_u^q(x)^{p/q}+M_\gamma g_u^q(y)^{p/q}\right]\, d\nu(x)\, d\nu(y)\\
&= C\, [I_1+I_2],
\end{align*}
where
\begin{align*}
I_1&= \int_E\int_E d(x,y)^{p-\tfrac{\gamma}{q}p-\theta p-Q}\, M_\gamma g_u^q(x)^{p/q}\, d\nu(x)\, d\nu(y)\\
I_2&= \int_E\int_E d(x,y)^{p-\tfrac{\gamma}{q}p-\theta p-Q}\, M_\gamma g_u^q(y)^{p/q}\, d\nu(x)\, d\nu(y).
\end{align*}
Thanks to Tonelli's theorem, any estimate we obtain for $I_1$ is valid also for $I_2$, so we consider $I_1$ only next.
Note that for $x\in E$,
\begin{align*}
\int_E d(x,y)^{p-\tfrac{\gamma}{q}p-\theta p-Q}\, d\nu(y)
&=\sum_{j=0}^\infty \int_{B(x,2^{-j})\setminus B(x,2^{-j-1})}d(x,y)^{p-\tfrac{\gamma}{q}p-\theta p-Q}\, d\nu(y)\\
&\le C \sum_{j=0}^\infty 2^{-j[p-\tfrac{\gamma}{q}p-\theta p-Q]}2^{-jQ}
= C \sum_{j=0}^\infty 2^{-jp[1-\tfrac{\gamma}{q}-\theta]}.
\end{align*}
As we had chosen $1<q<p$ such that $\theta<1-\gamma/q$, it follows that
the above series is finite.
Then we get from Lemma~\ref{lem:HLmaximal2} that
\[
I_1\le C\int_EM_\gamma g_u^q(x)^{p/q}\, d\nu(x)\le C\int_Bg_u^p\, d\mu_\alpha.
\]
It then follows that
\[
\Vert u\Vert_{B^\theta_{p,p}(E,\nu)}\le C\, \Vert g_u\Vert_{L^p(B,\mu_\alpha)}=C\, \Vert |\nabla u|\Vert_{L^p(B,\mu_\alpha)}.
\]
In the above computation, if we fix $x\in E$ to be a $\mu_\alpha$--Lebesgue point of $u$
and consider only the balls $B_k=B(x,2^{-k})$ for $k\ge 0$, then we obtain
\begin{align*}
|u(x)|\le |u(x)-u_{B_0}|+|u_{B_0}|
&\le \sum_{k=0}^\infty |u_{B_k}-u_{B_{k+1}}|+C\, \vint_B|u|\, d\mu_\alpha\\
&\le \sum_{k=0}^\infty r_k\left(\vint_{2B_k}g_u^q\, d\mu_\alpha\right)^{1/q}
+C\left(\vint_B|u|^p\, d\mu_\alpha\right)^{1/p}\\
&\le C\sum_{k=0}^\infty r_k^{1-\gamma/q}M_\gamma g_u^q(x)^{1/q}
+C\left(\vint_B|u|^p\, d\mu_\alpha\right)^{1/p}\\
&= C M_\gamma g_u^q(x)^{1/q}
+C\left(\vint_B|u|^p\, d\mu_\alpha\right)^{1/p}.
\end{align*}
Therefore
\[
|u(x)|^p\le C M_\gamma g_u^q(x)^{p/q} + C\vint_B|u|^p\, d\mu_\alpha.
\]
Integrating the above over $E$ with respect to the measure $\nu$ and applying Lemma~\ref{lem:HLmaximal2}
we obtain
\[
\Vert u\Vert_{L^p(E,\nu)}^p\le C\int_B g_u^p\, d\mu_\alpha+C_B \int_B|u|^p\, d\mu_\alpha,
\]
from which the claim now follows.
\end{proof}
\vskip .3cm
\section{Extension of Besov functions are in Sobolev spaces, or how subsets influence their surroundings}
In this section we show that we can extend functions from the Besov class on $E$ to the Newtonian class on
$(B,\mu_\alpha)$ by proving Theorem~\ref{thm:main-extend}.
Since $E$ is compact and $E\subset\tfrac12B$, we can construct a Whitney cover $B_{i,j}$,
$i\in\mathbb{N}$ and $j=1,\cdots, M_i$, of $B\setminus E$.
Such a cover is described in~\cite[Section~2]{HM} and in~\cite[Proposition~4.1.15]{HKSTbook},
where the construction did not need the open set $\Omega$ to
be connected, and so their construction is available in our setting as well.
We can ensure
that with $B_{i,j}=B(x_{i,j},r_{i,j})$ we have $r_{i,j}=\dist(x_{i,j},E)=2^{-i}$ and that
for each $T\ge 1$ there exists $N_T\in\mathbb{N}$ such that for each $i\in\mathbb{N}$,
\begin{equation}\label{eq:bdd-overlap}
\sum_{j=1}^{M_i}\chi_{TB_{i,j}}\le N_T.
\end{equation}
Let $\varphi_{i,j}$ be a Lipschitz partition of unity subordinate to the cover $B_{i,j}$, that is,
each $\varphi_{i,j}$ is $2^i C$--Lipschitz continuous, $0\le \varphi_{i,j}\le 1$, $\text{supp}(\varphi_{i,j})\subset 2B_{i,j}$
and
\[
\sum_{i,j}\varphi_{i,j}=\chi_{B\setminus E}.
\]
Moreover, there exist $N_1, N_2\in\mathbb{N}$ such that if $2B_{i,j}$ and $2B_{m,n}$ intersect, then $|i-m|<N_1$
and there are at most $N_2$ balls $B_{m,n}$ satisfying the above when $i,j$ are fixed. For the convenience of
the reader, we state an expanded version of Theorem~\ref{thm:main-extend} below.
\begin{thm}\label{thm:extend}
With $E$, $B$, $\nu=\mathcal{H}^Q\vert_E$ and $0<Q<n$ as above, let $p>1$ and $0<\theta<1$. We fix $\alpha\le 0$ such that
$\alpha+n-Q>0$ and $\theta\ge 1-\tfrac{\alpha+n-Q}{p}$. Then there is a constant $C\ge 1$ and a
linear extension operator
\[
S:B^\theta_{p,p}(E,\nu)\to N^{1,p}(B,\mu_\alpha)
\]
such that
\[
\int_B |\nabla Su|^p\, d\mu_\alpha\le C\, \Vert u\Vert_{B^\theta_{p,p}(E,\nu)}^p, \qquad
\int_B|Su|^p\, d\mu_\alpha\le C\, \int_E|u|^p\, d\nu.
\]
Moreover, if $u$ is $L$-Lipschitz on $E$, then $Su$ is $CL$-Lipschitz on $B$.
\end{thm}
Note that in the trace theorem, Theorem~\ref{thm:Trace}, we were not able to gain control of
$\int_E|Tu|^p\, d\nu$ solely in terms of $\int_B|u|^p\, d\mu_\alpha$.
The above extension theorem however does allow us these separate controls.
\begin{proof}
From~\cite[Proposition~13.4]{B2S} we know that Lipschitz functions are dense in $B^\theta_{p,p}(E,\nu)$
for each $0<\theta<1$ and $p\ge 1$. We fix our attention on $p>1$ and $0<\theta<1$. We will
first extend Lipschitz functions in $B^\theta_{p,p}(E,\nu)$ to $N^{1,p}(B,\mu_\alpha)$ and use this
to conclude that every function in $B^\theta_{p,p}(E,\nu)$ has an extension lying in $N^{1,p}(B,\mu_\alpha)$.
To this end, let $u\in B^\theta_{p,p}(E,\nu)$ be Lipschitz continuous, and for $x\in B\setminus E$ we set
\[
Su(x)=\sum_{i,j} u_{2B_{i,j}}\, \varphi_{i,j}(x),
\]
where $u_{2B_{i,j}}:=\vint_{2B_{i,j}}u\, d\nu$.
We extend $Su$ to $E$ by setting $Su(x)=u(x)$ when $x\in E$.
If $x\in E$ and $y\in B_{i_0,j_0}$, then
\begin{align*}
|Su(y)-u(x)|&=\bigg\vert \sum_{i,j}[u_{2B_{i,j}}-u(x)]\varphi_{i,j}(y)\bigg\vert\\
&\le \sum_{i,j}\varphi_{i,j}(y)\, \vint_{2B_{i,j}}|u(w)-u(x)|d\nu(w)\\
&\le \sum_{i,j}\varphi_{i,j}(y)\, L\, 2^{1-i}
\le CL\, d(y,x).
\end{align*}
It follows that for each $x\in E$, we have
\[
\limsup_{r\to 0^+}\vint_{B(x,r)\setminus E}|Su(y)-u(x)|^q\, d\mu_\alpha(y)=0
\]
for all $1\le q<\infty$.
If $x,y\in B_{i_0,j_0}$, then by the properties of the Whitney cover listed above,
\begin{align}
|Su(y)-Su(x)|&=\bigg\vert\sum_{i,j}[u_{2B_{i,j}}-u_{2B_{i_0,j_0}}][\varphi_{i,j}(y)-\varphi_{i,j}(x)]\bigg\vert\notag \\
\le &\frac{C\, d(y,x)}{r_{i_0}}\sum_{i,j; 2B_{i,j}\cap B_{i_0,j_0}\ne \emptyset}|u_{2B_{i,j}}-u_{2B_{i_0,j_0}}|\notag \\
\le &\frac{C\, d(y,x)}{r_{i_0}}\sum_{i,j; 2B_{i,j}\cap B_{i_0,j_0}\ne \emptyset}
\vint_{2B_{i,j}}\vint_{2B_{i_0,j_0}}|u(w)-u(v)|\, d\nu(v)\, d\nu(w)\notag\\
\le &\frac{C\, d(y,x)}{r_{i_0}}\vint_{CB_{i_0,j_0}}\vint_{CB_{i_0,j_0}}|u(w)-u(v)|\, d\nu(v)\, d\nu(w)\label{eq:control1}\\
\le &\frac{C\, d(y,x)}{r_{i_0}} 2CL\, r_{i_0}
\le C\, L\, d(x,y). \label{eq:control2}
\end{align}
It follows that $Su$ is $CL$-Lipschitz on $B$ and hence is in $N^{1,p}(B,\mu_\alpha)$. It now only remains to obtain
norm bounds.
Recall from~\cite[Theorem~5.2 and equation~(5.1)]{GKS} that
\begin{equation}\label{eq:Besov-alt-norm}
\Vert u\Vert_{B^\theta_{p,p}(E,\nu)}^p
\simeq \sum_{n=0}^\infty \int_E \vint_{B(x,2^{-n})} \frac{|u(x)-u(y)|^p}{2^{-n\theta p}}\, d\nu(y)\, d\nu(x).
\end{equation}
As $E$ is Ahlfors $Q$-regular for some $Q<n$, it follows that $\mathcal{H}^n(E)=0$, and hence $\mu_\alpha(E)=0$.
Let $z\in B_{i_0,j_0}$. Setting $x=z$ and letting $y\to z$, by applying the H\"older inequality to~\eqref{eq:control1}
we have that
\begin{align*}
\text{Lip}\, Su(z)^p&=\left(\limsup_{y\to z}\frac{|Su(y)-Su(z)|}{d(y,z)}\right)^p\\
&\le \frac{C}{r_{i_0}^p} \vint_{CB_{i_0,j_0}}\vint_{CB_{i_0,j_0}}|u(w)-u(v)|^p\, d\nu(v)\, d\nu(w)\\
&= \frac{C}{2^{-i_0(1-\theta)p}}\vint_{CB_{i_0,j_0}}\vint_{CB_{i_0,j_0}}\frac{|u(w)-u(v)|^p}{2^{-i_0\theta p}}\, d\nu(w)\, d\nu(v)\\
&\le \frac{C}{2^{-i_0(Q+(1-\theta)p)}}\int_{CB_{i_0,j_0}}\vint_{B(v,2^{k_0-i_0})}\frac{|u(w)-u(v)|^p}{2^{-i_0\theta p}}\, d\nu(w)\, d\nu(v),
\end{align*}
where $k_0$ is the smallest positive integer such that $2^{k_0}\ge 2C$; note that $k_0$ is independent of $i_0,j_0, v$.
Here we also used the fact that $r_{i_0}\simeq \dist(z,E)\simeq 2^{-i_0}$. Integrating the above over $B_{i_0,j_0}$, we obtain
\begin{align*}
\int_{B_{i_0,j_0}}&\text{Lip}\, Su(z)^p\, d\mu_\alpha(z)=\int_{B_{i_0,j_0}}|\nabla Su(z)|^p\, d\mu_\alpha(z)\\
&\le C\, 2^{-i_0(\alpha +n-Q-(1-\theta)p)} \int_{CB_{i_0,j_0}}\vint_{B(v,2^{k_0-i_0})}\frac{|u(w)-u(v)|^p}{2^{-i_0\theta p}}\, d\nu(w)\, d\nu(v).
\end{align*}
Summing the above over $j_0=1,\cdots, M_{i_0}$ and noting by~\eqref{eq:bdd-overlap}
that $\sum_{j=1}^{M_{i_0}}\chi_{CB_{i_0,j}}\le N_C$ with $E\subset\bigcup_{j=1}^{M_{i_0}}CB_{i_0,j}$,
and then summing over $i_0$, we obtain
\begin{align*}
\int_B&\text{Lip}\, Su(z)^p\, d\mu_\alpha(z)\\
&\le C \sum_{i_0=0}^\infty 2^{-i_0(\alpha +n-Q-(1-\theta)p)}
\int_E \vint_{B(v,2^{k_0-i_0})}\frac{|u(w)-u(v)|^p}{2^{-i_0\theta p}}\, d\nu(w)\, d\nu(v)\\
&\le \sum_{i_0=0}^\infty \int_E \vint_{B(v,2^{k_0-i_0})}\frac{|u(w)-u(v)|^p}{2^{-i_0\theta p}}\, d\nu(w)\, d\nu(v)
\end{align*}
provided that $\alpha+n-Q-(1-\theta)p\ge 0$. So if $\alpha\le 0$ is chosen such that $\alpha+n-Q>0$ and
\[
\theta\ge 1-\frac{\alpha+n-Q}{p},
\]
then by~\eqref{eq:Besov-alt-norm} we have that
\[
\int_B|\nabla Su|^p\, d\mu_\alpha=\int_B (\text{Lip}\, Su)^p\, d\mu_\alpha\le C\, \Vert u\Vert_{B^\theta_{p,p}(E,\nu)}^p.
\]
To complete the argument, we next obtain control of $\Vert Su\Vert_{L^p(B,\mu_\alpha)}$. For $x\in B_{i_0,j_0}$,
\begin{align*}
|Su(x)|&=\bigg\vert\sum_{i,j: 2B_{i,j}\cap B_{i_0,j_0}\ne \emptyset}\varphi_{i,j}(x)\vint_{2B_{i,j}}u(y)\, d\nu(y)\bigg\vert\\
&\le C \vint_{C_0B_{i_0,j_0}}|u(y)|\, d\nu(y)\\
&\le C \left(\vint_{C_0B_{i_0,j_0}}|u(y)|^p\, d\nu(y)\right)^{1/p}.
\end{align*}
Therefore
\begin{align*}
\int_{B_{i_0,j_0}}|Su(x)|^p\, d\mu_\alpha(x)
&\le C\, \mu_\alpha(B_{i_0,j_0}) \vint_{C_0B_{i_0,j_0}}|u(y)|^p\, d\nu(y)\\
&\le C\, 2^{-i_0(\alpha+n-Q)}\int_{C_0B_{i_0,j_0}}|u(y)|^p\, d\nu(y).
\end{align*}
As before, summing over $j_0=1,\cdots, M_{i_0}$ and then over $i_0$ gives
\[
\int_B|Su(x)|^p\, d\mu_\alpha(x)\le C \sum_{i_0=0}^\infty 2^{-i_0(\alpha+n-Q)} \int_E |u(y)|^p\, d\nu(y).
\]
As $\alpha+n-Q>0$, it follows that
\[
\int_B|Su(x)|^p\, d\mu_\alpha(x)\le C \int_E |u(y)|^p\, d\nu(y)
\]
as desired.
\end{proof}
\section{Introduction} \label{sec:intro}
Linear waves in an inviscid, perfectly-conducting fluid
permeated by a uniform magnetic field $\mib{B}_0$
in a frame rotating with rate $\mib{\Omega}$ satisfy the dispersion relation \citep{L54}
\begin{equation}
\omega = \pm \frac{\mib{\Omega}\cdot\mib{k}
\pm \sqrt{(\mib{\Omega}\cdot\mib{k})^2
+ |\mib{k}|^2 (\mib{B}_0 \cdot\mib{k})^2/\rho\mu_0} }
{|\mib{k}|} \; , \label{eq:MC_dispersion_relation_general}
\end{equation}
where $\omega$ is the frequency, $\mib{k}$ is the wavenumber vector,
$\rho$ is the density, and $\mu_0$ the magnetic permeability.
This yields a wide variety of magnetic Coriolis (MC) waves,
including fast (modified inertial) and slow (magnetostrophic) waves; the latter being
unique to rotating magnetohydrodynamics (MHD).
In this manuscript
we consider magnetostrophic waves
for which $(\mib{\Omega}\cdot\mib{k})^2/|\mib{k}|^2 \gg (\mib{B}_0\cdot\mib{k})^2/(\rho\mu_0)$:
In particular, one class which has the relation
\begin{equation}
\omega \approx - \frac{ (\mib{B}_0 \cdot \mib{k})^2 |\mib{k}|^2 }{ \rho\mu_0\beta k } \; .
\label{eq:slowMR}
\end{equation}
Here $\beta$ denotes the beta parameter, $k$ is the azimuthal wavenumber, and the minus sign
indicates waves travel opposite to the hydrodynamic Rossby wave, $\omega = \beta k/|\mib{k}|^2$.
This class is sometimes referred to as slow hydromagnetic-planetary or magnetic-Rossby (MR) waves \citep{H66}.
Relation (\ref{eq:slowMR}) indicates they are dispersive, and depend on the background field and the wavelength;
these waves have been suggested to be important
in Earth's fluid core and for the geomagnetic westward drift \citep[e.g.][]{H66,M67,CFF14,HJT15,NSKHH20}.
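As a concrete planar sketch, take a uniform field $\mib{B}_0 = B_0 \hat{\mib{x}}$ along the zonal direction and $\mib{k} = (k,l)$; then (\ref{eq:slowMR}) gives
\[
\omega = -\frac{B_0^2}{\rho\mu_0\beta}\, k\,(k^2+l^2) \; , \qquad
\frac{\partial\omega}{\partial k} = -\frac{B_0^2}{\rho\mu_0\beta}\,(3k^2+l^2) \; ,
\]
so both phase and group propagation are retrograde, and shorter waves travel faster, in contrast to the hydrodynamic Rossby wave.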
Other classes of MC waves include torsional Alfv\'{e}n waves
for which $\mib{\Omega}\cdot\mib{k} \approx 0$
and $(\mib{\Omega}\cdot\mib{k})^2/|\mib{k}|^2 \ll (\mib{B}_0\cdot\mib{k})^2/(\rho\mu_0)$
\citep{Bra70,RA12,GJF15}.
More recently, inertial-Alfv\'{e}n waves \citep{BD16} have been claimed
to account for the geomagnetic jerks \citep{AF19}.
Laboratory experiments have identified several types of magnetostrophic waves
in spherical Couette flows with a dipolar magnetic field being applied \citep{SAetal08}.
We note the wave dynamics relies on both the direction and the morphology
of the background magnetic field,
as illustrated in the simple planar model (\ref{eq:slowMR}).
Here we focus on the problem with a purely azimuthal basic field;
for this case (\ref{eq:slowMR}) reduces to $\omega \propto k |\mib{k}|^2$,
so the frequency is linear in the azimuthal wavenumber and cubic in the total wavenumber.
The linear theory for MC waves in stably-stratified, thin layers is well-studied
\citep[e.g.][]{Bra67,G00,ZOBS07,MJT17}
as observational exploration of the geomagnetic field and the solar corona has
developed to reveal periodic patterns \citep{CAM15,MCML17}.
Stratification in general introduces a correction term to
the dispersion relations of MC waves,
whilst in a thin layer the direction of
travel is usually reversed;
however, this is not always true in spherical geometries.
The unstratified thick shell problem considered here
is sufficient to provide some fundamental
understanding of the nonlinear problem.
Theoretical investigation is expanding
to consider their nonlinear properties such as
turbulence \citep{TDH07} and
triadic resonances \citep{RR15}.
\citet{L17} found a couple of cases in which
nonlinear equatorial waves in the shallow water MHD should be governed
by Korteweg-de Vries (KdV) equations and so behave like solitary waves.
They were mostly fast MR modes, recovering the equatorial Rossby wave soliton \citep{Boy80}
in the nonmagnetic limit,
but he reported one case in which the wave would slowly travel in the opposite azimuthal direction.
\citet{H19} investigated magnetostrophic MR waves in a Cartesian quasi-geostrophic (QG) model.
The slow, weakly-nonlinear waves
led to evolution obeying the KdV equation unless
the basic state -- all the magnetic field, topography, and zonal flow -- is uniform.
Slow MR waves have been seen in spherical dynamo DNS
travelling with crests/troughs that were isolated and sharp,
unlike the continuous wave trains
that might be expected \citep{HJT15,HTJ18}.
Hydrodynamic Rossby wave solitons have been extensively studied,
motivated by atmosphere and ocean dynamics \citep[e.g.][]{C71,R77,Boy80}.
In the long wave limit it has been demonstrated that the QG soliton relies on the presence of a shear in the basic flow or topography.
\citet{R77} further analysed nonlinear critical layers arising from singularities
as the wave speed approaches the basic flow speed,
and discussed their relevance for the persistence of Jupiter's Great Red Spot.
The present manuscript demonstrates that weakly nonlinear slow MR waves in spherical containers
yield soliton solutions.
We adopt simple QG MHD models
and asymptotically derive the evolution equation for the long wave
when the basic magnetic field and flow are both azimuthal.
We demonstrate:
(i) the amplitude at the first order is described by the KdV equation
for the chosen basic states,
(ii) the problem is dictated by an ODE, which
has no singularities as the wave speed approaches the basic flow speed,
and (iii) the single soliton (solitary wave) solution to the KdV equation
implies an isolated eddy that progresses in a stable permanent form
on magnetostrophic timescales.
\section{Theoretical foundations}
We consider an inviscid, incompressible, ideal quasi-geostrophic (QG) model of electrically conducting fluid
within a rapidly rotating shell,
bounded by inner and outer spheres of radii $r_\tx{i}$ and $r_\tx{o}$, respectively
\citep[e.g.][]{Bus70,GJ06}.
We use cylindrical coordinates $(s,\varphi,z)$ with rotation $\Omega \hat{\mib{z}}$.
For rapid rotation, the incompressible horizontal QG fluid motion can be expressed
as $\mib{u} \approx \nabla \times \psi (s, \varphi) \hat{\mib{z}}$
with $\psi$ a streamfunction,
so it is independent of $z$.
When the magnetic field is not so strong as to violate the QG approximation,
we further assume the magnetic field may be written as
$\mib{B} \approx \nabla \times g (s, \varphi) \hat{\mib{z}}$
with $g$ being the potential
\citep[e.g.][]{Bus76,AJPJ00,TDH07,CFF14}.
No penetration on
the spherical boundaries
at $z=\pm H = \pm \sqrt{r_\tx{o}^2 - s^2}$
enables us to represent the Coriolis term of the axial vorticity equation
in terms of the topography-induced beta parameter. The equations for the $z$-components of the vorticity
and the magnetic potential in dimensionless form are then:
\begin{eqnarray}
\frac{\partial}{\partial t} \Delta_\tx{H} \psi
- \mathcal{J} [ \psi, \Delta_\tx{H} \psi ]
- \frac{1}{Le^2}
\frac{\beta}{s} \frac{\partial \psi}{\partial \varphi}
&=& - \frac{1}{Le^2}
\mathcal{J} [ g, \Delta_\tx{H} g ] \\
\mbox{and\ } \quad
\frac{\partial}{\partial t} g
&=& \mathcal{J} [ \psi, g ] \; , \label{eq:current_sphere}
\end{eqnarray}
where $\Delta_\tx{H} = (1/s) \partial/\partial s (s \partial/\partial s) + ( 1/s^2 ) \partial^2/\partial \varphi^2$,
and
$\mathcal{J} [f_1,f_2]
= ( \partial f_1/\partial s \; \partial f_2/\partial \varphi
-\partial f_2/\partial s \; \partial f_1/\partial \varphi )/s$
for any functions $f_1$ and $f_2$.
Here the length, the magnetic field, and the velocity
are, respectively, scaled by the radius of the outer sphere $r_\tx{o}$,
the mean field strength $B_0$
and the MC wave speed
$B_0^2/(2\Omega r_\tx{o} \rho \mu_0) = c_\tx{M}^2/c_\tx{C}$;
$c_\tx{M}^2 = B_0^2/(\rho\mu_0)$ and $c_\tx{C} = 2\Omega r_\tx{o}$.
The Lehnert number $Le
= c_\tx{M}/c_\tx{C}$, whilst the beta parameter is given by $\beta = s/(1 - s^2)$.
Impermeable boundary conditions are applied so that
\begin{eqnarray}
\frac{1}{s} \frac{\partial \psi}{\partial \varphi}= 0
\quad &\mbox{at\ }& \quad s = \eta, 1 ,
\label{eq:bc_sphere}
\end{eqnarray}
where the aspect ratio $\eta = r_\tx{i}/r_\tx{o}$.
As $\beta \to \infty$ at $s=1$, the governing equations are singular there; these boundary conditions ensure that the regular solution is selected.
Of particular interest is the regime when $Le^{-1}$ is large.
Taking the limit leads to a balance between the vortex stretching and the Lorentz term in the vorticity equation:
\begin{eqnarray}
\beta \frac{1}{s} \frac{\partial \psi}{\partial \varphi}
&=& \mathcal{J} [ g, \Delta_\tx{H} g ]
\; , \label{eq:vorticity_sphere_magnetostrophic}
\end{eqnarray}
whilst (\ref{eq:current_sphere}) retains the same form.
The nonlinear problem thus has two source terms acting on the magnetostrophic wave;
below we solve the weakly nonlinear cases asymptotically.
To seek solitary long-wave solutions
we introduce slow variables with a small parameter $\epsilon$ ($\ll 1)$ and a real constant $c$:
\begin{equation}
\tau = \epsilon^{3/2} t \;, \qquad
\zeta = \epsilon^{1/2} (\varphi - c t) \; .
\end{equation}
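Under this change of variables the derivatives transform, by the chain rule, as
\begin{equation}
\frac{\partial}{\partial t} = \epsilon^{3/2} \frac{\partial}{\partial \tau}
- c \, \epsilon^{1/2} \frac{\partial}{\partial \zeta} \; , \qquad
\frac{\partial}{\partial \varphi} = \epsilon^{1/2} \frac{\partial}{\partial \zeta} \; ,
\end{equation}
so the slow time derivative enters one order in $\epsilon$ later than advection at the wave speed $c$.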
Note that this assumes a long spatial scale in the azimuthal direction compared with the radial direction, which is reasonable for small azimuthal wavenumbers $m$. We then expand the variables in powers of $\epsilon$ as
\begin{eqnarray}
\psi = \psi_0 (s) + \epsilon \psi_1 (s,\zeta,\tau)
+ ... \; , \quad
g = g_0 (s) + \epsilon g_1 (s,\zeta,\tau)
+ ... \;,
\end{eqnarray}
for the basic state satisfying
\begin{equation}
- D \psi_0
= \overline{U}(s) \; , \quad
- D g_0
= \overline{B}(s),
\end{equation}
where $D = d/ds$.
At zeroth order the equations of vorticity (\ref{eq:vorticity_sphere_magnetostrophic})
and of electric potential (\ref{eq:current_sphere}), and the boundary condition (\ref{eq:bc_sphere}) are all trivial.
At $\mathcal{O}(\epsilon)$,
(\ref{eq:vorticity_sphere_magnetostrophic}) and (\ref{eq:current_sphere}) become
\begin{equation}
\beta \frac{\partial \psi_1}{\partial \zeta}
= - \left[ \overline{B} \mathcal{D}^2 - \overline{J} \right] \frac{\partial g_1}{\partial \zeta},
\qquad \textrm{where} \qquad
\mathcal{D}^2 = \frac{1}{s} \frac{\partial}{\partial s} s \frac{\partial}{\partial s} \quad \textrm{and} \quad
\overline{J} = D \frac{1}{s} D (s \overline{B})
\label{eq:psi1_sphere}
\end{equation}
\begin{equation}
\textrm{and} \qquad
\left( \frac{\overline{U}}{s} - c \right)
\frac{\partial g_1}{\partial \zeta}
= \frac{\overline{B}}{s}
\frac{\partial \psi_1}{\partial \zeta}
\; , \label{eq:g1_sphere}
\end{equation}
respectively.
Substituting (\ref{eq:psi1_sphere}) into (\ref{eq:g1_sphere}) gives
a homogeneous PDE with respect to $g_1$:
\begin{equation}
\mathcal{L} \frac{\partial g_1}{\partial \zeta} \equiv
\left\{
\frac{\overline{B}}{ \beta s} \left[ \overline{B} \mathcal{D}^2
- \overline{J} \right]
+ \left( \frac{\overline{U} }{s} - c \right)
\right\} \frac{\partial g_1}{\partial \zeta}
= 0
\label{eq:g1_pde_sphere}
\end{equation}
where $\mathcal{L}$ denotes the linear differential operator
built from $s$, $\partial/\partial s$ (or $D$), $\overline{B}$, $\beta$, $\overline{U}$, and $c$.
Inserting the boundary conditions (\ref{eq:bc_sphere}) at this order into (\ref{eq:g1_sphere}) yields
\begin{eqnarray}
\frac{\partial g_1}{\partial \zeta}
= 0
\quad \mbox{at\ } \quad s = \eta , 1 \; .
\end{eqnarray}
We then seek a solution in the form of $g_1 = \Phi(s) G(\zeta,\tau)$, so that
\begin{equation}
\mathcal{L} \Phi = 0
\qquad \mbox{and\ } \qquad
\Phi = 0 \quad\mbox{at\ }\quad s = \eta, 1
\; .
\label{eq:g1_ode_sphere}
\end{equation}
Now the linear operator $\mathcal{L}$ is the ordinary differential operator
with the partial derivatives with respect to $s$ replaced by $D$.
Given a basic state,
the ODE (\ref{eq:g1_ode_sphere}) together with the boundary conditions is
an eigenvalue problem to determine the eigenfunction $\Phi$ with eigenvalue $c$;
it can have many eigensolutions.
We note that the second-order ODE (\ref{eq:g1_ode_sphere}) remains non-singular
as $\overline{U}/s \rightarrow c$,
but not as $\overline{B}^2/\beta \rightarrow 0$ unless $s = 0$.
Below we concentrate on cases in which (\ref{eq:g1_ode_sphere}) has no internal singularities,
i.e. there is a discrete spectrum.
We consider cases where the $z$-averaged
toroidal magnetic fields do not pass through zero
\revthree{(e.g. figure 3 of \citet{SJNF17}; figures 1-2 of \citet{HTJ18})}.
We proceed to the next order to obtain the amplitude function.
Eqs.~(\ref{eq:vorticity_sphere_magnetostrophic}) and (\ref{eq:current_sphere}) at $\mathcal{O}(\epsilon^2)$
yield
\begin{equation}
\beta \frac{\partial \psi_2}{\partial \zeta}
= - \left[ \overline{B} \mathcal{D}^2 - \overline{J}
\right] \frac{\partial g_2}{\partial \zeta}
- \frac{\overline{B}}{s^2} \frac{\partial^3 g_1}{\partial \zeta^3}
+ \left( \frac{\partial g_1}{\partial s} \frac{\partial}{\partial \zeta}
- \frac{\partial g_1}{\partial \zeta} \frac{\partial}{\partial s}
\right) \mathcal{D}^2 g_1
\label{eq:psi2_sphere}
\end{equation}
\begin{equation}
\textrm{and} \qquad
\left( \frac{\overline{U}}{s} - c \right) \frac{\partial g_2}{\partial \zeta}
- \frac{\overline{B}}{s} \frac{\partial \psi_2}{\partial \zeta}
= - \frac{\partial g_1}{\partial \tau}
+ \frac{1}{s} \left( \frac{\partial \psi_1}{\partial s} \frac{\partial g_1}{\partial \zeta}
- \frac{\partial \psi_1}{\partial \zeta}\frac{\partial g_1}{\partial s}
\right)
\; . \qquad \label{eq:g2_sphere}
\end{equation}
Eliminating $\psi_2$ using (\ref{eq:psi2_sphere}) and $\psi_1$ using (\ref{eq:psi1_sphere}),
(\ref{eq:g2_sphere}) becomes the inhomogeneous PDE
\begin{eqnarray}
\mathcal{L} \frac{\partial g_2}{\partial \zeta}
&=& - \frac{\overline{B}^2}{s^3 \beta} \frac{\partial^3 G}{\partial \zeta^3}
\Phi
- \frac{\partial G}{\partial \tau} \Phi \nonumber \\
&+& G \frac{\partial G}{\partial \zeta} \left\{ \frac{2\overline{B}}{\beta s}
\left[ (D \Phi) D^2 \Phi - \Phi D (D^2 \Phi) \right]
- \frac{\Phi D^2 \Phi}{s} D \left(\frac{\overline{B}}{\beta} \right)
+ \frac{\Phi^2}{s} D \left(\frac{\overline{J}}{\beta} \right)
\right\}
\;
\quad \qquad \label{eq:g2_pde_sphere}
\end{eqnarray}
where $D^2 = (1/s) D s D$. The boundary conditions here are
\begin{equation}
\frac{\partial g_2}{\partial \zeta} = 0
\quad \mbox{at\ } \quad s = \eta , 1 \; .
\end{equation}
The adjoint linear problem corresponding to (\ref{eq:g1_pde_sphere}) is
\begin{equation}
\mathcal{L}^\dag \Phi^\dag \equiv
\left\{ \left[
D^2 \overline{B}
- \overline{J} \right] \frac{\overline{B}}{\beta s }
+ \left( \frac{\overline{U}}{s} - c \right) \right\} \Phi^\dag = 0
\; . \label{eq:g1_ode_adjoint_sphere}
\end{equation}
The adjoint boundary conditions are
\begin{equation}
\frac{\overline{B}^2}{s\beta} \Phi^\dag = 0
\quad \mbox{at\ } \quad s = \eta , 1 \; .
\label{eq:g1_bc_adjoint_sphere}
\end{equation}
Note that the substitution $ \overline{B}^2 \Phi^\dag / s \beta = \Phi$ reduces the adjoint problem to the
ordinary linear problem (\ref{eq:g1_pde_sphere}) so, provided $\overline{B}^2 \Phi^\dag /s\beta$ is non-zero
in the sphere, the adjoint eigenfunction $\Phi^\dag$ can simply be found by dividing the solution of
(\ref{eq:g1_pde_sphere})
by $\overline{B}^2/s\beta$.
The solvability condition to (\ref{eq:g2_pde_sphere}) is thus given by
\begin{equation}
\frac{\partial G}{\partial \tau}
+ \alpha \; G \frac{\partial G}{\partial \zeta}
+ \gamma \; \frac{\partial^3 G}{\partial \zeta^3} = 0,
\label{eq:KdV}
\end{equation}
where $\alpha = \alpha_0/\delta_0$, $\gamma = \gamma_0/\delta_0$,
\begin{eqnarray}
&&\alpha_0 = \int _{\eta}^1 \Phi^\dag \left\{ \frac{2\overline{B}}{\beta}
\left[ \Phi D (D^2 \Phi) - (D \Phi) D^2 \Phi \right]
+ \Phi (D^2 \Phi) D \left(\frac{\overline{B}}{\beta} \right)
- \Phi^2 D \left(\frac{\overline{J}}{\beta} \right)
\right\} \,ds
, \nonumber \\
&&\gamma_0 =
\int_{\eta}^{1} \Phi^\dag \frac{\overline{B}^2}{s^2 \beta} \Phi
\, ds
,
\quad \mbox{and\ } \quad
\delta_0 = \int_{\eta}^{1} {\Phi^\dag \Phi} \ s \, ds . \qquad \quad
\label{eq:g2_KdV_sphere}
\end{eqnarray}
Eq.~(\ref{eq:KdV}) is the Korteweg-de Vries equation
if the coefficients, $\alpha$ and $\gamma$, are both nonzero.
In the following section we examine the coefficients
for different choices of the basic state.
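For numerical basic states the integrals in (\ref{eq:g2_KdV_sphere}) can be evaluated by standard quadrature once the eigensolutions are known on a radial grid. The following Python fragment is a minimal sketch of such an evaluation (our computations below are Matlab-based); it assumes $\Phi$, $\Phi^\dag$, $\overline{B}$, $\overline{J}$ and $\beta$ are supplied as arrays on a grid \texttt{s}:
\begin{verbatim}
import numpy as np
from scipy.integrate import simpson

def kdv_coefficients(s, Phi, Phid, B, J, beta):
    # D = d/ds and D^2 = (1/s) D s D, here by finite differences
    D  = lambda f: np.gradient(f, s)
    D2 = lambda f: D(s * D(f)) / s
    nonlin = (2*B/beta * (Phi*D(D2(Phi)) - D(Phi)*D2(Phi))
              + Phi*D2(Phi)*D(B/beta) - Phi**2 * D(J/beta))
    alpha0 = simpson(Phid * nonlin, x=s)
    gamma0 = simpson(Phid * Phi * B**2/(s**2 * beta), x=s)
    delta0 = simpson(Phid * Phi * s, x=s)
    return alpha0/delta0, gamma0/delta0, delta0
\end{verbatim}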
We note that the presence of $\overline{U}$ does not directly impact either $\alpha$ or $\gamma$.
It does, however, dictate $\Phi$ and $\Phi^\dag$ through the linear problems
at $\mathcal{O}(\epsilon)$,
and may thereby contribute to the terms at $\mathcal{O}(\epsilon^2)$.
This is in contrast with the hydrodynamic case \citep[e.g.][]{R77},
where the basic flow enters the nonlinear term at $\mathcal{O}(\epsilon^2)$ too.
The mean-flow effect on the magnetostrophic wave arises from the equation for the
magnetic potential (\ref{eq:current_sphere}).
Solutions to (\ref{eq:KdV}) may take the form of solitary (single or multiple soliton),
cnoidal, similarity, and rational waves \citep[e.g.][]{W74,DJ89}.
For instance, for a single soliton the asymptotic solution up to $\mathcal{O}(\epsilon)$ is
\begin{eqnarray}
g (s,\varphi, t)
&
= -\int_{\eta}^{s} \overline{B} ds + \epsilon \; \mathrm{sgn}(\alpha \gamma) \;
\Phi \;{\mathrm{sech}^2 F},
\label{eq:single-soliton-g_sphere} \\
\psi (s,\varphi, t)
&
= -\int_{\eta}^{s} \overline{U} ds
- \epsilon \; \mathrm{sgn}(\alpha \gamma) \;
\left( \frac{\overline{B}}{\beta} D^2 \Phi - \frac{\overline{J}}{\beta} \Phi \right) \;
{\mathrm{sech}^2 F},
\label{eq:single-soliton-psi_sphere}
\end{eqnarray}
where
\begin{equation}
F (\varphi, t)
= \sqrt{ \frac{\alpha }{12 \gamma} \mathrm{sgn}(\alpha \gamma) }
\left[ \epsilon^{1/2} (\varphi -ct)
- \epsilon^{3/2} \; \mathrm{sgn}(\alpha \gamma)
\; \frac{\alpha t}{3}
\right] \; .
\end{equation}
This is an eddy with solitary character in azimuth,
riding on the basic state at close to the linear wave speed.
The finite-amplitude effect $\alpha$ accelerates the retrograde propagation
if $\gamma < 0$, but decelerates it when $\gamma > 0$.
The characteristic waveform is clearly visible in the magnetic potential.
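The travelling-wave balance is easily verified: writing $G = a \, \mathrm{sech}^2 \kappa (\zeta - v \tau)$ with $a = \mathrm{sgn}(\alpha\gamma)$, $\kappa^2 = \alpha a / 12\gamma$ and $v = \alpha a / 3$, substitution into (\ref{eq:KdV}) makes the residual vanish. A minimal Python check, using the $n=1$ Malkus-field values of Table \ref{table:cases_spheres} for illustration:
\begin{verbatim}
import numpy as np

alpha, gamma = -12.854, 0.87465
a = np.sign(alpha * gamma)               # soliton amplitude
kap = np.sqrt(alpha * a / (12 * gamma))  # inverse width
v = alpha * a / 3                        # finite-amplitude speed shift

z = np.linspace(-20, 20, 4001); dz = z[1] - z[0]
G = a / np.cosh(kap * z)**2
Gz = np.gradient(G, dz)
Gzzz = np.gradient(np.gradient(Gz, dz), dz)
# G_tau = -v G_zeta for a steadily travelling wave:
res = -v*Gz + alpha*G*Gz + gamma*Gzzz
print(np.max(np.abs(res)))   # small; limited by finite differences
\end{verbatim}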
\section{Illustrative examples}
We solve the eigenvalue problem (\ref{eq:g1_ode_sphere})
and the adjoint problem (\ref{eq:g1_ode_adjoint_sphere})-(\ref{eq:g1_bc_adjoint_sphere})
for different basic states
and calculate the respective coefficients of the evolution equation (\ref{eq:KdV})
in a spherical cavity,
with $\eta = 0.35$.
We consider \revthree{three cases investigated in \citet{CFF14}};
the first has a $\overline{B}$ that is a linearly increasing function of $s$
(referred to as a Malkus field hereafter),
the second $\overline{B}$ is inversely proportional to $s$ (an electrical-wire field)\revthree{,
and the third is $(3/2) \cos{\{ \pi(3/2 - 50 s/19) \} } + 2$,
which was adopted by \citet{CFF14} to model a profile of the radial magnetic field $B_s$ within Earth's core
(a CFF field)}.
For
\revthree{the Malkus and wire}
fields the terms $\overline{J}$
in (\ref{eq:g1_ode_sphere}), (\ref{eq:g1_ode_adjoint_sphere}) and (\ref{eq:g2_KdV_sphere})
all vanish\revthree{, whereas this is not the case for the CFF field}.
The Malkus field
case has been extensively studied in the literature \citep[e.g.][]{M67,RL79,ZLS03,MJT17}.
We also consider the inclusion of a basic zonal flow $\overline{U}$
that is prograde with either a linear or quadratic dependence on $s$.
Table \ref{table:cases_spheres} summarises the results,
listing the eigenvalue $\lambda = \sqrt{|c|}/2$ (see below) and $c$ for the $n$-th mode,
the coefficients $\alpha$, $\gamma$, and $\delta_0$ as calculated from the eigenfunction $\Phi$,
the adjoint eigensolution $\Phi^\dag$ and (\ref{eq:g2_KdV_sphere}),
and whether (and if so, at which $s$) the wave speed $c$ matches the basic angular velocity $\overline{U}/s$.
Here the $n$-th mode has $(n-1)$ zeros within the explored interval.
Negative values of $c$ indicate retrograde waves.
More notably, in all cases we obtain
nonzero $\alpha$ and $\gamma$ for all $n$ examined, and so the KdV equation is appropriate.
\revtwo{The ratio $|\alpha/\gamma|$ and the signs of $\alpha$ and $\gamma$ characterise the solitons.}
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{ccc cccccc}
$\overline{B}$ & $\overline{U}$ & $n$ & $\lambda$ & $c$ & $\alpha$ & $\gamma$ & $\delta_0 \times 10^{2}$ & $s$ at which $c= \overline{U}/s$ \\[3pt]
$s$ & 0 & 1 & ~1.56402 & ~~-9.7847 & -12.854~ & 0.87465 & 4.9020~~ & --- \\
& & 2 & ~2.88117 & ~-33.2045 & -14.639~ & 1.0480~ & 0.79920~ & --- \\
& & 3 & ~4.18526 & ~-70.0655 & -26.422~ & 1.1156~ & 0.26204~ & --- \\
$1/s$& 0 & 1 & ~2.34412 & ~-21.9795 & -36.930~ & 1.2464~ & 0.92993~ & --- \\
& & 2 & ~4.41698 & ~-78.0389 & -31.920~ & 2.1442~ & 0.14054~ & --- \\
& & 3 & ~6.47665 & -167.788~ & -70.056~ & 2.8417~ & 0.044739 & --- \\
$^{\circ}s$ & 0 & 1 & & ~~-9.7847 & -12.854~ & 0.87465 & 4.9023~~ & --- \\
$^\circ 1/s$& 0 & 1 & & ~-21.9795 & -36.865~ & 1.2464~ & 0.92800~ & --- \\
\revthree{
$^\circ$CFF}
& 0 & 1 & & ~-11.0427 & -11.493~ & 2.8531~ & 0.51035~ & --- \\
& & 2 & & ~-32.2790 & -19.611~ & 4.7250~ & 0.12427~ & --- \\
& & 3 & & ~-71.6553 & -43.375~ & 4.4968~ & 0.053649 & --- \\
$^{\circ}s$ &$s$ & 1 & & ~~-8.7847 & -12.854~ & 0.87465 & 4.9023~~ & none \\
$^\circ 1/s$&$s$ & 1 & & ~-20.9795 & -36.865~ & 1.2464~ & 0.92800~ & none \\
\revthree{
$^\circ$CFF}
&$s$ & 1 & & ~-10.0427 & -11.493~ & 2.8531~ & 0.51035~ & none \\
$^{\circ}s$ &$4s(1-s)$
& 1 & & ~~-8.8379 & ~-9.5075 & 0.90339 & 4.9193~~ & none \\
$^\circ 1/s$&$4s(1-s)$
& 1 & & ~-21.4523 & -35.429~ & 1.2659~ & 0.92748~ & none \\
\revthree{
$^\circ$CFF} &$4s(1-s)$
& 1 & & ~-10.6163 & ~-9.8441 & 2.9722~ & 0.50834~ & none \\
$^{\circ}s$ &$80s(1-s)$
& 1 & & ~~12.9242 & ~31.273~ & 1.4622~ & 4.0079~~ & 0.8384 \\
$^\circ 1/s$&$320s(1-s)$
& 1 & & ~~33.1890 & ~10.307~ & 3.4093~ & 1.3187~~ & 0.8963 \\
\revthree{
$^\circ$CFF} &$320s(1-s)$
& 1 & & ~~44.4360 & ~41.749~ & 13.789~~ & 0.67936~ & 0.8611 \\
\end{tabular}
\caption{Values of $\lambda$, $c$, $\alpha$, $\gamma$, and $\delta_0$ of the $n$-th mode for the
basic magnetic field $\overline{B}$ and flow $\overline{U}$ in the spherical model $\beta = s/(1-s^2)$.
\revthree{The CFF field $\overline{B}$ is given as $(3/2) \cos{\{ \pi(3/2 - 50 s/19) \} } + 2$.}
Cases indicated by $^\circ$ are evaluated with the routine bvp4c and the modified outer boundary condition.}
\label{table:cases_spheres}
\end{center}
\end{table}
For the Malkus field ($\overline{B} = s$) and no mean flow $\overline{U}$,
we let $x = 1-s^2$ and $\Phi(x) = x y(x)$ to rewrite the ODE (\ref{eq:g1_ode_sphere}) as
\begin{equation}
x(1-x) \frac{d^2 y}{dx^2} + (2-3x) \frac{dy}{dx} + (\lambda^2 -1) y =0
\end{equation}
where $\lambda^2 = - c/4$.
This is a hypergeometric equation, which has a solution
\begin{equation}
\Phi (s) = (1-s^2) F(1+\lambda, 1-\lambda ; 2; 1-s^2) ,
\quad \mbox{and\ } \quad
\Phi^\dag
= \frac{\Phi}{1-s^2} , \label{eq:phi1_sol_malkus}
\end{equation}
where $F$ denotes the hypergeometric function \citep[e.g.][]{AS65}.
The eigenvalue $\lambda$ is determined by the condition $\Phi=0$ at $s=\eta$.
The adjoint solution is related to the axial electrical current generated at this order
as $-D^2 \Phi = -c \Phi s\beta/\overline{B}^2 = -c \Phi^\dag$,
implying the current is nonzero at $s=1$.
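The eigenvalues are readily reproduced with standard special-function libraries. For example, the following minimal Python sketch (assuming $\eta = 0.35$) recovers the Malkus-field values of Table \ref{table:cases_spheres}:
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1
from scipy.optimize import brentq

eta = 0.35
xi = 1 - eta**2    # x = 1 - s^2 at the inner boundary

def Phi_inner(lam):
    # Phi(eta) = (1 - eta^2) F(1 + lam, 1 - lam; 2; 1 - eta^2)
    return xi * hyp2f1(1 + lam, 1 - lam, 2, xi)

grid = np.linspace(0.5, 5.0, 2000)
vals = [Phi_inner(l) for l in grid]
lams = [brentq(Phi_inner, l0, l1)
        for l0, l1, v0, v1 in zip(grid, grid[1:], vals, vals[1:])
        if v0 * v1 < 0]
print(lams)                       # approx. 1.564, 2.881, 4.185
print([-4 * l**2 for l in lams])  # wave speeds c = -4 lambda^2
\end{verbatim}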
Figure \ref{fig:sphere_Malkus} shows the solutions in the Malkus case.
\revone{Figure \ref{fig:sphere_Malkus}(a)} shows profiles of $\overline{B}(s)$, the topography $\beta$,
the eigenfunctions $\Phi$ for $n = 1$ and $2$,
and their adjoint eigenfunctions $\Phi^\dag$ (\ref{eq:phi1_sol_malkus}).
This yields
$\alpha \approx -12.85$ and $\gamma \approx 0.87$
for $n=1$; the nonlinear effect is more significant than the dispersive one.
\revone{Figure \ref{fig:sphere_Malkus}(b)} illustrates a single soliton solution
(\ref{eq:single-soliton-psi_sphere}) of $\psi$ for $n=1$.
If the amplitude $\epsilon$ is too large, neglected higher-order terms will be significant;
if $\epsilon$ is too small, the azimuthal scale of the solitary wave is too large to fit within the domain,
so we choose $\epsilon = 0.1$ as a reasonable compromise.
The streamfunction $\psi$ is negative,
indicating a clockwise solitary eddy.
The retrogradely propagating vortex $\psi_1$ is slightly more concentrated at the outer shell
than the magnetic potential $g_1$ (not shown).
As $c < 0$ and $\gamma > 0$,
the dispersion term reduces the retrograde propagation speed.
We note that
a clockwise vortex is observed
in Earth's core \citep{PJ08} and geodynamo simulations \citep{SJNF17}:
its implications are discussed in the final section.
The same basic states admit higher-$n$ modes with more structured eigenfunctions,
which again yield KdV equations with nonzero $\alpha$ and $\gamma$ (Table \ref{table:cases_spheres}).
The speed $|c|$ increases with $n$, confirming the dispersivity of the wave.
The eigenfunction $\Phi$ for $n=2$
is negative at small $s$,
and then turns positive when $s \gtrsim 0.787$ (dashed-dotted curve in \revone{figure \ref{fig:sphere_Malkus}a}),
so the eddy is clockwise in the outer region and anticlockwise in the inner region (\revone{figure \ref{fig:sphere_Malkus}c}).
\begin{figure}
\centerline{
\includegraphics[height=32mm]{./figure1} }
\caption{Spherical case for the Malkus field $\overline{B}=s$ and $\overline{U}=0$.
(a) Profiles of $\overline{B}$ [red solid curve], $\beta$ [green solid],
$\Phi$ for $n=1$ [black dashed] and $n=2$ [black dashed-dotted],
and $\Phi^\dag$ for $n=1$ [blue dashed] and $n=2$ [blue dashed-dotted].
Streamfunctions $\psi$ of the single soliton solution for (b) $n=1$ and (c) $n=2$,
provided $\epsilon = 0.1$.
The dashed (solid) contour lines represent its negative (positive) value, i.e. clockwise (anti-clockwise).}
\label{fig:sphere_Malkus}
\end{figure}
We next consider the basic field given by the wire field, $\overline{B} = 1/s$,
whilst $\overline{U} = 0$.
By using $\Phi(x) = x e^{\lambda x} y(x)$, (\ref{eq:g1_ode_sphere})
may be reduced to a confluent Heun equation
\begin{equation}
x(1-x) \frac{d^2 y}{dx^2}
+ \{2 + (2 \lambda -3)x -2 \lambda x^2 \} \frac{dy}{dx}
+ \{ (\lambda^2 +2 \lambda -1) - (\lambda^2+3 \lambda)x\}y =0 .
\end{equation}
The solution regular at $s=1$ corresponding to the eigenvalue $\lambda$ is
\begin{equation}
\Phi = (1-s^2) e^{\lambda(1-s^2)} H_\tx{c}(q_\tx{c}, \alpha_\tx{c}, \gamma_\tx{c},\delta_\tx{c},\epsilon_\tx{c}; 1-s^2),
\quad \mbox{and\ } \quad
\Phi^\dag
= \frac{s^4}{1-s^2} \Phi ,
\label{eq:phi1_sol_wire}
\end{equation}
where $H_\tx{c}$ represents the confluent Heun function with
the accessory parameter $q_\tx{c} =\lambda^2 +2 \lambda - 1$ and exponent parameters
$\alpha_\tx{c} = \lambda^2 + 3 \lambda, \gamma_\tx{c} = 2, \delta_\tx{c} = 1$ and $\epsilon_\tx{c} = 2 \lambda$ \citep{OLBC10}.
\revone{
This case admits a simple form of the coefficients (\ref{eq:g2_KdV_sphere}) such that
\begin{eqnarray}
&&\alpha_0
= -4 \lambda^2 \int^{1-\eta^2}_0
x (2x+1) (1-x)^2 e^{3\lambda x} H_\tx{c}^3 \,dx , \quad
\gamma_0
= \frac{1}{2} \int^{1-\eta^2}_{0} \frac{x^2}{1-x} e^{2\lambda x} H_\tx{c}^2 \, dx
, \nonumber \\
&&\quad \textrm{and} \quad
\delta_0
= \frac{1}{2} \int^{1-\eta^2}_{0} x(1-x)^2 e^{2\lambda x} H_\tx{c}^2 \,dx .
\end{eqnarray}
}
To evaluate the function we use the algorithm of \citet{Mot18} below.
Figure \ref{fig:sphere_invB}(a) gives profiles of the basic state
and eigenfunctions.
The figure shows that $\Phi$ for $n=1$
has a peak nearer the outer boundary, compared with that for the Malkus field;
it is still propagating retrogradely and is dispersive.
This case yields
$\alpha \approx -36.9$ and $\gamma \approx 1.25$
for $n=1$ and with $\epsilon=0.1$ the soliton is a more compact, clockwise eddy (\revone{figure \ref{fig:sphere_invB}b}).
Analysis of the individual terms of the coefficient $\alpha_0$ in (\ref{eq:g2_KdV_sphere})
implies that the presence of higher-order derivatives is favourable for nonlinear effects.
For $n=2$,
dispersive effects are enhanced compared to nonlinear ones.
The solitary eddy is clockwise in the outer region when $s \gtrsim 0.894$
and anticlockwise in the inner region (\revone{figure \ref{fig:sphere_invB}c}).
\begin{figure}
\centerline{
\includegraphics[height=32mm]{./figure2} }
\caption{Spherical case for the wire field $\overline{B}=1/s$ and $\overline{U}=0$.
(a) Profiles of $\overline{B}$ [red solid curve], $\beta$ [green solid],
$\Phi$ for $n=1$ [black dashed] and $n=2$ [black dashed-dotted],
and $\Phi^\dag$ for $n=1$ [blue dashed] and $n=2$ [blue dashed-dotted].
Streamfunctions $\psi$ of the single soliton solution for (b) $n=1$ and (c) $n=2$,
provided $\epsilon = 0.1$.}
\label{fig:sphere_invB}
\end{figure}
To explore more general cases
we use the Matlab routine bvp4c to solve the eigenvalue problems.
We retain the boundary condition $\Phi = 0$ at $s = \eta = 0.35$,
but use the modified condition $\Phi + (1-s) {D\Phi} = 0$ close to the outer boundary $s=0.99999$
to avoid the numerical issue arising from singularities when $s \rightarrow 1$.
We also impose a normalising condition on $D\Phi$ at the inner boundary:
\revthree{
the values for the Malkus field and the CFF field are given by (\ref{eq:phi1_sol_malkus}),
whereas that for the wire field is given by (\ref{eq:phi1_sol_wire}).
}
The number of gridpoints in $s$ is $500$ in all cases.
Given the obtained $c$, the same routine is adopted
to solve the boundary value problems for $\Phi^\dag$.
For consistency with the earlier cases we set $\Phi^\dag = 1$ at the outer boundary.
The codes are benchmarked against the exact solutions.
With the modified boundary condition,
our computational results match the expected eigenvalues $\lambda = \sqrt{|c|}/2$
and eigenfunctions $\Phi$ for $1 \le n \le 3$
with errors less than 0.01 \% and 0.2 \%, respectively.
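An analogous computation can be set up with other solvers. As an illustration, for the Malkus field (where $\overline{J} = 0$, so that (\ref{eq:g1_ode_sphere}) reduces to $(1-s^2) D^2 \Phi = c \Phi$), a minimal Python sketch using scipy's \texttt{solve\_bvp}, with $c$ treated as an unknown parameter and the normalisation simplified to $D\Phi(\eta) = 1$, reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

eta, s_out = 0.35, 0.99999

def rhs(s, y, p):           # y[0] = Phi, y[1] = D Phi
    return np.vstack([y[1], p[0]*y[0]/(1 - s**2) - y[1]/s])

def bc(ya, yb, p):
    return np.array([ya[0],                      # Phi(eta) = 0
                     yb[0] + (1 - s_out)*yb[1],  # modified outer condition
                     ya[1] - 1.0])               # normalisation

s = np.linspace(eta, s_out, 500)
y0 = np.vstack([np.sin(np.pi*(s - eta)/(1 - eta)),
                np.cos(np.pi*(s - eta)/(1 - eta))])
sol = solve_bvp(rhs, bc, s, y0, p=[-10.0], tol=1e-8)
print(sol.p[0])   # approx. -9.78 for n = 1 (cf. Table 1)
\end{verbatim}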
\revthree{
Now the third basic field, $\overline{B} = (3/2) \cos{\{ \pi(3/2 - 50 s/19) \} } + 2$, is examined.
Figure \ref{fig:sphere_CFF}(a) depicts the basic state, the eigenfunctions for $n=1$,
and additionally $\overline{J}$ (represented by the red dotted curve).
It is nonzero except at $s \approx 0.40$ and $0.78$
and is negatively peaked at $s \approx 0.59$.
The eigenvalues $c$ do not differ from those in the Malkus case very much (Table \ref{table:cases_spheres}).
For $n=1$,
$\Phi$ has a peak at $s \approx 0.61$ (black dashed curve), as does the basic field.
This case gives $\alpha \approx -11.5$ and $\gamma \approx 2.85$.
Indeed the term involving $\overline{J}$ dominates the ODE (\ref{eq:g1_ode_sphere})
and also the coefficient $\alpha_0$ (\ref{eq:g2_KdV_sphere});
if the term $\Phi^2 D(\overline{J}/\beta)$ were absent, $\alpha$ would become $\approx 1.68$.
Figure \ref{fig:sphere_CFF}(b) illustrates the magnetic potential $g_1$ (\ref{eq:single-soliton-g_sphere}),
where the basic state is excluded for visualisation.
It is clockwise and centred at $s \approx 0.61$.
Similarly the streamfunction $\psi$ (\ref{eq:single-soliton-psi_sphere}) is displayed
in figure \ref{fig:sphere_CFF}(c):
now the distinction from the magnetic component is evident.
The solitary eddy is more confined nearer the outer boundary,
as $\overline{B} D^2 \Phi - \overline{J}\Phi$ in (\ref{eq:single-soliton-psi_sphere}) becomes significant
only when $s \gtrsim 0.8$ (not shown).
}
\begin{figure}
\centerline{
\includegraphics[height=32mm]{./figure3} }
\caption{\revthree{Spherical case for the CFF field $\overline{B}=(3/2)\cos{\{ \pi(3/2 - 50s/19) \} }+2$,
$\overline{U}=0$, and $n=1$.
(a) Profiles of $\overline{B}$ [red solid curve], $\beta$ [green solid],
$\overline{J}$ [red dotted; normalised for visualisation], $\Phi$ [black dashed], and $\Phi^\dag$ [blue dashed].
(b) Magnetic potential $g_1$ of the single soliton solution, where the basic state $g_0$ is excluded to help visualisation.
(c) Streamfunctions $\psi$ of the solution, provided $\epsilon = 0.1$.}
}
\label{fig:sphere_CFF}
\end{figure}
Including a basic flow $\overline{U} = s$
is equivalent to the addition of solid body rotation.
Therefore it affects the speed $c$ of propagation of the mode,
whilst leaving its other properties unchanged (Table \ref{table:cases_spheres}).
For a more realistic flow, $\overline{U} = 4 s(1-s)$, with the Malkus field,
the structures of $\Phi$ and $\Phi^\dag$ are not drastically altered
(leading to $\delta_0 \approx 0.049$).
However, the dominance of the nonlinearity over the dispersion, as measured by $|\alpha/\gamma|$, is weakened.
The same basic flow in the wire field case exhibits the same trend.
Finally, we comment on the behaviour of solutions in the vicinity of the point $s$
at which $\overline{U}/s$ equals $c$,
the location of a critical layer for the hydrodynamic Rossby wave soliton \citep[e.g.][]{R77}.
We impose a fast mean zonal flow, $\overline{U} = 80 s(1-s)$, in the Malkus field case;
figure \ref{fig:sphere_Malkus_shear}(a) shows the basic state
and additionally the deviation from the wave speed, $\overline{U}/s - c$ (blue {dotted} curve).
The curve shows this case has such a critical point at $s \approx 0.838$.
Nevertheless the impact is hardly seen in the eigenfunctions $\Phi$ and $\Phi^\dag$:
there are no discontinuities in the derivative $D\Phi$ (\revone{figure \ref{fig:sphere_Malkus_shear}b})
and hence in the solitary wave solutions (\revone{figure \ref{fig:sphere_Malkus_shear}c}).
This remains true for the wire field case with $\overline{U}=320s(1-s)$.
\begin{figure}
\centerline{
\includegraphics[height=32mm]{./figure4} }
\caption{Spherical case for the Malkus field $\overline{B}=s$, the basic flow $\overline{U} = 80s(1-s)$,
and $n=1$.
(a) Profiles of $\overline{B}$ [red solid curve], $\beta$ [green solid],
$\overline{U}/10$ [{blue solid}; scaled for visualisation],
and the deviation $\overline{U}/s - c$ [blue dotted].
(b) Profiles of $\Phi$ [black dashed], $\Phi^\dag$ [blue dashed], and $D \Phi$ \revone{[black dotted]}.
(c) Streamfunction $\psi_1$ of the single soliton solution,
where the basic state $\psi_0$ is excluded to help visualisation.}
\label{fig:sphere_Malkus_shear}
\end{figure}
\section{Concluding remarks}
In this paper we have performed a weakly nonlinear analysis of magnetostrophic waves
in QG spherical models with azimuthal magnetic fields and flows.
The model we considered is an annulus model \citep{Bus76,CFF14} of the form
utilised by \citet{H66} for linear magnetic Rossby (MR) waves.
We found that
the evolution of the long-wavelength, slow-MR waves in the spherical shells obeyed the KdV equation,
whether the toroidal magnetic field and/or the zonal flow were sheared or not.
The model we consider here is formally valid when the azimuthal lengthscale is
much longer than the radial one; the most obvious application is to thin spherical shells.
For thicker spherical shells like those representative of Earth's fluid outer core,
the ratio of these lengthscales is of the order ten.
For thinner shells relevant to other astrophysical objects one might expect the asymptotic procedure
to give a better approximation to the true behaviour.
We find that solutions may take the form of a single soliton solution (for $n=1$),
which is a clockwise, solitary eddy when the basic state magnetic field is
\revthree{any of} a Malkus field ($\overline{B} \propto s$),
a magnetic wire field ($\overline{B} \propto 1/s$)\revthree{, or
a CFF field (comprising a trigonometric profile)}.
In addition to these steadily progressing single solitons we also find $N$-soliton solutions;
as these satisfy the KdV equation, we know that they may exhibit characteristic interactions,
including a phase shift after a collision and FPU recurrence \citep[e.g.][]{DJ89}.
We conclude by noting that inversion of the geomagnetic secular variation appears
to detect an anticyclonic gyre in Earth's core \citep{PJ08,GJF15,BHFMG18};
it is off-centred with respect to the rotation axis
and is believed to have existed for more than a hundred years.
Moreover, DNS of dynamos driven by convection in rapidly-rotating spherical shells
have exhibited the emergence of a large vortex
which circulated clockwise and modulated very slowly \citep{SJNF17}; in these simulations
the averaged toroidal magnetic field tended to strengthen beneath the outer boundary.
Our solution tentatively supports the idea that such an isolated single eddy should persist,
while drifting on MC timescales of
\revthree{$\mathcal{O}(10^{2\textrm{-}4})$ years}.
The long wave can be initiated through instabilities
due to differentially rotating flows \citep{SAetal08},
due to thermally insulating boundaries \citep{HTS14},
and due to the magnetic diffusivity \citep{RL79,ZLS03}.
The steadily drifting feature of the solitons should of course be altered during the long-term evolution
when dissipation plays a role in the dynamics.
\revthree{The presence of dissipation may also alter the eigenfunction \citep{CFF14}
and thus the detailed morphology of the soliton too.}
\revthree{
We note that an alternative explanation for the eccentric gyre is
a flow induced by, for example, coupling with the rocky mantle and the solid inner core,
as DNS by \citet{AFF13} demonstrated.
The issue feeds into a debate which has lasted for decades:
does the geomagnetic westward drift represent advection by a large-scale fluid motion
\citep{BFGN50} or hydromagnetic wave motion \citep{H66}?}
We shall investigate these issues further, as well as the role of critical layers,
by solving initial value problems in a future study.
\begin{acknowledgments}
\section*{Acknowledgments}
The authors are grateful to
Andrew Soward, Anna Kalogirou, Adrian Barker,
and Yoshi-Yuki Hayashi for discussion and comments.
K.~H. was supported by the Japan Science and Technology Agency under
the Program to Support Research Activities of Female Researchers.
\end{acknowledgments}
\bibliographystyle{jfm}
\section{Introduction}
The past two decades have seen numerous advances in the approximability of the maximum disjoint paths problem ({\sc edp}) since the seminal paper \cite{GargVY97}.
An instance of \textsc{edp} consists of a (directed or undirected) ``supply'' graph $G=(V,E)$ and a collection of $k$ {\em requests} (aka demands). Each request consists of a pair of nodes $s_i,t_i \in V$. These
are sometimes viewed as a {\em demand graph} $H=(V(G),\{s_it_i: i \in [k]\})$. A subset $S$ of the requests is called {\em routable} if there exist edge-disjoint paths $\{P_i: i \in S\}$ such that $P_i$ has endpoints $s_i,t_i$ for each $i$. We may also be given a profit $w_i$ associated with each request and the goal
is to find a routable subset $S$ which maximizes $w(S)=\sum_{i \in S} w_i$. The {\em cardinality version} is where
we have unit weights $w_i \equiv 1$.
For directed graphs it is known \cite{guruswami2003near} that there is no $O(n^{0.5-\epsilon})$-approximation
for any $\epsilon >0$, under the assumption $P \neq NP$. Subsequently, research shifted to undirected graphs
and two relaxed models. First, in the {\em all-or-nothing flow model} ({\sc anf}) the notion of routability is relaxed. A subset $S$ is called routable if there is a feasible (fractional) multiflow which satisfies each request in $S$. In \cite{Chekuri04a} a polylogarithmic approximation is given for {\sc anf}. Second, in the {\em congestion} model \cite{KleinbergT98} one is allowed to increase the capacity of each edge in $G$ by some constant factor.
Two streams of results ensued. For general graphs, a polylogarithmic approximation is ultimately provided \cite{chuzhoy2012polylogarithimic,ChuzhoyL12,chekuri2013poly} with edge congestion $2$. For planar graphs, a constant factor approximation is given \cite{seguin2020maximum,CKS-planar-constant} with edge congestion $2$. There is also an $f(g)$-factor approximation for bounded genus $g$ graphs with congestion 3.
As far as we know, the only congestion $1$ results known for either maximum {\sc anf} or {\sc edp} are as follows; all of these apply only to the cardinality version.
In \cite{kawarabayashi2018all}, a constant factor approximation is given for {\sc anf} in planar graphs and
for treewidth $k$ graphs there is an $f(k)$-approximation for {\sc edp} \cite{chekuri2013maximum}.
More recent results include a constant-factor approximation in the {\em fully planar} case where $G+H$ is planar \cite{huang2020approximation,garg2020integer}.
In the weighted regime, there is
a factor $4$ approximation for
{\sc edp} in capacitated trees \cite{chekuri2007multicommodity}. We remark that this problem for unit capacity ``stars'' already generalizes the maximum weight matching problem in general graphs. Moreover, inapproximability bounds for {\sc edp} in planar graphs are almost polynomial \cite{chuzhoy2017new}. This lends interest to how far one can push beyond trees. Our main contribution to the theory of maximum throughput flows is the following result which is the first generalization of the (weighted) {\sc edp} result for trees
\cite{chekuri2007multicommodity},
modulo a larger implicit constant of $224$.
\begin{restatable}{theorem}{outerplanarWEDPapprox}
\label{thm:edp}
There is a $224$-approximation algorithm for
the maximum weight {\sc anf} and {\sc edp} problems for capacitated
outerplanar graphs.
\end{restatable}
It is natural to try to prove this by reducing the problem in outerplanar graphs to trees and then using \cite{chekuri2007multicommodity}.
A promising approach is to use results of
\cite{gupta2004cuts} -- an $O(1)$ distance tree embedding for outerplanar graphs -- and a {\em transfer theorem} \cite{andersen2009interchanging,Racke08} which proves a general equivalence between distance and capacity embeddings.
Combined, these results imply that there is a probabilistic embedding into trees which approximates cut capacity in outerplanar graphs with constant congestion.
One could then try to mimic the success of using low-distortion (distance) tree embeddings to approximate minimum cost network design problems. There is an issue with this approach however. Suppose we have a distribution on trees $T_i$ which approximates cut capacity in expectation. We then apply a known {\sc edp} algorithm which outputs a subset of requests $S_i$ which are routable in each $T_i$. While the tree embedding guarantees the convex combination of $S_i$'s satisfies the cut condition in $G$, it may be that no single $S_i$ obeys the cut condition, even approximately. This is a problem even for {\sc anf}. In fact, this seems to be a problem even when the trees are either dominating or dominated by $G$.
We resolve this by computing a {\bf single} tree which approximates the cuts in $G$ -- see Theorem~\ref{thm:tree}. Our algorithmic proof is heavily inspired by work of Gupta \cite{gupta2001steiner} which gives a method for eliminating Steiner nodes in probabilistic (distance) tree embeddings for general graphs.
It turns out that having a single-tree is not enough for us and we need additional technical properties to apply the algorithm from \cite{chekuri2007multicommodity}. First, our single tree $T$ should have integer capacities and be non-expansive, i.e., $\hat{u}(\delta_T(S)) \leq u(\delta_G(S))$ (where $\hat{u}/u$ are the edge capacities in $T/G$ and $\delta$ is used to denote the edges in the cut induced by $S$).
To see why it is useful that $T$ is an under-estimator of $G$'s cut capacity, consider the classical grid example of \cite{GargVY97}. They give an instance with a set of $\sqrt{n}$ requests which satisfy the cut condition in $2 \cdot G$, but for which one can only route a single request in the capacity of $G$.
If our tree is an under-estimator, then we can ultimately obtain a ``large'' weight subset of requests satisfying the cut condition in $G$ itself. However, even this is not generally sufficient for (integral) routability. For a multiflow instance $G/H$ one normally also requires that $G+H$ is Eulerian,
even for easy instances such as when $G$ is a $4$-cycle. The final ingredient we use is that our single tree $T$ is actually a {\bf subtree} of $G$ which
allows us to invoke the following result -- see Section~\ref{sec:required}.
\begin{restatable}{theorem}{OProute}
\label{thm:OP}
Let $G$ be an outerplanar graph with integer edge capacities $u(e)$. Let $H$ denote a
demand graph such that $G + H = (V(G),E(G) \cup E(H))$ is outerplanar. If $G,H$ satisfies the cut
condition, then $H$ is routable in $G$.
\end{restatable}
\noindent
The key point here is that we can avoid the usual parity condition needed, such as in \cite{Okamura81,seymour1981matroids,frank1985edge}.
We are not presently aware of the above result's existence in the literature.
\subsection{A Single-Subtree Cut Sparsifier and Related Results}
Our main cut approximation theorem is the following which may be of independent interest.
\begin{restatable}{theorem}{integerTree}
\label{thm:tree}
For any connected outerplanar graph $G=(V,E)$ with integer edge capacities $u(e) > 0$, there is a subtree $T$ of $G$ with integer edge weights $\hat{u}(e) \geq 0$ such that
\[
\frac{1}{14} u(\delta_G(X)) \leq \hat{u}(\delta_{T}(X)) \leq u(\delta_G(X)) \mbox{ for each proper subset $X \subseteq V$}
\]
\end{restatable}
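For small instances the guarantee can be checked exhaustively. The following Python sketch (using networkx, and assuming a candidate subtree \texttt{T} with integer weights $\hat{u}$ has been computed by some means) enumerates all proper subsets $X$:
\begin{verbatim}
import itertools
import networkx as nx

def cut_capacity(H, X, key="weight"):
    # total capacity of edges of H with exactly one endpoint in X
    return sum(d[key] for u, v, d in H.edges(data=True)
               if (u in X) != (v in X))

def check_cut_approx(G, T, ratio=14):
    nodes = list(G.nodes)
    for r in range(1, len(nodes)):
        for X in map(set, itertools.combinations(nodes, r)):
            uG, uT = cut_capacity(G, X), cut_capacity(T, X)
            if not (uG / ratio <= uT <= uG):
                return False, X
    return True, None
\end{verbatim}
Enumerating all $2^{|V|}$ subsets is feasible only for small examples, but suffices for experimenting with candidate trees.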
We discuss some connections of this result to prior work on sparsifiers and metric embeddings.
Celebrated work of R\"acke \cite{racke02} shows the existence of a single capacitated tree $T$ (not a subtree) which behaves as a flow sparsifier for a given graph $G$. In particular,
routability of demands on $T$ implies fractional routability in $G$ with edge congestion $\operatorname{polylog}(n)$; this bound was further improved to $O(\log^2 n \log\log n)$ \cite{harrelson2003polynomial}. Such single-tree results were also instrumental in an application to maximum throughput flows: a polylogarithmic approximation for the maximum all-or-nothing flow problem in general graphs \cite{chekuri2013all}. Even more directly relevant to Theorem~\ref{thm:tree} is work on cut sparsifiers; in \cite{racke2014improved} it is shown that there is a single tree (again, not a subtree) which approximates cut capacity in a general graph $G$ within a factor of $O(\log^{1.5} n \log\log n)$. As far as we know, our result is the only global-constant factor single-tree cut approximator for a family of graphs.
R\"acke improved the bound for flow sparsification to an optimal congestion of $O(\log n)$ \cite{Racke08}. Rather than a single tree, this work requires a convex combination of (general) trees to simulate the capacity in $G$. His work also revealed a beautiful equivalence between the existence of good (low-congestion) distributions over trees for capacities, and
the existence of good (low-distortion) distributions over trees for distances \cite{andersen2009interchanging}.
This {\em transfer theorem} states very roughly that for a graph $G$ the following are equivalent for a given $\rho \geq 1$. (1) For any edge lengths $\ell(e)>0$, there is a (distance) embedding of $G$ into a distribution of trees which has stretch at most $\rho$. (2) For any edge capacities $u(e)>0$, there is a (capacity) embedding of $G$ into a distribution of trees which has congestion at most $\rho$. This work has been applied in other related contexts such as flow sparsifiers for proper subsets of terminals \cite{englert2014vertex}.
The transfer theorem uses a very general setting where there is a collection of valid {\em maps}. A map $M$ sends an edge of $G$ to an abstract ``path'' $M(e) \subseteq E(G)$. The maps may be refined for the application of interest. In the so-called {\em spanning tree setting}, each $M$ is associated with a subtree $T_M$ of $G$ (the setting most relevant to Theorem~\ref{thm:tree}). $M(e)$ is then the unique path which joins the endpoints of $e$ in $T_M$. For an edge $e$, its {\em stretch} under $M$ is $(\sum_{e' \in M(e)} \ell(e'))/\ell(e)$.
In the context of distance tree embeddings this model has been studied in \cite{alon1995graph,AbrahamBN08,elkin2008lower}.
In capacity settings, the {\em congestion} of an edge $e$ under $M$ is $(\sum_{e': e \in M(e')} c(e'))/c(e)$. One can view this as simulating the capacity of $G$ using the tree's edges with bounded congestion. The following result shows that we cannot guarantee a single subtree with $O(1)$ congestion even for outerplanar graphs; this example was found independently by Anastasios Sidiropoulos \cite{tasos}.
\begin{theorem}
\label{thm:lowerbound}
There is an infinite family $\mathcal{O}$ of outerplanar graphs
such that for every $G \in \mathcal{O}$ and every spanning tree $T$ of $G$:
\[
\max_{X} \frac{u(\delta_G(X))}{u(\delta_T(X))} = \Omega(\log|V(G)|),
\]
where the max is taken over fundamental cuts of $T$.
\end{theorem}
This suggests that the single-subtree result of Theorem~\ref{thm:tree} is somewhat fortunate,
and critically requires the use of tree capacities $\hat{u}$ different from $u$.
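For concreteness, the congestion on the fundamental cuts of a given spanning tree can be computed directly; a minimal Python sketch using networkx, assuming both $G$ and its spanning tree $T$ carry the capacities $u$ as a \texttt{weight} attribute (the exact-weight setting of Theorem~\ref{thm:lowerbound}):
\begin{verbatim}
import networkx as nx

def max_fundamental_cut_ratio(G, T, key="weight"):
    # for each tree edge e, the fundamental cut has delta_T(X) = {e},
    # where X is a component of T - e; compare with delta_G(X)
    worst = 0.0
    for u0, v0, d0 in list(T.edges(data=True)):
        T.remove_edge(u0, v0)
        X = set(nx.node_connected_component(T, u0))
        T.add_edge(u0, v0, **d0)
        uG = sum(d[key] for a, b, d in G.edges(data=True)
                 if (a in X) != (b in X))
        worst = max(worst, uG / d0[key])
    return worst
\end{verbatim}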
Of course a single tree is sometimes
unnecessarily restrictive. For instance, outerplanar graphs also have an $O(1)$-congestion embedding using a distribution of subtrees by the transfer theorem (although we are not aware of one explicitly given in the literature). This follows implicitly due to existence of an $O(1)$-stretch embedding into subtrees \cite{gupta2004cuts}.
Finally we remark that despite the connections between distance and capacity tree embeddings, Theorem~\ref{thm:tree} stands in contrast to the situation for distance embeddings. Every embedding of the $n$ point cycle into a (single) subtree suffers distortion $\Omega(n)$, and indeed this also holds for embedding into an arbitrary (using Steiner nodes etc.) tree \cite{rabinovich1998lower}.
\section{Single spanning tree cut approximator in Outerplanar Graphs}
In this section we first show the existence of a single tree
which is an $O(1)$ cut approximator for an outerplanar graph $G$.
Subsequently we show that there is such a tree with two additional properties. First, its capacity on every cut is at most the capacity in $G$, and second, all of its weights are integral. These additional properties (integrality and conservativeness) are needed in our application to {\sc edp}. The formal statement we prove is as follows.
\integerTree*
In Section~\ref{sec:flowdist}, we show how to view capacity approximators in $G$ as (constrained) distance tree approximators in the planar dual graph. From then on, we look for distance approximators in the dual which correspond to trees in $G$. In Section~\ref{sec:non-conservative} we prove there exists a single-subtree cut approximator. In Appendix~\ref{sec:extend} we show how to make this conservative while maintaining integrality of the capacities.
In Section~\ref{sec:lb} we show that we cannot achieve Theorem~\ref{thm:tree} in the exact weight model.
\subsection{Converting flow-sparsifiers in outerplanar graphs to distance-sparsifiers in trees}
\label{sec:flowdist}
Let $G = (V, E)$ be an outerplanar graph with capacities $u:E\to\mathbb{R}^+$.
Without loss of generality, we can assume that $G$ is 2-node connected,
so the boundary of the outer face of $G$ is a cycle that
contains each node exactly once. Let $G^*$ be the dual of $G$; we assign weights
to the dual edges in $G^*$ equal to the capacities on the corresponding edges in $G$.
Let $G_z$ be the graph obtained by adding an apex node $z$ to $G$ which is connected
to each node of $G$, that is $V(G_z)=V\cup\{z\}$ and
$E(G_z)=E\cup\{(z,v):v\in V\}$. We may embed $z$ into the outer face of $G$, so $G_z$
is planar. Let $G_z^*$ denote the planar dual of $G_z$.
\begin{figure}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1,scale=0.6]
\draw (101,132.3) .. controls (101,77.46) and (209.18,33) .. (342.63,33) .. controls (476.08,33) and (584.26,77.46) .. (584.26,132.3) .. controls (584.26,187.14) and (476.08,231.6) .. (342.63,231.6) .. controls (209.18,231.6) and (101,187.14) .. (101,132.3) -- cycle ;
\draw (187,56.6) .. controls (224.26,108.6) and (226.26,147.6) .. (220.26,216.6) ;
\draw (187,56.6) .. controls (259.26,85.6) and (482.26,150.6) .. (559.26,176.6) ;
\draw (286,226.6) .. controls (305.26,175.6) and (409.26,159.6) .. (453.26,219.6) ;
\draw (529.26,67.6) .. controls (521.26,120.6) and (523.26,137.6) .. (559.26,176.6) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (187,56.6) .. controls (-83.07,31.81) and (11.26,382.98) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (220.26,216.6) .. controls (216.93,273.29) and (281.26,306.07) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (286,226.6) .. controls (287.33,238.28) and (290.75,252.71) .. (295.23,267.2) .. controls (300.61,284.59) and (307.54,302.06) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (453.26,219.6) .. controls (413.93,252.29) and (362.93,289.29) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (314.26,314.98) .. controls (469.5,317.67) and (564.5,230.67) .. (559.26,176.6) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (529.26,67.6) .. controls (714.5,40.81) and (676.5,405.67) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (310.51,314.98) .. controls (310.51,312.91) and (312.19,311.23) .. (314.26,311.23) .. controls (316.34,311.23) and (318.01,312.91) .. (318.01,314.98) .. controls (318.01,317.05) and (316.34,318.73) .. (314.26,318.73) .. controls (312.19,318.73) and (310.51,317.05) .. (310.51,314.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (274.51,139.98) .. controls (274.51,137.91) and (276.19,136.23) .. (278.26,136.23) .. controls (280.34,136.23) and (282.01,137.91) .. (282.01,139.98) .. controls (282.01,142.05) and (280.34,143.73) .. (278.26,143.73) .. controls (276.19,143.73) and (274.51,142.05) .. (274.51,139.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (170.51,138.98) .. controls (170.51,136.91) and (172.19,135.23) .. (174.26,135.23) .. controls (176.34,135.23) and (178.01,136.91) .. (178.01,138.98) .. controls (178.01,141.05) and (176.34,142.73) .. (174.26,142.73) .. controls (172.19,142.73) and (170.51,141.05) .. (170.51,138.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (409.51,83.98) .. controls (409.51,81.91) and (411.19,80.23) .. (413.26,80.23) .. controls (415.34,80.23) and (417.01,81.91) .. (417.01,83.98) .. controls (417.01,86.05) and (415.34,87.73) .. (413.26,87.73) .. controls (411.19,87.73) and (409.51,86.05) .. (409.51,83.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (551.51,124.98) .. controls (551.51,122.91) and (553.19,121.23) .. (555.26,121.23) .. controls (557.34,121.23) and (559.01,122.91) .. (559.01,124.98) .. controls (559.01,127.05) and (557.34,128.73) .. (555.26,128.73) .. controls (553.19,128.73) and (551.51,127.05) .. (551.51,124.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (363.51,206.98) .. controls (363.51,204.91) and (365.19,203.23) .. (367.26,203.23) .. controls (369.34,203.23) and (371.01,204.91) .. (371.01,206.98) .. controls (371.01,209.05) and (369.34,210.73) .. (367.26,210.73) .. controls (365.19,210.73) and (363.51,209.05) .. (363.51,206.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (151.51,242.98) .. controls (151.51,240.91) and (153.19,239.23) .. (155.26,239.23) .. controls (157.34,239.23) and (159.01,240.91) .. (159.01,242.98) .. controls (159.01,245.05) and (157.34,246.73) .. (155.26,246.73) .. controls (153.19,246.73) and (151.51,245.05) .. (151.51,242.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (256.51,254.98) .. controls (256.51,252.91) and (258.19,251.23) .. (260.26,251.23) .. controls (262.34,251.23) and (264.01,252.91) .. (264.01,254.98) .. controls (264.01,257.05) and (262.34,258.73) .. (260.26,258.73) .. controls (258.19,258.73) and (256.51,257.05) .. (256.51,254.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (331.51,256.98) .. controls (331.51,254.91) and (333.19,253.23) .. (335.26,253.23) .. controls (337.34,253.23) and (339.01,254.91) .. (339.01,256.98) .. controls (339.01,259.05) and (337.34,260.73) .. (335.26,260.73) .. controls (333.19,260.73) and (331.51,259.05) .. (331.51,256.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (474.51,237.98) .. controls (474.51,235.91) and (476.19,234.23) .. (478.26,234.23) .. controls (480.34,234.23) and (482.01,235.91) .. (482.01,237.98) .. controls (482.01,240.05) and (480.34,241.73) .. (478.26,241.73) .. controls (476.19,241.73) and (474.51,240.05) .. (474.51,237.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (593.51,186.98) .. controls (593.51,184.91) and (595.19,183.23) .. (597.26,183.23) .. controls (599.34,183.23) and (601.01,184.91) .. (601.01,186.98) .. controls (601.01,189.05) and (599.34,190.73) .. (597.26,190.73) .. controls (595.19,190.73) and (593.51,189.05) .. (593.51,186.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (473.51,15.98) .. controls (473.51,13.91) and (475.19,12.23) .. (477.26,12.23) .. controls (479.34,12.23) and (481.01,13.91) .. (481.01,15.98) .. controls (481.01,18.05) and (479.34,19.73) .. (477.26,19.73) .. controls (475.19,19.73) and (473.51,18.05) .. (473.51,15.98) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (174.26,138.98) -- (278.26,139.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (278.26,139.98) -- (413.26,83.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (278.26,139.98) -- (367.26,206.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (260.26,254.98) -- (278.26,139.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (413.26,83.98) -- (477.26,15.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (367.26,206.98) -- (335.26,256.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (155.26,242.98) -- (174.26,138.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (413.26,83.98) -- (555.26,124.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (555.26,124.98) -- (597.26,186.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (81.51,138.98) -- (174.26,138.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (81.51,138.98) .. controls (81.51,136.91) and (83.19,135.23) .. (85.26,135.23) .. controls (87.34,135.23) and (89.01,136.91) .. (89.01,138.98) .. controls (89.01,141.05) and (87.34,142.73) .. (85.26,142.73) .. controls (83.19,142.73) and (81.51,141.05) .. (81.51,138.98) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (113.5,165.67) .. controls (76.93,226.29) and (106.5,293.95) .. (314.26,314.98) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 5.63pt off 4.5pt}] (278.26,139.98) .. controls (349.52,143.3) and (520.52,174.3) .. (478.26,237.98) ;
\draw (305.51,321.38) node [anchor=north west][inner sep=0.75pt] [font=\normalsize] {$z$};
\draw (262,46.39) node [anchor=north west][inner sep=0.75pt] [font=\normalsize] {$G$};
\draw (408,94.39) node [anchor=north west][inner sep=0.75pt] [font=\normalsize,color={rgb, 255:red, 74; green, 144; blue, 226 } ,opacity=1 ] {$T^{*}$};
\draw (63,293.6) node [anchor=north west][inner sep=0.75pt] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,opacity=1 ] {$\delta ( z)$};
\end{tikzpicture}
\caption{
The solid edges form the outerplanar graph $G$,
and the dotted edges are the edges incident to the apex node $z$ in $G_z$.
The dashed edges form the dual tree $T^*$.
}
\label{fig:op-dual}
\end{figure}
Note that $\delta(z)=\{(z,v):v\in V\}$ are the edges of a spanning tree of $G_z$, so
$E(G_z)^*\setminus\delta(z)^*$ are the edges of a spanning tree $T^*$ of $G_z^*$.
Each non-leaf node of $T^*$ corresponds to an inner face of $G$, and each leaf of
$T^*$ corresponds to a face of $G_z$ whose boundary contains the apex node $z$.
Also note that we obtain $G^*$ if we combine all the leaves of $T^*$ into a single
node (which would correspond to the outer face of $G$). We will call $T^*$ the dual
tree of the outerplanar graph $G$ (Figure \ref{fig:op-dual}).
Let a central cut of $G$ be a cut $\delta(S)$ such that both of its shores $S$ and
$V\setminus S$ induce connected subgraphs of $G$. Hence, the shores of a central cut are subpaths of
the outer cycle, so the dual of $\delta(S)$ is a leaf-to-leaf path in $T^*$. Since
the edge set of any cut in a connected graph is a disjoint union of central cuts, it suffices
to consider only central cuts.
We want to find a strictly embedded cut-sparsifier $T=(V,F,u^*)$ of $G$ (i.e., a spanning
tree $T$ of $G$ with edge weights $u^*$) such that for any nonempty $X\subsetneq V$,
we have
\begin{equation}
\alpha u(\delta_G(X)) \le u^*(\delta_T(X)) \le \beta u(\delta_G(X)) .
\label{cut-sparsifier}
\end{equation}
In the above inequality, we can replace $u^*(\delta_T(X))$ with $u^*(\delta_G(X))$
if we set $u^*(e)=0$ for each edge $e\notin E(T)$. In the dual tree (of $G$),
$\delta_G(X)^*$ is a leaf-to-leaf path for any central cut $\delta(X)$,
so inequality \eqref{cut-sparsifier} is equivalent to
\begin{equation}
\alpha u(P) \le u^*(P) \le \beta u(P)
\label{distance-sparsifier}
\end{equation}
for any leaf-to-leaf path $P$ in $T^*$.
Finally, we give a sufficient condition on the weights $u^*$ assigned to the
edges ensuring that all edges of positive weight are in the spanning tree of $G$.
Recall that the duals of the edges not in the spanning tree of $G$
form a spanning tree of $G^*$. Since we assign weight 0 to edges not in the
spanning tree of $G$, it is sufficient for the 0 weight edges to form a
connected spanning subgraph of $G^*$. Since $G^*$ is obtained by combining the leaves
of $T^*$ into a single node, it suffices for each node $v\in V(T^*)$ to
have a 0 weight path from $v$ to a leaf of $T^*$.
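As a sanity check, inequality \eqref{distance-sparsifier} can be verified by brute force on small instances. The following Python sketch is ours, for illustration only; the adjacency-map representation and the weight keys are assumptions, not part of the formal development.
\begin{verbatim}
# Brute-force check of the leaf-to-leaf path condition on a tree.
# tree: {node: set of neighbors}; u, u_star: edge weights keyed by
# frozenset({v, w}).

def path_edges(tree, x, y):
    # edges of the unique x-y path, found by depth-first search
    stack = [(x, None, [])]
    while stack:
        v, parent, edges = stack.pop()
        if v == y:
            return edges
        for w in tree[v]:
            if w != parent:
                stack.append((w, v, edges + [frozenset((v, w))]))

def leaf_path_ratios(tree, u, u_star):
    # extreme ratios u*(P)/u(P) over all leaf-to-leaf paths P;
    # they should lie within [alpha, beta]
    leaves = [v for v in tree if len(tree[v]) == 1]
    ratios = []
    for i, x in enumerate(leaves):
        for y in leaves[i + 1:]:
            P = path_edges(tree, x, y)
            ratios.append(sum(u_star[e] for e in P) /
                          sum(u[e] for e in P))
    return min(ratios), max(ratios)
\end{verbatim}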
\subsection{An algorithm to build a distance-sparsifier of a tree}
\label{sec:non-conservative}
In this section, we present an algorithm to obtain a distance-sparsifier
of a tree. In particular, this allows us to obtain a cut-approximator of
an outerplanar graph from a distance-sparsifier of its dual tree.
Let $T=(V,E,u)$ be a weighted tree where $u:E\to\mathbb{R}^+$ is the
length function on $T$. Let $L\subset V$ be the leaves of $T$. We assign
non-negative weights $u^*$ to the edges of $T$. Let $d$ be the shortest
path metric induced by the original weights $u$, and let $d^*$ be the
shortest path metric induced by the new weights $u^*$. We want the following
two conditions to hold:
\begin{enumerate}
\item there exists a 0 weight path from each $v\in V$ to a leaf of $T$.
\item for any two leaves $x,y\in L$, we have
\begin{equation}
\frac14 d(x,y) \le d^*(x,y) \le 2 d(x,y) .
\label{tree-bounds}
\end{equation}
\end{enumerate}
We define $u^*$ recursively as follows. Let $r$ be a non-leaf node of $T$
(we are done if no such nodes exist), and consider $T$ to be rooted at
$r$. For $v\in V$, let $T(v)$ denote the subtree rooted at $v$, and let $h(v)$
denote the \emph{height} of $v$, defined by $h(v)=\min\{d(v,x):x\in L\cap T(v)\}$. Now,
let $r_1, ..., r_k$ be the points in $T$ that are at distance exactly $h(r)/2$
from $r$. Without loss of generality, suppose that each $r_i$ is a node
(otherwise we can subdivide the edge to get a node), and order the $r_i$'s
by increasing $h(r_i)$, that is $h(r_{i-1})\le h(r_i)$ for each $i=2,...,k$.
Furthermore, suppose that we have already assigned weights to the edges in
each subtree $T(r_i)$ using this algorithm, so it remains to assign weights
to the edges not in any of these subtrees. We assign a weight of $h(r_i)$ to
the first edge on the path from $r_i$ to $r$ for each $i=2,...,k$, and weight
0 to all other edges (Figure \ref{fig:algorithm}).
In particular, all edges on the path from $r_1$ to $r$ receive weight $0$.
This algorithm terminates because the length of the
longest path from the root to a leaf decreases by at least half the length
of the shortest edge incident to a leaf in each iteration.
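For concreteness, the recursion can be written as the following Python sketch (ours, for illustration). It assumes the tree is given as an adjacency map of positive edge lengths and, as discussed above, that every point at distance $h(r)/2$ from the current root is already a node (subdividing edges otherwise); weights are stored keyed by unordered node pairs.
\begin{verbatim}
# Sketch of the recursive reweighting; illustrative only.
# tree: {node: {neighbor: positive edge length}}
# u_star: output weights, keyed by frozenset({v, w})

def height(tree, v, parent):
    # h(v): distance from v to the closest leaf in the subtree below v
    children = [w for w in tree[v] if w != parent]
    if not children:
        return 0.0
    return min(tree[v][w] + height(tree, w, v) for w in children)

def reweight(tree, r, parent, u_star):
    h_r = height(tree, r, parent)
    if h_r == 0.0:                    # r is a leaf: nothing to assign
        return
    target, frontier = h_r / 2.0, []  # frontier: pairs (r_i, parent(r_i))
    def walk(v, par, dist):
        if dist >= target - 1e-12:    # assumed to be hit exactly at a node
            frontier.append((v, par))
            return
        for w in tree[v]:
            if w != par:
                u_star[frozenset((v, w))] = 0.0   # edge above the frontier
                walk(w, v, dist + tree[v][w])
    walk(r, parent, 0.0)
    # order r_1, ..., r_k by increasing height
    frontier.sort(key=lambda vp: height(tree, vp[0], vp[1]))
    for i, (ri, par) in enumerate(frontier):
        if i > 0:   # first edge of the r_i -- r path gets weight h(r_i)
            u_star[frozenset((ri, par))] = height(tree, ri, par)
        reweight(tree, ri, par, u_star)
\end{verbatim}
Calling \texttt{reweight(tree, r, None, u\_star)} from any non-leaf node $r$ assigns all weights; on small examples, the bounds \eqref{tree-bounds} can then be checked over all leaf pairs by brute force.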
\begin{figure}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (145.38,47.73) .. controls (145.38,45.9) and (147.12,44.42) .. (149.27,44.42) .. controls (151.41,44.42) and (153.15,45.9) .. (153.15,47.73) .. controls (153.15,49.56) and (151.41,51.04) .. (149.27,51.04) .. controls (147.12,51.04) and (145.38,49.56) .. (145.38,47.73) -- cycle ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (5.26,148.48) -- (301.05,148.48) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (56.64,148.48) .. controls (56.64,146.65) and (58.38,145.16) .. (60.53,145.16) .. controls (62.68,145.16) and (64.42,146.65) .. (64.42,148.48) .. controls (64.42,150.31) and (62.68,151.79) .. (60.53,151.79) .. controls (58.38,151.79) and (56.64,150.31) .. (56.64,148.48) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (219.87,149.74) .. controls (219.87,147.91) and (221.61,146.42) .. (223.76,146.42) .. controls (225.91,146.42) and (227.65,147.91) .. (227.65,149.74) .. controls (227.65,151.56) and (225.91,153.05) .. (223.76,153.05) .. controls (221.61,153.05) and (219.87,151.56) .. (219.87,149.74) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (145.38,148.48) .. controls (145.38,146.65) and (147.12,145.16) .. (149.27,145.16) .. controls (151.41,145.16) and (153.15,146.65) .. (153.15,148.48) .. controls (153.15,150.31) and (151.41,151.79) .. (149.27,151.79) .. controls (147.12,151.79) and (145.38,150.31) .. (145.38,148.48) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (174.96,98.1) .. controls (174.96,96.28) and (176.7,94.79) .. (178.84,94.79) .. controls (180.99,94.79) and (182.73,96.28) .. (182.73,98.1) .. controls (182.73,99.93) and (180.99,101.42) .. (178.84,101.42) .. controls (176.7,101.42) and (174.96,99.93) .. (174.96,98.1) -- cycle ;
\draw (149.27,47.73) -- (178.84,98.1) ;
\draw (60.53,148.48) -- (149.27,47.73) ;
\draw (150.74,145.96) -- (180.32,95.59) ;
\draw (225.24,147.22) -- (180.32,95.59) ;
\draw (60.53,148.48) -- (96.68,212.09) -- (24.38,212.09) -- cycle ;
\draw (149.27,148.48) -- (176.87,243.57) -- (121.66,243.57) -- cycle ;
\draw (223.65,149.74) -- (251.15,212.27) -- (196.15,212.27) -- cycle ;
\draw (301.41,45.14) -- (301.05,148.48) ;
\draw [shift={(301.05,148.48)}, rotate = 270.2] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ;
\draw [shift={(301.41,45.14)}, rotate = 270.2] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (0,5.59) -- (0,-5.59) ;
\draw (43.21,124.3) node [anchor=north west][inner sep=0.75pt] {$r_{1}$};
\draw (123.94,151.24) node [anchor=north west][inner sep=0.75pt] {$r_{2}$};
\draw (237.48,151.79) node [anchor=north west][inner sep=0.75pt] {$r_{3}$};
\draw (157.17,32.13) node [anchor=north west][inner sep=0.75pt] {$r$};
\draw (310.76,76.38) node [anchor=north west][inner sep=0.75pt] {$\frac{h( r)}{2}$};
\draw (120.92,117.48) node [anchor=north west][inner sep=0.75pt] [font=\small] {$h( r_{2})$};
\draw (210.64,114.48) node [anchor=north west][inner sep=0.75pt] [font=\small] {$h( r_{3})$};
\draw (169.88,64.56) node [anchor=north west][inner sep=0.75pt] {$0$};
\draw (92.65,83.7) node [anchor=north west][inner sep=0.75pt] {$0$};
\draw (41.25,190.2) node [anchor=north west][inner sep=0.75pt] {$T( r_{1})$};
\draw (130.19,220.15) node [anchor=north west][inner sep=0.75pt] {$T( r_{2})$};
\draw (205.09,191.39) node [anchor=north west][inner sep=0.75pt] {$T( r_{3})$};
\end{tikzpicture}
\caption{
The algorithm assigns weights to the edges above $r_1,...,r_k$,
and is run recursively on the subtrees $T(r_1),...,T(r_k)$.
}
\label{fig:algorithm}
\end{figure}
Since we assign 0 weight to edges on the $r_1r$ path,
Condition 1 is satisfied for all nodes above the $r_i$'s in the tree by construction. It remains to prove Condition 2.
We use the following upper and lower bounds. For each leaf $x\in L$,
\begin{align}
d^*(x,r) &\le 2d(x,r) - h(r) \label{upper-bound} , \\
d^*(x,r) &\ge d(x,r) - h(r) \label{lower-bound} .
\end{align}
We prove the upper bound in \eqref{upper-bound} by induction. If $T$
only has 0 weight edges (which covers the cases in which the algorithm
terminates), then $d^*(x,r)=0\le 2d(x,r)-h(r)$, since $h(r)\le d(x,r)$.
For the induction step, we consider two separate cases depending on whether $x\in T(r_1)$.
\textbf{Case 1}: $x\in T(r_1)$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_1) + d^*(r_1, r)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= d^*(x, r_1)
&& \textrm{(by definition of $u^*$)} \\
&\le 2d(x, r_1) - h(r_1)
&& \textrm{(by induction)} \\
&= 2d(x, r) - 2d(r, r_1) - h(r_1)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= 2d(x, r) - \frac32 h(r)
&& \textrm{($h(r_1)=h(r)/2$ by definition of $r_1$)} \\
&\le 2d(x, r) - h(r)
\end{align*}
\textbf{Case 2}: $x\in T(r_i)$ for some $i\neq1$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_i) + d^*(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= d^*(x, r_i) + h(r_i)
&& \textrm{(by definition of $u^*$)} \\
&\le 2d(x, r_i) - h(r_i) + h(r_i)
&& \textrm{(by induction)} \\
&= 2d(x, r) - 2d(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= 2d(x, r) - h(r)
&& \textrm{($d(r_i, r) = h(r)/2$ by definition of $r_i$)}
\end{align*}
This proves inequality \eqref{upper-bound}.
We prove the lower bound in \eqref{lower-bound} similarly.
\textbf{Case 1}: $x \in T(r_1)$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_1) + d^*(r_1, r)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= d^*(x, r_1)
&& \textrm{(by definition of $u^*$)} \\
&\ge d(x, r_1) - h(r_1)
&& \textrm{(by induction)} \\
&= d(x, r) - d(r, r_1) - h(r_1)
&& \textrm{($r_1$ is between $x$ and $r$)} \\
&= d(x, r) - h(r)
&& \textrm{(by definition of $r_1$)}
\end{align*}
\textbf{Case 2}: $x \in T(r_i)$ for some $i\neq1$.
\begin{align*}
d^*(x, r)
&= d^*(x, r_i) + d^*(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= d^*(x, r_i) + h(r_i)
&& \textrm{(by definition of $u^*$)} \\
&\ge d(x, r_i) - h(r_i) + h(r_i)
&& \textrm{(by induction)} \\
&= d(x, r) - d(r_i, r)
&& \textrm{($r_i$ is between $x$ and $r$)} \\
&= d(x, r) - h(r)/2
&& \textrm{($d(r_i, r) = h(r)/2$ by definition of $r_i$)} \\
&\ge d(x, r) - h(r)
\end{align*}
This proves inequality \eqref{lower-bound}.
Finally, we prove Condition 2, that is, inequality \eqref{tree-bounds},
by induction. Let $x,y\in L$ be two leaves of $T$. Suppose that
$x\in T(r_i)$ and $y\in T(r_j)$. By induction, we may assume that
$i\neq j$, so without loss of generality, suppose that $i<j$.
We prove the upper bound.
\begin{align*}
d^*(x, y) &= d^*(x, r_i) + d^*(r_i, r_j) + d^*(r_j, y) \\
&\le 2d(x, r_i) - h(r_i) + 2d(y, r_j) - h(r_j) + d^*(r_i, r_j)
&& \textrm{(by \eqref{upper-bound})} \\
&\le 2d(x, r_i) - h(r_i) + 2d(y, r_j) - h(r_j) + h(r_i) + h(r_j)
&& \textrm{(by definition of $u^*$)} \\
&= 2d(x, r_i) + 2d(y, r_j) \\
&\le 2d(x, y)
\end{align*}
We prove the lower bound.
\begin{align*}
d(x, y)
&= d(x, r_i) + d(r_i, r_j) + d(r_j, y) \\
&\le d(x, r_i) + d(r_j, y) + h(r_i) + h(r_j) && \\
&\qquad\qquad \textrm{(because $d(r, r_i)=h(r)/2\le h(r_i)$ for all $i\in[k]$)} && \\
&\le 2d(x, r_i) + 2d(r_j, y)
&& \textrm{(by definition of $h$)} \\
&\le 2d^*(x, r_i) + 2h(r_i) + 2d^*(y, r_j) + 2h(r_j)
&& \textrm{(by \eqref{lower-bound})} \\
&= 2d^*(x, y) - 2d^*(r_i, r_j) + 2h(r_i) + 2h(r_j) .
\end{align*}
Now we finish the proof of the lower bound by considering two cases.
\textbf{Case 1}: $i = 1$, that is $x$ is in the first subtree.
\begin{align*}
d(x, y)
&\le 2d^*(x, y) - 2d^*(r_1, r_j) + 2h(r_1) + 2h(r_j) \\
&= 2d^*(x, y) - 2h(r_j) + 2h(r_1) + 2h(r_j)
&& \textrm{(by definition of $u^*$)} \\
&\le 2d^*(x, y) + 2h(r_1) \\
&\le 4d^*(x, y)
\end{align*}
\textbf{Case 2}: $i > 1$, that is neither $x$ nor $y$ is in the first subtree.
\begin{align*}
d(x, y)
&\le 2d^*(x, y) - 2d^*(r_i, r_j) + 2h(r_i) + 2h(r_j) \\
&= 2d^*(x, y) - 2h(r_i) - 2h(r_j) + 2h(r_i) + 2h(r_j)
&& \textrm{(by definition of $u^*$)} \\
&= 2d^*(x, y)
\end{align*}
This completes the proof of Condition 2.
\section{Maximum Weight Disjoint Paths}
In this section we prove our main result for {\sc edp}, Theorem~\ref{thm:edp}.
\subsection{Required Elements}
\label{sec:required}
We first prove the following result which establishes conditions for when
the cut condition implies routability.
\iffalse
\begin{restatable}{theorem}{OProute}
\label{thm:OP}
Let $G$ be an outerplanar graph with integer edge capacities $u(e)$.
Let $H$ denote a demand graph such that $G+H=(V(G),E(G)\cup E(H))$
is outerplanar. If $G,H$ satisfies the cut condition, then $H$ is routable in $G$.
\end{restatable}
\fi
\OProute*
The novelty in this statement is that we do not require the Eulerian condition
on $G+H$. This condition is needed in virtually all classical results for edge-disjoint paths. In fact, even when $G$ is a $4$-cycle and $H$ consists of a matching of size $2$, the cut condition need not be sufficient to guarantee routability. The main exception is the case when $G$ is a tree, where a trivial greedy algorithm suffices to route $H$. We prove the theorem by giving a simple (though not entirely trivial) algorithm to compute a routing.
To prove this theorem, we need the following $2$-node reduction lemma, which is well known.
\begin{lemma}
\label{lemma:2con-cc}
Let $G$ be a graph and let $H$ be a collection of demands that satisfies the cut condition.
Let $G_1,...,G_k$ be the blocks of $G$ (the 2-node connected components and the cut edges, i.e., bridges, of $G$).
Let $H_i$ be the collection of nontrivial (i.e., non-loop) demands after contracting
each edge $e\in E(G)\setminus E(G_i)$.
Then each $G_i,H_i$ satisfies the cut condition.
Furthermore, if $G$ (or $G+H$) was outerplanar (or planar),
then each $G_i$ (resp. $G_i+H_i$) is outerplanar (resp. planar).
Moreover, if each $H_i$ is routable in $G_i$, then $H$ is routable in $G$.
\end{lemma}
\begin{figure}[htbp]
\centering
\scalebox{0.7}{
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (120.4,91.54) -- (196.52,70.07) ;
\draw (44.29,113) -- (51.52,176.07) ;
\draw (44.29,113) -- (120.4,91.54) ;
\draw (120.4,91.54) -- (121.52,30.07) ;
\draw (196.52,70.07) -- (121.52,30.07) ;
\draw (51.52,176.07) -- (113.52,146.07) ;
\draw (158.52,160.07) -- (206.52,119.07) ;
\draw (120.4,91.54) -- (206.52,119.07) ;
\draw (120.4,91.54) -- (158.52,160.07) ;
\draw (44.29,113) -- (113.52,146.07) ;
\draw (252,125.27) -- (288.31,125.27) -- (288.31,119) -- (312.52,131.54) -- (288.31,144.07) -- (288.31,137.81) -- (252,137.81) -- cycle ;
\draw (158.52,160.07) -- (181.52,227.07) ;
\draw (181.52,227.07) -- (110.52,201.07) ;
\draw (110.52,201.07) -- (158.52,160.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (51.52,176.07) .. controls (63.52,223.07) and (141.52,257.07) .. (181.52,227.07) ;
\draw (435.4,96.54) -- (511.52,75.07) ;
\draw (359.29,118) -- (366.52,181.07) ;
\draw (359.29,118) -- (435.4,96.54) ;
\draw (435.4,96.54) -- (436.52,35.07) ;
\draw (511.52,75.07) -- (436.52,35.07) ;
\draw (366.52,181.07) -- (428.52,151.07) ;
\draw (473.52,165.07) -- (521.52,124.07) ;
\draw (435.4,96.54) -- (521.52,124.07) ;
\draw (435.4,96.54) -- (473.52,165.07) ;
\draw (359.29,118) -- (428.52,151.07) ;
\draw (473.52,165.07) -- (496.52,232.07) ;
\draw (496.52,232.07) -- (425.52,206.07) ;
\draw (425.52,206.07) -- (473.52,165.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (366.52,181.07) .. controls (376.52,160.07) and (378.52,138.07) .. (359.29,118) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (473.52,165.07) .. controls (467.52,186.07) and (475.52,217.07) .. (496.52,232.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (435.4,96.54) .. controls (433.52,116.07) and (448.52,151.07) .. (473.52,165.07) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] [dash pattern={on 6.75pt off 4.5pt}] (359.29,118) .. controls (389.52,121.07) and (420.52,112.07) .. (435.4,96.54) ;
\draw (60,136.4) node [anchor=north west][inner sep=0.75pt] {$G_{1}$};
\draw (70,74.4) node [anchor=north west][inner sep=0.75pt] {$G_{2}$};
\draw (131,54.4) node [anchor=north west][inner sep=0.75pt] {$G_{3}$};
\draw (152,117.4) node [anchor=north west][inner sep=0.75pt] {$G_{4}$};
\draw (142,185.4) node [anchor=north west][inner sep=0.75pt] {$G_{5}$};
\draw (467,119.4) node [anchor=north west][inner sep=0.75pt] {$G_{4}$};
\draw (446,195.4) node [anchor=north west][inner sep=0.75pt] {$G_{5}$};
\draw (445,63.4) node [anchor=north west][inner sep=0.75pt] {$G_{3}$};
\draw (379.35,74.67) node [anchor=north west][inner sep=0.75pt] {$G_{2}$};
\draw (382,143.4) node [anchor=north west][inner sep=0.75pt] {$G_{1}$};
\end{tikzpicture}
}
\caption{
The new demand edges that replace a demand edge whose terminals belong in different blocks.
Solid edges represent edges of $G$ and dashed edges represent demand edges.
}
\label{fig:route-contract}
\end{figure}
\begin{proof}
Consider the edge contractions to be done on $G+H$ to obtain $G_i+H_i$.
Then, any cut in $G_i+H_i$ is also a cut in $G+H$.
Since $G,H$ satisfies the cut condition, $G_i,H_i$ must also satisfy it.
Furthermore, edge contraction preserves planarity and outerplanarity.
For each $st \in H$ and each $G_i$, the reduction process produces
a request $s_it_i$ in $G_i$. If this is not a loop, then $s_i,t_i$ lie in different components of $G$ after deleting the edges of $G_i$. In this case,
we say that $st$ {\em spawns} $s_it_i$. Let $J$ be the set of edges spawned by a demand $st$.
It is easy to see that the edges of $J$ form an $st$ path (Figure~\ref{fig:route-contract}).
Hence if each $H_i$ is routable in $G_i$, we have that $H$ is routable in $G$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:OP}]
Without loss of generality, we may assume that the edges of $G$ (resp. $H$) have unit capacity (resp. demand).
Otherwise, we may place $u(e)$ (resp. $d(e)$) parallel copies of such an edge $e$.
In the algorithmic proof, we may also assume that $G$ is 2-node connected.
Otherwise, we may apply Lemma \ref{lemma:2con-cc} and consider each 2-node
connected component of $G$ separately.
When working with 2-node connected $G$, the boundary of its outer face is a simple cycle.
So we label the nodes $v_1,...,v_n$
by the order they appear on this cycle.
\begin{figure}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (43,123.26) .. controls (43,69.54) and (86.54,26) .. (140.26,26) .. controls (193.97,26) and (237.52,69.54) .. (237.52,123.26) .. controls (237.52,176.97) and (193.97,220.52) .. (140.26,220.52) .. controls (86.54,220.52) and (43,176.97) .. (43,123.26) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (120,218.76) .. controls (120,216.68) and (121.68,215) .. (123.76,215) .. controls (125.84,215) and (127.52,216.68) .. (127.52,218.76) .. controls (127.52,220.84) and (125.84,222.52) .. (123.76,222.52) .. controls (121.68,222.52) and (120,220.84) .. (120,218.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (165,215.76) .. controls (165,213.68) and (166.68,212) .. (168.76,212) .. controls (170.84,212) and (172.52,213.68) .. (172.52,215.76) .. controls (172.52,217.84) and (170.84,219.52) .. (168.76,219.52) .. controls (166.68,219.52) and (165,217.84) .. (165,215.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (208,189.76) .. controls (208,187.68) and (209.68,186) .. (211.76,186) .. controls (213.84,186) and (215.52,187.68) .. (215.52,189.76) .. controls (215.52,191.84) and (213.84,193.52) .. (211.76,193.52) .. controls (209.68,193.52) and (208,191.84) .. (208,189.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (228,155.76) .. controls (228,153.68) and (229.68,152) .. (231.76,152) .. controls (233.84,152) and (235.52,153.68) .. (235.52,155.76) .. controls (235.52,157.84) and (233.84,159.52) .. (231.76,159.52) .. controls (229.68,159.52) and (228,157.84) .. (228,155.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (77,200.76) .. controls (77,198.68) and (78.68,197) .. (80.76,197) .. controls (82.84,197) and (84.52,198.68) .. (84.52,200.76) .. controls (84.52,202.84) and (82.84,204.52) .. (80.76,204.52) .. controls (78.68,204.52) and (77,202.84) .. (77,200.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (45,156.76) .. controls (45,154.68) and (46.68,153) .. (48.76,153) .. controls (50.84,153) and (52.52,154.68) .. (52.52,156.76) .. controls (52.52,158.84) and (50.84,160.52) .. (48.76,160.52) .. controls (46.68,160.52) and (45,158.84) .. (45,156.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (41,108.76) .. controls (41,106.68) and (42.68,105) .. (44.76,105) .. controls (46.84,105) and (48.52,106.68) .. (48.52,108.76) .. controls (48.52,110.84) and (46.84,112.52) .. (44.76,112.52) .. controls (42.68,112.52) and (41,110.84) .. (41,108.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (100,33.76) .. controls (100,31.68) and (101.68,30) .. (103.76,30) .. controls (105.84,30) and (107.52,31.68) .. (107.52,33.76) .. controls (107.52,35.84) and (105.84,37.52) .. (103.76,37.52) .. controls (101.68,37.52) and (100,35.84) .. (100,33.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (219,74.76) .. controls (219,72.68) and (220.68,71) .. (222.76,71) .. controls (224.84,71) and (226.52,72.68) .. (226.52,74.76) .. controls (226.52,76.84) and (224.84,78.52) .. (222.76,78.52) .. controls (220.68,78.52) and (219,76.84) .. (219,74.76) -- cycle ;
\draw (48.76,156.76) .. controls (96.52,140.07) and (183.52,100.07) .. (222.76,74.76) ;
\draw (168.76,215.76) .. controls (139.52,170.07) and (89.52,152.07) .. (48.76,156.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] [dash pattern={on 4.5pt off 4.5pt}] (48.76,156.76) .. controls (120.52,144.07) and (165.52,151.07) .. (211.76,189.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] [dash pattern={on 4.5pt off 4.5pt}] (44.76,108.76) .. controls (89.52,106.07) and (155.52,96.07) .. (222.76,74.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=3] [dash pattern={on 7.88pt off 4.5pt}] (103.76,33.76) .. controls (141.52,62.07) and (173.52,70.07) .. (222.76,74.76) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] [dash pattern={on 4.5pt off 4.5pt}] (222.76,74.76) .. controls (194.52,119.07) and (198.52,152.07) .. (211.76,189.76) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (60,63.76) .. controls (60,61.68) and (61.68,60) .. (63.76,60) .. controls (65.84,60) and (67.52,61.68) .. (67.52,63.76) .. controls (67.52,65.84) and (65.84,67.52) .. (63.76,67.52) .. controls (61.68,67.52) and (60,65.84) .. (60,63.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (155,28.76) .. controls (155,26.68) and (156.68,25) .. (158.76,25) .. controls (160.84,25) and (162.52,26.68) .. (162.52,28.76) .. controls (162.52,30.84) and (160.84,32.52) .. (158.76,32.52) .. controls (156.68,32.52) and (155,30.84) .. (155,28.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (194,44.76) .. controls (194,42.68) and (195.68,41) .. (197.76,41) .. controls (199.84,41) and (201.52,42.68) .. (201.52,44.76) .. controls (201.52,46.84) and (199.84,48.52) .. (197.76,48.52) .. controls (195.68,48.52) and (194,46.84) .. (194,44.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (233.76,119.5) .. controls (233.76,117.42) and (235.44,115.74) .. (237.52,115.74) .. controls (239.59,115.74) and (241.28,117.42) .. (241.28,119.5) .. controls (241.28,121.58) and (239.59,123.26) .. (237.52,123.26) .. controls (235.44,123.26) and (233.76,121.58) .. (233.76,119.5) -- cycle ;
\draw (113,230.4) node [anchor=north west][inner sep=0.75pt] {$v_{1}$};
\draw (57,207.4) node [anchor=north west][inner sep=0.75pt] {$v_{2}$};
\draw (170,228.4) node [anchor=north west][inner sep=0.75pt] {$v_{n}$};
\draw (211,205.4) node [anchor=north west][inner sep=0.75pt] {$v_{n-1}$};
\draw (247.01,161) node [anchor=north west][inner sep=0.75pt] [rotate=-19.44] {$\vdots $};
\draw (23.19,177.54) node [anchor=north west][inner sep=0.75pt] [rotate=-325.57] {$\vdots $};
\draw (80,7.4) node [anchor=north west][inner sep=0.75pt] {$v_{i}$};
\draw (233,54.4) node [anchor=north west][inner sep=0.75pt] {$v_{j}$};
\end{tikzpicture}
\caption{
The solid edges form the outerplanar graph $G$.
The dashed edges are the demand edges.
The thick dashed edge is a valid edge to route
because there are no terminals $v_k$ with $i<k<j$.
}
\label{fig:route-op}
\end{figure}
If there are no demand edges, then we are done. Otherwise, since $G+H$ is
outerplanar, without loss of generality there exists $i<j$ such that $v_iv_j\in E(H)$ and no
$v_k$ is a terminal for $i<k<j$ (Figure \ref{fig:route-op}).
Consider the outer face path $P=v_i,v_{i+1},...,v_j$.
We show that the cut condition is still satisfied after removing both the path $P$
and the demand $v_iv_j$. This represents routing the demand $v_iv_j$ along the path $P$.
Consider a central cut $\delta_G(X)$.
Suppose first that $v_i$ and $v_j$ are on opposite sides of the cut. Then $P$ crosses the cut
exactly once, so removing $P$ and the demand $v_iv_j$ decreases both $|\delta_G(X)|$ and
$|\delta_H(X)|$ by 1, and the cut condition still holds.
Suppose now that $v_i,v_j\notin X$, that is, $v_i$ and $v_j$ are on the same side of the cut.
Then, either $X\subset V(P)\setminus\{v_i,v_j\}$ or $X\cap V(P)=\emptyset$.
We are done if $X\cap V(P)=\emptyset$ because $\delta_G(X)\cap E(P)=\emptyset$.
Otherwise, $X\subset V(P)\setminus\{v_i,v_j\}$ contains no terminals,
so the cut condition cannot be violated.
\end{proof}
We also need the following result from \cite{chekuri2007multicommodity}.
\begin{theorem}
\label{thm:cms}
Let $T$ be a tree with integer edge capacities $u(e)$. Let $H$ denote a demand graph such that each fundamental cut of $H$ induced by an edge $e \in T$ contains
at most $k u(e)$ edges of $H$. We may then partition $H$ into at most $4k$ edge sets $H_1, \ldots ,H_{4k}$ such that each $H_i$ is routable in $T$.
\end{theorem}
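To make the hypothesis of Theorem~\ref{thm:cms} concrete, the following Python sketch (ours; the data layout is an assumption) checks, by brute force, that every fundamental cut of a tree edge $e$ is crossed by at most $k\,u(e)$ demand edges.
\begin{verbatim}
# tree_adj: {node: set of neighbors}; u: {(parent, child): capacity};
# demands: list of (s, t) pairs; root: any fixed node.

def subtree_nodes(tree_adj, child, parent):
    nodes, stack = set(), [(child, parent)]
    while stack:
        v, p = stack.pop()
        nodes.add(v)
        stack.extend((w, v) for w in tree_adj[v] if w != p)
    return nodes

def check_cut_load(tree_adj, u, demands, k, root):
    stack = [(root, None)]
    while stack:
        v, p = stack.pop()
        for w in tree_adj[v]:
            if w == p:
                continue
            S = subtree_nodes(tree_adj, w, v)  # one shore of the cut of (v, w)
            load = sum((s in S) != (t in S) for s, t in demands)
            if load > k * u[(v, w)]:
                return False
            stack.append((w, v))
    return True
\end{verbatim}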
\subsection{Proof of the Main Theorem}
\outerplanarWEDPapprox*
\begin{proof}
We first run the algorithms to produce an integer-capacitated tree $T,\hat{u}$ which is an $O(1)$ cut approximator for $G$. In addition, $T$ is a subtree of $G$ and a conservative approximator for each cut in $G$. First, we prove that the maximum weight routable in $T$ is not too much smaller than for $G$ (in either the {\sc edp} or {\sc anf} model). To see this, let $S$ be an optimal solution in $G$, whose value is {\sc opt(G)}. Clearly $S$ satisfies the cut condition in $G$ and hence by Theorem~\ref{thm:tree} it satisfies the cut condition in $T,\hat{u}$ within a factor of $14$.
Thus by Theorem~\ref{thm:cms} there are $56$ sets such that $S = \cup_{i=1}^{56} S_i$ and each $S_i$ is routable in $T$. Hence one of the
sets $S_i$ accrues at least $\frac{1}{56}$ of the profit of {\sc opt(G)}.
Now we use the factor $4$ approximation of \cite{chekuri2007multicommodity} to solve the maximum {\sc edp=anf} problem for $T,\hat{u}$. Let $S$ be a subset of requests which are routable in $T$ and have weight at least $\frac{1}{4}$ {\sc opt(T)} $ \geq \frac{1}{224}$ {\sc opt(G)}. Since $T$ is a subtree of $G$, we have that $G+T$ is outerplanar. Since $T,\hat{u}$ is an under-estimator of cuts in $G$, the edges of $T$ (viewed as requests) satisfy the cut condition in $G$. Hence by Theorem~\ref{thm:OP} we may route these single edge requests in $G$. Hence, since $S$ is routable in $T$, it is also routable in $G$, completing the proof.
\end{proof}
\section{Conclusions}
The technique of finding a single-tree constant-factor cut approximator (for a global constant) appears to hit a limit at outerplanar graphs.
It would be
interesting to find a graph parameter $k$ which ensures a single-tree $O(f(k))$ cut approximator.
\noindent
The authors thank Nick Harvey for his valuable feedback on this article.
\bibliographystyle{plain}
\section{Introduction}
The goal of any learning system is to identify a mapping from input data space to output classification or regression space based on a finite set of training data with a basic generalization requirement:
Models trained to perform well on a given dataset (empirical performance) should perform well on future examples (expected performance), i.e., the gap between expected and empirical performance must be small.
Today, deep neural networks are at the core of several recent advances in machine learning.
An appropriate deep architecture is closely tied to the dataset on which it is trained and is selected with significant manual engineering by practitioners or by random search based on \emph{subjective heuristics} \cite{blum2015Ladder}.
Approaches based on resubstitution (training) error, which is often near zero in deep learning systems, can be misleading, while held out data (validation) metrics introduce possible selection bias and the data they use might be more valuable if it can be used to train the model\cite{anders1999model}.
However, these methods have steadily improved state of the art metrics on several datasets even though only limited understanding of generalization is available \cite{recht2018cifar} and generally it is not known whether a smaller model trained for fewer epochs could have achieved the same performance \cite{castelvecchi2016can}.
A model is typically said to suffer from overfitting when it performs poorly on test (validation) data while performing well on the training data. The conventional approach to avoid overfitting with error minimization is to avoid training an over-parameterized model to zero loss, for example by penalizing the training process with methods such as weight regularization or early stopping \cite{hastie2009elements, murphy2012machine}.
This perspective has been questioned by recent research,
which has shown that a model with a number of parameters several orders of magnitude bigger than the dataset size, and trained to zero loss, generalizes to new data as well as a constrained model \cite{zhang2016understanding}.
Thus, while conventional wisdom about interpolating estimators \cite{hastie2009elements, murphy2012machine} is that they can achieve zero training error but generally exhibit poor generalization,
Belkin and others \cite{belkin2018overfitting, belkin2018does} propose and theoretically study some specific interpolation-based methods, such as simplicial interpolation and kernel weighted and interpolated nearest neighbors~(wiNN), that can achieve generalization with theoretical guarantees.
Reference~\cite{belkin2018overfitting} suggests that neural networks perform interpolation in a transformed space and that this could help explain their generalization performance.
Though this view has spurred renewed interest in interpolating estimators\cite{liang2018just, hastie2019surprises},
there have been no studies of interpolation based classifiers \textit{integrated} with a complete neural network.
This is due in part to their complexity: working with $d$-simplices \cite{belkin2018overfitting} would be impractical if the dimension of the data space $d$ is high, as is the case for problems of interest where neural networks are used. In contrast, a simpler method such as wiNN does not have the same geometric properties as the simplex approach.
In this paper, we propose a practical and realizable interpolation framework based on local polytopes obtained using Non Negative Kernel regression (NNK)\cite{shekkizhar2020} on neural network architectures.
As shown in a simple setup in \Figref{fig:interpolation_difference}, a simplicial interpolation, even when feasible, constrains itself to a simplex structure (triangles in $\mathbb{R}^2$) around each test query, which leads to an arbitrary choice of the containing simplex when data lies on one of the simplicial faces. Thus, in the example of \Figref{fig:interpolation_difference} only one of the triangles can be used, and only two out of the 4 points in the neighborhood contribute to the interpolation.
This situation becomes increasingly common in high dimensions, worsening interpolation complexity.
By relaxing the simplex constraint, one can better formulate the interpolation using generalized convex polytope structures, such as those obtained using NNK, that are dependent on the sampled training data positions in the classification space. While our proposed method uses $k$ nearest neighbors (KNN) as a starting point,
it differs from other KNN-based approaches, such as wiNN schemes \cite{devroye1998hilbert, biau2015lectures, belkin2018overfitting} and DkNN \cite{Papernot2018, wallace2018interpreting}.
In particular, these KNN based algorithms can be potentially biased if data instances have different densities in different directions in space. Instead, as shown in \Figref{fig:KRI_plane} NNK automatically selects data points most influential to interpolation based on their relative position, i.e., only those neighboring representations that provide new (orthogonal) information for data reconstruction are selected for functional interpolation.
In summary, our proposed method combines some of the best features of existing methods, providing a geometrical interpretation and performance guarantees as the simplicial interpolation \cite{belkin2018overfitting}, with much lower complexity, of an order of magnitude comparable to KNN-based schemes.
\begin{figure*}[htbp]
\centering
\begin{subfigure}{0.35\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figs/interpolation_difference.png}
\caption{}
\label{fig:interpolation_difference}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figs/NNK_decision_plane.png}
\caption{}
\label{fig:KRI_plane}
\end{subfigure}
\begin{subfigure}{0.25\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figs/NNK_polytope.png}
\caption{}
\label{fig:KRI_polytope}
\end{subfigure}
\caption{(a) Comparison of simplicial and polytope interpolation methods. In the simplex case, the label for node ${\bm{x}}_i$ can be approximated based on different triangles (simplices), one of which must be chosen. With the chosen triangle two out of the three points are used for interpolation, so that in this example only half the neighboring points are used for interpolation.
Instead, polytope interpolation based on NNK is based on all four data points, which together form a polytope. (b) KRI plane (dashed orange line) corresponding to chosen neighbor ${\bm{x}}_j$. Data points to the right of this plane will not be selected by NNK as neighbors of ${\bm{x}}_i$. (c) Convex polytope formed by the NNK neighbors of ${\bm{x}}_i$, associated with the KRI boundaries. }
\end{figure*}
To integrate our interpolation framework with a neural network, we replace
the final classification layer, typically some type of support vector machine (SVM), with our NNK interpolator during evaluation at training and at test time,
while relying on the loss obtained with the original SVM-like layer for backpropagation.
This strategy of using a different classifier at the final layer is not uncommon in deep learning\cite{koh2017understanding, lundberg2017unified, bontonou2019introducing} and is motivated by the intuition that each layer of a neural network corresponds to an abstract transformation of the input data space tailored to the machine learning task at hand.
Note that, unlike the explicit parametric boundaries defined in general by an SVM-like final layer, local interpolation methods produce boundaries that are implicit, i.e., based on the relative positions of the training data in a transformed space.
In other words, the proposed DeepNNK procedure allows us to characterize the network by the output classification space rather than relying on a global boundary defined on the space.
Equipped with these tools, we tackle model selection in neural networks from an interpolation perspective using data dependent stability: A model is stable for training set ${\mathcal{D}}$ if any change of a single point in ${\mathcal{D}}$ does not affect (or yields very small change in) the output hypothesis \cite{devroye1979distribution, yu2013stability}.
This definition is similar to, but different from, algorithmic stability obtained using jackknifing \cite{bousquet2002stability, mukherjee2006learning} and related statistical procedures such as cross validation \cite{anders1999model}. While the latter repeatedly uses all but one sample of the training dataset to compute many estimators that are combined at the end, the former is concerned with the output estimate at a point not used for its prediction, and is the focus of study in our work.
Direct evaluation of algorithmic stability in the context of deep learning is impractical for two reasons: First, the increased runtime complexity associated with training the algorithm for different sets. Second, even if computationally feasible, the assessment within each setting is obscured due to randomness in training, for example in weight initialization and batch sampling, which requires repeated evaluation to reduce variance in the performance.
Unlike these methods \cite{anders1999model, bousquet2002stability}, by focusing on stability to input perturbations at interpolation, our method achieves a practical methodology not involving repetitive training for model selection.
Another challenging issue that prevents the application of deep neural networks in sensitive domains, such as medicine and defense, is the absence of explanations to a prediction obtained\cite{doshi2017towards}.
Explainability or interpretability can be defined as the degree to which a human can understand or rationalize a prediction obtained from a learning algorithm.
A model is more interpretable than another model if its decisions are easier for a human to comprehend (e.g., for a health care technician looking at a flagged retinal scan \cite{de2018clinically}) than decisions from the other model.
Example based explanations can help alleviate the problem of interpretability by allowing humans to understand complex predictions by analogical reasoning with a small set of influential training instances \cite{kim2016examples, koh2017understanding}.
Our proposed DeepNNK classifier is a neighborhood based approach that makes very few modeling assumptions on the data. Each prediction in the NNK classifier comes with a set of training data points (the neighbors selected by NNK) that interpolate to produce the classification/regression.
In contrast to earlier methods such as DkNN\cite{Papernot2018, wallace2018interpreting} that rely on hyperparameters such as $k, \epsilon$ which directly impact explainability and confidence characterizations, our approach adapts to the local data manifold by identifying a stable set of training instances that most influence an estimate.
Further, the gains in interpretability using our framework do not incur a performance penalty:
unlike earlier methods \cite{koh2017understanding, Papernot2018}, using an interpolative last layer causes no loss in overall performance relative to standard SVM-like last layer classifiers, and in some cases yields gains. Indeed, we demonstrate performance improvements over standard architectures with SVM-like last layers in cases where there is overfitting.
Finally, this paper presents some empirical explanations for generative and adversarial examples, which have gained growing attention in modern machine learning. We show that these instances fall in distinct interpolation regions surrounded by fewer NNK neighbors on average compared to real images.
\section{Preliminaries and Background}
\subsection{Statistical Setup}
The goal of machine learning is to find a function $\hat{f}: X \rightarrow Y$ that minimizes the probability of error on samples drawn from the joint distribution over $X \times Y$ in $\mathbb{R}^d \times [0, 1]$. Assume $\mu$ to be the marginal distribution of $X \in \mathbb{R}^d$ with its support denoted as $supp(\mu)$. Let $\eta$ denote the conditional mean $\mathbb{E}(Y|X={\bm{x}})$.
The risk or error associated with a predictor in a regression setting is given by ${\mathcal{R}}_{gen}(\hat{f}) = \mathbb{E}[{\mathcal{R}}(\hat{f}, {\bm{x}})] = \mathbb{E}[(\hat{f}({\bm{x}}) - y)^2]$.
The Bayes estimator, given by the conditional expectation, is the best predictor, and the excess risk of any estimator $\hat{\eta}$ is bounded as $\mathbb{E}[{\mathcal{R}}(\hat{\eta}, {\bm{x}}) - {\mathcal{R}}(\eta, {\bm{x}})] \leq \mathbb{E}[(\hat{\eta}({\bm{x}}) - \eta({\bm{x}}))^2]$.
Unfortunately, the joint distribution is not known \textit{a priori} and thus a good estimator is to be designed based on labelled samples drawn from $X \times Y$ in the form of training data $D_{train} = \{({\bm{x}}_{1}, y_1), ({\bm{x}}_{2}, y_2) \dots ({\bm{x}}_N, y_N)\}$. Further, assume each $y_i$ is corrupted by i.i.d.~noise and hence can deviate from the Bayes estimate $\eta({\bm{x}}_i)$.
For a binary classification problem, the domain of $Y$ is reduced to $\{0,1\}$, with the plug-in Bayes classifier defined as $f^* = {\mathbb{I}}(\eta({\bm{x}})>1/2)$ where $\eta({\bm{x}}) = P(Y=1|X={\bm{x}})$. The risk associated to a classifier is defined as ${\mathcal{R}}_{gen}(\hat{f}) = \mathbb{E}[{\mathcal{R}}(\hat{f}, {\bm{x}})] = \mathbb{E}[P(\hat{f}({\bm{x}}) \neq y)]$ and is related to the Bayes risk as $\mathbb{E}[{\mathcal{R}}(\hat{f}, {\bm{x}}) - {\mathcal{R}}(f^*({\bm{x}}), {\bm{x}})] \leq \mathbb{E}[P(\hat{f}({\bm{x}}) \neq f^*({\bm{x}}))]$.
Note that the excess risk associated to $\hat{f}$ in both the regression and classification settings is related to $\mathbb{E}[(\hat{\eta}({\bm{x}}) - \eta({\bm{x}}))^2]$ and $\mathbb{E}[P(\hat{f}({\bm{x}}) \neq f^*({\bm{x}}))]$, respectively, and is the subject of our work. Note that the generalization risk defined above depends on the data distribution, while in practice one uses the empirical error, defined as ${\mathcal{R}}_{emp}({\mathcal{D}}_{train}) = \frac{1}{N} \sum_i l(\hat{\eta}({\bm{x}}_i),y_i)$ where $l(\hat{\eta}({\bm{x}}_i), y_i)$ is the error associated with the regression or classification setting.
We denote by ${\mathcal{D}}_{train}^i$ the training set obtained by removing the point $({\bm{x}}_i, y_i)$ from ${\mathcal{D}}_{train}$.
\subsection{Deep Kernels}
Given data ${\mathcal{D}} = \{{\bm{x}}_1, {\bm{x}}_2 \dots {\bm{x}}_N\}$, kernel based methods observe similarities in a non linearly transformed feature space ${\mathcal{H}}$ referred to as the Reproducing Kernel Hilbert Space (RKHS)\cite{aronszajn1950theory}. One of the key ingredients in kernel machines is the \emph{kernel trick}: Inner products in the feature space can be efficiently computed using kernel functions. Due to the non linear nature of the data mapping, linear operations in RKHS correspond to non linear operations in the input data space.
\begin{definition}
If ${\mathcal{K}}: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ is a continuous symmetric kernel of a positive integral operator in the ${\mathcal{L}}_2$ space of functions,
then there exists a space ${\mathcal{H}}$ and a mapping ${\bm{\phi}}: \mathbb{R}^d \rightarrow {\mathcal{H}}$ such that by Mercer's theorem
\begin{align*}
{\mathcal{K}}({\bm{x}}_i, {\bm{x}}_j) = \langle{\bm{\phi}}({\bm{x}}_i), {\bm{\phi}}({\bm{x}}_j)\rangle
\end{align*}
where $\langle \cdot , \cdot \rangle$ denotes the inner product.
\end{definition}
Kernels satisfying above definition are known as Mercer kernels and have wide range of applications in machine learning \cite{hofmann2008kernel}. In this work, we center our experiments around the range normalized cosine kernel defined as,
\begin{align}
{\mathcal{K}}({\bm{x}}_i, {\bm{x}}_j) = \frac{1}{2} \left(1 + \frac{\langle{\bm{x}}_i, {\bm{x}}_j\rangle}{\|{\bm{x}}_i\|\;\|{\bm{x}}_j\|}\right) \label{eq:base_cosine_kernel}
\end{align}
though our theoretical statements and claims make no assumptions on the type of kernel, other than that it be positive with range $[0,1]$.
Similar to \cite{wilson2016deep}, we combine kernel definitions with neural networks to leverage their expressive power. Given a kernel function ${\mathcal{K}}$, we transform the input data using the non linear mapping ${\bm{h}}_{\bm{w}}$ corresponding to a deep neural network (DNN) parameterized by ${\bm{w}}$.
\begin{align}
{\mathcal{K}}({\bm{x}}_i, {\bm{x}}_j) \rightarrow {\mathcal{K}}_{DNN}({\bm{h}}_{\bm{w}}({\bm{x}}_i), {\bm{h}}_{\bm{w}}({\bm{x}}_j))
\end{align}
Our normalized cosine kernel of \eqref{eq:base_cosine_kernel} is rewritten as
\begin{align}
{\mathcal{K}}_{DNN}({\bm{x}}_i, {\bm{x}}_j) = \frac{1}{2} \left(1 + \frac{\langle{\bm{h}}_{\bm{w}}({\bm{x}}_i), {\bm{h}}_{\bm{w}}({\bm{x}}_j)\rangle}{\|{\bm{h}}_{\bm{w}}({\bm{x}}_i)\|\;\|{\bm{h}}_{\bm{w}}({\bm{x}}_j)\|}\right) \label{eq:deep_cosine_kernel}
\end{align}
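For illustration, the kernel matrix corresponding to \eqref{eq:deep_cosine_kernel} can be computed from a batch of network activations with a few lines of NumPy; this sketch is ours and assumes the activation rows are nonzero.
\begin{verbatim}
import numpy as np

def deep_cosine_kernel(H):
    # H: (n, d) array whose rows are activations h_w(x_i);
    # returns K with K[i, j] = (1 + cos(h_i, h_j)) / 2 in [0, 1]
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    return 0.5 * (1.0 + Hn @ Hn.T)
\end{verbatim}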
\subsection{Non Negative Kernel regression (NNK)}
The starting point for our interpolation-based classifier is our previous work on graph construction using non negative kernel regression (NNK) \cite{shekkizhar2020}. NNK formulates graph construction as a signal representation problem, where each data point is to be approximated by a weighted sum of functions from a dictionary formed by its neighbors. The NNK objective for graph construction can be written as:
\begin{align}
\min_{{\bm{\theta}} \geq 0} \;
\|{\bm{\phi}}_i - {\bm{\Phi}}_S{\bm{\theta}}\|^2 \label{eq:nnk_lle_objective}
\end{align}
where ${\bm{\phi}}_i$ is a lifting of ${\bm{x}}_i$ from observation to similarity space and ${\bm{\Phi}}_S$ contains the transformed neighbors.
Unlike $k$ nearest neighbor approaches, which select neighbors having the $k$ largest inner products ${\bm{\phi}}_i^\top{\bm{\phi}}_j$ and can be viewed as a thresholding-based representation, NNK is an improved basis selection procedure in kernel space leading to a stable and robust representation.
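In practice, the objective \eqref{eq:nnk_lle_objective} can be solved with an off-the-shelf non-negative least squares (NNLS) routine: for a normalized kernel, $\|{\bm{\phi}}_i - {\bm{\Phi}}_S{\bm{\theta}}\|^2$ expands to $1 - 2{\bm{\theta}}^\top{\bm{K}}_{S,*} + {\bm{\theta}}^\top{\bm{K}}_{S,S}{\bm{\theta}}$, where ${\bm{K}}_{S,S}$ is the kernel matrix of the candidate neighbors and ${\bm{K}}_{S,*}$ their similarities to ${\bm{x}}_i$, and a Cholesky factor of ${\bm{K}}_{S,S}$ turns this into a standard NNLS problem. The sketch below is our illustration (the jitter term is an assumption made for numerical stability), not the reference implementation of \cite{shekkizhar2020}.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def nnk_weights(K_SS, k_S):
    # min_{theta >= 0} 1 - 2 theta^T k_S + theta^T K_SS theta.
    # With K_SS = L L^T and L b = k_S, this equals
    # ||L^T theta - b||^2 + const, a standard NNLS problem.
    n = len(k_S)
    L = np.linalg.cholesky(K_SS + 1e-10 * np.eye(n))  # jitter (assumed)
    b = np.linalg.solve(L, k_S)
    theta, _ = nnls(L.T, b)
    return theta   # sparse: most entries are exactly zero
\end{verbatim}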
Geometrically, NNK can be characterized in the form of kernel ratio interval (KRI) as shown in \twoFigref{fig:KRI_plane}{fig:KRI_polytope}. The KRI theorem states that for any positive definite kernel with range in $[0, 1]$ (e.g. the cosine kernel \plainref{eq:deep_cosine_kernel}), the necessary and sufficient condition for two data points ${\bm{x}}_j$ and ${\bm{x}}_k$ to be {\em both} NNK neighbors to ${\bm{x}}_i$ is
\begin{align}
{\bm{K}}_{j,k} < \frac{{\bm{K}}_{i,j}}{{\bm{K}}_{i,k}} < \frac{1}{{\bm{K}}_{j,k}}.
\label{eq:kernel_ratio_interval}
\end{align}
Inductive application of the KRI produces a closed decision boundary around the data to be approximated (${\bm{x}}_i$) with the identified neighbors forming a convex polytope around the data ($NNK_{poly}$).
Similar to the simplicial interpolation of \cite{belkin2018overfitting}, the local geometry of our NNK classifier can be leveraged to obtain theoretical performance bounds, as discussed next.
\section{Local Polytope Interpolation}
\label{sec:polytop_interpolation}
In this section, we propose and analyze a polytope interpolation scheme based on local neighbors\footnote{All proofs related to theoretical statements in this section are included in the supplementary material.} that asymptotically approaches the $1$-nearest neighbor algorithm \cite{cover1967nearest}. Like $1$-nearest neighbor, the proposed method is not statistically consistent in the presence of label noise, but, unlike the former, its risk can be studied in the non-asymptotic case with data dependent bounds under mild assumptions on smoothness.
\begin{proposition}
\label{prop:nnk_interpolation}
Given $k$ nearest neighbors of a sample ${\bm{x}}$, $S = \{({\bm{x}}_{1}, y_1), ({\bm{x}}_{2}, y_2) \dots ({\bm{x}}_{k}, y_k)\}$, the following NNK estimate at ${\bm{x}}$ is a valid interpolation function:
\begin{align}
\hat{\eta}({\bm{x}}) = \mathbb{E}(Y | X = {\bm{x}}) = \sum_{i=1}^{\hat{k}}{\bm{\theta}}_{i}\;y_i, \label{eq:biased_nnk_conditional_estimate}
\end{align}
where ${\bm{\theta}}$ are the $\hat{k}$ non zero weights obtained from the minimization of
\eqref{eq:nnk_lle_objective}, that is:
\begin{align}
{\bm{\theta}} &= \min_{{\bm{\theta}} \geq 0} \|{\bm{\phi}}({\bm{x}}) - {\bm{\Phi}}_S{\bm{\theta}}\|^2 \nonumber \\
&= \min_{{\bm{\theta}} \geq 0}\; 1 - 2{\bm{\theta}}^\top{\bm{K}}_{S,*} + {\bm{\theta}}^\top{\bm{K}}_{S,S}{\bm{\theta}} \label{eq:nnk_kernel_objective}
\end{align}
where ${\bm{\Phi}}_S = [{\bm{\phi}}({\bm{x}}_1) \dots {\bm{\phi}}({\bm{x}}_k)]$ corresponds to the kernel space representation of the nearest neighbors, with ${\bm{K}}_{S,*}$ denoting the kernel similarities with respect to ${\bm{x}}$.
\end{proposition}
The interpolator from Proposition \ref{prop:nnk_interpolation} is biased and can be bias-corrected by normalizing the interpolation weights.
Thus, the unbiased NNK interpolation estimate is obtained as
\begin{align}
\hat{\eta}({\bm{x}}) = \mathbb{E}(Y | X = {\bm{x}}) = \sum_{i=1}^{\hat{k}}\frac{{\bm{\theta}}_{i}}{\sum_{j=1}^{\hat{k}} {\bm{\theta}}_j}\;y_i \label{eq:nnk_conditional_estimate}
\end{align}
In other words, NNK starts with a crude approximation of neighborhood in the form of $k$ nearest neighbors, but instead of directly using these points as sources of interpolation, optimizes and reweighs the selection (most of which are zero) using \eqref{eq:nnk_kernel_objective} to obtain a stable set of neighbors.
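Continuing the sketch above (our illustration), the unbiased estimate \eqref{eq:nnk_conditional_estimate} is then a normalized weighted average of the labels over the sparse support of ${\bm{\theta}}$; we assume at least one weight is positive, which holds whenever some neighbor has nonzero similarity to ${\bm{x}}$.
\begin{verbatim}
def nnk_estimate(K_SS, k_S, y_S):
    # unbiased NNK interpolation: normalized weighted average of
    # the labels of the neighbors receiving nonzero weight.
    # y_S: (k,) array of labels of the initial k nearest neighbors
    theta = nnk_weights(K_SS, k_S)
    support = theta > 0
    return theta[support] @ y_S[support] / theta[support].sum()
\end{verbatim}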
\subsection{A general bound on NNK classifier}
We present a theoretical analysis based on the simplicial interpolation analysis by \cite{belkin2018overfitting} but adapted to the proposed NNK interpolation. We first study NNK framework in a regression setting and then adapt the results for classification. Let $D_{train} = \{ ({\bm{x}}_{1}, y_1), ({\bm{x}}_{2}, y_2) \dots ({\bm{x}}_N, y_N)\}$ in $\mathbb{R}^d\times [0,1]$ be the training data made available to NNK. Further, assume each $y_i$ is corrupted by independent noise and hence can deviate from the Bayes estimate $\eta({\bm{x}}_i)$.
\begin{theorem}
\label{thm:excess_mean_sq_risk}
For a conditional distribution $\hat{\eta}({\bm{x}})$ obtained using unbiased NNK interpolation given training data $D_{train} = \{ ({\bm{x}}_{1}, y_1), ({\bm{x}}_{2}, y_2) \dots ({\bm{x}}_N, y_N)\}$ in $\mathbb{R}^d\times [0, 1]$, the excess mean square risk is given by
\begin{align}
\mathbb{E}[(\hat{\eta}({\bm{x}}) - \eta({\bm{x}}))^2 | D_{train}] \leq \mathbb{E}[\mu(\mathbb{R}^d \backslash {\mathcal{C}})] + A^2\mathbb{E}[\delta^{2\alpha}] \nonumber\\
+ \frac{2A'}{\mathbb{E}_K[\hat{k}]+1}\mathbb{E}[\delta^{\alpha'}] + \frac{2}{\mathbb{E}_K[\hat{k}]+1}\mathbb{E}[(Y - \eta({\bm{x}}))^2] \label{eq:excess_sq_risk}
\end{align}
under the following assumptions
\begin{enumerate}
\item $\mu$ is the marginal distribution of $X \in \mathbb{R}^d$. Let ${\mathcal{C}} = \text{Hull}({\bm{\phi}}({\bm{x}}_1), {\bm{\phi}}({\bm{x}}_2) \dots {\bm{\phi}}({\bm{x}}_N))$ be the convex hull of the training data in transformed kernel space.
\item The conditional distribution $\eta$ is Holder $(A, \alpha)$ smooth in kernel space.
\item Similarly, the conditional variance $var(Y|X={\bm{x}})$ satisfies $(A', \alpha')$ smoothness condition.
\item Let $NNK_{poly}({\bm{x}})$ denote the convex polytope around ${\bm{x}}$ formed by $\hat{k}$ neighbors identified by NNK with non zero weights. The maximum diameter of the polytope formed with NNK neighbors for any data in ${\mathcal{C}}$ is represented as $\delta = \max_{{\bm{x}} \in {\mathcal{C}}} \text{diam}(NNK_{poly}({\bm{x}}))$.
\end{enumerate}
\end{theorem}
\begin{remark}
Theorem \plainref{thm:excess_mean_sq_risk} provides a non-asymptotic, data dependent upper bound for the excess squared risk associated with unbiased NNK interpolation. The first term in the bound is associated with extrapolation, where the test data falls outside the interpolation area for the given training data, while the last term corresponds to label noise. Of interest are the second and third terms, which reflect the dependence of the interpolation on the size of each polytope defined for test data and the associated smoothness of the labels over this region of interpolation. In particular, when all test samples are covered by smaller polytopes, the corresponding risk is closer to optimal.
Note that the NNK approach leads to the polytope with the smallest diameter or volume for the number of points ($\hat{k}$) selected from a set of $k$ neighbors.
From the theorem, this corresponds to a better risk bound.
The bound associated with simplicial interpolation is a special case, where each simplex enclosing the data point has a fixed $\hat{k}$, corresponding to a $(d+1)$-sized polytope. Thus, in our approach the number of points forming the polytope is variable (dependent on local data topology), while in the simplicial case it is fixed and depends on the dimension of the space.
Though the latter bound seems better (excess risk is inversely related to $\hat{k}$), the diameter of the polytope~(simplex) increases with $d$, making the excess risk possibly suboptimal.
\end{remark}
\begin{corollary}
\label{coroll:excess_mean_sq_risk_convergence}
Under the additional assumption that $\mathrm{supp}(\mu)$ is contained in a simple polytope, the excess mean square risk converges asymptotically as
\begin{align}
\limsup_{N\rightarrow\infty} \mathbb{E}[(\hat{\eta}({\bm{x}}) - \eta({\bm{x}}))^2] \leq \mathbb{E}[(Y - \eta({\bm{x}}))^2]
\end{align}
\end{corollary}
\begin{remark}
The asymptotic risk of the proposed NNK interpolation method is, like that of the $1$-nearest neighbor method in the regression setting, bounded by twice the Bayes risk. The rate of convergence of the proposed method depends on the convergence of the kernel functions centered at the data points.
\end{remark}
We now extend our analysis to classification via the plug-in classifier $\hat{f}({\bm{x}}) = {\mathbb{I}}(\hat{\eta}({\bm{x}}) > 1/2)$ for a given $D_{train} = \{ ({\bm{x}}_{1}, y_1), ({\bm{x}}_{2}, y_2) \dots ({\bm{x}}_N, y_N)\}$ in $\mathbb{R}^d\times \{0,1\}$, using the relationship between classification and regression risk \cite{biau2015lectures}.
\begin{corollary}
\label{coroll:classifier_risk_convergence}
A plug-in NNK classifier under the assumptions of Corollary \ref{coroll:excess_mean_sq_risk_convergence} has excess classifier risk bounded as
\begin{align}
\limsup_{N\rightarrow \infty} \mathbb{E}[{\mathcal{R}}(\hat{f}({\bm{x}})) - {\mathcal{R}}(f({\bm{x}}))] \leq 2\sqrt{\mathbb{E}[(Y - \eta({\bm{x}}))^2]}
\end{align}
\end{corollary}
\begin{remark}
The classification bound presented here makes no assumptions on the margin associated with the classification boundary and is thus only a weak bound. The bound can be improved exponentially, as in \cite{belkin2018overfitting}, when additional assumptions such as the $h$-hard margin condition \cite{massart2006risk} are made.
\end{remark}
\subsection{Leave-one-out stability}
\label{subsec:loo_stability}
The leave-one-out~(LOO) procedure (also known as the deleted estimate or U-method) is an important statistical measure with a long history in machine learning \cite{elisseeff2003leave}. Unlike the empirical error, it is \emph{almost unbiased} \cite{luntz1969estimation} and has often been used for model (hyperparameter) selection.
Formally, this is represented by
\begin{align}
{\mathcal{R}}_{loo}(\hat{\eta} | {\mathcal{D}}_{train}) = \frac{1}{N}\sum_{i=1}^N l(\hat{\eta}({\bm{x}}_i)|{\mathcal{D}}_{train}^i, y_i)
\end{align}
where the NNK interpolation estimator in the summation for ${\bm{x}}_i$ is based on all training points except ${\bm{x}}_i$.
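A direct, if computationally naive, evaluation of this quantity can be sketched as follows; the \texttt{interpolate} and \texttt{loss} routines are hypothetical placeholders standing in for the NNK estimator and any loss of interest.
\begin{verbatim}
import numpy as np

def loo_risk(X, y, interpolate, loss):
    """Illustrative sketch: average loss when each training
    point is predicted from all remaining points (LOO)."""
    N = len(X)
    total = 0.0
    for i in range(N):
        mask = np.arange(N) != i
        total += loss(interpolate(X[mask], y[mask], X[i]), y[i])
    return total / N
\end{verbatim}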
We focus our attention on LOO in the context of model stability and generalization as defined in \cite{elisseeff2003leave, devroye2013probabilistic}. A system is stable when small perturbations (LOO) to the input data do not affect its output predictions, i.e.,
$\hat{\eta}$ is $\beta(\hat{\eta})$-stable when
\begin{align}
\mathbb{E}_{\mathcal{D}}\left(|l(\hat{\eta}({\bm{x}})| {\mathcal{D}}, y) - l(\hat{\eta}({\bm{x}})| {\mathcal{D}}^i, y)|\right) \leq \beta(\hat{\eta} | {\mathcal{D}}) \label{eq:stability_definition}
\end{align}
Theoretical results by Rogers, Devroye and Wagner \cite{rogers1978finite, devroye1979distribution, devroye1979deleted} on the generalization of $k$-nearest neighbor methods using LOO performance are very relevant to our proposed NNK algorithm. The choice of $k$ in our method depends on the relative positions of points; hence the fixed $k$ in their results is replaced by its expectation.
\begin{theorem}
\label{thm:loo_bound_theorem}
The leave-one-out performance of the unbiased NNK classifier, given $\gamma$, the maximum number of distinct points that can share the same nearest neighbor, is bounded as
\begin{align*}
P(|{\mathcal{R}}_{loo}(\hat{\eta}|{\mathcal{D}}_{train}) - {\mathcal{R}}_{gen}(\hat{\eta})| > \epsilon) &\leq 2e^{-N\epsilon^2/18} \;+\\
&6e^{-N\epsilon^3/\left(108\mathbb{E}_K[\hat{k}](2 + \gamma)\right)}
\end{align*}
\end{theorem}
\begin{remark}
The NNK classifier weighs its neighbors based on RKHS interpolation but obtains the initial set of neighbors from the input embedding space. This means the value of $\gamma$ in the NNK setting depends on the dimension of the space in which the data is embedded, and not on the possibly infinite dimension of the RKHS. The above bound is difficult to compute in practice due to $\gamma$, but bounds for this measure do exist in the convex covering literature \cite{rogers1963covering, chen2005new}. The theorem allows us to relate the stability of a model, via its LOO error, to generalization. Unlike a bound based on the hyperparameter $k$, the bound presented here is training-data dependent, owing to the data-dependent selection of neighbors.
\end{remark}
More practically, to characterize the smoothness of the classification surface, we introduce the variation, or spread, of the LOO interpolation score over the training dataset as
\begin{align}
\nabla({\bm{x}}_i) = \frac{1}{\hat{k}}\sum_{j = 1}^{\hat{k}}\left[\hat{\eta}({\bm{x}}_i) | {\mathcal{D}}_{train}^i - \hat{\eta}({\bm{x}}_j)|{\mathcal{D}}_{train}^j\right]^2 \label{eq:interpolation_score_spread}
\end{align}
where $\hat{k}$ is the number of nonzero-weighted neighbors identified by NNK and $\hat{\eta}({\bm{x}})$ is the unbiased NNK interpolation estimate of \eqref{eq:nnk_conditional_estimate}. A smooth interpolation region will have variation~$\nabla({\bm{x}})$ close to zero, while a spread close to one corresponds to a noisy classification region.
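The spread of \eqref{eq:interpolation_score_spread} is straightforward to compute once the LOO estimates and NNK neighbor sets are available; a minimal sketch, with input conventions that are our own assumptions, is given below.
\begin{verbatim}
import numpy as np

def interpolation_spread(eta_loo, neighbors):
    """Illustrative sketch of the LOO spread.

    eta_loo:   (N,) array of LOO NNK estimates, one per point.
    neighbors: list of index arrays; neighbors[i] holds the
               nonzero-weight NNK neighbors of point i.
    """
    spread = np.zeros(len(eta_loo))
    for i, nbrs in enumerate(neighbors):
        spread[i] = np.mean((eta_loo[i] - eta_loo[nbrs]) ** 2)
    return spread
\end{verbatim}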
\section{Experiments}
\label{sec:experiments}
In this section, we present an experimental evaluation of DeepNNK for model selection, robustness and interpretability of neural networks.
We focus on experiments with the CIFAR-10 dataset to validate our analysis and intuitions about generalization and interpretability.
\begin{figure*}[htbp]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[trim={0cm, 14cm, 0cm, 0cm}, clip, width=\textwidth]{figs/legend_classifier_error_rate.pdf}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[ width=\textwidth]{figs/underparam_train_classifier_error_rate.pdf}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/regularized_train_classifier_error_rate.pdf}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/overfit_train_classifier_error_rate.pdf}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/underparam_test_classifier_error_rate.pdf}
\caption{Underparameterized}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/regularized_test_classifier_error_rate.pdf}
\caption{Regularized}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/overfit_test_classifier_error_rate.pdf}
\caption{Overfit}
\end{subfigure}
\caption{Misclassification error ($\xi$) using a fully connected softmax classifier model and interpolating classifiers (weighted KNN, NNK) for different values of the $k$ parameter at each training epoch on the CIFAR-10 dataset. Training data (top) and test data (bottom) performance for three different model settings is shown in each column. NNK classification consistently performs as well as the actual model, with classification error decreasing slightly as $k$ increases. On the contrary, the weighted KNN error increases with increasing $k$, revealing robustness issues. The classification error gap between the DNN model and the leave-one-out DeepNNK model on training data is suggestive of underfitting ($\xi_{\text{NNK}} < \xi_{\text{model}}$) and overfitting ($\xi_{\text{NNK}} > \xi_{\text{model}}$). We claim a good model to be one where the performance of the model agrees with the local NNK model.}
\label{fig:overfitting_study}
\end{figure*}
We consider a simple 7-layer network comprising 4 convolution layers with ReLU activations, 2 max-pool layers and 1 fully connected softmax layer to demonstrate model selection. We evaluate the test performance and stability of the proposed NNK classifier and compare it to the weighted KNN (wiNN) approach for different values of $k$, and to a 5-fold cross-validated linear SVM\footnote{Similar to the neighborhood methods, the last layer is replaced and trained at each evaluation using a LIBLINEAR SVM \cite{fan2008liblinear} with minimal $\ell_2$ regularization. We use the default library settings for the other parameters of the SVM.}, for three different network settings:
\begin{itemize}
\item Regularized model: We use 32 depth channels for each convolution layer, with dropout (keep probability 0.9) applied at each convolution layer. The data is augmented with random horizontal flips.
\item Underparameterized model: We keep the same model structure and regularization as in the regularized model but reduce the number of depth channels to 16, halving the number of parameters of the model.
\item Overfit model: To simulate overfitting, we remove data augmentation and dropout regularization from the regularized model while training for the same number of epochs.
\end{itemize}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.48\textwidth]{figs/nnk_boundary_study_score_spread.pdf}
\caption{Histogram (normalized) of the leave-one-out interpolation score after 20 epochs with $k=50$ on CIFAR-10. While the network performance on the training dataset is considerably different in each setting, the change in the interpolation~(classification) landscape associated with the input data is minimal, which suggests a small change in the generalization of the models. The spread is shifted more towards zero in the regularized model, indicative of a smoother classification surface.}
\label{fig:intepolation_score_spread}
\end{figure}
\Figref{fig:overfitting_study} shows the difference in performance between our method and weighted KNN (wiNN): in particular, while the proposed DeepNNK method improves marginally with larger values of $k$, the wiNN approach degrades in performance. This can be explained by the fact that NNK accommodates new neighbors only if they belong to a new direction in space that improves its interpolation, unlike its KNN counterparts, which simply interpolate with \textit{all} $k$ neighbors. More importantly, we observe that while the NNK method performs on par with, if not better than, the original classifier with an SVM last layer, its LOO performance is a better indicator of generalization than the empirical model performance on training data. One can clearly identify the regularized model as the more stable one by observing the deviation in performance between training and the LOO estimate using our proposed method. Note that the cross-validated linear SVM model performed suboptimally in all settings, which suggests that it is unable to capture the complexity of the input data or the generalization differences between models. The choice of the better model is reinforced in \Figref{fig:intepolation_score_spread}, where we observe that the histogram of interpolation spread for the regularized model is shifted more towards zero relative to the underparameterized and overfit models. Note that the shift is minimal, which is expected, as the difference in test error associated with each model is small as well.
\begin{figure}[htbp]
\centering
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\textwidth]{figs/test_2595_neighbors_6.pdf}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\textwidth]{figs/test_3745_neighbors_6.pdf}
\end{subfigure}
\caption{Two test examples (first image in each set) with identified NNK neighbors from CIFAR-10 for $k=50$. We show the assigned and predicted label for the test sample, and the assigned label and NNK weight for neighboring (and influential) training instances. Though we were able to identify the correct label for the test sample, one might want to question such duplicates in the dataset for downstream applications.}
\label{fig:explainability_example_duplicates}
\end{figure}
We next present a few interpretability results, showing our framework's ability to capture training instances that are influential in prediction.
Neighbors selected from the training data for interpolation by DeepNNK can be used as examples to explain the neural network's decision. This \emph{interpretability} can be crucial for problems with transparency requirements, allowing an observer to interpret the region around a test representation as evidence.
In \Figref{fig:explainability_example_duplicates}, we show examples in the training dataset that are responsible for a prediction using the simple regularized model defined previously. Machine models and the datasets used for their training often contain biases, such as repeated instances with small perturbations for class balance, which are undesirable for applications where fairness is important. The DeepNNK framework can help understand and eliminate sources of bias by allowing practitioners to identify the limitations of their current system in a semi-supervised fashion.
\Figref{fig:explainability_example_brittleness} shows another application of NNK, where the fragile nature of a model on certain training images is brought to light using the interpolation spread of \eqref{eq:interpolation_score_spread}. These experiments show the possibility of the DeepNNK framework being used as a debugging tool in deep learning.
\begin{figure}[tbp]
\centering
\begin{subfigure}{0.48\textwidth}
\includegraphics[trim={0cm, 0cm, 0cm, 0.2cm}, clip,width=\textwidth]{figs/train_36181_neighbors_12.pdf}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\textwidth]{figs/train_29143_neighbors_16.pdf}
\end{subfigure}
\caption{Two training set examples (first image in each set) observed to have maximum discrepancy in LOO NNK interpolation score, shown with their respective neighbors, for $k=50$. We show the assigned and predicted label for the image being classified, and the assigned label and NNK weight for the neighbors. These instances exemplify the possible brittleness of the classification model and can better inform a user about the limits of the model they are working with.}
\label{fig:explainability_example_brittleness}
\end{figure}
Finally, we present an experimental analysis of generative and adversarial images from the perspective of NNK interpolation. We study these methods using our DeepNNK framework applied to a Wide-ResNet-28-10 \cite{zagoruyko2016wide} architecture trained with AutoAugment \cite{cubuk2019autoaugment}\footnote{DeepNNK achieves $97.3\%$ test accuracy on CIFAR-10, similar to that of the original network.}.
\begin{figure}[tbp]
\centering
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\textwidth]{figs/ncsn_neighbor_study_data.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\includegraphics[width=\textwidth]{figs/AA_adversarial_neighbor_study_data.pdf}
\caption{}
\end{subfigure}
\caption{Histogram (normalized) of the number of neighbors for (a) generated images \cite{song2019generative}, (b) black-box adversarial images \cite{croce2020reliable}, and actual CIFAR-10 images. We see that generated and adversarial images on average have fewer neighbors than real images, suggesting that these examples often fall in interpolating regions spanned by few training images. An adversarial image arises when these areas of interpolation belong to unstable regions of the classification surface.}
\label{fig:generative_adversarial_neighbors}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/adv_560_neighbors_5.pdf}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{figs/adv_651_neighbors_5.pdf}
\end{subfigure}
\caption{Selected black-box adversarial examples (first image) and their NNK neighbors from the CIFAR-10 training dataset with $k=50$. Though changes in the input image are imperceptible to the human eye, one can better characterize a prediction using NNK by observing the interpolation region of the test instance.}
\label{fig:explainability_example_adversarial}
\end{figure}
Generative and adversarial examples leverage interpolation spaces where a model (the discriminator in the case of generative images, or the classifier in the case of black-box attacks) is influenced by a smaller number of neighboring points.
This is made evident in \Figref{fig:generative_adversarial_neighbors}, where we see that the number of neighbors for generative and adversarial images is on average smaller than that of real images.
We conjecture that this is a property of interpolation: realistic images can be obtained in compact interpolation neighborhoods, while perturbations along extrapolating, mislabeled sample directions produce adversarial images.
Though the adversarial perturbation in the input image space is visually indistinguishable, the change in the embedding of the adversarial image in the interpolation space is significantly larger, in some cases, as in \Figref{fig:explainability_example_adversarial}, belonging to regions completely different from its class.
\section{Discussion and Future Work}
We discussed various ideas, theoretical and practical, ranging from model interpretability and generalization to adversarial and generative examples. Underlying each of these applications is a single common tool, a local polytope interpolation, whose neighborhood support is determined automatically and depends on the input data.
DeepNNK provides a way to incorporate recent theoretical work on interpolation and leads to a better understanding of deep learning models by tracing their predictions back to the input data they were trained on.
We hope these attempts help bring neural networks to more real-world scenarios and motivate further studies and methods for diagnosing machine models through the lens of the training data.
We conclude with a few open thoughts and questions.
\begin{itemize}
\item
Leave-one-out is a particular instance of the more general problem of how a learning system responds to perturbations of its parameters and data. We believe other kinds of perturbations could help better understand neural networks, statistically as well as numerically.
\item
The error in the data interpolation of \eqref{eq:nnk_kernel_objective} can be viewed as arising from data noise or, alternatively, from the absence of examples in some directions (extrapolation). In either scenario, this error can be used to characterize a notion of distance between the data being interpolated and the data available for interpolation.
We believe such a measure could help identify dataset shifts in an unsupervised manner, with possible applications in domain adaptation and transfer learning.
\end{itemize}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Learning structured latent representation of multivariate data is a fundamental problem in machine learning.
Many latent variable generative models have been proposed to date based on different inductive biases that reflect the model's assumptions or people's domain knowledge.
For instance, the objectives of the family of $\beta$-VAEs~\cite{betavae, tcvae, kimdisentangle} try to enforce a coordinate-wise independent structure among latent variables to discover disentangled factors of variation.
While these methods have proven useful in the applications on which they were evaluated, most of them are built in rather heuristic ways to encode the desired structure. One usually needs to construct an entirely different model whenever the domain of application changes. In general, the type of inductive bias differs significantly across applications. It is a burden to craft a different architecture for each application, and there have not been many studies on a general and unified way of explicitly representing an inductive bias to be enforced in generative models.
In this paper, we propose a framework of generative models that can represent various types of inductive biases in the form of Bayesian networks.
Our method not only unifies many existing generative models from previous studies, but can also lead to new insights by establishing connections between different models across different domains and extending them to new applications.
We summarize our contributions in this work as follows: (i)~We propose a novel general framework of probabilistic generative models with explicit dependency structure representation to learn structured latent representations of multivariate data. (ii)~We propose an information-theoretic training objective, generalizing multivariate information bottleneck theory, to encode prior knowledge and impose inductive bias~(Sec.~\ref{sec:framework_mib}). (iii)~We propose a flexible and tractable inference model in which a linear number of inference networks, coupled with a super-exponential number of possible dependency structures, models an exponential number of inference distributions~(Sec.~\ref{sec:framework_inference}). (iv)~We show that our proposed framework unifies many existing models and demonstrate its effectiveness in different application tasks, including multi-modal data generative modeling, algorithmic fairness, and out-of-distribution generalization.
\section{Background}
\subsection{Notations}
\label{sec:bg_notation}
We use capital letters (e.g. ${\textnormal{X}} \equiv {\textnormal{X}}_{1:N}$) to denote a vector of $N$ random variables, and lower case letters (e.g. ${\mathbf{x}}$) for their values. We use $P({\textnormal{X}})$ to denote the probability distribution and $p({\mathbf{x}})$ the corresponding density. Given a set ${\mathbb{S}} \subseteq \left\{ 1, 2, \ldots, N\right\}$ of indexes, we use ${\textnormal{X}}^{\mathbb{S}} \equiv \left[{\textnormal{X}}_i \right]_{i \in {\mathbb{S}}}$ to represent the corresponding subset of random variables. Similar notation is used for a binary indicator vector ${\mathbf{b}}$, where ${\textnormal{X}}^{\mathbf{b}} \equiv \left[{\textnormal{X}}_i \right]_{{\mathbf{b}}_i =1}$.
\subsection{Probability and information theory}
A \textbf{Bayesian network} ${\mathcal{G}} \equiv \left( {\mathcal{V}}, {\mathcal{E}}\right)$ defined over random variables ${\textnormal{X}}$ is a directed acyclic graph, consisting of a set of nodes ${\mathcal{V}} \equiv \left\{ {\textnormal{X}}_i \right\}_{i=1}^N$ and a set of directed edges ${\mathcal{E}} \subseteq {\mathcal{V}}^2$.
A node ${\mathbf{u}}$ is called a \emph{parent} of ${\mathbf{v}}$ if $({\mathbf{v}}, {\mathbf{u}}) \in {\mathcal{E}}$, and for each random variable ${\textnormal{X}}_i$, the set of parents of ${\textnormal{X}}_i$ is denoted by $\parents^{{\mathcal{G}}}_{{\textnormal{X}}_i}$.
We use ${\mathcal{G}}^{\emptyset}$ to denote an empty Bayesian network ${\mathcal{G}}^{\emptyset} \equiv ({\mathcal{V}}, \emptyset)$.
If a distribution $P({\textnormal{X}})$ is consistent with a Bayesian network ${\mathcal{G}}$, then it can be factorized as $p({\mathbf{x}})=\prod_i p({\mathbf{x}}_i \mid \parents^{{\mathcal{G}}}_{{\mathbf{x}}_i})$, denoted by $p \models {\mathcal{G}}$.
We then briefly introduce the information theory concepts used in this paper here.
The Shannon \textbf{Entropy} is defined as $\mathcal{H} ({\textnormal{X}}) = - \mathbb{E}_{p({\mathbf{x}})}\log p({\mathbf{x}})$, measuring the average number of bits needed to encode values of ${\textnormal{X}} \sim P({\textnormal{X}})$.
The \textbf{Kullback–Leibler Divergence}~(KLD) is one of the most fundamental distances between probability distributions, defined as $D_{\mathrm{KL}}\infdivx{P}{Q} = \mathbb{E}_{p}\log \frac{p}{q}$.
\textbf{Mutual Information} $\mathcal{I} ({\textnormal{X}}; {\textnormal{Y}}) = \mathbb{E}_{p({\mathbf{x}}, {\mathbf{y}})} \log \frac{p({\mathbf{x}}, {\mathbf{y}})}{p({\mathbf{x}})p({\mathbf{y}})}$ quantifies the mutual dependence between two random variables ${\textnormal{X}}$ and ${\textnormal{Y}}$; it is zero if and only if ${\textnormal{X}}$ and ${\textnormal{Y}}$ are independent.
\textbf{Multi-Information} is a multivariate generalization of mutual information, defined as $\mathcal{I} ({\textnormal{X}}_{1}, \dots, {\textnormal{X}}_{N}) = D_{\mathrm{KL}}\infdivx{p({\mathbf{x}}_{1:N})}{\prod_{i=1}^N p({\mathbf{x}}_i)}$, which quantifies multivariate statistical dependence for an arbitrary number of random variables.
\cite{jsd} proposed a \textbf{generalized Jensen-Shannon divergence} defined as $D^{\bm{\pi}}_{\mathrm{JS}} = \mathcal{H} \left(\sum_{i=1}^N \pi_i P_i \right) - \sum_{i=1}^N \pi_i \mathcal{H}(P_i)$, where $P_1, \ldots, P_N$ are $N$ distributions with weights $\pi_1, \ldots, \pi_N$. Commonly used Jensen-Shannon divergence~(JSD) can be seen as a special case when $N = 2$ and $\pi_1=\pi_2 = \frac{1}{2}$.
\cite{jsd_abs_mean} further generalized the arithmetic mean $\sum_{i=1}^N \pi_i P_i$ to other abstract means and proposed closed-form results of geometric mean of exponential family distributions and the divergence among them.
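As a quick numerical illustration of the generalized Jensen-Shannon divergence defined above, the following is a minimal sketch of our own for discrete distributions only; disjoint supports with $\pi_1 = \pi_2 = \frac{1}{2}$ recover $\log 2$, the maximum of the standard JSD.
\begin{verbatim}
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def generalized_jsd(dists, weights):
    """D_JS^pi = H(sum_i pi_i P_i) - sum_i pi_i H(P_i)."""
    dists = np.asarray(dists, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mixture = weights @ dists  # arithmetic mean of the P_i
    return entropy(mixture) - sum(
        w * entropy(p) for w, p in zip(weights, dists))

# Disjoint supports, pi = (1/2, 1/2): divergence is log 2.
print(generalized_jsd([[1, 0], [0, 1]], [0.5, 0.5]))  # ~0.6931
\end{verbatim}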
As shown in~\cite{mib}, if a distribution $P({\textnormal{X}}_{1:N})$ is consistent with a Bayesian network ${\mathcal{G}}$, the multi-information $\mathcal{I}({\textnormal{X}})$ can be expressed as a sum of all local mutual information terms: $\mathcal{I}({\textnormal{X}}) = \sum_{i=1}^N \mathcal{I} \left( {\textnormal{X}}_i; \parents^{{\mathcal{G}}}_{{\textnormal{X}}_i} \right)$.
Then the multi-information in $P({\textnormal{X}})$ with respect to an arbitrary valid Bayesian network ${\mathcal{G}}$ can be defined \footnote{Note that $P({\textnormal{X}})$ is not necessarily consistent with ${\mathcal{G}}$ here} as $\mathcal{I}^{{\mathcal{G}}}_{p}({\textnormal{X}}) = \sum_{i=1}^N \mathcal{I}^{{\mathcal{G}}}_p \left( {\textnormal{X}}_i; \parents^{{\mathcal{G}}}_{{\textnormal{X}}_i} \right)$.
The \textbf{M-projection}~\cite{pgm_book,mib} of a distribution $P({\textnormal{X}})$ onto the set of distributions consistent with a Bayesian network ${\mathcal{G}}$ is defined as $\mathbb{D}\infdivx{p}{{\mathcal{G}}} = \min_{q \models {\mathcal{G}}} D_{\mathrm{KL}}\infdivx{p}{q}$. The following result was introduced in~\cite{mib}:
\begin{equation}
\mathbb{D}\infdivx{p}{{\mathcal{G}}} = \min_{q \models {\mathcal{G}}} D_{\mathrm{KL}}\infdivx{p}{q} = \mathcal{I}_{p}({\textnormal{X}}) - \mathcal{I}^{{\mathcal{G}}}_{p}({\textnormal{X}})
\end{equation}
where the subscript denotes the distribution under which the mutual information terms are evaluated, and the superscript denotes the graphical structure that determines the sets of parent nodes used in $\mathcal{I}^{{\mathcal{G}}}_p \left( {\textnormal{X}}_i; \parents^{{\mathcal{G}}}_{{\textnormal{X}}_i} \right)$.
\subsection{Variational autoencoder}
Variational autoencoder~(VAE)~\cite{kingma-vae} is a probabilistic latent variable generative model $p_{{\bm{\theta}}}({\mathbf{x}}, {\mathbf{z}}) = p_{{\bm{\theta}}}({\mathbf{z}})p_{{\bm{\theta}}}({\mathbf{x}} \mid {\mathbf{z}})$, where $p_{{\bm{\theta}}}({\mathbf{z}})$ is the prior of latent variables ${\textnormal{Z}}$ and $p_{{\bm{\theta}}}({\mathbf{x}} \mid {\mathbf{z}})$ is the likelihood distribution for observed variable ${\textnormal{X}}$.
The generative model is often optimized together with a tractable distribution $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})$ that approximates the posterior distribution.
The distributions are usually parametrized by neural networks with parameters ${\bm{\theta}}$ and ${\bm{\phi}}$.
The inference model and generation model are jointly optimized by a lower-bound of the KLD between $q_{{\bm{\phi}}}$ and $p_{{\bm{\theta}}}$ in the augmented space $({\textnormal{X}}, {\textnormal{Z}})$, namely \emph{ELBO}:
\begin{equation}
\mathbb{E}_{q_{{\bm{\phi}}}} \log p_{{\bm{\theta}}}({\mathbf{x}} | {\mathbf{z}}) - \mathbb{E}_{q_{{\bm{\phi}}}({\mathbf{x}})} D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})}{p_{{\bm{\theta}}}({\mathbf{z}})} \equiv \mathcal{L}_{\mathrm{ELBO}}\\
\end{equation}
Note $- \mathcal{L}_{\mathrm{ELBO}} \ge D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{x}})q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})}{p_{{\bm{\theta}}}({\mathbf{z}}) p_{{\bm{\theta}}}({\mathbf{x}} \mid {\mathbf{z}})}$ where $q_{{\bm{\phi}}}({\mathbf{x}}) = p_{\rm{data}}({\mathbf{x}})$ denotes the empirical data distribution.
The above objective can be optimized efficiently with the re-parametrization trick~\cite{kingma-vae,kingma2019an}.
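As a minimal sketch of the re-parametrization trick (our own illustration), assuming a factorized Gaussian posterior and a standard normal prior so that the KL term has its well-known closed form:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def elbo_one_sample(mu, log_var, log_likelihood):
    """One-sample ELBO estimate for
    q(z|x) = N(mu, diag(exp(log_var))).

    log_likelihood(z) -> log p(x|z) under the decoder
    (a placeholder callable)."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps  # reparametrized sample
    # Closed-form KL(q(z|x) || N(0, I)) for factorized Gaussians.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return log_likelihood(z) - kl
\end{verbatim}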
\subsection{Multivariate information bottleneck}
\label{sec:bg_mib}
Multivariate Information Bottleneck~(MIB) theory proposed by~\cite{mib,mib_slonim} extends the information bottleneck theory~\cite{ib_1999} to multivariate setting.
Given a set of observed variables ${\textnormal{X}}$, the MIB framework introduced a Bayesian network ${\mathcal{G}}^{\mathrm{in}}$ to define the solution space of latent variables ${\textnormal{Z}}$ via $q({\textnormal{X}}, {\textnormal{Z}}) \models {\mathcal{G}}^{\mathrm{in}}$.
Another Bayesian network ${\mathcal{G}}^{\mathrm{out}}$ is introduced to specify the relevant information to be preserved in ${\textnormal{Z}}$.
Then the MIB functional objective is defined as $\mathcal{L}_{MIB}^1(q) = \mathcal{I}_{q}^{{\mathcal{G}}^{\mathrm{in}}}({\textnormal{X}}) - \beta \mathcal{I}^{{\mathcal{G}}^{\mathrm{out}}}_{q}({\textnormal{X}})$.
An alternative structural MIB functional objective is defined as $\mathcal{L}_{MIB}^2(q) = \mathcal{I}_{q}^{{\mathcal{G}}^{\mathrm{in}}}({\textnormal{X}}) + \gamma \mathbb{D}\infdivx{q({\mathbf{x}}, {\mathbf{z}})}{{\mathcal{G}}^{\mathrm{out}}}$, and further relaxed by~\cite{mib_hidden} as $\mathcal{L}_{MIB}^2 (q,p) = \mathcal{I}_{q}^{{\mathcal{G}}^{\mathrm{in}}}({\textnormal{X}}) + \gamma D_{\mathrm{KL}}\infdivx{q({\mathbf{x}}, {\mathbf{z}})}{p({\mathbf{x}}, {\mathbf{z}})}$. We refer to~\cite{mib,mib_hidden} for more details of MIB theory.
\label{sec:mib}
\section{Framework}
\label{sec:framework}
\subsection{Preliminaries}
Given a dataset $\mathcal{D} = \left\{{\mathbf{x}}^{d}\right\}_{d=1}^{|\mathcal{D}|}$, we assume that observations are generated from some random process governed by a set of latent factors, which could be categorized into two types: private latent factors ${\textnormal{U}} \equiv {\textnormal{U}}_{1: N} \equiv \left[{\textnormal{U}}_{1}, {\textnormal{U}}_{2}, \ldots, {\textnormal{U}}_{N}\right]$ and common latent factors ${\textnormal{Z}} \equiv {\textnormal{Z}}_{1: M} \equiv\left[{\textnormal{Z}}_{1}, {\textnormal{Z}}_{2}, \ldots, {\textnormal{Z}}_{M}\right]$. We use ${\textnormal{U}}_i$ to denote the latent factors that are exclusive to the variable ${\mathbf{x}}_i$ and assume a jointly independent prior distribution $P({\textnormal{U}})$. We use ${\textnormal{Z}}$ to denote the latent factors that are possibly shared by some subset of observed variables and assume a prior distribution $P({\textnormal{Z}})$. The dimension of each ${\textnormal{U}}_i$ and ${\textnormal{Z}}_j$ is arbitrary.
\subsection{Generative model with explicit dependency structure representation}
\label{sec:generative_model}
\textbf{Generation model}
We explicitly model the dependency structure from ${\textnormal{Z}}$ to ${\textnormal{X}}$ in the random generation process with a binary matrix variable ${\textnormal{M}}^p \equiv \left[{\textnormal{M}}^p_{ij}\right] \in \left\{0, 1\right\}_{N \times M}$.
${\textnormal{M}}^p_{ij} = 1$ when the latent factor ${\textnormal{Z}}_j$ contributes to the random generation process of ${\textnormal{X}}_i$, or otherwise ${\textnormal{M}}^p_{ij} = 0$.
Let ${\textnormal{M}}^p_i = \left[{\textnormal{M}}^p_{i1}, {\textnormal{M}}^p_{i2}, \ldots, {\textnormal{M}}^p_{iM}\right]$ denote the $i$-th row of ${\textnormal{M}}^p$.
We can define our generative model $p_{\bm{\theta}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$ as
\begin{equation}
\label{eq:p_gen}
p_{\bm{\theta}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}}) = p_{{\bm{\theta}}}({\mathbf{z}}) \prod_{i=1}^N p_{{\bm{\theta}}}({\mathbf{u}}_i) \prod_{i=1}^N p_{{\bm{\theta}}}({\mathbf{x}}_i \mid {\mathbf{z}}^{{\mathbf{m}}^p_i}, {\mathbf{u}}_i)
\end{equation}
where ${\bm{\theta}}$ is the parameter for parameterizing the generation model distribution.
The structure of the generation model is illustrated by Bayesian network ${\mathcal{G}}^p_{\mathrm{full}}$ in Figure~\ref{fig:bn_mvae}, where the structural variable ${\textnormal{M}}^p$ is depicted as the dashed arrows.
\textbf{Inference}
We introduce an inference model to approximate the true posterior distributions.
We introduce another binary matrix variable ${\textnormal{M}}^q \equiv \left[{\textnormal{M}}^q_{ij}\right] \in \left\{0, 1\right\}_{N \times M}$.
${\textnormal{M}}^q_{ij} = 1$ when the observed variable ${\textnormal{X}}_i$ contributes to the inference process of ${\textnormal{Z}}_j$, or otherwise ${\textnormal{M}}^q_{ij} = 0$.
Let ${\textnormal{M}}^q_j = \left[{\textnormal{M}}^q_{1j}, {\textnormal{M}}^q_{2j}, \ldots, {\textnormal{M}}^q_{Nj}\right]$ denote the $j$-th column of ${\textnormal{M}}^q$.
We assume that the latent variables are conditionally jointly independent given the observed variables.
We can then define our inference model $q_{\bm{\phi}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$ as:
\begin{equation}
\label{eq:q_inf}
q_{\bm{\phi}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}}) = q_{{\bm{\phi}}}({\mathbf{x}}) \prod_{i=1}^N q_{{\bm{\phi}}}({\mathbf{u}}_i \mid {\mathbf{x}}_i) \prod_{j=1}^M q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}^{{\mathbf{m}}^q_j})
\end{equation}
where ${\bm{\phi}}$ is the parameter for parameterizing the inference distribution.
The structure of the inference model is illustrated by Bayesian network ${\mathcal{G}}^q_{\mathrm{full}}$ in Figure~\ref{fig:bn_mvae}, where the structural variable ${\textnormal{M}}^q$ is depicted as the dashed arrows.
\subsection{Learning from information-theoretic perspective}
\label{sec:framework_mib}
We motivate our learning objective based on the MIB~\cite{mib} theory.
We can define a Bayesian network ${\mathcal{G}}^q \equiv \left( {\mathcal{V}}^q, {\mathcal{E}}^q \right)$ that is consistent with the inference model distribution $q_{\bm{\phi}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}}) \models {\mathcal{G}}^q$ according to ${\textnormal{M}}^q$.
A directed edge from ${\textnormal{X}}_i$ to ${\textnormal{U}}_i$ is added for each $i \in \left\{1, 2, \ldots, N\right\}$ and an edge from ${\textnormal{X}}_i$ to ${\textnormal{Z}}_j$ is added if and only if ${\textnormal{m}}^q_{ij} = 1$.
Note that we could omit all edges between observed variables in ${\mathcal{G}}^q$ as shown in~\cite{mib, mib_hidden}.
A Bayesian network ${\mathcal{G}}^p \equiv \left( {\mathcal{V}}^p, {\mathcal{E}}^p \right)$ can be constructed according to ${\textnormal{M}}^p$ in a similar way.
As introduced in Sec.~\ref{sec:bg_mib}, we have the following structural variational objective from the MIB theory:
\begin{equation}
\label{eq:mib_loss}
\min_{p_{\bm{\theta}} \models {\mathcal{G}}^p, q_{\bm{\phi}} \models {\mathcal{G}}^q} \mathcal{L}({\bm{\theta}}, {\bm{\phi}}) = \mathcal{I}^{{\mathcal{G}}^q}_q + \gamma D_{\mathrm{KL}}\infdivx{q_{\bm{\phi}}}{p_{\bm{\theta}}}
\end{equation}
The above objective provides a principled way to trade off, through $\gamma > 0$, between (i) the compactness of the learned latent representation, measured by $\mathcal{I}^{{\mathcal{G}}^q}_q$, and (ii) the consistency between $q_{{\bm{\phi}}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$ and $p_{{\bm{\theta}}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$, measured by the KLD.
We further generalize this objective to enable encoding a broader class of prior knowledge or desired structures into the latent space.
We prescribe the dependency structure and conditional independence rules that the learned joint distribution of $\left( {\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}} \right)$ should follow, in the form of a set of Bayesian networks $\left\{ {\mathcal{G}}^k \equiv \left({\mathcal{V}}^k, {\mathcal{E}}^k \right) \right\}, k = 1,\ldots, K$.
We optimize over the inference distributions $q_{\bm{\phi}}$ to make them as consistent with each ${\mathcal{G}}^k$ as possible, measured by the M-projection onto ${\mathcal{G}}^k$.
Formally we have the following constrained optimization objective:
\begin{equation}
\begin{aligned}
&\min_{p_{\bm{\theta}} \models {\mathcal{G}}^p, q_{\bm{\phi}} \models {\mathcal{G}}^q} \mathcal{L}({\bm{\theta}}, {\bm{\phi}}) = D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}}{p_{{\bm{\theta}}}}\\
&\mathrm{s.t.}\quad\mathbb{D}\infdivx{q_{\bm{\phi}}}{{\mathcal{G}}^k} = 0 \quad k = 1, 2, \ldots, K
\end{aligned}
\end{equation}
In this way, we impose the preferences over the structure of learned distributions as explicit constraints. We relax the above constrained optimization objective with generalized Lagrangian
\begin{equation}
\max_{{\bm{\beta}} \ge {\bm{0}}} \min_{p_{\bm{\theta}} \models {\mathcal{G}}^p, q_{\bm{\phi}} \models {\mathcal{G}}^q} \mathcal{L} = D_{\mathrm{KL}}\infdivx{q_{\bm{\phi}}}{p_{\bm{\theta}}} + \sum_{k=1}^K \beta_k \mathbb{D}\infdivx{q_{\bm{\phi}}}{{\mathcal{G}}^k}
\end{equation}
where ${\bm{\beta}} \equiv \left[ \beta_1, \beta_2, \ldots, \beta_K \right]$ is the vector of Lagrangian multipliers.
In this work we fix ${\bm{\beta}}$ as constant hyper-parameters, governing the trade-off between structural regularization and distribution consistency matching. Following the idea proposed in~\cite{lagvae}, we could also generalize the distribution matching loss by using a vector of $T$ \emph{cost functions} ${\bm{C}} \equiv \left[{\mathbb{C}}_1, {\mathbb{C}}_2, \ldots, {\mathbb{C}}_T \right]$ and a vector of Lagrangian multipliers ${\bm{\alpha}} \equiv \left[\alpha_1, \alpha_2, \ldots, \alpha_T \right]$.
Each ${\mathbb{C}}_i$ can be any probability distribution divergence between $q_{{\bm{\phi}}}$ and $p_{{\bm{\theta}}}$, or any measurable cost function defined over corresponding samples.
Thus we could decompose the overall objective as
\begin{equation}
\label{eq:loss}
\begin{split}
&\mathcal{L} = \Ls_{\mathrm{dist}} + \Ls_{\mathrm{str\_reg}} \\
&\Ls_{\mathrm{dist}} = \sum_{t=1}^T \alpha_t {\mathbb{C}}_t(q_{\bm{\phi}} \;\|\; p_{\bm{\theta}}), \quad {\bm{\alpha}} \ge {\bm{0}}\\
&\Ls_{\mathrm{str\_reg}} = \sum_{k=1}^K \beta_k \mathbb{D}\infdivx{q_{\bm{\phi}}}{{\mathcal{G}}^k}, \quad {\bm{\beta}} \ge {\bm{0}} \, . \\
\end{split}
\end{equation}
By setting ${\mathbb{C}}_1 = D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}}{p_{{\bm{\theta}}}}$ and ${\mathcal{G}}^1 = {\mathcal{G}}^{\emptyset}$, we recover the original MIB structural variational objective in Eq.~\ref{eq:mib_loss} as a special case. We include the detailed proof in Appendix~\ref{ap:framework}.
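Schematically, the overall objective in Eq.~\ref{eq:loss} amounts to a weighted combination of cost terms and structural penalties; a minimal sketch, with the individual terms abstracted as hypothetical callables, is:
\begin{verbatim}
def total_loss(cost_fns, alphas, str_penalties, betas):
    """Sketch of Eq. (loss):
    L = sum_t alpha_t C_t(q || p) + sum_k beta_k D(q || G^k)."""
    l_dist = sum(a * c() for a, c in zip(alphas, cost_fns))
    l_str = sum(b * d() for b, d in zip(betas, str_penalties))
    return l_dist + l_str
\end{verbatim}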
\subsection{Tractable inference and generation}
Though we have defined our generation and inference models in Sec.~\ref{sec:generative_model}, it is not yet clear how to practically parametrize $q_{{\bm{\phi}}}$ and $p_{{\bm{\theta}}}$ in a tractable and flexible way that handles the super-exponential number of possible structures ${\textnormal{M}}^p, {\textnormal{M}}^q$ while allowing efficient inference and optimization.
\textbf{Inference model}~
\label{sec:framework_inference}
We identify the key desiderata of our inference model defined in~Eq.~\ref{eq:q_inf} as
(i) being compatible with any valid structure variable ${\textnormal{M}}^q$, and
(ii) being able to handle missing observed variables in $q({\mathbf{z}}_j \mid {\mathbf{x}}^{{\mathbf{m}}^q_j})$, in a unified and principled way.
Building upon the assumption of our generation model distribution $p_{{\bm{\theta}}}$ in~Eq.~\ref{eq:p_gen} that all observed variables ${\textnormal{X}}$ are conditionally jointly independent given ${\textnormal{Z}}$, we obtain the following factorized formulation of the true posterior distribution $p_{{\bm{\theta}}}({\mathbf{z}} \mid {\mathbf{x}})$ by applying Bayes' rule:
\begin{equation}
\begin{aligned}
&p_{\bm{\theta}}\left({\mathbf{z}} \mid {\mathbf{x}}^{{\mathbb{S}}} \right) =\frac{p_{{\bm{\theta}}}({\mathbf{x}}^{{\mathbb{S}}} \mid {\mathbf{z}}) p_{{\bm{\theta}}}({\mathbf{z}})}{p_{{\bm{\theta}}}({\mathbf{x}}^{{\mathbb{S}}})} = \frac{p_{{\bm{\theta}}}({\mathbf{z}})}{p_{{\bm{\theta}}}({\mathbf{x}}^{{\mathbb{S}}})} \prod_{i \in {\mathbb{S}}} p_{{\bm{\theta}}}\left({\mathbf{x}}_{i} \mid {\mathbf{z}} \right) \\
&= \frac{p_{{\bm{\theta}}}({\mathbf{z}})}{p_{{\bm{\theta}}}({\mathbf{x}}^{{\mathbb{S}}})} \prod_{i \in {\mathbb{S}}} \frac{p_{{\bm{\theta}}}({\mathbf{z}} \mid {\mathbf{x}}_i) p_{{\bm{\theta}}}({\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}})} \propto p_{{\bm{\theta}}}({\mathbf{z}}) \prod_{i \in {\mathbb{S}}} \frac{p_{{\bm{\theta}}}({\mathbf{z}} \mid {\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}})}
\end{aligned}
\end{equation}
where ${\mathbb{S}} \subseteq \left\{ 1, 2, \ldots, N \right\}$.
In this way, we establish the relationship between the joint posterior distribution $p_{{\bm{\theta}}}({\mathbf{z}} \mid {\mathbf{x}})$ and the individual posterior distributions $p_{{\bm{\theta}}}({\mathbf{z}} \mid {\mathbf{x}}_i)$.
We adopt the same formulation in our inference model distribution as $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}^{{\mathbb{S}}}) \propto p_{{\bm{\theta}}}({\mathbf{z}}) \prod_{i \in {\mathbb{S}}} \frac{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}})}$, using $N$ individual approximate posterior distributions $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)$.
In this work, we assume that $p_{{\bm{\theta}}}({\mathbf{z}})$ and $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)$ are all following factorized Gaussian distributions.
And each individual posterior $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)$ can be represented as:
\begin{equation}
q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i) = \prod_{j=1}^M q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)^{{\mathbf{m}}^q_{ij}}p_{{\bm{\theta}}}({\mathbf{z}}_j)^{1 - {\mathbf{m}}^q_{ij}}
\end{equation}
where each factor is a multiplicative mixture between the approximate posterior $q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)$ and the prior $p_{{\bm{\theta}}}({\mathbf{z}}_j)$, weighted by ${\mathbf{m}}^q_{ij}$.
Since the quotient of two Gaussian distributions is also Gaussian under well-defined conditions, we can parametrize the quotient $\frac{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}})}$ with a Gaussian distribution $\tilde{q}_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)$. In this case,
\begin{equation}
\begin{split}
&\frac{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}})} = \prod_{j=1}^M \frac{q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)^{{\mathbf{m}}^q_{ij}}p_{{\bm{\theta}}}({\mathbf{z}}_j)^{1 - {\mathbf{m}}^q_{ij}}}{p_{{\bm{\theta}}}({\mathbf{z}}_j)^{{\mathbf{m}}^q_{ij} + 1 - {\mathbf{m}}^q_{ij}}}\\
&= \prod_{j=1}^M \left(\frac{q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}}_j)}\right)^{{\mathbf{m}}^q_{ij}} = \prod_{j=1}^M \left(\tilde{q}_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i) \right)^{{\mathbf{m}}^q_{ij}}
\end{split}
\end{equation}
where we use an inference network $\tilde{q}_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)$ to parametrize $q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)$ as $q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i) = \tilde{q}_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)p_{{\bm{\theta}}}({\mathbf{z}}_j)$.
We show our full inference distribution $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})$ as:
\begin{equation}
\begin{aligned}
q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}^{{\mathbb{S}}})
\propto \prod_{j=1}^M \left(p_{{\bm{\theta}}}({\mathbf{z}}_j)\prod_{i\in {\mathbb{S}}} \left(\tilde{q}_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i) \right)^{{\mathbf{m}}^q_{ij}} \right)
\end{aligned}
\label{eq:inference_poe}
\end{equation}
which is a weighted product-of-experts~\cite{hinton_poe} distribution for each latent variable ${\textnormal{Z}}_j$.
We include the detailed derivation in Appendix~\ref{ap:framework}.
The structure variable ${\textnormal{M}}^q_{ij}$ controls the weight of each multiplicative component $\tilde{q}_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)$ in the process of shaping the joint posterior distribution $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})$.
As a result of the Gaussian assumptions, the weighted product-of-experts distribution above has a closed-form solution.
Suppose $p_{{\bm{\theta}}}({\mathbf{z}}) \sim \mathcal{N}\left({\bm{\mu}}_0, \mathrm{diag}\left({\bm{\sigma}}_0\right)\right)$, $\tilde{q}_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i) \sim \mathcal{N}\left({\bm{\mu}}_i, \mathrm{diag}\left({\bm{\sigma}}_i\right)\right)$ for $i = 1, 2, \ldots, N$.
We introduce "dummy" variables in ${\mathbf{m}}^q$ that ${\mathbf{m}}^q_{0j} = 1$ for all $j$.
Then we have
\begin{equation}
\begin{split}
&q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}^{\mathbb{S}}) \sim \mathcal{N} \left( {\bm{\mu}}^q, \mathrm{diag}\left({\bm{\sigma}}^q\right) \right) \\
& \frac{1}{{\bm{\sigma}}_j^q} = \sum_{i \in {\mathbb{S}} \cup \left\{0\right\} } \frac{{\mathbf{m}}^q_{ij}}{{\bm{\sigma}}_{ij}} \quad {\bm{\mu}}^q_j = \frac{1}{{\bm{\sigma}}_j^q} \sum_{i \in {\mathbb{S}} \cup \left\{0\right\} } \frac{{\mathbf{m}}^q_{ij}}{{\bm{\sigma}}_{ij}}{\bm{\mu}}_{ij} \, . \\
\end{split}
\end{equation}
With the derived inference model above, we are now able to model $2^N$ posterior inference distributions $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}^{\mathbb{S}}) \; \forall {\mathbb{S}}$, coupled with $2^{N \times M}$ possible discrete structures ${\textnormal{M}}^q$, with $N$ inference networks $\tilde{q}_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)$.
Note that the introduced distribution $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})$ remains valid when we extend the value of structure variable ${\textnormal{M}}^q$ to continuous domain $\mathbb{R}^{N \times M}$, which paves the way to gradient-based structure learning.
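The closed-form weighted product of Gaussian experts above is cheap to evaluate; a minimal sketch for a single latent variable ${\textnormal{Z}}_j$, treating the prior as the dummy expert with weight one (our own illustrative conventions), is:
\begin{verbatim}
import numpy as np

def poe_posterior(mus, sigmas, mask, mu0, sigma0):
    """Weighted product of Gaussian experts for one latent z_j.

    mus, sigmas: (N, d) means / variances of q~(z_j | x_i).
    mask:        (N,) entries m^q_{ij} in [0, 1].
    mu0, sigma0: (d,) prior mean / variance (dummy expert)."""
    precision = 1.0 / sigma0 + np.sum(mask[:, None] / sigmas,
                                      axis=0)
    var = 1.0 / precision
    mean = var * (mu0 / sigma0 +
                  np.sum(mask[:, None] * mus / sigmas, axis=0))
    return mean, var
\end{verbatim}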
\textbf{Generation model}
We could parametrize our generation model $p_{{\bm{\theta}}}$ in a symmetric way, with weighted product-of-experts distributions built from $p_{{\bm{\theta}}}({\mathbf{x}}_i \mid {\mathbf{z}}_j)$ and ${\textnormal{M}}^p$.
In this work we adopt an alternative approach, since the Gaussian distribution assumption is inappropriate in complex raw data domains, such as image pixels.
We instead use ${\textnormal{M}}^p$ as a gating variable and parametrize $p_{{\bm{\theta}}}({\mathbf{x}}_i \mid {\mathbf{z}}^{{\mathbf{m}}^p_i})$ in the form $p_{{\bm{\theta}}}({\mathbf{x}}_i \mid {\mathbf{z}}^{{\mathbf{m}}^p_i}) = p_{{\bm{\theta}}}({\mathbf{x}}_i \mid {\mathbf{z}} \odot {\mathbf{m}}^p_i)$, where $\odot$ denotes element-wise multiplication. This remains tractable since the prior $p_{{\bm{\theta}}}({\mathbf{z}})$ is known.
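The gating itself is a one-line operation; a sketch with a hypothetical decoder callable is:
\begin{verbatim}
def gated_decode(decoder_i, z, m_p_i):
    """p(x_i | z^{m^p_i}) parametrized as decoder_i(z * m^p_i):
    latent factors with m^p_{ij} = 0 are zeroed out, so x_i
    depends only on its parents in G^p."""
    return decoder_i(z * m_p_i)
\end{verbatim}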
\subsection{Tractable optimization}
\textbf{Structural regularization $\Ls_{\mathrm{str\_reg}}$}
We now take a closer look at the structural regularization term $\Ls_{\mathrm{str\_reg}}$ in our training objective in Eq.~\ref{eq:loss}.
As introduced in Sec.~\ref{sec:bg_mib}, we have $\mathbb{D}\infdivx{q_{{\bm{\phi}}}}{{\mathcal{G}}^k} = \sum_{{\textnormal{v}} \in \left\{{\mathbf{x}},{\mathbf{z}} \right\} } \mathcal{I}_q\semicolondiv{{\textnormal{v}}}{\parents^{{\mathcal{G}}^q}_{{\textnormal{v}}}} - \sum_{{\textnormal{v}} \in \left\{{\mathbf{x}},{\mathbf{z}} \right\}} \mathcal{I}_q\semicolondiv{{\textnormal{v}}}{\parents^{{\mathcal{G}}^k}_{{\textnormal{v}}}}$.
This objective poses the new challenge of estimating and optimizing mutual information.
Note that any differentiable mutual information estimation and optimization method can be applied here.
In this paper, we propose to use tractable variational lower/upper-bounds of the intractable mutual information by re-using distributions $q_{{\bm{\phi}}}$ and $p_{{\bm{\theta}}}$.
We refer to~\cite{mi_bounds} for a detailed review and discussion of state-of-the-art tractable mutual information optimization methods.
\begin{algorithm}[tb]
\caption{Training with optional structure learning}
\label{alg:learning}
\begin{algorithmic}
\REQUIRE dataset $\mathcal{D} = \left\{{\mathbf{x}}^{d}\right\}_{d=1}^{|\mathcal{D}|}$
\REQUIRE parameters ${\bm{\phi}}, {\bm{\theta}}, \bm{\rho^q}, \bm{\rho^p}$
\REQUIRE Bayesian Networks $\left\{ {\mathcal{G}}^k \equiv \left({\mathcal{V}}^k, {\mathcal{E}}^k \right) \right\}$
\REQUIRE hyper-parameters ${\bm{\alpha}}$, ${\bm{\beta}}$
\REQUIRE number of iterations to update distribution parameters $steps\_{dist} > 0$
\REQUIRE number of iterations to update structure parameters $steps\_{str} \ge 0$
\REQUIRE mini-batch size $bs$
\REQUIRE gradient-based optimizer $opt$
\STATE initialize all parameters ${\bm{\phi}}, {\bm{\theta}}, \bm{\rho^q}, \bm{\rho^p}$
\REPEAT
\FOR{$step=1$ {\bfseries to} $steps\_{dist}$}
\STATE randomly sample a mini-batch $\mathcal{B}$ of size $bs$ from dataset $\mathcal{D}$
\STATE evaluate loss $\Ls_{\mathrm{dist}}^\mathcal{B}$ using Eq.~\ref{eq:loss}
\STATE compute gradients $\nabla_{{\bm{\phi}}} \Ls_{\mathrm{dist}}^\mathcal{B}$, $\nabla_{{\bm{\theta}}}\Ls_{\mathrm{dist}}^\mathcal{B}$
\STATE $opt.optimize(\left[{\bm{\phi}}, {\bm{\theta}} \right], \left[ \nabla_{{\bm{\phi}}} \Ls_{\mathrm{dist}}^\mathcal{B}, \nabla_{{\bm{\theta}}}\Ls_{\mathrm{dist}}^\mathcal{B}\right])$
\ENDFOR
\FOR{$step=1$ {\bfseries to} $steps\_{str}$}
\STATE randomly sample a mini-batch $\mathcal{B}$ of size $bs$ from dataset $\mathcal{D}$
\STATE evaluate loss $\mathcal{L}_{\mathrm{score}}^\mathcal{B}$ using Eq.~\ref{eq:loss_score}
\STATE compute gradients $\nabla_{\bm{\rho^q}} \mathcal{L}_{\mathrm{score}}^\mathcal{B}$, $\nabla_{\bm{\rho^p}} \mathcal{L}_{\mathrm{score}}^\mathcal{B}$
\STATE $opt.optimize(\left[\bm{\rho^q}, \bm{\rho^p} \right], \left[\nabla_{\bm{\rho^q}} \mathcal{L}_{\mathrm{score}}^\mathcal{B}, \nabla_{\bm{\rho^p}} \mathcal{L}_{\mathrm{score}}^\mathcal{B} \right])$
\ENDFOR
\UNTIL{converged}
\end{algorithmic}
\end{algorithm}
\label{sec:framework_ldist}
\textbf{Distribution consistency $\Ls_{\mathrm{dist}}$} We aim to achieve consistency between the joint distributions $q_{{\bm{\phi}}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$ and $p_{{\bm{\theta}}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$ through the $T$ cost functions in $\Ls_{\mathrm{dist}}$.
With the proposed inference model in Sec.~\ref{sec:framework_inference}, we could decompose our $\Ls_{\mathrm{dist}}$ into two primary components:
(i)~\emph{Enforcing $q_{{\bm{\phi}}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}}) = p_{{\bm{\theta}}}({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$.}~Many objectives have been proposed in previous works~\cite{kingma-vae,wae,ali,bigan} for learning a latent variable generative model of the joint distribution; any tractable objective can be utilized here, and we adopt the \emph{ELBO} as the default choice.
(ii)~\emph{Enforcing $q_{{\bm{\phi}}}({\mathbf{z}}) = p_{{\bm{\theta}}}({\mathbf{z}})$.}~We explicitly include this objective in $\Ls_{\mathrm{dist}}$ because of our $p_{{\bm{\theta}}}$-dependent parametrization $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}) \propto p_{{\bm{\theta}}}({\mathbf{z}}) \prod_{i=1}^{N} \frac{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}})}$. We thus explicitly enforce consistency between the induced marginal distribution $q_{{\bm{\phi}}}({\mathbf{z}}) \equiv \mathbb{E}_{q_{{\bm{\phi}}}} q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})$ and $p_{{\bm{\theta}}}({\mathbf{z}})$. Tractable divergence estimators for minimizing $ C_T\left(q_{\bm{\phi}}({\mathbf{z}}) \;\|\; p_{\bm{\theta}}({\mathbf{z}}) \right)$ have been proposed and analyzed in previous works, yielding
\begin{equation}
\begin{aligned}
\Ls_{\mathrm{dist}} = \sum_{t=1}^{T-1} \alpha_t C_t(q_{\bm{\phi}} \;\|\; p_{\bm{\theta}}) + \alpha_T C_T\left(q_{\bm{\phi}}({\mathbf{z}}) \;\|\; p_{\bm{\theta}}({\mathbf{z}})\right) \, .
\end{aligned}
\end{equation}
With the distribution consistency objective and the compositional inference model introduced in Sec.~\ref{sec:framework_inference}, we can train the latent variable generative model in a weakly/semi-supervised manner with respect to (i)~incomplete data, where ${\textnormal{X}}$ is partially observed (e.g. missing attributes in feature vectors, or a missing modality in a multi-modal dataset), and (ii)~partially known dependency structure in ${\textnormal{M}}^q$ and ${\textnormal{M}}^p$.
\textbf{Structure learning}~
In this work, we show that our proposed framework is capable of learning the structure of the Bayesian networks ${\mathcal{G}}^q$ and ${\mathcal{G}}^p$ efficiently, building on existing structure learning methods with \emph{gradient-based} optimization techniques, which avoids searching over the discrete super-exponential space.
Specifically, we show that our proposed framework can
(i) represent the assumptions made about the structure of the true data distribution as a set of structural regularizations given by the Bayesian networks $\{{\mathcal{G}}^k\}$, serving as \emph{explicit inductive bias};
a score-based structure learning objective is then introduced, in which $\Ls_{\mathrm{str\_reg}}$ plays a vital role in scoring each candidate structure;
and (ii) utilize non-stationary data from multiple environments~\cite{nonlinear_ica_tcl,irm,mila_metacausal} as additional observed random variables.
We show the score-based structure learning objective as below
\begin{equation}
\label{eq:loss_score}
\min_{{\mathbf{m}}^q, {\mathbf{m}}^p} \mathcal{L}_{\mathrm{score}} = \Ls_{\mathrm{dist}} + \Ls_{\mathrm{str\_reg}} + \mathcal{L}_{\mathrm{sparsity}} \, .
\end{equation}
We assume a jointly factorized Bernoulli distribution prior for structure variable ${\textnormal{M}}^q$ and ${\textnormal{M}}^p$, parametrized by $\bm{\rho^q}$ and $\bm{\rho^p}$.
We use the Gumbel-Softmax trick proposed by \cite{gumbel-softmax,concrete_dist,gumbel_family} as the gradient estimator.
Following the Bayesian Structural EM~\cite{structural_em,mib_hidden} algorithm, we optimize the model alternatively between optimizing distributions $\mathcal{L}(q_{{\bm{\phi}}}, p_{{\bm{\theta}}})$ and structure variables $\mathcal{L}_{\mathrm{score}}({\mathbf{m}}^q, {\mathbf{m}}^p)$.
We present the full algorithm to train the proposed generative model with optional structure learning procedure in Alg.~\ref{alg:learning}.
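For the relaxed Bernoulli samples of the structure variables, a minimal binary-Concrete sketch, one possible instantiation of the Gumbel-Softmax trick and not the paper's exact implementation, is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def relaxed_bernoulli_mask(rho, temperature=0.5):
    """Differentiable relaxation of M ~ Bernoulli(rho)
    via the binary Concrete distribution."""
    u = rng.uniform(1e-8, 1.0 - 1e-8, size=rho.shape)
    logits = np.log(rho) - np.log1p(-rho)  # log(rho/(1-rho))
    noise = np.log(u) - np.log1p(-u)       # logistic noise
    return 1.0 / (1.0 + np.exp(-(logits + noise) / temperature))
\end{verbatim}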
\begin{table}[t]
\caption{Distribution consistency objectives $\Ls_{\mathrm{dist}}$}
\label{tb:pool_ldist}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{cc}
\toprule
$C$ & $definition$ \\
\midrule
$C_0({\mathbf{x}}, {\mathbf{z}}, {\mathbf{u}})$ & $D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}}{p_{{\bm{\theta}}}}$\\
$C_1({\mathbf{x}}, {\mathbf{u}})$ & $-\mathcal{L}_{\mathrm{ELBO}}(q_{{\bm{\phi}}}({\mathbf{x}}, {\mathbf{u}}), p_{{\bm{\theta}}}({\mathbf{x}}, {\mathbf{u}}))$\\
$C_2({\mathbf{x}})$ & $D_{\mathrm{JS}}\infdivx{q_{{\bm{\phi}}}({\mathbf{x}})}{p_{{\bm{\theta}}}({\mathbf{x}})}$\\
$C_3({\mathbf{z}})$ & $D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{z}})}{p_{{\bm{\theta}}}({\mathbf{z}})}$\\
$C_4({\mathbf{x}}_i, {\mathbf{z}})$ & $D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{x}}_i, {\mathbf{z}})}{p_{{\bm{\theta}}}({\mathbf{x}}_i, {\mathbf{z}})}$\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\begin{table*}[t]
\caption{A unified view of \{single/multi\}-\{modal/domain/view\} models. $C_i$ refers to the definitions in Table~\ref{tb:pool_ldist}; ${\mathcal{G}}$ refers to the Bayesian networks in Figures~\ref{fig:bn_vae} and~\ref{fig:bn_mvae}. We use $N$ to denote the number of views/domains/modalities. We use \textcircled{1} to denote \textit{shared/private latent space decomposition}, and \textcircled{2} to denote \textit{dependency structure learning}. Please see Appendix~\ref{ap:framework} for the full table.}
\label{tb:unified_models}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccccccl}
\toprule
Models & $N$ & \textcircled{1} & \textcircled{2} & ${\mathcal{G}}^q$ & ${\mathcal{G}}^p$ & $\Ls_{\mathrm{dist}}$ & $\Ls_{\mathrm{str\_reg}}$ \\
\midrule
VAE & $1$ & $\times$ & $\times$ & $\left[{\mathcal{G}}^q_{\mathrm{single}}\right]$ & $\left[{\mathcal{G}}^p_{\mathrm{single}}\right]$ & $[1, C_1]$ & $[]$\\
\midrule
GAN & $1$ & $\times$ & $\times$ & [] & ${\mathcal{G}}^p_{\mathrm{single}}$ & $[1, C_2]$ & $[]$\\
\midrule
InfoGAN & $1$ & $\times$ & $\times$ & [] & ${\mathcal{G}}^p_{\mathrm{single}}$ & $[1, C_2]$ & $[1, {\mathcal{G}}^{\mathrm{InfoGAN}}]$\\
\midrule
$\beta$-VAE & $1$ & $\times$ & $\times$ & $\left[{\mathcal{G}}^q_{\mathrm{single}}\right]$ & $\left[{\mathcal{G}}^p_{\mathrm{single}}\right]$ & $[1, C_1], [\beta-1, C_3]$ & $[\beta - 1, {\mathcal{G}}^\emptyset]$\\
\midrule
$\beta$-TCVAE & $1$ & $\times$ & $\times$ & $\left[{\mathcal{G}}^q_{\mathrm{single}}\right]$ & $\left[{\mathcal{G}}^p_{\mathrm{single}}\right]$ & $[1, C_1], [\alpha_2, C_2]$ & $[\beta, {\mathcal{G}}^p]$\\
\midrule
JMVAE & $2$ & $\times$ & $\times$ & $\left[{\mathcal{G}}^q_{\mathrm{joint}}\right]$ & $\left[{\mathcal{G}}^p_{\mathrm{joint}}\right]$ & $[1, C_1]$& $[\beta_i, {\mathcal{G}}^{\mathrm{str}}_{\mathrm{cross}}({\mathbf{x}}_i)]$\\
\midrule
MVAE & $N$ & $\times$ & $\times$ & $\left[{\mathcal{G}}^q_{\mathrm{joint}},{\mathcal{G}}^q_{\mathrm{marginal}}\right]$ & $\left[{\mathcal{G}}^p_{\mathrm{joint}}\right]$ & $[1, C_1]$ & $[\beta_i, {\mathcal{G}}^{\mathrm{str}}_{\mathrm{marginal}}({\mathbf{x}}_i)]$\\
\midrule
Wyner & $2$ & $\checkmark$ & $\times$ & $\left[{\mathcal{G}}^q_{\mathrm{joint}},{\mathcal{G}}^q_{\mathrm{marginal}}\right]$ & $\left[{\mathcal{G}}^p_{\mathrm{joint}}\right]$ & $[1, C_1]$ & $[\beta_i, {\mathcal{G}}^{\mathrm{str}}_{\mathrm{cross}}({\mathbf{x}}_i)],[\beta_i, {\mathcal{G}}^{\mathrm{str}}_{\mathrm{private}}({\mathbf{x}}_i)]$\\
\midrule
OURS-MM & $N$ & $\checkmark$ & $\checkmark$ & $\left[{\mathcal{G}}^q_{\mathrm{full}} \right]$ & $\left[{\mathcal{G}}^p_{\mathrm{full}} \right]$ & $[1, C_0]$ & $[\beta_i, {\mathcal{G}}^{\mathrm{str}}_{\mathrm{cross}}(\{{\mathbf{x}}_i\})]$\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table*}
\section{Case study: Generative Data Modeling}
In this section, we show that various types of generative data modeling can be viewed as structured latent space learning problems, which can be addressed by our proposed framework in a principled way.
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.3\columnwidth]{vae_1.pdf} &
\includegraphics[width=0.3\columnwidth]{vae_2.pdf} &
\includegraphics[width=0.3\columnwidth]{vae_3.pdf}\\
${\mathcal{G}}^q_{\mathrm{single}}$ & ${\mathcal{G}}^p_{\mathrm{single}}$ & ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{info}}$
\end{tabular}
\caption{Bayesian networks for single-modal models}
\label{fig:bn_vae}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\begin{tabular}{ccl}
\includegraphics[width=0.45\columnwidth]{mvae_1-cropped.pdf} &
\includegraphics[width=0.45\columnwidth]{mvae_2-cropped.pdf} \\
${\mathcal{G}}^q_{\mathrm{full}}$ & ${\mathcal{G}}^p_{\mathrm{full}}$\\
\includegraphics[width=0.45\columnwidth]{mvae_5-cropped.pdf} &
\includegraphics[width=0.45\columnwidth]{mvae_4-cropped.pdf} \\
${\mathcal{G}}^q_{\mathrm{joint}}$ & ${\mathcal{G}}^p_{\mathrm{joint}}$\\
\includegraphics[width=0.45\columnwidth]{mvae_3-cropped.pdf} &
\includegraphics[width=0.45\columnwidth]{mvae_9-cropped.pdf}\\
${\mathcal{G}}^q_{\mathrm{marginal}}$ & ${\mathcal{G}}^{p}_{\mathrm{marginal}}$\\
\includegraphics[width=0.45\columnwidth]{mvae_6-cropped.pdf} &
\includegraphics[width=0.45\columnwidth]{mvae_8-cropped.pdf}\\
${\mathcal{G}}^{\mathrm{str}}_{\mathrm{cross}}$ & ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{private}}$\\
\end{tabular}
\caption{Bayesian networks of various inference models, generation models and structural regularizations in the multi-modal/domain/view setting.}
\label{fig:bn_mvae}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Single-modal generative model}
\label{sec:single_model_model}
\textbf{Framework specification}~
In the single-modal generative modeling setting, we have $N = 1$ observed variable ${\textnormal{X}} \equiv \left[{\textnormal{X}}_{1}\right]$, which could be an image, text or another modality, and we only incorporate private latent variables ${\textnormal{U}}$.
We slightly abuse notation by assuming $M$ latent variables ${\textnormal{U}} \equiv \left[{\textnormal{U}}_1, {\textnormal{U}}_2, \ldots, {\textnormal{U}}_M\right]$\footnote{This is without loss of generality, since we can define an arbitrary dimension for ${\textnormal{U}}$.}.
\textbf{A unified view}
We show that our proposed model unifies many existing generative models, and that imposing disentanglement as a special case of the structural regularization in latent space recovers different existing disentangled representation learning methods.
Table~\ref{tb:unified_models} summarizes how existing generative models can be unified within our proposed information-theoretic framework.
As an interesting example, we can derive the $\beta$-VAE objective as $\mathcal{L} = C_1 + (\beta - 1) C_3 + (\beta - 1)\Ls_{\mathrm{str\_reg}}({\mathcal{G}}^{\emptyset})$,
where we impose the structural regularization $(\beta - 1)\mathbb{D}\infdivx{q_{\bm{\phi}}}{{\mathcal{G}}^{\emptyset}}$.
This also establishes a connection to the results in~\cite{beta_vae_prior,ddvae} that $\beta$-VAE optimizes the \emph{ELBO} with a $q_{{\bm{\phi}}}$-dependent implicit prior $r({\mathbf{u}}) \propto q_{{\bm{\phi}}}({\mathbf{u}})^{1 - \beta}p_{{\bm{\theta}}}({\mathbf{u}})^\beta$;
we achieve this in a symmetric way by using a $p_{{\bm{\theta}}}$-dependent posterior $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}) \propto p_{{\bm{\theta}}}({\mathbf{z}}) \prod_{i=1}^{N} \frac{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)}{p_{{\bm{\theta}}}({\mathbf{z}})}$.
We further show how to unify other total-correlation-based disentangled representation learning models~\cite{tcvae,hfvae,kimdisentangle} by explicitly imposing the Bayesian structure ${\mathcal{G}}^p$ as structural regularization.
We include detailed discussions and proofs in Appendix~\ref{ap:sm}.
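As a concrete illustration, the following sketch computes the $\beta$-VAE special case for a factorized Gaussian posterior and a standard normal prior. It relies on the fact that, in expectation, the $C_3$ term and the structural regularization towards ${\mathcal{G}}^{\emptyset}$ together amount to the familiar extra $(\beta - 1)$-weighted per-sample KL term; all names are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def beta_vae_loss(x, recon_x, mu, logvar, beta=4.0):
    # C_1 = -ELBO: reconstruction NLL plus KL(q(u|x) || p(u)).
    recon_nll = F.mse_loss(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # The extra (beta - 1) * kl summarizes, in expectation, the
    # marginal C_3 term plus the regularization towards G^{emptyset}.
    return recon_nll + kl + (beta - 1.0) * kl
\end{verbatim}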
\subsection{Multi-modal/domain/view generative model}
\label{sec:multi_modal_model}
\textbf{Problem setup}
We represent the observed variables as ${\textnormal{X}}_{1: N} \equiv \left[{\textnormal{X}}_{1}, {\textnormal{X}}_{2}, \ldots, {\textnormal{X}}_{N}\right]$, where the $N$ observed variables come from different domains\footnote{We use the word domain to represent domain/modality/view.} and might be statistically dependent.
We thus aim to learn latent factors ${\textnormal{Z}}$ that explain the potential correlations among ${\textnormal{X}}$.
Meanwhile, we also learn latent factors ${\textnormal{U}}_i$ that explain the variations exclusive to one specific observed variable ${\textnormal{X}}_i$.
In this way, we achieve explicit control over the domain-dependent and domain-invariant latent factors.
For more details of the data generation process for this task and the model, please see Appendix~\ref{ap:exp_mm}.
\textbf{A unified view}
We summarize the key results of unifying many existing multi-domain generative models in Table~\ref{tb:unified_models}. We prove and discuss some interesting connections to related works in more detail in Appendix~\ref{ap:mm}, including BiVCCA~\cite{bivcca}, JMVAE~\cite{jmvae}, TELBO~\cite{telbo}, MVAE~\cite{mvae}, WynerVAE~\cite{wynervae}, DIVA~\cite{diva} and CorEx~\cite{corex,corex-hierarchical,corex-infosieve,corex-vae}.
\textbf{Framework specification}~
We present a specific implementation of our proposed framework for multi-domain generative modeling here.
We show that it generalizes some heuristics used in previous models and demonstrate its effectiveness on several standard multi-modal datasets.
We use $\Ls_{\mathrm{dist}}$ in Table~\ref{tb:pool_ldist} to learn a consistent inference model and joint, marginal and conditional generation models over $\left( {\textnormal{X}}, {\textnormal{Z}}, {\textnormal{U}}\right)$.
To embed multi-domain data into a shared latent space, we use the structural regularization that enforces the Markov conditional independence structure ${\textnormal{X}}^{{\mathbb{S}}} \rightarrow {\textnormal{Z}} \rightarrow {\textnormal{X}}^{{\mathbb{S}}^\complement}$.
This structural regularization can be represented by ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{cross}}$ in Figure~\ref{fig:bn_mvae}, where ${\textnormal{X}} \equiv \left[{\textnormal{X}}^{{\mathbb{S}}}, {\textnormal{X}}^{{\mathbb{S}}^\complement} \right]$ is a random bi-partition of ${\textnormal{X}}$.
We then show that the objective can be upper-bounded by $\mathcal{L} = \Ls_{\mathrm{dist}} + \Ls_{\mathrm{str\_reg}} \le \mathcal{L}_{{\mathbf{x}}} + \mathcal{L}_{{\mathbf{u}}} + \mathcal{L}_{{\mathbf{z}}}$, where $\mathcal{L}_{{\mathbf{x}}}=-\mathbb{E}_{q_{{\bm{\phi}}}({\mathbf{z}}, {\mathbf{u}} \mid {\mathbf{x}})}\log p_{{\bm{\theta}}}({\mathbf{x}} \mid {\mathbf{z}}, {\mathbf{u}})$, $\mathcal{L}_{{\mathbf{u}}} = \mathbb{E}_{q_{{\bm{\phi}}}({\mathbf{x}})}D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{u}} \mid {\mathbf{x}})}{p_{{\bm{\theta}}}({\mathbf{u}})}$ and $\mathcal{L}_{{\mathbf{z}}}=\sum_{i=0}^N \mathbb{E}_{q_{{\bm{\phi}}}({\mathbf{x}})}D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})}{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_i)}$.
We use $q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}_0) \equiv p_{{\bm{\theta}}}({\mathbf{z}})$ for notational simplicity.
We further show that for each latent variable ${\textnormal{Z}}_j$, the $\mathcal{L}_{{\mathbf{z}}_j}$ term can be viewed as a generalized JS-divergence~\cite{jsd_abs_mean} among $q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_i)$ for $i \in \left\{0, 1,\ldots,N \right\}$, using the geometric mean weighted by ${\mathbf{m}}^q_j$; this generalizes the implicit prior used in $\beta$-VAE, as discussed in Section~\ref{sec:single_model_model}.
The detailed proof is presented in Appendix~\ref{ap:mm}.
\begin{equation}
\begin{aligned}
&\mathcal{L}_{{\mathbf{z}}_j} = D^{{\mathbf{m}}^q_j}_{\mathrm{JS}}\left(q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_0), q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_1), \ldots, q_{{\bm{\phi}}}({\mathbf{z}}_j \mid {\mathbf{x}}_N) \right)
\end{aligned}
\label{eq:jsd_objective}
\end{equation}
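For diagonal Gaussian posteriors, the weighted geometric mean in Eq.~(\ref{eq:jsd_objective}) has a closed form: it is again a diagonal Gaussian whose precision is the weighted sum of the experts' precisions. The following is a minimal sketch under that assumption, with illustrative names and weights assumed to be normalized.
\begin{verbatim}
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over dims.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0, dim=-1)

def generalized_js(mus, logvars, w):
    # mus, logvars: (N+1, batch, dim), stacking q(z_j|x_0) (the
    # prior), ..., q(z_j|x_N); w: (N+1,) normalized weights.
    prec = torch.exp(-logvars)
    w_ = w.view(-1, 1, 1)
    fused_prec = (w_ * prec).sum(dim=0)
    fused_mu = (w_ * prec * mus).sum(dim=0) / fused_prec
    fused_logvar = -torch.log(fused_prec)
    return sum(w[i] * gaussian_kl(fused_mu, fused_logvar,
                                  mus[i], logvars[i])
               for i in range(mus.shape[0]))
\end{verbatim}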
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.44\columnwidth]{svhn_mnist-cropped.pdf} &
\includegraphics[width=0.48\columnwidth]{mnist_p1-cropped.pdf}\\
SVHN $\rightarrow$ MNIST & MNIST $\rightarrow$ MNIST-Plus-1
\end{tabular}
\caption{Cross-domain generation samples. The leftmost column shows conditioned inputs.}
\label{fig:samples}
\end{center}
\vskip -0.2in
\end{figure}
\textbf{Experiment}
We validate the effectiveness of the proposed model in the multi-view/modal data modeling setting on the bi-modal MNIST-Label and MNIST-SVHN datasets and the bi-view MNIST-MNIST-Plus-1 dataset. We show the generated samples in Figure~\ref{fig:samples}.
The left panel of the figure contains examples of MNIST-style samples generated by the model trained on the MNIST-SVHN dataset when conditioned on SVHN examples.
We can observe that the model uses the shared latent variable ${\textnormal{Z}}$ and the private latent variables ${\textnormal{U}}$ to successfully generate MNIST-style samples of the same digit as the SVHN inputs.
The right panel contains examples of MNIST-style samples generated by the model trained on the MNIST-Plus-1 dataset when conditioned on MNIST examples.
We can observe that the model successfully generates images of digit $m+1$ when conditioned on an input of digit $m$.
More detailed results are included in Appendix~\ref{ap:exp_mm}.
\section{Case study: Fair Representation Learning}
\label{sec:fairness}
In this section, we show that fair representation learning can be viewed as a structured latent space learning problem, where we aim to learn a latent subspace that is invariant to sensitive attributes while remaining informative about the target label.
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.3\columnwidth]{fair_1-cropped.pdf} &
\includegraphics[width=0.3\columnwidth]{fair_2-cropped.pdf} &
\includegraphics[width=0.3\columnwidth]{fair_3-cropped.pdf} \\
${\mathcal{G}}^q$ & ${\mathcal{G}}^p,{\mathcal{G}}^{\mathrm{str}}_{\mathrm{invariant}}$ & ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{informative}}$\\
\end{tabular}
\caption{Bayesian networks for fair representation learning.}
\label{fig:bn_fair}
\end{center}
\vskip -0.2in
\end{figure}
\textbf{Problem setup}
We use $\left[{\textnormal{X}}, {\textnormal{A}}, {\textnormal{Y}} \right]$ to represent the observed variables, where ${\textnormal{X}}$ represents the multivariate raw observation, such as the pixels of an image, ${\textnormal{A}}$ represents the sensitive attributes, and ${\textnormal{Y}}$ represents the target label to be predicted.
Following the same setting as previous works~\cite{fair_lagvae,fair_flexible}, the target label is not available during the training phase.
A linear classifier on the learned representation is trained to predict the held-out label ${\textnormal{Y}}$ at test time.
We focus on the \emph{Difference of Equal Opportunity}~(DEO) notion in this work~\cite{fair_eod}; a minimal sketch of the metric is given below.
For the details of the data generation process, please see Appendix~\ref{ap:fairness}.
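For concreteness, the sketch below computes DEO as the gap in true-positive rates between the two sensitive groups, which is how we evaluate it; the names are illustrative and both groups are assumed to contain positive samples.
\begin{verbatim}
import numpy as np

def deo(y_true, y_pred, a):
    # |P(yhat=1 | y=1, a=0) - P(yhat=1 | y=1, a=1)|
    y_true, y_pred, a = map(np.asarray, (y_true, y_pred, a))
    tprs = []
    for group in (0, 1):
        mask = (a == group) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])
\end{verbatim}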
\textbf{Framework specification}
We learn a joint distribution over $\left[{\textnormal{X}}, {\textnormal{A}}, {\textnormal{Z}}, {\textnormal{U}}\right]$ with the proposed framework.
The shared latent variable ${\textnormal{Z}}$ aims to explain the hidden correlation between ${\textnormal{X}}$ and ${\textnormal{A}}$.
We also enforce two structural regularizations, represented by
the two Bayesian networks ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{invariant}}$ and ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{informative}}$.
The aim of ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{invariant}}$ is to learn the private latent variables ${\textnormal{U}}_{{\mathbf{x}}}$ as hidden factors that are invariant to changes in ${\textnormal{Z}}$.
Meanwhile, the aim of ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{informative}}$ is to preserve as much information about ${\textnormal{X}}$ in ${\textnormal{Z}}$ as possible.
${\textnormal{M}}^q$ and ${\textnormal{M}}^p$ are illustrated by ${\mathcal{G}}^q$ and ${\mathcal{G}}^p$ in Figure~\ref{fig:bn_fair}, respectively.
We then have the following learning objective:
\begin{equation}
\begin{aligned}
&\mathcal{L} \le -\mathbb{E}_{q_{{\bm{\phi}}}}\log p_{{\bm{\theta}}}({\mathbf{x}}, {\mathbf{a}} \mid {\mathbf{z}}, {\mathbf{u}}) + \beta_2 \mathcal{I}_q\semicolondiv{{\mathbf{z}}}{{\mathbf{u}}} +\\
&(1+\beta_1) \mathbb{E}_{q_{{\bm{\phi}}}} D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}, {\mathbf{a}})}{p_{{\bm{\theta}}}({\mathbf{z}})} + \mathrm{const}
\end{aligned}
\label{eq:fair_objective}
\end{equation}
Please refer to Appendix~\ref{ap:fairness} for the detailed derivation and discussion.
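As one concrete, hedged instantiation of Eq.~(\ref{eq:fair_objective}), corresponding to our MMD variant, the mutual information $\mathcal{I}_q\semicolondiv{{\mathbf{z}}}{{\mathbf{u}}}$ can be penalized by an MMD between samples from the joint $({\mathbf{z}}, {\mathbf{u}})$ and samples with ${\mathbf{u}}$ shuffled across the batch, which approximately follow the product of marginals. All names below are illustrative.
\begin{verbatim}
import torch

def rbf_mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD with an RBF kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2
                         / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def fair_objective(recon_nll, kl_z, z, u, beta1=1.0, beta2=10.0):
    # recon_nll: -E_q log p(x, a | z, u);
    # kl_z: KL( q(z | x, a) || p(z) ).
    u_perm = u[torch.randperm(u.shape[0])]  # break z-u coupling
    joint = torch.cat([z, u], dim=1)
    indep = torch.cat([z, u_perm], dim=1)
    mi_penalty = rbf_mmd2(joint, indep)     # surrogate for I(z;u)
    return recon_nll + (1.0 + beta1) * kl_z + beta2 * mi_penalty
\end{verbatim}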
\begin{table}[t]
\caption{Fair representation learning results on German and Adult datasets.}
\label{tb:fair}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
Model & \multicolumn{2}{c}{Adult} & \multicolumn{2}{c}{German} \\
& ACC & DEO & ACC & DEO\\
\midrule
Naive SVM & $0.80$ & $0.09$ & $0.74 \pm 0.05$ & $0.12 \pm 0.05$ \\
SVM & $0.79$ & $0.08$ & $0.74 \pm 0.03$ & $0.10 \pm 0.06$ \\
NN & $0.84$ & $0.14$ & $0.74 \pm 0.04$ & $0.47 \pm 0.19$ \\
NN $+ \chi^{2}$ & $0.83$ & $0.03$ & $0.73 \pm 0.03$ & $0.25 \pm 0.14$ \\
FERM & $0.77$ & $0.01$ & $0.73 \pm 0.04$ & $0.05 \pm 0.03$ \\
Ours-MMD & $0.83$ & $0.02$ & $0.72 \pm 0.07$ & $0.07 \pm 0.09$ \\
Ours-TC & $0.81$ & $0.02$ & $0.74 \pm 0.08$ & $0.08 \pm 0.14$ \\
Ours-MINE & $0.79$ & $0.01$ & $0.70 \pm 0.11$ & $0.05 \pm 0.11$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\textbf{Experiments}
We investigate the performance of our derived objective on the UCI German credit dataset and the UCI Adult dataset.
For estimating and minimizing $\mathcal{I}_q\semicolondiv{{\mathbf{z}}}{{\mathbf{u}}}$, we adopt MMD~\cite{mmd_nips}, the total-correlation estimator of~\cite{hfvae}, and MINE~\cite{mine}, and summarize all results in Table~\ref{tb:fair}.
We report the classification accuracy (ACC) and the aforementioned DEO in the table.
The results of all other baseline methods in the table are taken from~\cite{fair_continuous,fair_erm}.
Please refer to Appendix~\ref{ap:exp_fair} for more details.
\section{Case study: Out-of-Distribution Generalization}
\label{sec:irm}
\begin{figure}[ht]
\vskip 0.1in
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.3\columnwidth]{irm_1-cropped.pdf} &
\includegraphics[width=0.3\columnwidth]{irm_3-cropped.pdf} &
\includegraphics[width=0.3\columnwidth]{irm_2-cropped.pdf} \\
${\mathcal{G}}^q$ & ${\mathcal{G}}^p$ & ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{ood}}$\\
\end{tabular}
\caption{Bayesian networks for the out-of-distribution generalization task. ${\textnormal{E}}$ in the diagram represents the index of the environmental factor, not the actual value of ${\textnormal{E}}$ in the data generation process.}
\label{fig:bn_irm}
\end{center}
\vskip -0.2in
\end{figure}
\textbf{Problem setup}
We show that discovering true causation, as opposed to spurious correlation, through invariance can be viewed as a structured latent representation learning problem.
Consider a set of environments ${\mathcal{E}}$ indexed by ${\textnormal{E}}$; for each environment ${\textnormal{E}} = e$ we have a data distribution $P^e({\textnormal{X}}, {\textnormal{Y}})$.
We use $\left[ {\textnormal{X}}, {\textnormal{Y}}, {\textnormal{E}} \right]$ to represent the observed variables, where ${\textnormal{X}}$ is the data input, ${\textnormal{Y}}$ is the label and ${\textnormal{E}}$ is the index of the corresponding environment.
The goal of this task is to predict ${\textnormal{Y}}$ from ${\textnormal{X}}$ such that the predictor performs optimally under the worst-case environment ${\textnormal{E}}$.
We derive an information-theoretic objective for the out-of-distribution generalization task on the Colored-MNIST dataset introduced in~\cite{irm}.
For more details of this experiment, please see Appendix~\ref{ap:exp_irm} as well as the original works~\cite{causal_invariance,irm}.
\textbf{Framework specification}~
As our structural regularization, we use the Bayesian network ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{ood}}$ in Figure~\ref{fig:bn_irm}.
The purpose of ${\mathcal{G}}^{\mathrm{str}}_{\mathrm{ood}}$ is to enforce that ${\textnormal{Z}}$ is a sufficient statistic for predicting ${\textnormal{Y}}$ and that ${\textnormal{E}} \perp {\textnormal{Y}} \mid {\textnormal{Z}}$.
The derived learning objective is
\begin{equation}
\begin{aligned}
&\mathcal{L}_{\mathrm{info}} = \Ls_{\mathrm{dist}} + \beta_1 D_{\mathrm{KL}}\infdivx{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}}, {\mathbf{e}}, {\mathbf{y}})}{q_{{\bm{\phi}}}({\mathbf{z}} \mid {\mathbf{x}})} + \\
& \beta_2 \mathcal{I}_q({\mathbf{x}},{\mathbf{e}},{\mathbf{y}} \mid {\mathbf{z}})
\end{aligned}
\end{equation}
We further show that the idea in~\cite{irm} can be directly integrated into our proposed framework by imposing a stable ${\textnormal{M}}^p$ structure as a constraint across environments, measured by a gradient penalty, as discussed in Appendix~\ref{ap:irm}.
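As an illustration, the $\beta_1$ term above has a closed form when both posteriors are diagonal Gaussians; driving it to zero makes the ${\mathbf{x}}$-only encoder a sufficient statistic, so that neither ${\mathbf{e}}$ nor ${\mathbf{y}}$ adds information about ${\mathbf{z}}$. A minimal sketch with illustrative names:
\begin{verbatim}
import torch

def sufficiency_penalty(mu_full, logvar_full, mu_x, logvar_x):
    # KL( q(z | x, e, y) || q(z | x) ) for diagonal Gaussians.
    return 0.5 * torch.sum(
        logvar_x - logvar_full
        + (logvar_full.exp() + (mu_full - mu_x) ** 2)
        / logvar_x.exp() - 1.0, dim=-1).mean()
\end{verbatim}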
\begin{table}[t]
\caption{Out-of-distribution generalization results on Colored-MNIST}
\label{tb:irm}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcc}
\toprule
Model & Acc. train envs. & Acc. test env.\\
\midrule
Random & $50$ & $50$\\
Optimal & $75$ & $75$\\
Oracle & $73.5 \pm 0.2$ & $73.0 \pm 0.4$\\
ERM & $87.4 \pm 0.2$ & $17.1 \pm 0.6$\\
IRM & $70.8 \pm 0.9$ & $66.9 \pm 2.5$ \\
Ours-full & $67.8 \pm 6.8$ & $62.1 \pm 6.1$ \\
Ours-semi & $71.4 \pm 6.1$ & $58.7 \pm 7.2$\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\textbf{Experiments}
We validate the proposed model on the Colored-MNIST classification task introduced in~\cite{irm}.
We also take advantage of the fact that our proposed framework is a generative model to perform semi-supervised learning, where we use only $50\%$ of the labels.
We include more training setting details in Appendix.~\ref{ap:exp_irm}.
We compare our model against the baselines in Table~\ref{tb:irm}.
We see that our proposed information-theoretic objective achieves comparable performance in both the supervised and semi-supervised settings on the test environment.
\section{Conclusion}
In this work, we propose a general information-theoretic framework for learning structured latent factors from multivariate data by generalizing the multivariate information bottleneck theory.
We show that the proposed framework provides a unified view of many existing methods and offers insights into new models for challenging tasks such as fair representation learning and out-of-distribution generalization.
\section{Training Algorithm for ISDA}
\section*{Appendix}
\section{Implementation Details of ISDA}
\label{grad}
\textbf{Dynamic estimation of covariance matrices.}
During the training process using $\overline{\mathcal{L}}_{\infty}$, covariance matrices are estimated by:
\begin{equation}
\label{ave}
\bm{\mu}_j^{(t)} = \frac{n_j^{(t-1)}\bm{\mu}_j^{(t-1)} + m_j^{(t)} {\bm{\mu}'}_j^{(t)}}
{n_j^{(t-1)} +m_j^{(t)}},
\end{equation}
\begin{equation}
\label{cv}
\begin{split}
\Sigma_j^{(t)}
= \frac{n_j^{(t-1)}\Sigma_j^{(t-1)} + m_j^{(t)} {\Sigma'}_j^{(t)}}
{n_j^{(t-1)} +m_j^{(t)}}
+ \frac{n_j^{(t-1)}m_j^{(t)} (\bm{\mu}_j^{(t-1)} - {\bm{\mu}'}_j^{(t)})
(\bm{\mu}_j^{(t-1)} - {\bm{\mu}'}_j^{(t)})^T}
{(n_j^{(t-1)} +m_j^{(t)})^2},
\end{split}
\end{equation}
\begin{equation}
\label{sum}
n_j^{(t)} = n_j^{(t-1)} + m_j^{(t)}
\end{equation}
where $\bm{\mu}_j^{(t)}$ and $\Sigma_j^{(t)}$ are the estimates of the mean and covariance matrix of the features of the $j^{th}$ class at the $t^{th}$ step, and ${\bm{\mu}'}_j^{(t)}$ and ${\Sigma'}_j^{(t)}$ are the mean and covariance matrix of the features of the $j^{th}$ class in the $t^{th}$ mini-batch. $n_j^{(t)}$ denotes the total number of training samples belonging to the $j^{th}$ class in the first $t$ mini-batches,
and $m_j^{(t)}$ denotes the number of training samples belonging to the $j^{th}$ class in the $t^{th}$ mini-batch alone.
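The following is a minimal PyTorch sketch of this online update; the class and variable names are ours, not from the released code. Note that the covariance must be updated before the mean, since Eq. (\ref{cv}) uses the previous estimate $\bm{\mu}_j^{(t-1)}$.
\begin{verbatim}
import torch

class ClassStats:
    # Online per-class feature means and covariances,
    # following Eqs. (ave), (cv) and (sum).
    def __init__(self, num_classes, feat_dim):
        self.n = torch.zeros(num_classes)
        self.mu = torch.zeros(num_classes, feat_dim)
        self.sigma = torch.zeros(num_classes, feat_dim, feat_dim)

    def update(self, features, labels):
        for j in labels.unique():
            f = features[labels == j]      # class-j features in batch
            m, n = float(f.shape[0]), float(self.n[j])
            mu_b = f.mean(dim=0)
            diff = f - mu_b
            sigma_b = diff.t() @ diff / m  # batch covariance
            delta = self.mu[j] - mu_b
            total = n + m
            self.sigma[j] = ((n * self.sigma[j] + m * sigma_b) / total
                + n * m * torch.outer(delta, delta) / total ** 2)
            self.mu[j] = (n * self.mu[j] + m * mu_b) / total
            self.n[j] += m
\end{verbatim}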
\textbf{Gradient computation.} In backward propagation, gradients of $\overline{\mathcal{L}}_{\infty}$ are given by:
\begin{equation}
\label{g_1}
\frac{\partial{\overline{\mathcal{L}}_{\infty}}}{\partial b_j} =
\frac{\partial{\overline{\mathcal{L}}_{\infty}}}{\partial z_j} =
\begin{cases}
\frac{e^{z_{y_i}}}{\sum_{j=1}^{C}e^{z_{j}}}-1, &j = y_i \\
\frac{e^{z_{j}}}{\sum_{j=1}^{C}e^{z_{j}}}, &j \neq y_i
\end{cases},
\end{equation}
\begin{equation}
\label{g_2}
\frac{\partial{\overline{\mathcal{L}}_{\infty}}}{\partial \bm{w}^{T}_{j}} =
\begin{cases}
(\bm{a}_{i} + \sum_{n=1}^{C}[(\bm{w}^{T}_{n} - \bm{w}^{T}_{y_{i}})\Sigma_{i}])
\frac{\partial{\overline{\mathcal{L}}_{\infty}}}{\partial z_j}, &j = y_i \\
(\bm{a}_{i} + (\bm{w}^{T}_{j} - \bm{w}^{T}_{y_{i}})\Sigma_{i})
\frac{\partial{\overline{\mathcal{L}}_{\infty}}}{\partial z_j}, &j \neq y_i
\end{cases},
\end{equation}
\begin{equation}
\label{g_3}
\frac{\partial{\overline{\mathcal{L}}_{\infty}}}{\partial a_k} = \sum_{j=1}^{C}
w_{jk} \frac{\partial{\overline{\mathcal{L}}_{\infty}}}{\partial z_j}, 1 \leq k \leq A,
\end{equation}
where $w_{jk}$ denotes $k^{th}$ element of $\bm{w}_{j}$. ${\partial{\overline{\mathcal{L}}_{\infty}}}/{\partial \bm{\Theta}}$ can be obtained through the backward propagation algorithm using ${\partial{\overline{\mathcal{L}}_{\infty}}}/{\partial \bm{a}}$.
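For reference, a minimal sketch of the forward computation of $\overline{\mathcal{L}}_{\infty}$ that these gradients correspond to, as we read it from the derivation: each logit is shifted by the quadratic term $\frac{\lambda}{2}(\bm{w}_{j} - \bm{w}_{y_i})^{T}\Sigma_{y_i}(\bm{w}_{j} - \bm{w}_{y_i})$ before the softmax cross-entropy, so that automatic differentiation reproduces the gradients above. The exact placement of $\lambda$ and all names are assumptions for illustration.
\begin{verbatim}
import torch
import torch.nn.functional as F

def isda_loss(features, weight, bias, labels, sigma, lam):
    # features: (B, A); weight: (C, A); bias: (C,);
    # sigma: (C, A, A) per-class covariances; lam: current lambda.
    logits = features @ weight.t() + bias          # (B, C)
    w_y = weight[labels]                           # (B, A)
    dw = weight.unsqueeze(0) - w_y.unsqueeze(1)    # (B, C, A)
    sig_y = sigma[labels]                          # (B, A, A)
    quad = torch.einsum('bca,bad,bcd->bc', dw, sig_y, dw)
    return F.cross_entropy(logits + 0.5 * lam * quad, labels)
\end{verbatim}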
\section{Training Details}
On CIFAR, we implement ResNet, SE-ResNet, Wide-ResNet, ResNeXt, DenseNet and PyramidNet.
The SGD optimization algorithm with Nesterov momentum is applied to train all models. The specific hyper-parameters for training are presented in Table \ref{Training_hp}.
\begin{table*}[h]
\scriptsize
\centering
\vskip -0.2in
\caption{Training configurations on CIFAR. `$l_r$' denotes the learning rate.}
\label{Training_hp}
\setlength{\tabcolsep}{0.5mm}{
\vspace{5pt}
\renewcommand\arraystretch{1.15}
\begin{tabular}{c|c|c|c|c|c|c}
\hline
Network & Total Epochs & Batch Size & Weight Decay & Momentum & Initial $l_r$ & $l_r$ Schedule \\
\hline
ResNet & 160 & 128 & 1e-4 & 0.9 & 0.1 & Multiplied by 0.1 in $80^{th}$ and $120^{th}$ epoch. \\
\hline
SE-ResNet & 200 & 128 & 1e-4 & 0.9 & 0.1 & Multiplied by 0.1 in $80^{th}$, $120^{th}$ and $160^{th}$ epoch. \\
\hline
Wide-ResNet & 240 & 128 & 5e-4 & 0.9 & 0.1 & Multiplied by 0.2 in $60^{th}$, $120^{th}$, $160^{th}$ and $200^{th}$ epoch. \\
\hline
DenseNet-BC & 300 & 64 & 1e-4 & 0.9 & 0.1 & Multiplied by 0.1 in $150^{th}$, $200^{th}$ and $250^{th}$ epoch. \\
\hline
ResNeXt & 350 & 128 & 5e-4 & 0.9 & 0.05 & Multiplied by 0.1 in $150^{th}$, $225^{th}$ and $300^{th}$ epoch. \\
\hline
Shake Shake &\multirow{1}{*}{1800}&\multirow{1}{*}{64}&\multirow{1}{*}{1e-4}&\multirow{1}{*}{0.9}&\multirow{1}{*}{0.1}&\multirow{1}{*}{Cosine learning rate.} \\
\hline
PyramidNet &\multirow{1}{*}{1800}&\multirow{1}{*}{128}&\multirow{1}{*}{1e-4}&\multirow{1}{*}{0.9}&\multirow{1}{*}{0.1}&\multirow{1}{*}{Cosine learning rate.} \\
\hline
\end{tabular}}
\end{table*}
On ImageNet, we train ResNet for 120 epochs using the same $\ell_2$ weight decay and momentum as on CIFAR, following \cite{huang2016deep}. The initial learning rate is set to 0.1 and divided by 10 every 30 epochs. The mini-batch size is set to 256.
All baselines are implemented with the same training configurations mentioned above.
The dropout rate is set to 0.3 for comparison if it is not applied in the basic model, following the instructions in \cite{Srivastava2014DropoutAS}. For the noise rate in disturb label, 0.05 is adopted for Wide-ResNet-28-10 on both CIFAR-10 and CIFAR-100 and for ResNet-110 on CIFAR-10, while 0.1 is used for ResNet-110 on CIFAR-100. Focal loss contains two hyper-parameters, $\alpha$ and $\gamma$. Numerous combinations were tested on the validation set, and we ultimately choose $\alpha=0.5$ and $\gamma=1$ for all four experiments.
For the L$_q$ loss, although \cite{Zhang2018GeneralizedCE} states that $q=0.7$ achieves the best performance in most conditions, we find that $q=0.4$ is more suitable in our experiments, and therefore adopt it.
For center loss, we find its performance is largely affected by the learning rate of the center loss module; its initial learning rate is therefore set to 0.5 for the best generalization performance.
For generator-based augmentation methods, we apply the GAN structures introduced in \cite{arjovsky2017wasserstein, mirza2014conditional, odena2017conditional, chen2016infogan} to train the generators.
For WGAN, a generator is trained for each class of the CIFAR-10 dataset. For CGAN, ACGAN and infoGAN, a single model suffices to generate images of all classes. A 100-dimensional noise vector drawn from a standard normal distribution is used as the input, generating images corresponding to the given label. In addition, infoGAN takes an extra two-dimensional input, representing specific attributes of the whole training set. Synthetic images are included at a fixed ratio in every mini-batch. Based on the experiments on the validation set, the proportion of generated images is set to $1/6$.
\section{Reversing Convolutional Networks}
To explicitly demonstrate the semantic changes generated by ISDA, we propose an algorithm to map deep features back to the pixel space. Some extra visualization results are shown in Figure \ref{Extra}.
An overview of the algorithm is presented in Figure \ref{Reversing}.
As there is no closed-form inverse function for convolutional networks like ResNet or DenseNet, the mapping algorithm acts in a similar way to \cite{mahendran2015understanding} and \cite{Upchurch2017DeepFI}, fixing the model and adjusting the inputs to find images corresponding to the given features. However, given that ISDA essentially augments the semantics of images, we find it ineffective to directly optimize the inputs in the pixel space. Therefore, we add a fixed pre-trained generator $\mathcal{G}$, obtained by training a Wasserstein GAN \cite{arjovsky2017wasserstein}, to produce images for the classification model, and optimize the inputs of the generator instead. This approach makes it possible to effectively reconstruct images with augmented semantics.
\begin{figure*}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{Reverse_Algorithm.pdf}}
\caption{Overview of the algorithm. We adopt a fixed generator $\mathcal{G}$, obtained by training a Wasserstein GAN, to generate fake images for the convolutional network, and optimize the inputs of $\mathcal{G}$ in terms of the consistency in both the pixel space and the deep feature space.}
\label{Reversing}
\end{center}
\vskip -0.2in
\end{figure*}
\begin{figure*}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{extra_result.pdf}}
\caption{Extra visualization results.}
\label{Extra}
\end{center}
\vskip -0.3in
\end{figure*}
The mapping algorithm can be divided into two steps:
\textbf{Step I. }Assume a random variable $\bm{z}$ is normalized to $\hat{\bm{z}}$ and fed into $\mathcal{G}$, generating the fake image $\mathcal{G}(\hat{\bm{z}})$. $\bm{x}_{i}$ is a real image sampled from the dataset (such as CIFAR). $\mathcal{G}(\hat{\bm{z}})$ and $\bm{x}_{i}$ are forwarded through a pre-trained convolutional network to obtain the deep feature vectors $f(\mathcal{G}(\hat{\bm{z}}))$ and $\bm{a}_{i}$. The first step of the algorithm is to find the input noise variable $\bm{z}_{i}$ corresponding to $\bm{x}_{i}$, namely
\begin{equation}
\label{ra1}
\bm{z}_{i} = \arg\min_{\bm{z}} \|f(\mathcal{G}(\hat{\bm{z}})) - \bm{a}_{i}\|_{2}^{2} +
\eta\|\mathcal{G}(\hat{\bm{z}}) - \bm{x}_{i}\|_{2}^{2},\
s.t.\ \hat{\bm{z}} = \frac{\bm{z} - \overline{\bm{z}}}{\mathrm{std}(\bm{z})},
\end{equation}
where $\overline{\bm{z}}$ and $\mathrm{std}(\bm{z})$ are the average value and the standard deviation of $\bm{z}$, respectively.
The consistency of both the pixel space and the deep feature space is considered in the loss function, and we introduce a hyper-parameter $\eta$ to adjust the relative importance of the two objectives.
\textbf{Step II. }We augment $\bm{a}_{i}$ with ISDA, forming $\tilde{\bm{a}}_{i}$, and reconstruct it in the pixel space. Specifically, we search for the $\bm{z}_{i}'$ corresponding to $\tilde{\bm{a}}_{i}$ in the deep feature space, with the start point $\bm{z}_{i}$ found in Step I:
\begin{equation}
\label{ra2}
\bm{z}_{i}' = \arg\min_{\bm{z'}} \|f(\mathcal{G}(\hat{\bm{z}}')) - \tilde{\bm{a}}_{i}\|_{2}^{2},\
s.t.\ \hat{\bm{z}}' = \frac{\bm{z'} - \overline{\bm{z'}}}{\mathrm{std}(\bm{z'})}.
\end{equation}
As the mean squared error in the deep feature space is optimized towards 0, $\mathcal{G}(\hat{\bm{z}}_{i}')$
is taken to represent the image corresponding to $\tilde{\bm{a}}_{i}$.
The proposed algorithm is performed on a single batch. In practice, a ResNet-32 network is used as the convolutional network. We solve Eqs. (\ref{ra1}) and (\ref{ra2}) with a standard gradient descent (GD) algorithm of 10,000 iterations. The initial learning rate is set to 10 for Step I and 1 for Step II, and is divided by 10 every 2,500 iterations. We apply a momentum of 0.9 and an $\ell_2$ weight decay of 1e-4.
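A minimal sketch of Step I follows, assuming the frozen generator $\mathcal{G}$ and feature extractor $f$ are given as callables and that $\bm{z}$ is 100-dimensional; all names are illustrative.
\begin{verbatim}
import torch

def invert_features(G, f, x_i, a_i, eta=1.0,
                    steps=10000, lr=10.0):
    # Step I: find z_i such that G(z_hat) matches x_i in pixel
    # space and f(G(z_hat)) matches the feature a_i (Eq. ra1).
    z = torch.randn(1, 100, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr, momentum=0.9,
                          weight_decay=1e-4)
    for step in range(steps):
        z_hat = (z - z.mean()) / z.std()
        x_fake = G(z_hat)
        loss = (((f(x_fake) - a_i) ** 2).sum()
                + eta * ((x_fake - x_i) ** 2).sum())
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % 2500 == 2499:   # schedule from the text above
            for g in opt.param_groups:
                g['lr'] *= 0.1
    return z.detach()
\end{verbatim}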
\section{Extra Experimental Results}
\begin{figure}[htp]
\begin{center}
\subfigure[ResNet-110 on CIFAR-10]{
\label{fig:evaluationC10}
\includegraphics[width=0.45\columnwidth]{Re110C10.pdf}
}
\subfigure[ResNet-110 on CIFAR-100]{
\label{fig:evluationC100}
\includegraphics[width=0.45\columnwidth]{Re110C100.pdf}}
\caption{Comparison with state-of-the-art image classification methods.}
\label{compare}
\end{center}
\vskip -0.2in
\end{figure}
Curves of the test errors of state-of-the-art methods and ISDA are presented in Figure \ref{compare}. ISDA outperforms the other methods consistently, and shows the best generalization performance in all situations. Notably, ISDA decreases test errors more evidently on CIFAR-100, which suggests that our method is more suitable for datasets with fewer samples per class. This observation is consistent with the results in the paper. In addition, among the other methods, center loss shows competitive performance with ISDA on CIFAR-10, but it fails to significantly enhance generalization on CIFAR-100.
\section{Introduction}\label{sec:introduction}
\input{introduction.tex}
\input{related.tex}
\input{method.tex}
\input{experiments.tex}
\input{conclusion.tex}
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
This work is supported in part by the Ministry of Science and Technology of China under Grant 2018AAA0101604, the National Natural Science Foundation of China under Grants 62022048, 61906106 and 61936009, the Institute for Guo Qiang of Tsinghua University and Beijing Academy of Artificial Intelligence.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Conclusion}
In this paper, we proposed an efficient implicit semantic data augmentation algorithm (ISDA) to complement existing data augmentation techniques. Different from existing approaches leveraging generative models to augment the training set with semantically transformed samples, our approach is considerably more efficient and easier to implement. In fact, we showed that ISDA can be formulated as a novel robust loss function, which is compatible with any deep network using the softmax cross-entropy loss. Additionally, ISDA can also be implemented efficiently in semi-supervised learning via the semantic consistency training technique. Extensive experimental results on several competitive vision benchmarks demonstrate the effectiveness and efficiency of the proposed algorithm.
\section{Experiments}
In this section, we empirically validate the proposed algorithm on several tasks.
First, we present the experimental results of supervised image classification on widely used benchmarks, i.e., CIFAR \cite{krizhevsky2009learning} and ImageNet \cite{5206848}. Second, we show the performance of several deep semi-supervised learning algorithms with and without ISDA on CIFAR \cite{krizhevsky2009learning} and SVHN \cite{goodfellow2013multi}. Third, we apply ISDA to the semantic segmentation task on the Cityscapes dataset \cite{cordts2016cityscapes}.
In addition, to demonstrate that ISDA encourages models to learn better representations, we conduct experiments by employing the models trained with ISDA as backbones for the object detection task and the instance segmentation task on the MS COCO dataset \cite{lin2014microsoft}, which are presented in Appendix \ref{COCO_results}.
Furthermore, a series of analytical experiments are conducted to provide additional insights into our algorithm. We provide the visualization results of both the augmented samples and the representations learned by deep networks. We also present the empirical results to check the tightness of the upper bound used by ISDA. The performance of explicit and implicit semantic data augmentation is compared. Finally, ablation studies and sensitivity tests are conducted to show how the components and hyper-parameters affect the performance of ISDA.
\subsection{Experimental Setups for Image Classification}
\label{sec:dataset-baseline}
\textbf{Datasets.} We use three image classification benchmarks in the experiments. (1) The two \emph{CIFAR} datasets consist of 32x32 colored natural images in 10 classes for CIFAR-10 and 100 classes for CIFAR-100, with 50,000 images for training and 10,000 images for testing, respectively.
(2) \emph{The Street View House Numbers (SVHN)} dataset \cite{goodfellow2013multi} consists of 32x32 colored images of digits. 73,257 images for training, 26,032 images for testing and 531,131 images for additional training are provided.
(3) \emph{ImageNet} is a 1,000-class dataset from ILSVRC2012\cite{5206848}, providing 1.2 million images for training and 50,000 images for validation.
\textbf{Validation set and data pre-procession.}
(1) On CIFAR, in our supervised learning experiments, we hold out 5,000 images from the training set as the validation set to search for the hyper-parameter $\lambda_0$. These samples are also used for training after an optimal $\lambda_0$ is selected, and the results on the test set are reported. Images are normalized with channel means and standard deviations for pre-processing.
We follow the basic data augmentation operations in \cite{He_2016_CVPR, 2016arXiv160806993H, wang2020collaborative, wang2021revisiting}: 4 pixels are padded at each side of the image, followed by a random 32x32 cropping combined with random horizontal flipping. In semi-supervised learning experiments, we hold out $25\%$ of labeled images as the validation set to select $\lambda_0$, $\eta_1$ and $\eta_2$, following \cite{miyato2018virtual}. Similarly, these samples are also used for training with the selected hyper-parameters. Following the common practice of semi-supervised learning \cite{miyato2018virtual, tarvainen2017mean, laine2016temporal, verma2019interpolation}, we apply ZCA whitening for pre-processing and random 2x2 translation followed by random horizontal flip for basic augmentation.
(2) The SVHN dataset is used for semi-supervised learning experiments, where $25\%$ of labeled images are held out as the validation set. The validation set is put back for training after the hyper-parameter searching. Following \cite{luo2018smooth, tarvainen2017mean}, we perform random 2x2 translation to augment the training data.
(3) On ImageNet, we adopt the same augmentation configurations as \cite{krizhevsky2012imagenet,He_2016_CVPR,2016arXiv160806993H, yang2020resolution, wang2020glance}.
\begin{table*}[t]
\centering
\caption{Single crop error rates (\%) of different deep networks on the validation set of ImageNet. We report the results of our implementation with and without ISDA. The better results are \textbf{bold-faced}, while the numbers in brackets denote the performance improvements achieved by ISDA. For a fair comparison, we present the baselines reported by other papers \cite{2016arXiv160806993H, yun2019cutmix} as well. We also report the theoretical computational overhead and the additional training time introduced by ISDA in the last two columns, which is obtained with 8 Tesla V100 GPUs.
}
\vskip -0.13in
\label{ImageNet Results}
\setlength{\tabcolsep}{2mm}{
\vspace{5pt}
\renewcommand\arraystretch{1.21}
\begin{tabular}{c|c||c|c|c|c|c}
\hline
\multirow{2}{*}{Networks} & \multirow{2}{*}{Params} & \multicolumn{3}{c|}{Top-1/Top-5 Error Rates (\%)} & Additional Cost & Additional Cost \\
\cline{3-5}
& &Reported in \cite{2016arXiv160806993H, yun2019cutmix} & Our Implementation & ISDA & (Theoretical) & (Wall Time) \\
\hline
ResNet-50 \cite{He_2016_CVPR} & 25.6M & 23.7 / 7.1 & 23.0 / 6.8 & \textbf{21.9$_{(1.1)}$ / 6.3} & 0.25\% &7.6\%\\
ResNet-101 \cite{He_2016_CVPR} & 44.6M & 21.9 / 6.3 & 21.7 / 6.1 & \textbf{20.8$_{(0.9)}$ / 5.7} & 0.13\% & 7.4\%\\
ResNet-152 \cite{He_2016_CVPR} & 60.3M & 21.7 / 5.9 & 21.3 / 5.8 & \textbf{20.3$_{(1.0)}$ / 5.5} & 0.09\% &5.4\%\\
\hline
DenseNet-BC-121 \cite{2016arXiv160806993H} & 8.0M & 25.0 / 7.7 & 23.7 / 6.8 & \textbf{23.2$_{(0.5)}$ / 6.6} & 0.20\% &5.6\%\\
DenseNet-BC-265 \cite{2016arXiv160806993H} & 33.3M & 22.2 / 6.1 & 21.9 / 6.1 & \textbf{21.2$_{(0.7)}$ / 6.0} & 0.24\% &5.4\%\\
\hline
ResNeXt-50, 32x4d \cite{xie2017aggregated} & 25.0M & \ \ --\ \ / \ \ --\ \ & 22.5 / 6.4 & \textbf{21.3$_{(1.2)}$ / 5.9} & 0.24\% &6.6\%\\
ResNeXt-101, 32x8d \cite{xie2017aggregated} & 88.8M & \ \ --\ \ / \ \ --\ \ & 21.1 / 5.9 & \textbf{20.1$_{(1.0)}$ / 5.4} & 0.06\% &7.9\%\\
\hline
\end{tabular}}
\vskip -0.1in
\end{table*}
\begin{table*}[t]
\centering
\caption{Evaluation of ISDA on CIFAR with different models. We report mean values and standard deviations in five independent experiments. The better results are \textbf{bold-faced}.}
\vskip -0.125in
\label{different_networks}
\setlength{\tabcolsep}{4mm}{
\vspace{5pt}
\renewcommand\arraystretch{1.21}
\begin{tabular}{c|c||cc|cc}
\hline
\multirow{2}{*}{Networks} & \multirow{2}{*}{Params} & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\
& & Basic & ISDA & Basic & ISDA \\
\hline
ResNet-32 \cite{He_2016_CVPR} & 0.5M & 7.39 $\pm$ 0.10\% & \textbf{7.09 $\pm$ 0.12\%} & 31.20 $\pm$ 0.41\% & \textbf{30.27 $\pm$ 0.34\%}\\
ResNet-110 \cite{He_2016_CVPR} & 1.7M & 6.76 $\pm$ 0.34\%& \textbf{6.33 $\pm$ 0.19\%} & 28.67 $\pm$ 0.44\% & \textbf{27.57 $\pm$ 0.46\%}\\
SE-ResNet-110 \cite{hu2018squeeze} & 1.7M & 6.14 $\pm$ 0.17\%& \textbf{5.96 $\pm$ 0.21\%} &27.30 $\pm$ 0.03\%& \textbf{26.63 $\pm$ 0.21\%}\\
Wide-ResNet-16-8 \cite{Zagoruyko2016WideRN} & 11.0M & 4.25 $\pm$ 0.18\%&\textbf{4.04 $\pm$ 0.29\%} & 20.24 $\pm$ 0.27\%& \textbf{19.91 $\pm$ 0.21\%}\\
Wide-ResNet-28-10 \cite{Zagoruyko2016WideRN} & 36.5M & 3.82 $\pm$ 0.15\% & \textbf{3.58 $\pm$ 0.15\%} & 18.53 $\pm$ 0.07\% & \textbf{17.98 $\pm$ 0.15\%}\\
ResNeXt-29, 8x64d \cite{xie2017aggregated} & 34.4M & 3.86 $\pm$ 0.14\% & \textbf{3.67 $\pm$ 0.12\%} & 18.16 $\pm$ 0.13\%& \textbf{17.43 $\pm$ 0.25\%}\\
DenseNet-BC-100-12 \cite{2016arXiv160806993H} & 0.8M & 4.90 $\pm$ 0.08\% & \textbf{4.54 $\pm$ 0.07\%}& 22.61 $\pm$ 0.10\%& \textbf{22.10 $\pm$ 0.34\%}\\
Shake-Shake (26, 2x32d) \cite{gastaldi2017shake} & 3.0M & 3.45 $\pm$ 0.01\% & \textbf{3.20 $\pm$ 0.03\%} & 20.12 $\pm$ 0.39\% & \textbf{19.45 $\pm$ 0.16\%} \\
Shake-Shake (26, 2x112d) \cite{gastaldi2017shake} & 36.4M & 2.92 $\pm$ 0.02\% & \textbf{2.61 $\pm$ 0.09\%} & 17.42 $\pm$ 0.44\% & \textbf{16.73 $\pm$ 0.18\%}\\
\hline
\end{tabular}}
\vskip -0.1in
\end{table*}
\begin{table*}[t]
\centering
\caption{The theoretical computational overhead and the empirical additional time consumption of ISDA on CIFAR. The results are obtained with a single Tesla V100 GPU.}
\vskip -0.125in
\label{ComputationalCost}
\setlength{\tabcolsep}{1.5mm}{
\renewcommand\arraystretch{1.21}
\begin{tabular}{c|c|cc|c|cc|c}
\hline
\multirow{4}{*}{Networks} & & \multicolumn{3}{c|}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-100} \\
\cline{3-8} & Standard & \multicolumn{2}{c|}{Additional Cost} & \multirowcell{3}{Additional Cost\\(Wall Time)} & \multicolumn{2}{c|}{Additional Cost} & \multirowcell{3}{Additional Cost\\(Wall Time)} \\
& FLOPs & \multicolumn{2}{c|}{(Theoretical)} & & \multicolumn{2}{c|}{(Theoretical)} & \\
\cline{3-4} \cline{6-7} & & Absolute & Relative & & Absolute & Relative & \\
\hline
ResNet-32 \cite{He_2016_CVPR} & 69.43M & 0.05M & 0.07\% & 1.85\% & 0.44M & 0.63\% & 1.85\%\\
ResNet-110 \cite{He_2016_CVPR}& 254.20M & 0.05M & 0.02\% & 4.48\% & 0.44M & 0.17\% & 2.94\%\\
SE-ResNet-110 \cite{hu2018squeeze}& 254.73M & 0.05M & 0.02\% & 2.17\% & 0.44M & 0.17\% & 1.08\%\\
Wide-ResNet-16-8 \cite{Zagoruyko2016WideRN}& 1.55G & 2.96M & 0.19\% & 3.39\% & 27.19M & 1.76\% & 10.77\%\\
Wide-ResNet-28-10 \cite{Zagoruyko2016WideRN}& 5.25G & 4.61M & 0.09\% & 2.37\% & 42.46M & 0.81\% & 12.35\%\\
ResNeXt-29, 8x64d \cite{xie2017aggregated}& 5.39G & 11.80M & 0.22\% & 1.90\% & 109.00M & 2.02\% & 12.32\%\\
DenseNet-BC-100-12 \cite{2016arXiv160806993H}& 292.38M & 1.35M & 0.46\% & 5.50\% & 12.43M & 4.25\% & 12.03\%\\
Shake-Shake (26, 2x32d) \cite{gastaldi2017shake} & 426.69M & 0.19M & 0.04\% & 5.77\% & 1.76M & 0.41\% & 2.21\% \\
Shake-Shake (26, 2x112d) \cite{gastaldi2017shake} & 5.13G & 2.31M & 0.05\% & 3.85\% & 21.30M & 0.42\% & 5.07\% \\
\hline
\end{tabular}}
\vskip -0.1in
\end{table*}
\begin{table*}[t]
\centering
\caption{Evaluation of ISDA with state-of-the-art \textit{non-semantic} augmentation techniques. `RA' and `AA' refer to RandAugment \cite{cubuk2020randaugment} and AutoAugment \cite{cubuk2018autoaugment}, respectively. We report mean values and standard deviations in five independent experiments. The better results are \textbf{bold-faced}.}
\vskip -0.1in
\label{complementary_result}
\setlength{\tabcolsep}{1.4mm}{
\renewcommand\arraystretch{1.21}
\begin{tabular}{c|c|cc|cc|cc}
\hline
Dataset & Networks & Cutout \cite{devries2017improved} & Cutout + ISDA & RA \cite{cubuk2020randaugment} & RA + ISDA & AA \cite{cubuk2018autoaugment} & AA + ISDA \\
\hline
\multirow{3}{*}{CIFAR-10} & Wide-ResNet-28-10 \cite{Zagoruyko2016WideRN} & 2.99 $\pm$ 0.06\% & \textbf{2.83 $\pm$ 0.04\%} & 2.78 $\pm$ 0.03\% & \textbf{2.42 $\pm$ 0.13\%} & 2.65 $\pm$ 0.07\%& \textbf{2.56 $\pm$ 0.01\%}\\
&Shake-Shake (26, 2x32d) \cite{gastaldi2017shake} & 3.16 $\pm$ 0.09\%& \textbf{2.93 $\pm$ 0.03\%}& 3.00 $\pm$ 0.05\% & \textbf{2.74 $\pm$ 0.03\%} & 2.89 $\pm$ 0.09\% & \textbf{2.68 $\pm$ 0.12\%} \\
&Shake-Shake (26, 2x112d) \cite{gastaldi2017shake} & 2.36\%& \textbf{2.25\%}& 2.10\%& \textbf{1.76\%} & 2.01\%& \textbf{1.82\%}\\
\hline
\multirow{3}{*}{CIFAR-100} & Wide-ResNet-28-10 \cite{Zagoruyko2016WideRN} & 18.05 $\pm$ 0.25\% & \textbf{16.95 $\pm$ 0.11\%} &17.30 $\pm$ 0.08\%& \textbf{15.97 $\pm$ 0.28\%}& 16.60 $\pm$ 0.40\%& \textbf{15.62 $\pm$ 0.32\%}\\
&Shake-Shake (26, 2x32d) \cite{gastaldi2017shake} & 18.92 $\pm$ 0.21\%& \textbf{18.17 $\pm$ 0.08\%}&18.11 $\pm$ 0.24\% &\textbf{17.84 $\pm$ 0.16\%}&17.50 $\pm$ 0.19\% &\textbf{17.21 $\pm$ 0.33\%} \\
&Shake-Shake (26, 2x112d) \cite{gastaldi2017shake} & 17.34 $\pm$ 0.28\%& \textbf{16.24 $\pm$ 0.20\%}&15.95 $\pm$ 0.15\% &\textbf{14.24 $\pm$ 0.07\%}&15.21 $\pm$ 0.20\% &\textbf{13.87 $\pm$ 0.26\%} \\
\hline
\end{tabular}}
\vskip -0.1in
\end{table*}
\begin{table*}[t]
\centering
\caption{Comparisons with the state-of-the-art methods. We report mean values and standard deviations of the test errors in five independent experiments. The best results are \textbf{bold-faced}.}
\vskip -0.1in
\label{Tab02}
\setlength{\tabcolsep}{6mm}{
\renewcommand\arraystretch{1.21}
\begin{tabular}{l|cc|cc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{ResNet-110} & \multicolumn{2}{c}{
Wide-ResNet-28-10}\\
& CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100\\
\hline
Large Margin \cite{liu2016large} & 6.46 $\pm$ 0.20\% & 28.00 $\pm$ 0.09\% & 3.69 $\pm$ 0.10\% & 18.48 $\pm$ 0.05\%\\
Disturb Label \cite{Xie2016DisturbLabelRC} & 6.61 $\pm$ 0.04\% & 28.46 $\pm$ 0.32\% & 3.91 $\pm$ 0.10\%& 18.56 $\pm$ 0.22\%\\
Focal Loss \cite{Lin2017FocalLF} & 6.68 $\pm$ 0.22\% & 28.28 $\pm$ 0.32\% & 3.62 $\pm$ 0.07\% & 18.22 $\pm$ 0.08\%\\
Center Loss \cite{wen2016discriminative} & 6.38 $\pm$ 0.20\% & 27.85 $\pm$ 0.10\% & 3.76 $\pm$ 0.05\% & {18.50 $\pm$ 0.25\%}\\
L$_q$ Loss \cite{Zhang2018GeneralizedCE} & 6.69 $\pm$ 0.07\% & 28.78 $\pm$ 0.35\% & 3.78 $\pm$ 0.08\% & 18.43 $\pm$ 0.37\%\\
Label Smoothing \cite{muller2019does} & 6.58 $\pm$ 0.42\% & 27.89 $\pm$ 0.20\% & 3.79 $\pm$ 0.16\% & 18.48 $\pm$ 0.24\% \\
DistributionNet \cite{yu2019robust} & 6.35 $\pm$ 0.11\% & 28.18 $\pm$ 0.44\% & 3.68 $\pm$ 0.06\% & 18.34 $\pm$ 0.16\%\\
\hline
WGAN \cite{arjovsky2017wasserstein} & 6.63 $\pm$ 0.23\% & - & 3.81 $\pm$ 0.08\% & -\\
CGAN \cite{mirza2014conditional} & 6.56 $\pm$ 0.14\% & 28.25 $\pm$ 0.36\% & 3.84 $\pm$ 0.07\% & 18.79 $\pm$ 0.08\%\\
ACGAN \cite{odena2017conditional} & 6.32 $\pm$ 0.12\% & 28.48 $\pm$ 0.44\% & 3.81 $\pm$ 0.11\% & 18.54 $\pm$ 0.05\%\\
infoGAN \cite{chen2016infogan} & 6.59 $\pm$ 0.12\% & 27.64 $\pm$ 0.14\% & 3.81 $\pm$ 0.05\% & 18.44 $\pm$ 0.10\%\\
\hline
Basic & 6.76 $\pm$ 0.34\% & 28.67 $\pm$ 0.44\% & - & -\\
Basic + Dropout & {6.23 $\pm$ 0.11\%} & {27.11 $\pm$ 0.06\%} & 3.82 $\pm$ 0.15\% & 18.53 $\pm$ 0.07\%\\
ISDA & 6.33 $\pm$ 0.19\% & 27.57 $\pm$ 0.46\% & - & -\\
ISDA + Dropout & \textbf{5.98 $\pm$ 0.20\%} & \textbf{26.35 $\pm$ 0.30\%} & \textbf{3.58 $\pm$ 0.15\%} & \textbf{17.98 $\pm$ 0.15\%}\\
\hline
\end{tabular}}
\vskip -0.1in
\end{table*}
\textbf{Non-semantic augmentation techniques. }
To study the complementary effects of ISDA to traditional data augmentation methods, three state-of-the-art non-semantic augmentation techniques are applied with and without ISDA.
(1) \textit{Cutout} \cite{devries2017improved} randomly masks out square regions of inputs during training.
(2) \textit{AutoAugment} \cite{cubuk2018autoaugment} automatically searches for the best augmentation policy using reinforcement learning.
(3) \textit{RandAugment} \cite{cubuk2020randaugment} searches for augmentation policies using grid search in a reduced searching space.
\textbf{Baselines for supervised learning.}
Our method is compared with several baselines including state-of-the-art robust loss functions and generator-based semantic data augmentation methods.
(1) \textit{Dropout} \cite{Srivastava2014DropoutAS} is a widely used regularization approach that randomly mutes some neurons during training.
(2) \textit{Large-margin softmax loss} \cite{liu2016large} introduces a large decision margin, measured by a cosine distance, to the standard CE loss.
(3) \textit{Disturb label} \cite{Xie2016DisturbLabelRC} is a regularization mechanism that randomly replaces a fraction of labels with incorrect ones in each iteration.
(4) \textit{Focal loss} \cite{Lin2017FocalLF} focuses on a sparse set of hard examples to prevent easy samples from dominating the training procedure.
(5) \textit{Center loss} \cite{wen2016discriminative} simultaneously learns a center of features for each class and minimizes the distances between the deep features and their corresponding class centers.
(6) \textit{$L_q$ loss} \cite{Zhang2018GeneralizedCE} is a noise-robust loss function, using the negative Box-Cox transformation.
(7) \textit{Label Smoothing} \cite{muller2019does} smooths the one-hot label into a soft label with equal values for the other classes.
(8) \textit{DistributionNet} \cite{yu2019robust} models the deep features of training samples as Gaussian distributions, and learns the covariance automatically.
(9) For generator-based semantic augmentation methods, we train several state-of-the-art GANs \cite{arjovsky2017wasserstein, mirza2014conditional, odena2017conditional, chen2016infogan}, which are then used to generate extra training samples for data augmentation.
\textbf{Baselines for semi-supervised learning.}
In semi-supervised learning experiments, the performance of ISDA is tested on the basis of several modern deep semi-supervised learning approaches.
(1) \textit{$\Pi$-model} \cite{laine2016temporal} enforces the model to have the same prediction on a sample with different augmentation and dropout modes.
(2) \textit{Temp-ensemble} \cite{laine2016temporal} attaches a soft pseudo label to each unlabeled sample by performing a moving average on the predictions of networks.
(3) \textit{Mean teacher} \cite{tarvainen2017mean} establishes a teacher network by performing an exponential moving average on the parameters of the model, and leverages the teacher network to produce supervision for unlabeled data.
(4) \textit{Virtual Adversarial Training (VAT)} \cite{miyato2018virtual} adds adversarial perturbation to each sample and enforces the model to have the same prediction on the perturbed samples and the original samples.
For a fair comparison, all methods are implemented with the same training configurations. Details for hyper-parameter settings are presented in Appendix \ref{hyper-para-baseline}.
\textbf{Implementation details.}
For supervised learning, we implement the ResNet, SE-ResNet, Wide-ResNet, ResNeXt, DenseNet and Shake-shake net on the two CIFAR datasets, and implement ResNet, DenseNet and ResNeXt on ImageNet.
For semi-supervised learning, we implement the widely used CNN-13 network \cite{luo2018smooth, xie2019unsupervised, verma2019interpolation, laine2016temporal, tarvainen2017mean, miyato2018virtual, wang2020meta}. Details for implementing these models are given in Appendix \ref{Training_Details_sup} and Appendix \ref{Training_Details_semi_sup} for supervised learning and semi-supervised learning, respectively.
The hyper-parameter $\lambda_0$ for ISDA is selected from the set $\{0.1, 0.25, 0.5, 0.75, 1\}$ according to the performance on the validation set. On ImageNet, due to GPU memory limitation, we approximate the covariance matrices by their diagonals, i.e., the variance of each dimension of the features. The best hyper-parameter $\lambda_0$ is selected from $\{1, 2.5, 5, 7.5, 10\}$. For semi-supervised learning tasks, the hyper-parameter $\eta_1$ is selected from $\{0.5, 1, 2\}$. In all experiments, the average test error of the last 10 epochs is calculated as the result to be reported.
\subsection{Supervised Image Classification}
\subsubsection{Main Results}
\textbf{Results on ImageNet. }
Table \ref{ImageNet Results} presents the performance of ISDA on the large-scale ImageNet dataset with state-of-the-art deep networks. It can be observed that ISDA significantly improves the generalization performance of these models. For example, the Top-1 error rate of ResNet-50 is reduced by $1.1\%$ when trained with ISDA, approaching the performance of ResNet-101 ($21.9\%$ vs. $21.7\%$) with $43\%$ fewer parameters. Similarly, the performance of ResNet-101+ISDA surpasses that of ResNet-152 with $26\%$ fewer parameters. Compared to ResNets, DenseNets generally suffer less from overfitting due to their architecture design, and thus appear to benefit less from our algorithm.
\textbf{Results on CIFAR. }
We report the error rates of several modern deep networks with and without ISDA on CIFAR-10/100 in Table \ref{different_networks}. Similar observations to ImageNet can be obtained. On CIFAR-100, for relatively small models like ResNet-32 and ResNet-110, ISDA reduces test errors by about $1\%$, while for larger models like Wide-ResNet-28-10 and ResNeXt-29, 8x64d, our method outperforms the competitive baselines by nearly $0.7\%$.
\textbf{Complementing explicit augmentation techniques. }
Table \ref{complementary_result} shows the experimental results with recently proposed traditional image augmentation methods (i.e., Cutout \cite{devries2017improved}, RandAugment \cite{cubuk2020randaugment} and AutoAugment \cite{cubuk2018autoaugment}). Interestingly, ISDA appears to be even more effective when combined with these techniques. For example, when applying AutoAugment, ISDA achieves performance gains of $1.34\%$ and $0.98\%$ on CIFAR-100 with Shake-Shake (26, 2x112d) and Wide-ResNet-28-10, respectively. Note that these improvements are more significant than in the standard setting. A plausible explanation for this phenomenon is that non-semantic augmentation methods help to learn a better feature representation, which makes semantic transformations in the deep feature space more reliable. The curves of test errors during training on CIFAR-100 with Wide-ResNet-28-10 are presented in Figure \ref{bound_fig}. It is clear that ISDA achieves a significant improvement after the third learning rate drop, and shows even better performance after the fourth drop.
\subsubsection{Comparisons with Other Approaches}
We compare ISDA with a number of competitive baselines described in Section \ref{sec:dataset-baseline}, ranging from robust loss functions to semantic data augmentation algorithms based on generative models.
The results are summarized in Table \ref{Tab02}.
One can observe that ISDA compares favorably with all these baseline algorithms.
On CIFAR-100, the best test errors of other robust loss functions are 27.85\% and 18.22\% with ResNet-110 and Wide-ResNet-28-10, respectively, while ISDA achieves 27.57\% and 17.98\%, respectively. Note that all the results with Wide-ResNet-28-10 use the dropout technique.
Among all GAN-based semantic augmentation methods, ACGAN gives the best performance, especially on CIFAR-10. However, these models generally suffer a performance reduction on CIFAR-100, which does not contain enough samples to learn a valid generator for each class. In contrast, ISDA shows consistent improvements on both datasets. In addition, GAN-based methods require additional computation to train the generators, and introduce significant overhead to the training process. In comparison, ISDA not only leads to lower generalization error, but is also simpler and more efficient.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{bound_fig_c100.pdf}
\vskip -0.149in
\caption{Curves of test errors on CIFAR-100 with Wide-ResNet (WRN). `AA' refers to AutoAugment \cite{cubuk2018autoaugment}. \label{bound_fig}}
\vskip -0.1in
\end{figure}
\begin{table*}[t]
\centering
\caption{Performance of state-of-the-art semi-supervised learning algorithms with and without ISDA. We conduct experiments with different numbers of labeled samples. Mean results and standard deviations of five independent experiments are reported. The better results are \textbf{bold-faced}. }
\vskip -0.109in
\label{semi-supervised}
\setlength{\tabcolsep}{1.5mm}{
\renewcommand\arraystretch{1.21}
\begin{tabular}{c|ccc|cc|c}
\hline
Dataset & \multicolumn{3}{c|}{CIFAR-10} & \multicolumn{2}{c|}{CIFAR-100} & SVHN \\
\hline
Labeled Samples & 1,000&2,000&4,000 & 10,000&20,000 & 500 \\
\hline
$\Pi$-model \cite{laine2016temporal} & 28.74 $\pm$ 0.48\% & 17.57 $\pm$ 0.44\%& 12.36 $\pm$ 0.17\% & 38.06 $\pm$ 0.37\% & 30.80 $\pm$ 0.18\% & - \\
$\Pi$-model + ISDA & \textbf{23.99 $\pm$ 1.30\%} & \textbf{14.90 $\pm$ 0.10\%}& \textbf{11.35 $\pm$ 0.19\%} & \textbf{36.93 $\pm$ 0.28\%} & \textbf{ 30.03 $\pm$ 0.41\% } & - \\
\hline
Temp-ensemble \cite{laine2016temporal} & 25.15 $\pm$ 1.46\% &15.78 $\pm$ 0.44\% & 11.90 $\pm$ 0.25\% & 41.56 $\pm$ 0.42\% & 35.35 $\pm$ 0.40\% & - \\
Temp-ensemble + ISDA & \textbf{22.77 $\pm$ 0.63\%} & \textbf{14.98 $\pm$ 0.73\%}& \textbf{11.25 $\pm$ 0.31\%} & \textbf{40.47 $\pm$ 0.24\%} & \textbf{34.58 $\pm$ 0.27\%} & - \\
\hline
Mean Teacher \cite{tarvainen2017mean} & 18.27 $\pm$ 0.53\%&13.45 $\pm$ 0.30\%&10.73 $\pm$ 0.14\% & 36.03 $\pm$ 0.37\% & 30.00 $\pm$ 0.59\%& 4.18 $\pm$ 0.27\% \\
Mean Teacher + ISDA &\textbf{17.11 $\pm$ 1.03\%} & \textbf{12.35 $\pm$ 0.14\%}&\textbf{9.96 $\pm$ 0.33\%} & \textbf{34.60 $\pm$ 0.41\%} & \textbf{29.37 $\pm$ 0.30\%} & \textbf{4.06 $\pm$ 0.11\%} \\
\hline
VAT \cite{miyato2018virtual} & 18.12 $\pm$ 0.82\%&13.93 $\pm$ 0.33\%&11.10 $\pm$ 0.24\% &40.12 $\pm$ 0.12\% &34.19 $\pm$ 0.69\% & 5.10 $\pm$ 0.08\% \\
VAT + ISDA &\textbf{14.38 $\pm$ 0.18\%} & \textbf{11.52 $\pm$ 0.05\%}&\textbf{9.72 $\pm$ 0.14\%} &\textbf{36.04 $\pm$ 0.47\%} &\textbf{30.97 $\pm$ 0.42\%} & \textbf{4.86 $\pm$ 0.18\%} \\
\hline
\end{tabular}}
\vskip -0.1in
\end{table*}
\begin{table*}[t]
\centering
\caption{Performance of state-of-the-art semantic segmentation algorithms on Cityscapes with and without ISDA. `Multi-scale' and `Flip' denote employing the averaged prediction of multi-scale (\{0.75, 1, 1.25, 1.5\}) and left-right flipped inputs during inference. We present the results reported in the original papers in the `Original' row. The numbers in brackets denote the performance improvements achieved by ISDA. The better results are \textbf{bold-faced}.
}
\vskip -0.12in
\label{Segmentation}
\setlength{\tabcolsep}{4mm}{
\vspace{5pt}
\renewcommand\arraystretch{1.21}
\begin{tabular}{c|c|c|c|c}
\hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{3}{c}{mIoU (\%)} \\
\cline{3-5}
&& Single Scale & Multi-scale & Multi-scale + Flip \\
\hline
PSPNet \cite{zhao2017pyramid} & ResNet-101 & 77.46 & 78.10 & 78.41 \\
PSPNet + ISDA & ResNet-101 & \textbf{78.72 ${(\uparrow1.26)}$} & \textbf{79.64 ${(\uparrow1.54)}$} & \textbf{79.44 ${(\uparrow1.03)}$} \\
\hline
DeepLab-v3 (Original) \cite{chen2017rethinking} & ResNet-101 & 77.82 & 79.06 & 79.30 \\
DeepLab-v3 & ResNet-101 & 78.38 & 79.20 & 79.47 \\
DeepLab-v3 + ISDA & ResNet-101 & \textbf{79.41 ${(\uparrow1.03)}$} & \textbf{80.30 ${(\uparrow1.10)}$} & \textbf{80.36 ${(\uparrow0.89)}$} \\
\hline
\end{tabular}}
\end{table*}
\begin{figure*}[t]
\begin{center}
\vskip -0.1in
\includegraphics[width=1.6\columnwidth]{augmentation_result_1.pdf}
\vskip -0.1in
\caption{Visualization of the semantically augmented images on CIFAR.}
\vskip -0.1in
\label{Visual}
\end{center}
\vskip -0.1in
\end{figure*}
\subsubsection{Efficiency of ISDA}
\label{Computational_Cost_ISDA}
We report the theoretical computational overhead (measured in FLOPs) and the practical additional time consumption of ISDA in Table \ref{ImageNet Results} and Table \ref{ComputationalCost}. The results are obtained with 8 and 1 Tesla V100 GPUs on ImageNet and CIFAR, respectively, using the public code we provide (\color{blue}{\textit{https://github.com/blackfeather-wang/ISDA-for-Deep-Networks}}\color{black}). One can see that ISDA increases the computational cost by no more than $1\%$ for most of the widely used deep networks on both ImageNet and CIFAR. This observation is consistent with our analysis in Section \ref{Complexity}. Notably, our method incurs slightly more computational overhead with ResNeXt and DenseNet on CIFAR-100. This is because the deep features learned by these two networks have relatively high dimensionality; as a consequence, the estimated covariance matrices are large, and more computation is required to update them. Nevertheless, the additional computational cost is at most $4.25\%$, which does not significantly affect the training efficiency. Empirically, due to implementation issues, we observe a $5\%$ to $7\%$ and a $1\%$ to $12\%$ increase in training time on ImageNet and CIFAR, respectively.
\subsection{Semi-supervised Image Classification}
\label{semi_ISDA_result}
To test the performance of ISDA on semi-supervised learning tasks, we divide the training set into two parts, a labeled set and an unlabeled set, by randomly removing the labels of a portion of the samples, and implement the proposed semi-supervised ISDA algorithm on the basis of several state-of-the-art semi-supervised learning algorithms. Results with different numbers of labeled samples are presented in Table \ref{semi-supervised}. It can be observed that ISDA complements these methods and further improves their generalization performance significantly. On CIFAR-10 with 4,000 labeled samples, adopting ISDA reduces the test error of the VAT algorithm by $1.38\%$. In addition, ISDA is even more effective with fewer labeled samples and more classes. For example, VAT + ISDA outperforms the baseline by $3.74\%$ and $4.08\%$ on CIFAR-10 with 1,000 labeled samples and CIFAR-100 with 10,000 labeled samples, respectively.
\subsection{Semantic Segmentation on Cityscapes}
\label{Semantic_segmentation}
As ISDA augments training samples in the deep feature space, it can also be adopted for other classification-based vision tasks, as long as the softmax cross-entropy loss is used. To demonstrate this, we apply the proposed algorithm to semantic segmentation on the Cityscapes dataset \cite{cordts2016cityscapes}, which contains 5,000 finely annotated images of 1024$\times$2048 pixels and 20,000 coarsely annotated images from 50 different cities. Each pixel is categorized into one of 19 classes. Following \cite{huang2019ccnet, liu2020structured}, we conduct our experiments on the finely annotated dataset and split it into 2,975/500/1,525 images for training, validation and testing.
We first reproduce two modern semantic segmentation algorithms, PSPNet \cite{zhao2017pyramid} and Deeplab-v3 \cite{chen2017rethinking}, using the standard hyper-parameters\footnote{\color{blue}https://github.com/speedinghzl/pytorch-segmentation-toolbox\color{black}}. Then we fix the training setups and utilize ISDA to augment each pixel during training. As on ImageNet, we approximate the covariance matrices by their diagonals to save GPU memory. The results on the validation set are shown in Table \ref{Segmentation}. It can be observed that ISDA improves the performance of both baselines by nearly $1\%$ in terms of mIoU. In our experiments, we observe an approximately $6\%$ increase in training time with ISDA.
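Since the covariance matrices are approximated by their diagonals in this task, the surrogate loss only requires per-class variance vectors, which keeps the memory footprint linear in the feature dimension. The following is a minimal PyTorch-style sketch of this diagonal variant (applied per pixel in practice); the function and tensor names are our own assumptions, not the exact released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def isda_loss_diag(features, labels, fc_weight, fc_bias, var, lam):
    # Diagonal-covariance variant: v^T diag(s) v reduces to
    # sum_d s_d * v_d^2, so only per-class variances `var` (C, D)
    # need to be stored. Sketch only, under our naming assumptions.
    logits = features @ fc_weight.t() + fc_bias            # (B, C)
    w_y = fc_weight[labels]                                # (B, D)
    v = fc_weight.unsqueeze(0) - w_y.unsqueeze(1)          # (B, C, D)
    quad = (v.pow(2) * var[labels].unsqueeze(1)).sum(2)    # (B, C)
    return F.cross_entropy(logits + 0.5 * lam * quad, labels)
\end{verbatim}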
\begin{figure*}
\begin{center}
\includegraphics[width=1.8\columnwidth]{augmentation_result_pami_imagenet.pdf}
\vskip -0.1in
\caption{Visualization of the semantically augmented images on ImageNet. ISDA is able to alter the semantics of images that are unrelated to the class identity, like backgrounds, actions of animals, visual angles, etc. We also present the randomly generated images of the same class.}
\vskip -0.1in
\label{fig:Visual_imagenet}
\end{center}
\vskip -0.1in
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=1.9\columnwidth]{tsne.pdf}
\vskip -0.1in
\caption{Visualization of deep features on CIFAR-10 using the t-SNE algorithm \cite{maaten2008visualizing}. Each color denotes a class. (a), (b) present the results of supervised learning with ResNet-110, while (c), (d) present the results of semi-supervised learning with the VAT algorithm and 4000 labeled samples. The standard non-semantic data augmentation techniques are implemented.}
\vskip -0.1in
\label{tsne}
\end{center}
\vskip -0.1in
\end{figure*}
\subsection{Analytical Results}
\label{Analytical_results}
\subsubsection{Visualization}
\textbf{Visualization of augmented images. }
To demonstrate that our method is able to generate meaningful semantically augmented samples, we introduce an approach to map the augmented features back to the pixel space, so as to explicitly show the semantic changes of the images. Due to space limitations, we defer the detailed introduction of the mapping algorithm to Appendix \ref{reverse_alg}. Figure \ref{Visual} and Figure \ref{fig:Visual_imagenet} show the visualization results. The first column presents the original images. The `Augmented' columns present the images augmented by the proposed ISDA. It can be observed that ISDA is able to alter the semantics of images, e.g., backgrounds, visual angles, actions of dogs and skin colors, which is not possible for traditional data augmentation techniques. For a clear comparison, we also present randomly generated images of the same class.
\textbf{Visualization of Deep Features.}
We visualize the learned deep features on CIFAR-10 with and without ISDA using the t-SNE algorithm \cite{maaten2008visualizing}. The results of both supervised learning with ResNet-110 and semi-supervised learning with VAT are presented. It can be observed that, with ISDA, the deep features of different classes form tighter and more concentrated clusters, and are thus potentially more separable from each other. In contrast, the features learned without ISDA are distributed in clusters with considerable overlap.
\subsubsection{Tightness of the Upper Bound $\overline{\mathcal{L}}_{\infty}$}
As mentioned above, the proposed ISDA algorithm uses the upper bound of the expected loss as the surrogate loss. Therefore, the upper bound $\overline{\mathcal{L}}_{\infty}$ needs to be tight enough to ensure that the expected loss is effectively minimized. To check the tightness of $\overline{\mathcal{L}}_{\infty}$ in practice, we empirically compute ${\mathcal{L}}_{\infty}$ and $\overline{\mathcal{L}}_{\infty}$ over the training iterations of ResNet-110, as shown in Figure \ref{loss_curve}. We can observe that $\overline{\mathcal{L}}_{\infty}$ gives a very tight upper bound on both CIFAR-10 and CIFAR-100.
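In this experiment, ${\mathcal{L}}_{\infty}$ is estimated by Monte-Carlo sampling with $1{,}000$ samples per feature. A minimal sketch of such an estimate is given below; the function and variable names are our own, and the small jitter added before the Cholesky factorization is a numerical safeguard of ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_estimate_loss(features, labels, fc_weight, fc_bias,
                     sigma, lam, S=1000):
    # Monte-Carlo estimate of L_infty: draw S augmented copies of
    # each feature from N(a_i, lam * Sigma_{y_i}) and average the
    # cross-entropy. Sketch only; names are our assumptions.
    B, D = features.shape
    cov = lam * sigma[labels] + 1e-6 * torch.eye(D)   # (B, D, D)
    L = torch.linalg.cholesky(cov)
    noise = torch.randn(S, B, D, 1)
    a_tilde = features + (L @ noise).squeeze(-1)      # (S, B, D)
    logits = a_tilde @ fc_weight.t() + fc_bias        # (S, B, C)
    return F.cross_entropy(logits.flatten(0, 1), labels.repeat(S))
\end{verbatim}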
\begin{figure}[t]
\begin{center}
\subfigure[CIFAR-10]{
\label{fig:loss_curve_1}
\includegraphics[width=0.49\columnwidth]{loss_curve_1.pdf}
}
\hspace{-0.15in}
\subfigure[CIFAR-100]{
\label{fig:loss_curve_2}
\includegraphics[width=0.49\columnwidth]{loss_curve_2.pdf}}
\vskip -0.1in
\caption{Values of $\mathcal{L}_{\infty}$ and $\overline{\mathcal{L}}_{\infty}$ over the training process. The value of ${\mathcal{L}}_{\infty}$ is estimated using Monte-Carlo sampling with a sample size of 1,000.}
\label{loss_curve}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[t]
\hspace{-0.4in}
\begin{center}
\subfigure[w/ Cutout]{
\label{fig:samlp_M_cutout}
\includegraphics[width=0.5\columnwidth]{samlp_M_cutout.pdf}
}
\hspace{-0.2in}
\subfigure[w/ AutoAugment]{
\label{fig:samlp_M_AA}
\includegraphics[width=0.5\columnwidth]{samlp_M_AA.pdf}}
\vskip -0.1in
\caption{Comparisons of explicit semantic data augmentation (explicit SDA) and ISDA. For the former, we vary the value of the sample times $M$, and train the networks by minimizing Eq. (\ref{eq2}). As a baseline, we also consider directly updating the covariance matrices (Cov) $\Sigma_{1}$, $\Sigma_{2}$, $...$, $\Sigma_{C}$ with gradient descent; these results are presented as red lines. We report the test errors of Wide-ResNet-28-10 on CIFAR-100 with the Cutout and AutoAugment augmentations. $M=0$ refers to the baseline results, while $M=\infty$ refers to ISDA.}
\label{fig:samlp_M}
\end{center}
\vskip -0.2in
\end{figure}
\subsubsection{Comparisons of Explicit and Implicit Semantic Data Augmentation}
To study whether the proposed upper bound $\overline{\mathcal{L}}_{\infty}$ leads to better performance than sample-based explicit semantic data augmentation (i.e., explicit SDA, minimizing Eq. (\ref{eq2}) with a certain number of sample times $M$), we compare the two approaches in Figure \ref{fig:samlp_M}. We also consider another baseline, namely learning the covariance matrices $\Sigma_{1}$, $\Sigma_{2}$, $...$, $\Sigma_{C}$ directly by gradient descent. For explicit SDA, this is achieved with the re-parameterization trick \cite{Kingma2014AutoEncodingVB, yu2019robust}. To ensure that the learned covariance matrices are symmetric positive semi-definite, we let $\Sigma_{i} = \bm{D}_{i}\bm{D}_{i}^{\textnormal{T}}$ and update $\bm{D}_{i}$ instead, which has the same size as $\Sigma_{i}$. In addition, to avoid trivially obtaining all-zero covariance matrices, we add the feature uncertainty loss of \cite{yu2019robust} to the loss function to encourage augmentation distributions with large entropy. Its coefficient is tuned on the validation set.
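As a concrete illustration of this baseline, the sketch below parameterizes each covariance matrix as $\Sigma_{i} = \bm{D}_{i}\bm{D}_{i}^{\textnormal{T}}$ so that it remains positive semi-definite under gradient descent; the module name and initialization scale are our assumptions, and the feature uncertainty loss of \cite{yu2019robust} is omitted for brevity.
\begin{verbatim}
import torch

class LearnedCovariance(torch.nn.Module):
    # Baseline: learn Sigma_i = D_i D_i^T directly by gradient
    # descent (re-parameterization). Sketch only.
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        # D_i has the same size as Sigma_i; small initialization
        # keeps the initial augmentation noise weak.
        self.D = torch.nn.Parameter(
            0.01 * torch.randn(num_classes, feat_dim, feat_dim))

    def forward(self, labels):
        Dy = self.D[labels]                    # (B, D, D)
        return Dy @ Dy.transpose(1, 2)         # PSD by construction
\end{verbatim}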
From the results, one can observe that explicit SDA with small $M$ manages to reduce the test errors, but the effect is less significant. This might be attributed to the high dimensionality of the feature space. For example, given that Wide-ResNet-28-10 produces 640-dimensional features, small sample numbers (e.g., $M=1,2,5$) may result in poor estimates of the expected loss. Accordingly, as $M$ grows larger, the performance of explicit SDA approaches that of ISDA, indicating that ISDA models the case of $M \to \infty$. On the other hand, we note that the dynamically estimated intra-class covariance matrices consistently outperform the directly learned ones for both explicit and implicit augmentation. We tentatively attribute this to the rich class-conditional semantic information captured by the former.
\subsubsection{Sensitivity Test}
To study how the hyper-parameter $\lambda_0$ affects the performance of our method, sensitivity tests are conducted for both supervised learning and semi-supervised learning. The results are shown in Figure \ref{sensitivity}.
It can be observed that ISDA achieves superior performance robustly with $ 0.25\!\leq\!\lambda_0\!\leq\!1$, and the error rates start to increase with $\lambda_0\!>\!1$. However, ISDA remains effective even when $\lambda_0$ grows to $5$, although it performs slightly worse than the baselines when $\lambda_0$ reaches $10$.
Empirically, we recommend $\lambda_0\!=\!0.5$ as a default value or a starting point for hyper-parameter search.
\begin{figure}[t]
\begin{center}
\subfigure[Supervised learning]{
\label{fig:sense_1}
\includegraphics[width=0.49\columnwidth]{sense_1.pdf}
}
\hspace{-0.15in}
\subfigure[Semi-supervised learning]{
\label{fig:sense_2}
\includegraphics[width=0.49\columnwidth]{sense_2.pdf}}
\vskip -0.1in
\caption{Sensitivity analysis of ISDA. For supervised learning, we report the test errors of Wide-ResNet-28-10 on CIFAR-100 with different values of $\lambda_0$. The Cutout augmentation is adopted. For semi-supervised learning, we present the results of VAT + ISDA on CIFAR-10 with 4,000 labels.}
\label{sensitivity}
\end{center}
\vskip -0.2in
\end{figure}
\begin{table}[t]
\centering
\caption{The ablation study for ISDA.}
\vskip -0.1in
\label{Tab05}
\setlength{\tabcolsep}{0.7mm}{
\renewcommand\arraystretch{1.21}
\begin{tabular}{l|c|c|c}
\hline
\multirow{2}{*}{Setting} & \multirow{2}{*}{CIFAR-10} & \multirow{2}{*}{CIFAR-100} & CIFAR-100\\
&&& + Cutout\\
\hline
Basic & 3.82 $\pm$ 0.15\% & 18.58 $\pm$ 0.10\% & 18.05 $\pm$ 0.25\%\\
\hline
Identity matrix & 3.63 $\pm$ 0.12\% & 18.53 $\pm$ 0.02\% & 17.83 $\pm$ 0.36\% \\
Diagonal matrix & 3.70 $\pm$ 0.15\% & 18.23 $\pm$ 0.02\% & 17.54 $\pm$ 0.20\% \\
Single covariance matrix & 3.67 $\pm$ 0.07\% & 18.29 $\pm$ 0.13\% & 18.12 $\pm$ 0.20\% \\
Constant $\lambda_0$ & 3.69 $\pm$ 0.08\% & 18.33 $\pm$ 0.16\% & 17.34 $\pm$ 0.15\% \\
\hline
ISDA & \textbf{3.58 $\pm$ 0.15\%} & \textbf{17.98 $\pm$ 0.15\%} & \textbf{16.95 $\pm$ 0.11\%} \\
\hline
\end{tabular}}
\vskip -0.1in
\end{table}
\subsubsection{Ablation Study}
To get a better understanding of the effectiveness of different components in ISDA, we conduct a series of ablation studies. Specifically, several variants are considered:
(1) \textit{Identity matrix} means replacing the covariance matrix $\Sigma_j$ by the identity matrix.
(2) \textit{Diagonal matrix} means using only the diagonal elements of the covariance matrix $\Sigma_j$.
(3) \textit{Single covariance matrix} means using a global covariance matrix computed from the features of all classes.
(4) \textit{Constant $\lambda_0$} means using a constant $\lambda_0$ instead of a function of the training iterations.
Table \ref{Tab05} presents the ablation results. Adopting the identity matrix increases the test error by 0.05\%, 0.55\% and 0.88\% on CIFAR-10, CIFAR-100 and CIFAR-100+Cutout, respectively. Using a single covariance matrix also degrades the generalization performance considerably. The likely reason is that both variants fail to find proper directions in the deep feature space to perform meaningful semantic transformations. Adopting a diagonal matrix also hurts the performance, as it ignores the correlations between features.
\section{Introduction}
\IEEEPARstart{D}{ata} augmentation is an effective technique to alleviate the overfitting problem in training deep networks \cite{krizhevsky2009learning,krizhevsky2012imagenet,2014arXiv1409.1556S,He_2016_CVPR,2016arXiv160806993H}.
In the context of image recognition, this usually corresponds to applying content preserving transformations, e.g., cropping, horizontal mirroring, rotation and color jittering, on the input samples.
Although effective, these augmentation techniques are not capable of performing semantic transformations, such as changing the background of an object or the texture of a foreground object. Recent work has shown that data augmentation can be more powerful if these (class identity preserving) semantic transformations are allowed \cite{NIPS2017_6916, bowles2018gan, antoniou2017data}. For example, by training a generative adversarial network (GAN) for each class in the training set, one could sample an infinite number of samples from the generator. Unfortunately, this procedure is computationally intensive, because training generative models and performing inference with them to obtain augmented samples are both nontrivial tasks. Moreover, due to the extra augmented data, the training procedure is also likely to be prolonged.
\begin{figure}[t]
\vskip -0.05in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{Illustration_of_SDA.pdf}}
\vspace{-3ex}
\caption{The comparison of traditional and semantic data augmentation. Conventionally, data augmentation usually corresponds to naive image transformations (like flipping, rotating, translating, etc.) in the pixel space. Performing class identity preserving semantic transformations (like changing the color of a car, changing the background of an object, etc.) is another effective approach to augment the training data, which is complementary to traditional techniques.}
\label{illustration_SDA}
\end{center}
\vskip -0.32in
\end{figure}
\begin{figure*}[t]
\begin{center}
\centerline{\includegraphics[width=2\columnwidth]{Overview.pdf}}
\vskip -0.15in
\caption{An overview of ISDA. Inspired by the observation that certain directions in the feature space correspond to meaningful semantic transformations, we augment the training data semantically by translating their features along these semantic directions, without involving auxiliary deep networks. The directions are obtained by sampling random vectors from a zero-mean normal distribution with dynamically estimated class-conditional covariance matrices.
In addition, instead of performing augmentation explicitly, ISDA boils down to minimizing a closed-form upper-bound of the expected cross-entropy loss on the augmented training set, which makes our method highly efficient.
}
\label{overview}
\end{center}
\vskip -0.2in
\end{figure*}
In this paper, we propose an implicit semantic data augmentation (ISDA) algorithm for training deep networks. ISDA is highly efficient, as it does not require training/inferring auxiliary networks or explicitly generating extra training samples. Our approach is motivated by the intriguing observation made in recent work that the features deep in a network are usually linearized \cite{Upchurch2017DeepFI, bengio2013better}. Specifically, there exist many semantic directions in the deep feature space, such that translating a data sample in the feature space along one of these directions results in a feature representation corresponding to another sample with the same class identity but different semantics. For example, a certain direction corresponds to the semantic translation ``make-bespectacled''. When the feature of a person who does not wear glasses is translated along this direction, the new feature may correspond to the same person but with glasses (the new image can be explicitly reconstructed using proper algorithms, as shown in \cite{Upchurch2017DeepFI}). Therefore, by searching for many such semantic directions, we can effectively augment the training set in a way that is complementary to traditional data augmentation techniques.
However, explicitly finding semantic directions is not a trivial task, and usually requires extensive human annotations \cite{Upchurch2017DeepFI}. In contrast, sampling directions randomly is efficient but may result in meaningless transformations. For example, it makes no sense to apply the ``make-bespectacled'' transformation to the ``car'' class. In this paper, we adopt a simple method that achieves a good balance between effectiveness and efficiency. Specifically, we perform an online estimate of the covariance matrix of the features for \emph{each} class, which captures the intra-class variations. Then we sample directions from a zero-mean multi-variate normal distribution with the estimated covariance, and apply them to the features of training samples in that class to augment the dataset. In this way, the chance of generating meaningless semantic transformations can be significantly reduced.
To further improve the efficiency, we derive a closed-form upper bound of the \emph{expected} cross-entropy (CE) loss with the proposed data augmentation scheme. Therefore, instead of performing the augmentation procedure explicitly, we can directly minimize the upper bound, which is, in fact, a novel robust surrogate loss function. As there is no need to generate explicit data samples, we call our algorithm \emph{implicit semantic data augmentation (ISDA)}. Compared to existing semantic data augmentation algorithms, the proposed ISDA can be conveniently implemented on top of most deep models without introducing auxiliary models or noticeable extra computational cost.
In addition to supervised learning tasks, we further apply the proposed ISDA algorithm to more realistic semi-supervised learning scenarios, where only a small subset of all available training data is associated with labels \cite{rasmus2015semi, kingma2014semi,miyato2018virtual, tarvainen2017mean, laine2016temporal}. For samples with labels, we simply minimize the aforementioned upper bound as the surrogate loss. For unlabeled samples, the surrogate loss of ISDA cannot be obtained directly, and we therefore propose to enforce their semantic consistency. To be specific, since ISDA performs class identity preserving semantic transformations, which should not affect the model predictions over categories, we augment the deep features of unlabeled data and minimize the KL-divergence between the predictions of the augmented features and the original features. Similarly, an upper bound of the expected KL-divergence is derived as the optimization objective. ISDA can be implemented together with state-of-the-art deep semi-supervised learning algorithms and significantly improves their performance.
Despite its simplicity, the proposed ISDA algorithm is surprisingly effective. Extensive empirical evaluations are conducted, including supervised/semi-supervised image classification on CIFAR, SVHN and ImageNet and semantic segmentation on Cityscapes. Results show that ISDA consistently improves the generalization performance of popular deep networks and enables the models to learn better representations.
Parts of the results in this paper were published originally in its conference version \cite{wang2019implicit}. However, this paper extends our earlier work in several important aspects:
\begin{itemize}
\item We extend the proposed ISDA algorithm to deep semi-supervised learning, and empirically validate it on widely used image classification benchmarks (Section \ref{semi_ISDA} \& \ref{semi_ISDA_result}).
\item We present more results on ImageNet (Table \ref{ImageNet Results}) with different deep networks (i.e. ResNets, ResNeXts and DenseNets).
\item We further apply our algorithm to the semantic segmentation task on Cityscapes (Table \ref{Segmentation}), and report positive results.
\item An analysis of the computational complexity is given (Section \ref{Complexity}), showing that ISDA introduces negligible computational overhead theoretically. We also report the additional time consumption of ISDA in practice (Section \ref{Computational_Cost_ISDA}).
\item Additional analytical results including a visualization on ImageNet, a t-SNE visualization of deep features, a sensitivity test and an analysis of the tightness of the upper bound are presented (Section \ref{Analytical_results}).
\end{itemize}
\section{Semantic Transformations in Deep Feature Space}
\label{Semantic Transformations in Deep Feature Space}
Deep networks have been known to excel at extracting high-level representations in the deep feature space \cite{He_2016_CVPR, 2016arXiv160806993H, Upchurch2017DeepFI, ren2015faster}, where the semantic relationships between samples can be captured by the spatial positions of their deep features \cite{bengio2013better}. It has been shown in previous work that translating deep features along certain directions corresponds to meaningful semantic transformations when the features are mapped back to the input space \cite{Upchurch2017DeepFI,Li2016ConvolutionalNF, bengio2013better}. In fact, this observation can be leveraged to edit the semantics of images without training generative architectures. An example is shown in Figure \ref{linearizing}. Consider feeding an image of a blue car into a deep network and obtaining its deep feature. If we translate the deep feature along the directions corresponding to `change-color' or `change-background', we will obtain the deep features corresponding to images of the same car but with red paint or a different background.
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{linearizing.pdf}}
\vspace{-1ex}
\caption{
An illustration of the insight from deep feature interpolation \cite{Upchurch2017DeepFI} and other existing works \cite{Li2016ConvolutionalNF, bengio2013better}, which inspires our method. Transformations like `changing the color of the car' or `changing the background of the image' can be realized by linearly translating the deep features towards the semantic directions corresponding to these transformations.}
\label{linearizing}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure*}[t]
\vskip 0.1in
\begin{center}
\centerline{\includegraphics[width=2\columnwidth]{Semantic_directions.pdf}}
\vskip -0.15in
\caption{Three different ways to obtain semantic directions for augmentation in the deep feature space. Human annotation is the most precise way. But it requires collecting annotated images for each transformation of each class in advance, which is expensive and time-consuming. In addition, it will inevitably omit potential augmentation transformations. In contrast, finding semantic directions by random sampling is highly efficient, but yields a large number of meaningless transformations. To achieve a nice trade-off between effectiveness and efficiency, we propose to estimate a covariance matrix for the deep features of each class, and sample semantic directions from a zero-mean normal distribution with the estimated class-conditional covariance matrix. The covariance matrix captures the intra-class feature distribution of the training data, and therefore contains rich information of potential semantic transformations.
}
\label{Semantic_directions}
\end{center}
\vskip -0.225in
\end{figure*}
Based on this intriguing property, we propose to directly augment the semantics of training data by translating their corresponding deep features along many meaningful semantic directions.
Our method is highly efficient compared with traditional approaches to semantic augmentation. Conventionally, to achieve semantic changes, one needs to train, deploy and infer deep generators such as cycle-GAN \cite{zhu2017unpaired} or W-GAN \cite{arjovsky2017wasserstein}. This procedure is both computationally expensive and time-consuming. In contrast, translating deep features introduces only the negligible computational cost of a linear translation.
One may argue that although semantic transformations can be efficiently realized via deep features, displaying the results in the pixel space is difficult \cite{Upchurch2017DeepFI}. However, our goal is not to edit the semantic contents and show the results, but to train deep networks with these semantically altered images for data augmentation purposes.
Since the augmented features can be directly used for training, it is not necessary to explicitly show the semantic transformations we perform.
In the following, we will show that our method integrates the augmentation procedure into the training process of deep networks.
\section{Implicit Semantic Data Augmentation (ISDA)}
As mentioned above, certain directions in the deep feature space correspond to meaningful semantic transformations like `make-bespectacled' or `change-visual-angle'. Leveraging this observation, we propose an implicit semantic data augmentation (ISDA) approach to augment the training set semantically via deep features.
Our method has two important components, i.e., online estimation of class-conditional covariance matrices and optimization with a robust loss function. The first component aims to find a distribution from which we can sample meaningful semantic transformation directions for data augmentation, while the second saves us from explicitly generating a large amount of extra training data, leading to remarkable efficiency compared to existing data augmentation techniques.
\subsection{Semantic Direction Sampling}
\label{Semantic Directions Sampling}
A challenge faced by our method is how to obtain suitable semantic directions for augmentation. The directions need to correspond to semantic transformations that are meaningful for the main object in the image, while not changing its class identity. For example, transformations like wearing glasses or dressing up are suitable for augmenting images of persons, whereas others like flying or sailing are meaningless. In addition, it is obvious that persons in the images should not be transformed into horses or other objects that do not belong to their original class.
Previous work \cite{Upchurch2017DeepFI} proposes to find semantic directions via human annotation, as shown in Figure \ref{Semantic_directions} (a). Take changing the color of a car from blue to red as an example. First, they collect two sets of images, of blue cars and of red cars, respectively, and feed them into deep networks to obtain their deep features. Then they take the vector from the average feature of blue cars to the average feature of red cars, which corresponds to the transformation `changing the color of the car from blue to red'. Finally, for a new image to transform, they translate its deep feature along this vector and map the feature back to the pixel space.
It has been shown that their method is able to perform the specific transformation precisely \cite{Upchurch2017DeepFI}.
However, human annotation is not a feasible approach in the context of semantic data augmentation. For one thing, one needs to collect sufficient annotated images for each possible transformation of each class, which is inefficient. For another, it is difficult to pre-define all possible semantic transformations for each class, and any omission will lead to inferior performance.
In terms of efficiency, a possible solution is to obtain semantic directions by random sampling. However, since the deep feature space is highly sparse (e.g., ResNets \cite{He_2016_CVPR} generate 64-dimensional features on CIFAR; even if each dimension took only two possible values, there would be $2^{64}$ possible features), sampling completely at random yields many meaningless semantic directions. As shown in Figure \ref{Semantic_directions} (b), transformations like `getting older' or `flying' may be performed on a car.
To achieve a good trade-off between effectiveness and efficiency, we propose to approximate the procedure of human annotation by sampling random vectors from a zero-mean normal distribution with a covariance proportional to the intra-class covariance matrix of the samples to be augmented.
The covariance matrix captures the variance of samples in that class and is thus likely to contain rich semantic information. Intuitively, features of the \emph{person} class may vary along the `wearing glasses' direction, as the training set contains images of persons both with and without glasses. In contrast, the variance along the `having propeller' direction will be nearly zero, since no person has a propeller. Similarly, features of the \emph{plane} class may vary along the `having propeller' direction, but will have nearly zero variance along the `wearing glasses' direction.
We hope that directions corresponding to meaningful transformations for each class are well represented by the principal components of the covariance matrix of that class.
In addition to its efficiency, the proposed approach can actually leverage more potential semantic transformations than human annotation, as the obtained semantic directions are continuously distributed in the deep feature space.
Consider training a deep network $G$ with weights $\bm{\Theta}$ on a training set
$\mathcal{D} = \{(\bm{x}_{i}, y_{i})\}$, where $y_{i} \in \{ 1, \ldots, C \}$ is the label of the $i^{\textnormal{th}}$ sample $\bm{x}_{i}$ over $C$ classes. Let the $A$-dimensional vector $\bm{a}_{i} = [a_{i1}, \ldots, a_{iA}]^{\textnormal{T}} = G(\bm{x}_{i}, \bm{\Theta})$ denote the deep feature of $\bm{x}_{i}$ learned by $G$, and $a_{ij}$ indicate the $j^{\textnormal{th}}$ element of $\bm{a}_{i}$.
To obtain semantic directions to augment $\bm{a}_{i}$, we establish a zero-mean multi-variate normal distribution $\mathcal{N}(0, \Sigma_{y_i})$, where $\Sigma_{y_i}$ is the class-conditional covariance matrix estimated from the features of all the samples in class $y_i$. In implementation, the covariance matrix is computed in an online fashion by aggregating statistics from all mini-batches.
Formally, the online estimation algorithm for the covariance matrices is given by:
\begin{equation}
\label{ave}
\bm{\mu}_j^{(t)} = \frac{n_j^{(t-1)}\bm{\mu}_j^{(t-1)} + m_j^{(t)} {\bm{\mu}'}_j^{(t)}}
{n_j^{(t-1)} +m_j^{(t)}},
\end{equation}
\vskip -0.1in
\begin{equation}
\label{cv}
\begin{split}
\Sigma_j^{(t)}
= &\frac{n_j^{(t-1)}\Sigma_j^{(t-1)}\!+\!m_j^{(t)} {\Sigma'}_j^{(t)}}
{n_j^{(t-1)} +m_j^{(t)}} \\
& + \frac{n_j^{(t-1)}m_j^{(t)} (\bm{\mu}_j^{(t-1)}\!-\!{\bm{\mu}'}_j^{(t)})
(\bm{\mu}_j^{(t-1)}\!-\!{\bm{\mu}'}_j^{(t)})^{\textnormal{T}}}
{(n_j^{(t-1)} +m_j^{(t)})^2},
\end{split}
\end{equation}
\begin{equation}
\label{sum}
n_j^{(t)} = n_j^{(t-1)} + m_j^{(t)},
\end{equation}
where $\bm{\mu}_j^{(t)}$ and $\Sigma_j^{(t)}$ are the estimates of the mean and the covariance matrix of the features of the $j^{\textnormal{th}}$ class at the $t^{\textnormal{th}}$ step. ${\bm{\mu}'}_j^{(t)}$ and ${\Sigma'}_j^{(t)}$ are the mean and the covariance matrix of the features of the $j^{\textnormal{th}}$ class in the $t^{\textnormal{th}}$ mini-batch. $n_j^{(t)}$ denotes the total number of training samples belonging to the $j^{\textnormal{th}}$ class in all $t$ mini-batches, and $m_j^{(t)}$ denotes the number of training samples belonging to the $j^{\textnormal{th}}$ class in the $t^{\textnormal{th}}$ mini-batch only.
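For concreteness, the following is a minimal PyTorch sketch of the online update in Eqs. (\ref{ave})--(\ref{sum}). The class and variable names are our own, and the released code may organize the computation differently (e.g., vectorizing over classes).
\begin{verbatim}
import torch

class OnlineCovariance:
    # Running per-class estimates of feature means and covariances,
    # following the online update equations above. Sketch only.
    def __init__(self, num_classes, feat_dim):
        self.mu = torch.zeros(num_classes, feat_dim)
        self.sigma = torch.zeros(num_classes, feat_dim, feat_dim)
        self.n = torch.zeros(num_classes)

    @torch.no_grad()
    def update(self, features, labels):
        for j in labels.unique():
            f = features[labels == j]      # class-j features in batch
            m = f.size(0)                  # m_j^(t)
            mu_b = f.mean(dim=0)           # batch mean mu'_j^(t)
            diff = f - mu_b
            sigma_b = diff.t() @ diff / m  # batch covariance
            n = self.n[j]
            w, wb = n / (n + m), m / (n + m)
            d = (self.mu[j] - mu_b).unsqueeze(1)
            # Blend old and batch covariances, plus the mean-shift
            # correction term of the covariance update.
            self.sigma[j] = (w * self.sigma[j] + wb * sigma_b
                             + (n * m) / (n + m) ** 2 * (d @ d.t()))
            self.mu[j] = w * self.mu[j] + wb * mu_b
            self.n[j] = n + m
\end{verbatim}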
During training, $C$ covariance matrices are computed, one for each class. The augmented feature $\tilde{\bm{a}}_{i}$ is obtained by translating $\bm{a}_{i}$ along a random direction sampled from $\mathcal{N}(0, \lambda\Sigma_{y_i})$. Equivalently, we have:
\begin{equation}
\tilde{\bm{a}}_{i} \sim \mathcal{N}(\bm{a}_{i}, \lambda\Sigma_{y_i}),
\end{equation}
where $\lambda$ is a positive coefficient controlling the strength of semantic data augmentation. As the covariances are computed dynamically during training, their estimates in the first few epochs are not very informative, since the network is not yet well trained. To address this issue, we let $\lambda = (t/T)\!\times\!\lambda_0$ be a function of the current iteration $t$, which reduces the impact of the estimated covariances on our algorithm during the early stage of training.
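If one wished to draw the augmented features explicitly, the sampling step could be sketched as follows (ISDA itself avoids this step, as shown in Section \ref{sec_4_2}). The function name is our own, and the small jitter added before the Cholesky factorization is a numerical safeguard of ours.
\begin{verbatim}
import torch

def augment_features(features, labels, sigma, t, T, lambda_0):
    # Sample a_tilde ~ N(a_i, lam * Sigma_{y_i}) with the schedule
    # lam = (t / T) * lambda_0. Sketch only.
    lam = (t / T) * lambda_0
    D = features.size(1)
    cov = lam * sigma[labels] + 1e-6 * torch.eye(D)   # (B, D, D)
    L = torch.linalg.cholesky(cov)
    noise = torch.randn(features.size(0), D, 1)
    return features + (L @ noise).squeeze(-1)
\end{verbatim}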
\subsection{Upper Bound of the Expected Loss}
\label{sec_4_2}
A naive method to implement semantic data augmentation is to explicitly augment each $\bm{{a}}_{i}$ $M$ times, forming an augmented feature set $\{(\bm{a}_{i}^{1}, y_{i}), \ldots, (\bm{a}_{i}^{M}, y_{i})\}_{i=1}^{N}$ of size $MN$, where $\bm{a}_{i}^{m}$ is the $m^{\textnormal{th}}$ augmented feature of sample $\bm{x}_i$.
Then the networks are trained by minimizing the cross-entropy (CE) loss:
\begin{equation}
\label{eq2}
\mathcal{L}_{M}(\bm{W}, \bm{b}, \bm{\Theta})\!=\!
\frac{1}{N}\!
\sum_{i=1}^{N}\!\frac{1}{M}\!\sum_{m=1}^{M}
-\log (\frac{e^{\bm{w}^{\textnormal{T}}_{y_{i}}\bm{a}_{i}^{m}+ b_{y_{i}}}}
{\sum_{j=1}^{C}e^{\bm{w}^{\textnormal{T}}_{j}\bm{a}_{i}^{m} + b_{j}}}),
\end{equation}
where $\bm{W} = [\bm{w}_{1},\dots, \bm{w}_{C}]^{\textnormal{T}} \in \mathcal{R}^{C \times A}$ and $\bm{b} = [b_{1},\dots, b_{C}]^{\textnormal{T}} \in \mathcal{R}^C$ are the weight matrix and biases corresponding to the final fully connected layer, respectively.
Obviously, this naive implementation is computationally inefficient when $M$ is large, as the feature set is enlarged by a factor of $M$. In the following, we consider the case where $M$ grows to infinity, and find that an easy-to-compute upper bound can be derived for the loss function, leading to a highly efficient implementation.
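For comparison, a direct implementation of Eq. (\ref{eq2}) would look like the following sketch, which reuses the \texttt{augment\_features} function above; its cost grows linearly with $M$.
\begin{verbatim}
import torch.nn.functional as F

def explicit_sda_loss(features, labels, fc_weight, fc_bias,
                      sigma, t, T, lambda_0, M):
    # Naive explicit augmentation: average the CE loss over M
    # sampled copies of each feature. Sketch only.
    loss = 0.0
    for _ in range(M):
        a_tilde = augment_features(features, labels,
                                   sigma, t, T, lambda_0)
        logits = a_tilde @ fc_weight.t() + fc_bias
        loss = loss + F.cross_entropy(logits, labels)
    return loss / M
\end{verbatim}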
In the case $M\rightarrow\infty$, we are in fact considering the expectation of the CE loss under all possible augmented features. Specifically, $\mathcal{L}_{\infty}$ is given by:
\begin{equation}
\label{expectation}
\mathcal{L}_{\infty}(\bm{W}, \bm{b}, \bm{\Theta}|\bm{\Sigma})
\!=\!\frac{1}{N}\!\sum_{i=1}^{N}\mathrm{E}_{\tilde{\bm{a}}_{i}}[
-\log(
\frac{e^{\bm{w}^{\textnormal{T}}_{y_{i}}\tilde{\bm{a}}_{i}+ b_{y_{i}}}}
{\sum_{j=1}^{C}e^{\bm{w}^{\textnormal{T}}_{j}\tilde{\bm{a}}_{i} + b_{j}}}
)].
\end{equation}
If $\mathcal{L}_{\infty}$ can be computed efficiently, then we can directly minimize it without explicitly sampling augmented features. However, Eq. (\ref{expectation}) is difficult to compute in its exact form. Alternatively, we find that it is possible to derive an easy-to-compute upper bound for $\mathcal{L}_{\infty}$, as given by the following proposition.
\begin{proposition}
\label{proposition}
Suppose that $\tilde{\bm{a}}_{i} \sim \mathcal{N}(\bm{a}_{i}, \lambda\Sigma_{y_i})$. Then we have an upper bound of $\mathcal{L}_{\infty}$, given by:
\begin{equation}
\label{proposition_1}
\begin{split}
{\mathcal{L}}_{\infty}
\leq \frac{1}{N} \! \sum_{i=1}^{N} \! - \log(\!
\frac{e^{
\bm{w}^{\textnormal{T}}_{y_{i}}\bm{a}_{i} + b_{y_{i}}
}}{\sum_{j=1}^{C}\!e^{
\bm{w}^{\textnormal{T}}_{j}\bm{a}_{i}+b_{j}+\frac{\lambda}{2}\bm{v}^{\textnormal{T}}_{jy_{i}}\!\Sigma_{y_i}\!\bm{v}_{jy_{i}}
}}\!) \triangleq \overline{\mathcal{L}}_{\infty},
\end{split}
\end{equation}
where $\bm{v}_{jy_{i}} = \bm{w}_{j} - \bm{w}_{y_{i}}$.
\end{proposition}
\begin{proof}
According to the definition of $\mathcal{L}_{\infty}$ in Eq. (\ref{expectation}), we have:
\begin{align}
\label{Js_1}
\mathcal{L}_{\infty}
= & \frac{1}{N} \sum_{i=1}^{N}\mathrm{E}_{\tilde{\bm{a}}_{i}}[
\log(
\sum_{j=1}^{C}
e^{\bm{v}^{\textnormal{T}}_{jy_{i}}\tilde{\bm{a}}_{i} + (b_{j} - b_{y_{i}})}
)] \\
\label{Js_3}
\leq & \frac{1}{N}
\sum_{i=1}^{N}
\log(\sum_{j=1}^{C}\mathrm{E}_{\tilde{\bm{a}}_{i}}
[e^{\bm{v}^{\textnormal{T}}_{jy_{i}}\tilde{\bm{a}}_{i} + (b_{j} - b_{y_{i}})}]) \\
\label{Js_4}
= & \frac{1}{N}
\sum_{i=1}^{N}
\log(\sum_{j=1}^{C}e^{\bm{v}^{\textnormal{T}}_{jy_{i}}\bm{a}_{i}\!+\!(b_{j}\!-\!b_{y_{i}})
\!+\!\frac{\lambda}{2}\bm{v}^{\textnormal{T}}_{jy_{i}}\Sigma_{y_i}\bm{v}_{jy_{i}}}
)
\\
\label{Js_5}
= & \overline{\mathcal{L}}_{\infty}.
\end{align}
In the above, Inequality (\ref{Js_3}) follows from Jensen's inequality $\mathrm{E}[\log \!X] \leq \log\mathrm{E}[X]$, as the logarithmic function $\log(\cdot)$ is concave. Eq. (\ref{Js_4}) is obtained by leveraging the moment-generating function:
\begin{equation*}
\mathrm{E}[e^{tX}] = e^{t\mu + \frac{1}{2} \sigma^2 t^2},\ \ X \sim \mathcal{N}(\mu,\sigma^2),
\end{equation*}
due to the fact that $\bm{v}^{\textnormal{T}}_{jy_{i}}\tilde{\bm{a}}_{i}\!+\!(b_{j}\!-\!b_{y_{i}})$ is a Gaussian random variable, i.e.,
\begin{equation*}
\label{proof_1}
\begin{split}
\bm{v}^{\textnormal{T}}_{jy_{i}}\tilde{\bm{a}}_{i}\!+\!(b_{j}\!-\!b_{y_{i}}) \sim
\mathcal{N}(\bm{v}^{\textnormal{T}}_{jy_{i}}\bm{a}_{i}\!+\!(b_{j}\!-\!b_{y_{i}}),
\lambda\bm{v}^{\textnormal{T}}_{jy_{i}}\Sigma_{y_i}\bm{v}_{jy_{i}}).
\end{split}
\qedhere
\end{equation*}
\end{proof}
Essentially, Proposition \ref{proposition} provides a surrogate loss for our implicit data augmentation algorithm. Instead of minimizing the exact loss function $\mathcal{L}_{\infty}$, we can optimize its upper bound $\overline{\mathcal{L}}_{\infty}$ in a much more efficient way. Therefore, the proposed ISDA boils down to a novel robust loss function, which can be easily adopted by most deep models.
In addition, we can observe that when $\lambda \rightarrow 0$, which means no features are augmented,
$\overline{\mathcal{L}}_{\infty}$ reduces to the standard CE loss.
In summary, the proposed ISDA approach can be simply plugged into deep networks as a robust loss function, and efficiently optimized with the stochastic gradient descent (SGD) algorithm. We present the pseudo code of ISDA in Algorithm \ref{alg}.
\begin{center}
\vskip -0.1in
\begin{algorithm}[H]
\caption{The ISDA algorithm.}
\label{alg}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} $\mathcal{D}$, $\lambda_0$
\STATE Randomly initialize
$\bm{W}, \bm{b}$ and $\bm{\Theta}$
\FOR{$t=0$ {\bfseries to} $T$}
\STATE Sample a mini-batch $\{ \bm{x}_i, {y_i} \}_{i=1}^B$ from $\mathcal{D}$
\STATE Compute $\bm{a}_{i} = G(\bm{x}_{i}, \bm{\Theta})$
\STATE Estimate the covariance matrices $\Sigma_{1}$, $\Sigma_{2}$, $...$, $\Sigma_{C}$
\STATE Compute $\overline{\mathcal{L}}_{\infty}$
according to Eq. (\ref{proposition_1})
\STATE Update $\bm{W}, \bm{b}$, $\bm{\Theta}$ with SGD
\ENDFOR
\STATE {\bfseries Output:} $\bm{W}, \bm{b}$ and $\bm{\Theta}$
\end{algorithmic}
\end{algorithm}
\end{center}
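In code, minimizing $\overline{\mathcal{L}}_{\infty}$ amounts to a standard cross-entropy over logits whose $j^{\textnormal{th}}$ entries are shifted by $\frac{\lambda}{2}\bm{v}^{\textnormal{T}}_{jy_{i}}\Sigma_{y_i}\bm{v}_{jy_{i}}$ (the shift vanishes for $j = y_i$, since $\bm{v}_{y_i y_i} = 0$). Below is a minimal PyTorch sketch of Eq. (\ref{proposition_1}); the tensor names and layout are our assumptions rather than the exact released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def isda_loss(features, labels, fc_weight, fc_bias, sigma, lam):
    # Surrogate loss: CE on logits shifted by (lam/2) v^T Sigma v.
    logits = features @ fc_weight.t() + fc_bias       # (B, C)
    w_y = fc_weight[labels]                           # (B, D)
    v = fc_weight.unsqueeze(0) - w_y.unsqueeze(1)     # (B, C, D)
    cov_y = sigma[labels]                             # (B, D, D)
    quad = torch.einsum('bcd,bde,bce->bc', v, cov_y, v)
    return F.cross_entropy(logits + 0.5 * lam * quad, labels)
\end{verbatim}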
\subsection{Complexity of ISDA}
\label{Complexity}
Here we present a theoretical analysis to show that ISDA does not involve notable additional computational cost. As shown above, ISDA requires extra computation for estimating the covariance matrices and for computing the upper bound of the expected loss. For a single sample, the computational complexity of the former is $O(D^2)$ (using the online update formulas Eqs. (\ref{ave})(\ref{cv})), while that of the latter is $O(C \! \times \! D^2)$, where $D$ is the dimension of the feature space. In comparison, a typical ConvNet with $L$ layers requires $O(D^2 \!\times\! K^2 \!\times\! H \!\times\! W \! \times \! L)$ operations, where $K$ is the filter kernel size, and $H$ and $W$ are the height and width of the feature maps. Consider ResNet-110 on CIFAR (C10 \& C100) as an example, for which we have $K\!=\!3$, $H\!=\!W\!=\!8$ and $L\!=\!109$ (ignoring the last FC-layer); then the extra computational cost of ISDA is up to \emph{three orders of magnitude less} than the total computational cost of the network. In our experiments, both the theoretical and the practical cost of ISDA are provided in Table \ref{ComputationalCost}.
\section{ISDA for Deep Semi-supervised Learning}
\label{semi_ISDA}
Deep networks have achieved remarkable success in supervised learning tasks when fueled by sufficient annotated training data.
However, obtaining abundant annotations is usually costly and time-consuming in practice. In comparison, collecting unlabeled training samples is a relatively easy task. The goal of deep semi-supervised learning is to improve the performance of deep networks by leveraging labeled and unlabeled data simultaneously \cite{kingma2014semi, rasmus2015semi, laine2016temporal, tarvainen2017mean}. In this section, we further introduce how to apply the proposed algorithm to semi-supervised learning tasks.
It is not straightforward to directly implement the aforementioned ISDA algorithm in semi-supervised learning, since unlabeled samples do not have ground truth labels, which are essential for computing the supervised ISDA loss $\overline{\mathcal{L}}_{\infty}$. Inspired by other consistency based semi-supervised learning methods \cite{tarvainen2017mean, laine2016temporal, miyato2018virtual, luo2018smooth}, we propose a semantic consistency training approach to exploit unlabeled data in ISDA. Our major insight here is that the prediction for a given sample should not change significantly when it is augmented, because ISDA performs class identity preserving semantic transformations. Specifically, we first augment the deep features of unlabeled data, and then minimize the KL-divergence between the predictions of the augmented samples and the corresponding original samples. Interestingly, we find it feasible to derive a closed-form upper bound of the expected KL-divergence as a surrogate loss, which makes our semi-supervised ISDA algorithm highly efficient, similar to the supervised case. ISDA can be incorporated into state-of-the-art deep semi-supervised learning algorithms to further improve their performance.
Consider training a deep network with weights $\bm{\Theta}$ on a labeled training set $\mathcal{D}^{\text{L}} = \{(\bm{x}_{i}^{\text{L}}, y_{i}^{\text{L}})\}$ and an unlabeled training set $\mathcal{D}^{\text{U}} = \{\bm{x}_{i}^{\text{U}}\}$. For labeled samples, we simply minimize the upper bound in Proposition \ref{proposition}. For unlabeled samples, given an input $\bm{x}_{i}^{\text{U}}$, we first obtain its deep feature $\bm{a}_{i}^{\text{U}}$ and the corresponding prediction $\bm{p}_i^{\text{U}} \in (0, 1)^C$. Then we obtain the augmented feature:
\begin{equation}
\tilde{\bm{a}}_{i}^{\text{U}} \sim \mathcal{N}(\bm{a}_{i}^{\text{U}}, \lambda\Sigma_{\tilde{y}_i^{\text{U}}}),\ \ \tilde{y}_i^{\text{U}} = \arg\max_{j} p_{ij}^{\text{U}},
\end{equation}
where $p_{ij}^{\text{U}}$ indicates the $j^{\textnormal{th}}$ element of $\bm{p}_i^{\text{U}}$ and $\tilde{y}_i^{\text{U}}$ is the pseudo label of $\bm{x}_{i}^{\text{U}}$. The covariance matrix $\Sigma_{\tilde{y}_i^{\text{U}}}$ is estimated using the deep features of labeled data.
The prediction for $\tilde{\bm{a}}_{i}^{\text{U}}$, denoted $\tilde{\bm{p}}_i^{\text{U}}$, can then be computed. Since ISDA performs transformations that do not affect the class identity of samples, we enforce $\bm{p}_i^{\text{U}}$ and $\tilde{\bm{p}}_i^{\text{U}}$ to be similar by minimizing the KL-divergence between them. As $\tilde{\bm{a}}_{i}^{\text{U}}$ is a random variable, a straightforward approach would be to draw $M$ samples of $\tilde{\bm{a}}_{i}^{\text{U}}$ and minimize the averaged KL-divergence over all $M$ samples. However, as discussed in Section \ref{sec_4_2}, such a naive implementation is inefficient due to the enlarged feature set. To alleviate this problem, we consider the case where $M \to \infty$ and minimize the expected KL-divergence over $\tilde{\bm{a}}_{i}^{\text{U}}$:
\begin{equation}
\min_{\bm{W}, \bm{b}, \bm{\Theta}}\mathrm{E}_{\tilde{\bm{a}}_{i}^{\text{U}}} [\textnormal{D}_{\text{KL}}(\bm{p}_i^{\text{U}}|\!|\tilde{\bm{p}}_i^{\text{U}})].
\end{equation}
Here, we treat $\bm{p}_i^{\text{U}}$ as a constant to stabilize the training procedure following \cite{miyato2018virtual, laine2016temporal, tarvainen2017mean}. Formally, a semantic consistency loss for unlabeled data is given by:
\begin{equation}
\label{ul_loss}
\begin{split}
\mathcal{L}^{\text{U}}_{\infty}(& \bm{W}, \bm{b}, \bm{\Theta}|\bm{\Sigma})\!=\! \\
& \frac{1}{N}\!\sum_{i=1}^{N}\!\sum_{k=1}^{C} p_{ik}^{\text{U}}
\mathrm{E}_{\tilde{\bm{a}}_{i}}[
-\log(
\frac{e^{\bm{w}^{\textnormal{T}}_{k}\tilde{\bm{a}}_{i}+ b_{k}}}
{\sum_{j=1}^{C}e^{\bm{w}^{\textnormal{T}}_{j}\tilde{\bm{a}}_{i} + b_{j}}}
)].
\end{split}
\end{equation}
It is difficult to compute Eq. (\ref{ul_loss}) in the exact form. Therefore, instead of directly using Eq. (\ref{ul_loss}) as the loss function, we show in the following proposition that a closed-form upper bound of $\mathcal{L}^{\text{U}}_{\infty}$ can be obtained as a surrogate loss. Similar to the supervised learning case, our semi-supervised ISDA algorithm amounts to minimizing a novel robust loss, and can be implemented efficiently.
\begin{proposition}
\label{proposition_ul}
Suppose that $\tilde{\bm{a}}_{i}^{\textnormal{U}} \sim \mathcal{N}(\bm{a}_{i}^{\textnormal{U}}, \lambda\Sigma_{\tilde{y}_i^{\textnormal{U}}})$. Then we have an upper bound of $\mathcal{L}^{\textnormal{U}}_{\infty}$, given by:
\begin{equation}
\label{proposition_1_ul}
\begin{split}
&\mathcal{L}^{\textnormal{U}}_{\infty}
\leq \! \frac{1}{N}\!\!\sum_{i=1}^{N}\!\sum_{k=1}^{C}\!\!- p_{ik}^{\textnormal{U}}\log(\!
\frac{e^{
\bm{w}^{\textnormal{T}}_{k}\bm{a}_{i} + b_{k}
}}{\sum_{j=1}^{C} \! e^{
\bm{w}^{\textnormal{T}}_{j}\!\bm{a}_{i}\!+\!b_{j}\!+\!\frac{\lambda}{2}\!\bm{v}^{\textnormal{T}}_{jk} \! (\Sigma_{\tilde{y}_i^{\textnormal{U}}}) \bm{v}_{jk}
}}\!) \triangleq \overline{\mathcal{L}}^{\textnormal{U}}_{\infty},
\end{split}
\end{equation}
where $\bm{v}_{jk} = \bm{w}_{j} - \bm{w}_{k}$.
\end{proposition}
\begin{proof}
According to Eq. (\ref{ul_loss}), we have:
\begin{align}
\label{ul_1}
\mathcal{L}^{\textnormal{U}}_{\infty}
= &
\sum_{k=1}^{C} \left\{\frac{1}{N}\!\sum_{i=1}^{N}
p_{ik}^{\textnormal{U}}
\mathrm{E}_{\tilde{\bm{a}}_{i}}[
-\log(
\frac{e^{\bm{w}^{\textnormal{T}}_{k}\tilde{\bm{a}}_{i}+ b_{k}}}
{\sum_{j=1}^{C}e^{\bm{w}^{\textnormal{T}}_{j}\tilde{\bm{a}}_{i} + b_{j}}})]\right\}
\\
\label{ul_2}
\begin{split}
\leq &
\!\sum_{k=1}^{C}\!
\left[\!\frac{1}{N}\!\! \sum_{i=1}^{N} \!-p_{ik}^{\textnormal{U}} \!\log(\!
\frac{e^{
\bm{w}^{\textnormal{T}}_{k}\bm{a}_{i} + b_{k}
}}{\sum_{j=1}^{C} \! e^{
\bm{w}^{\textnormal{T}}_{j}\!\bm{a}_{i}\!+\!b_{j}\!+\!\frac{\lambda}{2}\!\bm{v}^{\textnormal{T}}_{jk} \! (\Sigma_{\tilde{y}_i^{\textnormal{U}}}) \bm{v}_{jk}
}}\!)\!\right]
\end{split}
\\
\label{ul_3}
= & \overline{\mathcal{L}}^{\textnormal{U}}_{\infty}.
\end{align}
In the above, Inequality (\ref{ul_2}) follows from the conclusion of Proposition \ref{proposition}.
\qedhere
\end{proof}
In sum, the loss function of our method is given by:
\begin{equation}
\label{overall}
\overline{\mathcal{L}}^{\textnormal{L}}_{\infty} + \eta_1 \overline{\mathcal{L}}^{\textnormal{U}}_{\infty} + \eta_2 \mathcal{L}_{\textnormal{regularization}},
\end{equation}
where $\overline{\mathcal{L}}^{\textnormal{L}}_{\infty}$ is the ISDA loss on labeled data. As most deep semi-supervised learning algorithms model unlabeled data through regularization terms, they can be conveniently integrated with ISDA by appending the corresponding regularization term $\mathcal{L}_{\textnormal{regularization}}$ to the loss function. The coefficients $\eta_1$ and $\eta_2$ are pre-defined hyper-parameters that determine the relative importance of the different terms.
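A minimal sketch of the unlabeled term $\overline{\mathcal{L}}^{\textnormal{U}}_{\infty}$ in Eq. (\ref{proposition_1_ul}) is given below: for each class $k$ it forms logits shifted by $\frac{\lambda}{2}\bm{v}^{\textnormal{T}}_{jk}\Sigma_{\tilde{y}_i^{\textnormal{U}}}\bm{v}_{jk}$ and weights the resulting log-probabilities by the detached prediction $\bm{p}_i^{\textnormal{U}}$. Names and tensor layout are our assumptions.
\begin{verbatim}
import torch

def isda_semi_loss(feat_u, fc_weight, fc_bias, sigma, lam):
    # Surrogate semantic-consistency loss for unlabeled samples.
    logits = feat_u @ fc_weight.t() + fc_bias          # (B, C)
    p = logits.softmax(dim=1).detach()                 # p_i^U (constant)
    cov = sigma[p.argmax(dim=1)]                       # Sigma_{y_tilde}
    # diff[j, k] = w_j - w_k, shape (C, C, D)
    diff = fc_weight.unsqueeze(1) - fc_weight.unsqueeze(0)
    # quad[b, k, j] = (w_j - w_k)^T Sigma_b (w_j - w_k)
    quad = torch.einsum('jkd,bde,jke->bkj', diff, cov, diff)
    aug = logits.unsqueeze(1) + 0.5 * lam * quad       # (B, C, C)
    log_pk = torch.diagonal(aug.log_softmax(dim=2),
                            dim1=1, dim2=2)            # (B, C)
    return -(p * log_pk).sum(dim=1).mean()
\end{verbatim}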
\section{Related Work}
In this section, we briefly review existing research on related topics.
\textbf{Data augmentation} is a widely used technique to regularize deep networks.
For example, in image recognition tasks, augmentation methods like random flipping, mirroring and rotation are applied to enforce the geometric invariance of convolutional networks \cite{He_2016_CVPR, 2016arXiv160806993H, 2014arXiv1409.1556S, srivastava2015training}. These classic techniques are fundamental for obtaining deep models that generalize well.
It has also been shown that discarding certain information in training images is an effective approach to augment the training data. Cutout \cite{devries2017improved} and random erasing \cite{zhong2017random} randomly cut out a rectangular region of the input image to perform augmentation. In addition, several studies focus on automatic data augmentation techniques. For example, AutoAugment \cite{2018arXiv180509501C} searches for a better augmentation strategy among a large pool of candidates using reinforcement learning. A key concern with AutoAugment is that the search algorithm suffers from extensive computational and time costs. Similar to our method, learning with marginalized corrupted features \cite{maaten2013learning} can be viewed as an implicit data augmentation technique, but it is limited to simple linear models. Feature transfer learning \cite{yin2019feature} explicitly augments under-represented data in the feature space, but it focuses only on imbalanced face images. Complementarily, recent research shows that semantic data augmentation techniques, which apply class identity preserving transformations (e.g., changing the backgrounds of objects or varying visual angles) to the training data, are effective as well \cite{jaderberg2016reading, bousmalis2017unsupervised, NIPS2017_6916, antoniou2017data}. This is usually achieved by generating extra semantically transformed training samples with specialized deep structures such as DAGAN \cite{antoniou2017data}, domain adaptation networks \cite{bousmalis2017unsupervised}, or other GAN-based generators \cite{jaderberg2016reading, NIPS2017_6916}. Although effective, these approaches are nontrivial to implement and computationally expensive, due to the need to train generative models beforehand and perform inference with them during training.
\textbf{Robust loss function. }
As shown in the paper, ISDA amounts to minimizing a novel robust loss function. Therefore, we give a brief review of related work on this topic.
Recently, several robust loss functions have been proposed to improve the generalization performance of deep networks.
For example, the L$_q$ loss \cite{Zhang2018GeneralizedCE} is a balanced form between the cross-entropy (CE) loss and the mean absolute error (MAE) loss, derived from the negative Box-Cox transformation. It is designed to be robust against corrupted labels in the training set, but is also effective at improving generalization performance. Focal loss \cite{Lin2017FocalLF} attaches high weights to a sparse set of hard examples to prevent the vast number of easy samples from dominating the training of the network. The idea of introducing a large decision margin into the CE loss has been studied in \cite{liu2016large, Liang2017SoftMarginSF, Wang2018EnsembleSS}. These works propose to maximize the cosine distance between deep features of samples from different classes, in order to alleviate the overfitting caused by the distribution gap between the training data and the real distribution. In \cite{Sun2014DeepLF}, the CE loss and the contrastive loss are combined to learn more discriminative features. From a similar perspective, center loss \cite{wen2016discriminative} simultaneously learns a center for the deep features of each class and penalizes the distances between samples and their corresponding class centers in the feature space, enhancing intra-class compactness and inter-class separability.
\textbf{Semantic transformations via deep features. }
Our work is inspired by the fact that high-level representations learned by deep convolutional networks can potentially capture abstractions with semantics \cite{Bengio2009DeepArchitechture, bengio2013better}.
In fact, translating deep features along certain directions has been shown to correspond to performing meaningful semantic transformations on the input images. For example, deep feature interpolation \cite{Upchurch2017DeepFI} leverages linear interpolations of deep features from a pre-trained neural network to edit the semantics of images. Variational Autoencoder (VAE) and Generative Adversarial Network (GAN) based methods \cite{Choi2018StarGANUG, zhu2017unpaired, He2018AttGANFA} establish latent representations corresponding to the abstractions of images, which can be manipulated to perform semantic transformations. In general, these methods reveal that there exist semantically meaningful directions in the deep feature space, which can be leveraged to perform semantic data augmentation efficiently.
\textbf{Uncertainty modeling. }
Similar to our approach, some previous works on deep learning with uncertainty \cite{kendall2017uncertainties, gal2015bayesian, gal2016dropout} also assume a Gaussian distribution for the deep feature or the prediction of each sample. For example, in the context of face recognition and person re-identification, probabilistic representations are leveraged to address the issues of ambiguous faces \cite{shi2019probabilistic} and data outliers/label noise \cite{yu2019robust}. In multi-task learning, homoscedastic task uncertainty is used to learn the weights of different tasks \cite{kendall2018multi}. This technique has also been exploited in object detection to model the uncertainty of bounding boxes \cite{he2019bounding}. Given that the proposed ISDA algorithm aims at augmenting training data semantically, our motivation is fundamentally different from these works. In addition, ISDA involves novel techniques such as the estimation of class-conditional covariance matrices and the derivation of the surrogate loss.
\textbf{Deep semi-supervised learning. }
Since ISDA can also be applied to semi-supervised learning tasks, we briefly review recent work in this field as well. For modern deep learning, precise annotations of a sufficiently large training set are usually expensive and time-consuming to acquire. To save annotation cost, an appealing solution is to train models on a small set of labeled data together with a large number of unlabeled samples, an approach known as semi-supervised learning.
The main methods on this topic can be divided into two categories: teacher-based methods and perturbation-based methods. The former establish a `teacher model' to provide supervision for unlabeled data.
For example, temporal ensembling \cite{laine2016temporal} uses the moving-averaged predictions of the model on unannotated samples as pseudo labels. Mean teacher \cite{tarvainen2017mean} performs an exponential moving average on the parameters of the model to obtain a teacher network. On the other hand, perturbation-based methods add small perturbations to the input images and enforce the consistency of the network predictions between the perturbed and original images. VAT \cite{miyato2018virtual} proposes to apply adversarial perturbations. The $\Pi$-model \cite{laine2016temporal} minimizes the mean-square distance between predictions for the same image under different augmentation schemes. As an augmentation technique, the proposed semi-supervised ISDA algorithm is complementary to both types of methods.
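For concreteness, the exponential moving average underlying the mean teacher method can be sketched as follows (our illustration; the decay value 0.999 and the consistency term are indicative only):
\begin{verbatim}
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

# consistency term on an unlabeled batch x_u with two random augmentations:
# loss_cons = F.mse_loss(student(aug1(x_u)).softmax(dim=1),
#                        teacher(aug2(x_u)).softmax(dim=1))
\end{verbatim}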
\section{Introduction}
Einstein's general relativity (GR) is the generally accepted theory of gravitation, successfully applied
on a wide range of spatial and temporal scales. In particular, it is used to model processes which took place in the early Universe.
It is, however, obvious that to describe the very early stages of the evolution of the Universe, quantum effects must already be taken into account.
To do this, it is necessary to have a full theory of quantum gravity, which is absent at present. For this reason, starting from the 1960s,
it has been suggested to describe quantum effects in strong gravitational fields effectively by replacing
the classical Einstein gravitational Lagrangian $\sim R$ with various
modified Lagrangians containing different curvature invariants. In the simplest case this can be some function $f(R)$ of the scalar curvature $R$.
Such modified gravity theories (MGTs) have been widely applied to
model various cosmological aspects of the early and present Universe
(for a general review on the subject, see, e.g., Refs.~\cite{Nojiri:2010wj,Nojiri:2017ncd}).
On the other hand, in considering processes and objects on relatively small scales, comparable to the sizes of galaxies and even of stars,
the effects of the modification of gravity can also play a significant role.
For example, within $f(R)$ gravity, models of
relativistic stars~\cite{rel_star_f_R_1,rel_star_f_R_2},
wormholes~\cite{wh_f_R_1,wh_f_R_2}, and neutron stars~\cite{Cooney:2009rr,Arapoglu:2010rz,Orellana:2013gn,Alavirad:2013paa,Astashenok:2013vza,Ganguly:2013taa} have been constructed.
It was shown that the modification of gravity can affect, in particular, a number of important physical characteristics of neutron stars which may be verified observationally.
One of the problems of interest is determining the mass-radius (or the mass-central density of matter) relations,
which have been obtained, for example, in Refs.~\cite{Astashenok:2014pua,Astashenok:2014dja,Capozziello:2015yza,Bakirova:2016ffk,Astashenok:2017dpo,Folomeev:2018ioy,Feola:2019zqg,Astashenok:2020isy}.
It was shown there that, as in GR, in MGTs, for some central density of neutron matter, the
mass-central density curves possess maxima whose location depends on the physical properties of the specific matter.
In GR, the presence of such a maximum indicates that there is a transition from stable systems (located to the left of the maximum) to unstable configurations
(located to the right of the maximum).
This fact is established by studying the behavior of matter and metric perturbations using the variational approach~\cite{Chandrasekhar:1964zz}.
In MGTs, different types of perturbations have been repeatedly studied as well (see, e.g., Refs.~\cite{Blazquez-Salcedo:2018pxo,Blazquez-Salcedo:2020ibb} and references therein).
In considering these problems, a transition from $f(R)$ gravity to a scalar-tensor theory is usually performed.
Correspondingly, studies of the perturbations are carried out not
in the Jordan frame but in the Einstein frame. However, the question of the physical equivalence between these two frames is still under discussion,
and it cannot be regarded as completely solved~\cite{Capozziello:2010sc,Kamenshchik:2014waa,Kamenshchik:2016gcy,Bahamonde:2016wmz,Ruf:2017xon}.
Here, the following potential difficulties may be noted:
(i)~ In performing the transition from $f(R)$ gravity to a scalar-tensor theory, there may, in general, occur some undesirable consequences (singularities, fixed points, etc.); as a
result, the equivalence between the frames can be violated.
(ii)~Objects that are stars in one frame may represent some other configurations in another.
(iii)~In constructing perturbation theory, the equivalence between the frames can be lost in view of the approximate nature of such a theory.
In this connection, it may be of some interest to study the stability directly in the Jordan frame, and this is the goal of the present paper.
For the sake of simplicity, we will work within $R^2$ gravity, where linear radial perturbations of a strongly gravitating system supported by a polytropic fluid will be investigated.
For this purpose, in Sec.~\ref{gen_eqs}, we first derive the general equations for $f(R)$ gravity.
Using these equations, in Sec.~\ref{stat_conf}, we numerically find static solutions describing equilibrium configurations, on the background of which the behavior of matter and spacetime
perturbations is studied in Sec.~\ref{stab_anal}.
\section{General equations}
\label{gen_eqs}
We consider modified gravity
with the action [the metric signature is $(+,-,-,-)$]
\begin{equation}
\label{action_mod}
S=-\frac{c^3}{16\pi G}\int d^4 x \sqrt{-g} f(R) +S_m,
\end{equation}
where $G$ is the Newtonian gravitational constant,
$f(R)$ is an arbitrary nonlinear function of $R$, and $S_m$ denotes the action
of matter.
For our purposes, we represent the function $f(R)$ in the form
\begin{equation}
\label{f_mod}
f(R)=R+\alpha h(R),
\end{equation}
where $h(R)$ is a new arbitrary function of $R$
and $\alpha$ is an arbitrary constant. When $\alpha=0$, one recovers Einstein's general relativity.
The corresponding field equations can be derived by
varying action \eqref{action_mod} with respect to the metric, yielding
\begin{equation}
\label{mod_Ein_eqs_gen}
\left(1+\alpha h_R\right) G_i^k-\frac{1}{2}\alpha\left(h-R\,h_R \right)\delta_i^k+
\alpha \left(\delta_i^k g^{m n}-\delta_i^m g^{k n}\right)\left(h_R\right)_{;m;n}=\frac{8\pi G}{c^4}T_i^k.
\end{equation}
Here $G_i^k\equiv R_i^k-\frac{1}{2}\delta_i^k R$ is the Einstein tensor, $h_R\equiv dh/dR$, and
the semicolon denotes the covariant derivative.
To obtain the modified Einstein equations and the equation for the fluid, we choose
the spherically symmetric metric in the form
\begin{equation}
\label{metric_schw}
ds^2=e^{\nu}(dx^0)^2-e^{\lambda}dr^2-r^2 \left(d\Theta^2+\sin^2\Theta\, d\phi^2\right),
\end{equation}
where $\nu$ and $\lambda$ are in general functions of $r, x^0$,
and $x^0=c\, t$ is the time coordinate.
As a matter source in the field equations, we take an isotropic
fluid with
the energy-momentum tensor
\begin{equation}
\label{fluid_emt_anis}
T_{i}^k=\left(\varepsilon +p\right)u^k u_i-\delta_i^k p,
\end{equation}
where $\varepsilon$ is the fluid energy density and $p$ is the pressure.
The trace of Eq.~\eqref{mod_Ein_eqs_gen} yields the equation for the scalar curvature
\begin{equation}
\label{scal_cur_eq_gen}
-R+\alpha\left[h_R R-2 h+3\left(h_R\right)^{;i}_{;i}\right]=\frac{8\pi G}{c^4}T,
\end{equation}
where $T$ is the trace of the energy-momentum tensor \eqref{fluid_emt_anis}.
Using Eqs.~\eqref{metric_schw} and \eqref{fluid_emt_anis}, the $(^t_t)$, $(^r_r)$, $(^\theta_\theta)$, and $(^r_t)$ components of Eq.~\eqref{mod_Ein_eqs_gen}
can be written as
\begin{eqnarray}
\label{mod_00_gen}
&&
\left(1+\alpha h_R\right)
\left[-e^{-\lambda}\left(\frac{1}{r^2}-\frac{\lambda^\prime}{r}\right)+\frac{1}{r^2}\right]
-\frac{\alpha}{2}\left\{h-h_R R
+e^{-\lambda}\left[2 h_R^{\prime\prime}-\left(\lambda^\prime-\frac{4}{r}\right)h_R^{\prime}\right]-e^{-\nu}\dot{h}_R\dot\lambda
\right\}
=\frac{8\pi G}{c^4} \varepsilon,
\\
\label{mod_11_gen}
&&\left(1+\alpha h_R\right)
\left[-e^{-\lambda}\left(\frac{1}{r^2}+\frac{\nu^\prime}{r}\right)+\frac{1}{r^2}\right]
-\frac{\alpha}{2}\left[h-h_R R-e^{-\nu}\left(2\ddot{h}_R-\dot{h}_R\dot\nu\right)
+e^{-\lambda}\left(\nu^\prime+\frac{4}{r}\right)h_R^{\prime}
\right]
=-\frac{8\pi G}{c^4} p,
\\
\label{mod_22_gen}
&&\left(1+\alpha h_R\right)
\left\{\frac{e^{-\lambda}}{2}\left[\frac{1}{r}\left(\lambda^\prime-\nu^\prime\right)+\frac{1}{2}\lambda^\prime\nu^\prime-\frac{1}{2}\nu^{\prime 2}-\nu^{\prime\prime}
\right]+\frac{e^{-\nu}}{2}\left(\ddot\lambda+\frac{1}{2}\dot\lambda^2-\frac{1}{2}\dot\lambda\dot\nu
\right)
\right\}\nonumber\\
&&-\frac{\alpha}{2}\left\{h-h_R R+e^{-\lambda}\left[2 h_R^{\prime\prime}-\left(\lambda^\prime-\nu^\prime-\frac{2}{r}\right)h_R^{\prime}\right]
-e^{-\nu}\left[2\ddot{h}_R+\left(\dot\lambda-\dot \nu\right)\dot{h}_R\right]
\right\}=-\frac{8\pi G}{c^4} p,
\\
\label{mod_10_gen}
&&-\left(1+\alpha h_R\right)\frac{e^{-\lambda}}{r}\dot\lambda-\alpha e^{-\lambda}
\left[\frac{1}{2}\left(\dot\lambda h_R^\prime+\nu^\prime \dot{h}_R\right)-\dot{h}_R^\prime
\right]=\frac{8\pi G}{c^4} \left(\varepsilon+p\right)u_0 u^1,
\end{eqnarray}
where the dot and prime denote differentiation with respect to $x^0$ and $r$, respectively.
In turn, Eq.~\eqref{scal_cur_eq_gen} yields
\begin{equation}
\label{scal_cur_eq_gen1}
-R+\alpha\left\{
-2 h+ h_R R+\frac{3}{2}\left[e^{-\nu}\left\{2\ddot{h}_R+\left(\dot\lambda-\dot\nu\right)\dot{h}_R\right\}-
e^{-\lambda}\left\{2 h_R^{\prime\prime}-\left(\lambda^\prime-\nu^\prime-\frac{4}{r}\right)h_R^\prime
\right\}
\right]
\right\}=\frac{8\pi G}{c^4}\left(\varepsilon-3 p\right).
\end{equation}
Finally, the $i=r$ component of the
law of conservation of energy and momentum, $T^k_{i;k}=0$, gives
\begin{equation}
\label{conserv_osc}
\frac{\partial T^0_1}{\partial x^0}+\frac{\partial T^1_1}{\partial r}+\frac{1}{2}\left(\dot{\nu}+\dot{\lambda}\right)T^0_1+
\frac{1}{2}\left(T_1^1-T_0^0\right)\nu^\prime+\frac{2}{r}\left[T_1^1-\frac{1}{2}\left(T^2_2+T^3_3\right)\right]=0.
\end{equation}
\section{Equilibrium configurations}
\label{stat_conf}
\subsection{Static equations}
The general equations derived in the previous section can be employed to construct static solutions describing equilibrium configurations. For this purpose,
it is sufficient to assume that all functions entering these equations depend only on the radial coordinate $r$.
Also, bearing in mind the necessity of a physical interpretation of the results, it is convenient to introduce a new function $M(r)$, defined as
\begin{equation}
\label{metr_g11}
e^{-\lambda}=1-\frac{2 G M(r)}{c^2 r}.
\end{equation}
Then Eq.~\eqref{mod_00_gen} can be recast in the form
\begin{equation}
\label{mass_eq}
\left[1+\alpha\left(h_R+\frac{1}{2}r h_R^\prime\right)\right]\frac{d M}{d r}=
\frac{4\pi}{c^2} r^2 \varepsilon+\alpha \frac{c^2}{2 G}r^2
\left[\frac{1}{2}\left(h-h_R R\right)+h_R^{\prime}\left(\frac{2}{r}-\frac{3G M}{c^2 r^2}\right)+h_R^{\prime\prime}\left(1-\frac{2 G M}{c^2 r}\right)
\right].
\end{equation}
In GR (when $\alpha=0$),
the function $M(r)$ plays the role of the current mass inside a sphere of radius $r$. Then outside the fluid
(i.e., when $\varepsilon=0$), $M=\text{const.}$ is the total gravitational mass of the object.
A different situation occurs in the MGT (when $\alpha\neq 0$): outside the fluid
the scalar curvature is now nonzero (one can say that the star is surrounded by a gravitational sphere~\cite{Astashenok:2017dpo}).
This sphere gives an additional contribution to the total mass measured by a distant observer.
Depending on the sign of $\alpha$, the metric function $\lambda$ [and correspondingly the scalar curvature $R$ and
the mass function $M(r)$] either decays asymptotically or demonstrates an oscillating behavior. In the latter case $M(r)$ can no longer be regarded as the mass function.
Consistent with this, here we use only such $\alpha$'s that ensure a nonoscillating behavior of $M(r)$; this enables us to interpret $M(r\to \infty)$ as the total
mass.
In turn, the conservation law given by Eq.~\eqref{conserv_osc} yields the equation
\begin{equation}
\label{conserv_gen}
\frac{d p}{d r}=-\frac{1}{2}\left(\varepsilon+p\right)\frac{d \nu}{d r}.
\end{equation}
For a complete description of the configuration under consideration, the above equations
must be supplemented by an equation of state (EoS) for the fluid.
Here, for the sake of simplicity, we consider
a barotropic EoS where the pressure is a function of the mass density $\rho_b$.
For our purposes, we restrict ourselves to a simplified variant in which
a more or less realistic matter
EoS is approximated by the following polytropic EoS:
\begin{equation}
\label{eqs_NS_WH}
p=K \rho_{b}^{1+1/n}, \quad \varepsilon = \rho_b c^2 +n p,
\end{equation}
with the constant $K=k c^2 (n_{b}^{(ch)} m_b)^{1-\gamma}$,
the polytropic index $n=1/(\gamma-1)$,
and $\rho_b=n_{b} m_b$ is the rest-mass density
of the fluid. Here, $n_{b}$ is the baryon number density,
$n_{b}^{(ch)}$ is a characteristic value of $n_{b}$,
$m_b$ is the baryon mass,
and $k$ and $\gamma$ are parameters
whose values depend on the properties of the matter.
Next, introducing the new variable $\theta$,
$$\rho_b=\rho_{b c} \theta^n,$$
where $\rho_{b c}$ is the central density of the fluid,
we may rewrite the pressure and the energy density, given by
Eq.~\eqref{eqs_NS_WH}, in the form
$$p=K\rho_{b c}^{1+1/n} \theta^{n+1}, \quad
\varepsilon = \left( \rho_{b c} c^2 +
n K \rho_{b c}^{1 + {1}/{n} } \theta \right) \theta^n.$$
Making use of these expressions, from Eq.~\eqref{conserv_gen},
we obtain
for the internal region with $\theta \ne 0$,
\begin{equation}
\label{conserv_3}
2\sigma(n+1)\frac{d\theta}{d r}=
-\left[1+\sigma(n+1) \theta\right]\frac{d\nu}{dr},
\end{equation}
where $\sigma=K \rho_{b c}^{1/n}/c^2=p_c/(\rho_{b c} c^2)$ is a relativity parameter,
related to the central pressure $p_c$ of the fluid.
This equation may be integrated to give the metric function $e^{\nu}$
in terms of $\theta$,
$$e^{\nu}=e^{\nu_c}\left[\frac{1+\sigma (n+1)}{1+\sigma (n+1)\theta}\right]^{2},$$
where $e^{\nu_c}$ is the value of $e^{\nu}$ at the center, at which $\theta=1$.
The integration constant $\nu_c$ is fixed
by the requirement of the asymptotical flatness of the spacetime,
i.e., $e^{\nu}=1$ at infinity.
\subsection{Numerical results}
Thus, we have four unknown functions~-- $R, \theta$, $\nu$, and $M$~-- for which there are four equations,
\eqref{mod_22_gen}, \eqref{scal_cur_eq_gen1}, \eqref{mass_eq}, and \eqref{conserv_3}
whose solution will depend on the choice of the particular type of gravity theory, i.e., of the function $h$.
In the present paper, we consider the simplest case of quadratic gravity when $h=R^2$,
which is often discussed in the literature as a
viable alternative cosmological model describing the accelerated expansion of the early and present Universe~\cite{Nojiri:2010wj,Nojiri:2017ncd}.
For such a gravity theory,
the value of the free parameter $\alpha$ appearing in \eqref{f_mod} is constrained from observations as follows:
(i)~in the weak-field limit, it is constrained by binary pulsar data as $|\alpha| \lesssim 5\times 10^{15} \, \text{cm}^2$~\cite{Naf:2010zy};
(ii)~in the strong gravity regime, the constraint is $|\alpha| \lesssim 10^{10} \, \text{cm}^2$~\cite{Arapoglu:2010rz}.
Consistent with this, for the calculations presented below, we
take $\alpha = -10^{10} \, \text{cm}^2$ (notice that we take the opposite sign of $\alpha$
as compared with that used in Ref.~\cite{Astashenok:2017dpo}, since here we employ another metric signature).
Taking the opposite sign of $\alpha$
can result in the appearance of ghost modes and
instabilities in the cosmological context~\cite{Barrow:1983rx};
also, in this case, the scalar curvature $R$ demonstrates an oscillating behavior outside the star,
which appears to be unacceptable if one intends to
construct realistic models of compact configurations (for a detailed discussion, see Ref.~\cite{Astashenok:2017dpo}).
For numerical calculations, it is convenient to rewrite the equations in terms of the dimensionless variables
\begin{equation}
\label{dmls_var}
x=r/L, \quad v(x)=\frac{M(r)}{4\pi \rho_{bc} L^3}, \quad \Sigma=R L^2, \quad \bar{\alpha}=\alpha/L^2,\quad
\text{where}
\quad L=\frac{c}{\sqrt{8\pi G \rho_{bc}}}.
\end{equation}
As a result, we get the static equations
\begin{eqnarray}
\label{mod_Einstein-00_stat}
&&v^\prime=\frac{x^2}{1+\bar\alpha\left(2 \Sigma+x \Sigma^\prime\right)}\left\{
\left(1+\sigma n \theta\right)\theta^n+\frac{\bar\alpha}{2}\left[
- \Sigma^2+2 \frac{\Sigma^\prime}{x}\left(4-3\frac{v}{x}\right)+4 \Sigma^{\prime\prime}\left(1-\frac{v}{x}\right)
\right]
\right\},
\\
\label{mod_Einstein-22_stat}
&&-\left(1-\frac{v}{x}\right)\left(\nu^{\prime\prime}+\frac{\nu^{\prime 2}}{2}\right)
+\frac{v^\prime}{x^2}+\left(v^\prime+\frac{v}{x}-2\right)\frac{\nu^\prime}{2 x}
-\frac{v}{x^3}+2\sigma\theta^{n+1}\nonumber \\
&&+\bar\alpha\Big\{
\Sigma^2+2\Sigma^\prime\left[-\frac{2}{x}+\frac{v}{x^2}+\frac{v^\prime}{x}-\nu^\prime\left(1-\frac{v}{x}\right)
\right]-4\left(1-\frac{v}{x}\right)\Sigma^{\prime\prime}+\frac{v \Sigma}{x}\left[2\nu^{\prime\prime}+\nu^{\prime 2}+\frac{\nu^\prime}{x}-\frac{2}{x^2}
\right] \nonumber\\
&&+\Sigma\left[v^\prime\left(\frac{\nu^\prime}{x}+\frac{2}{x^2}\right)-2\frac{\nu^\prime}{x}-2\nu^{\prime\prime}-\nu^{\prime 2}
\right]
\Big\}=0,
\\
\label{conserv_stat}
&&2\sigma(n+1)\theta^\prime=-\left[1+\sigma\left(n+1\right)\theta\right]\nu^\prime,
\\
\label{curv_stat}
&&\Sigma+\left[1+\sigma(n-3)\theta\right]\theta^n+\bar\alpha\Big\{6\left(1-\frac{v}{x}\right)\Sigma^{\prime\prime}
+3\frac{\Sigma^\prime}{x}\left[-\frac{v}{x}\left(3+x \nu^\prime\right)+4-v^\prime+x \nu^\prime
\right]
\Big\}=0,
\end{eqnarray}
which follow from Eqs.~\eqref{mass_eq}, \eqref{mod_22_gen}, \eqref{conserv_3}, and \eqref{scal_cur_eq_gen1}, respectively.
When $\bar\alpha=0$, one recovers the general-relativity equations.
It is also convenient to recast the mass and radius of the configuration in terms of the parameters $K, n$, and $\sigma$~\cite{Tooper2}.
By eliminating $\rho_{b c}$ from the expressions for $x$ and $v$ in Eq.~\eqref{dmls_var}, we obtain
$$r=r^*\sigma^{-n/2} x, \quad M(r)=M^*\sigma^{-n/2}v(x), \quad \alpha=\alpha^* \sigma^{-n}\bar\alpha,$$
where $r^*=(8\pi G)^{-1/2}K^{n/2}c^{1-n},
M^*=(1/4)(2\pi)^{-1/2}G^{-3/2}K^{n/2}c^{3-n}$, and $\alpha^*=(8\pi G)^{-1}K^{n}c^{2(1-n)}$.
The quantities $r^*$ and $M^*$ define the scales of the radius and mass.
Equations \eqref{mod_Einstein-00_stat}-\eqref{curv_stat} are to be solved subject to the boundary conditions given in the neighborhood of the center by the expansions
\begin{equation}
\label{bound_mod_Ein}
\theta\approx 1+\frac{1}{2}\theta_2 x^2, \quad \nu\approx\nu_c+\frac{1}{2}\nu_2 x^2, \quad v\approx \frac{1}{6} v_3 x^3, \quad
\Sigma\approx\Sigma_c+\frac{1}{2}\Sigma_2 x^2,
\end{equation}
where the expansion coefficients $\theta_2, \nu_2, v_3,$ and $\Sigma_2$
are determined from Eqs.~\eqref{mod_Einstein-00_stat}-\eqref{curv_stat}. The central value of the scalar curvature $\Sigma_c$
is an eigenparameter of the problem, and it
is chosen so that asymptotically $\Sigma(x\to \infty)\to 0$.
The integration of Eqs.~\eqref{mod_Einstein-00_stat}-\eqref{curv_stat}
is performed numerically from the center (i.e., from $x\approx 0$) to the point $x=x_b$,
where the fluid density goes to zero.
We take this point to be a boundary of the star.
In turn, for $x>x_b$ the matter is absent, i.e., $\rho_b=p=0$. In GR,
this implies that the scalar curvature vanishes, $\Sigma=0$.
But in the MGT this is not the case: there is an external
gravitational sphere around the star where $\Sigma\neq 0$.
Consistent with this, the internal solutions should be matched with the external
ones at the edge of the fluid. This is done by equating the
corresponding values of both the scalar curvature and the metric functions.
For negative $\alpha$'s employed here,
the scalar curvature is damped exponentially fast outside the fluid as
$
\Sigma\sim \exp{\left(-x/\sqrt{6|\bar \alpha|}\right)}/ x.
$
This enables us to introduce a well-defined notion of the Arnowitt-Deser-Misner mass through Eq.~\eqref{metr_g11},
unlike the case of positive $\alpha$'s for which $\Sigma$ demonstrates an oscillating behavior~\cite{Astashenok:2017dpo}.
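The search for the eigenparameter $\Sigma_c$ described above is a standard shooting procedure: one integrates from the center and adjusts the free central value until the exponentially growing branch of the solution is suppressed. A minimal Python sketch of this technique, illustrated on a toy eigenvalue problem $u''=(x^2-E)u$ with the same decaying-tail criterion (the actual system \eqref{mod_Einstein-00_stat}-\eqref{curv_stat} is treated in exactly the same way, with $\Sigma_c$ playing the role of $E$), reads:
\begin{verbatim}
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def tail(E):
    # integrate u'' = (x^2 - E) u from the center with u(0)=1, u'(0)=0;
    # the sign of u at large x flips as E crosses an eigenvalue
    rhs = lambda x, y: [y[1], (x**2 - E) * y[0]]
    sol = solve_ivp(rhs, (0.0, 6.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

E_star = brentq(tail, 0.5, 1.5)  # converges to the lowest eigenvalue E = 1
\end{verbatim}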
The results of numerical calculations are shown in Fig.~\ref{fig_M_sigma}, where the dependence of the total mass on
the relativity parameter $\sigma$ (or the central density $\rho_{bc}$) is plotted. It is seen that both in GR and in the MGT the curves
have a maximum.
In GR, such a maximum corresponds
to the transition from stable to unstable systems, and this is confirmed by the linear stability analysis~\cite{Tooper2}.
In the next section, we will study this problem for the case of $R^2$ gravity under consideration.
\begin{figure}[t]
\centering
\includegraphics[height=7cm]{M_sigma.eps}
\caption{The dimensionless total mass versus the relativity parameter $\sigma$. The numbers near the curves denote the values of the square of the lowest eigenfrequency
$\bar \omega^2$ corresponding to the configuration at a given point of the curve.
The segments of the curves corresponding to stable configurations are shown as solid lines, whereas the unstable segments are shown dashed.
}
\label{fig_M_sigma}
\end{figure}
\section{Linear stability analysis}
\label{stab_anal}
Consider now that the equilibrium systems described above are perturbed in such a way that spherical
symmetry is maintained. In obtaining the equations for the perturbations, we will neglect all quantities of
second and higher orders. The components of the four-velocity in the metric~\eqref{metric_schw} are given by \cite{Chandrasekhar:1964zz}
$$
u^0=e^{-\nu_0/2}, \quad u_0=e^{\nu_0/2}, \quad u^1=e^{-\nu_0/2} \mathpzc{v}, \quad u_1=-e^{\lambda_0-\nu_0/2} \mathpzc{v},
$$
with the three-velocity $\mathpzc{v}=d r/d x^0 \ll 1$.
The index 0 in the metric functions indicates the static, zeroth order solution of the gravitational equations.
Now we consider perturbations of the static solutions of the form
\begin{equation}
\label{perturbations}
y=y_0+y_p ~,
\end{equation}
where the index 0 refers to the static solutions, the index $p$ indicates the perturbation,
and $y$ denotes one of the functions $\lambda, \nu, \varepsilon, p$ or the scalar curvature $R$.
We will use the variational approach of Ref.~\cite{Chandrasekhar:1964zz}, in which one introduces a ``Lagrangian displacement'' $\zeta$ defined by
$\mathpzc{v}=\partial \zeta/\partial x^0$.
Then, substituting the expansions \eqref{perturbations} in Eqs.~\eqref{mod_00_gen}-\eqref{conserv_osc} and seeking solutions in a
harmonic form
$$y_p(x^0,r) = \tilde{y}_p(r) e^{i\omega x^0}$$
[for convenience, we hereafter drop the tilde sign on $\tilde{y}_p(r)$],
one can obtain the following set of equations for the perturbations $\theta_p, \Sigma_p$, and $\lambda_p$:
\begin{eqnarray}
\label{eq_theta_pert}
&&\sigma(n+1) s_1 \theta_0^n \theta_p^\prime+
\frac{1}{2}\theta_0^{n}\Big\{\sigma(n+1)x\theta_0^n e^{\lambda_0}\left[1+\sigma(n+1)\theta_0\right]+
s_1\left[\sigma(n+1)^2\nu_0^\prime+\frac{n}{\theta_0}\Big(2\sigma (n+1)\theta_0^\prime+\nu_0^\prime\Big)\right]
\Big\}\theta_p\nonumber\\
&&+\frac{\lambda_p}{2x}e^{-\nu_0}\Big\{
8\bar\alpha^2\bar\omega^2 \Sigma_0^2+2\bar\omega^2\left(1+\bar\alpha x\Sigma_0^\prime\right)^2+
2\bar\alpha \Sigma_0\left[4\bar\omega^2\left(1+\bar\alpha x\Sigma_0^\prime\right)+e^{\nu_0}\theta_0^n\left(1+x\nu_0^\prime\right) s_2
\right]\nonumber \\
&&+e^{\nu_0}\theta_0^n\left[1+x\nu_0^\prime+\bar\alpha x\Sigma_0^\prime\left(4+x \nu_0^\prime\right)
\right]s_2
\Big\}-\frac{\bar\alpha}{2}e^{-\nu_0}\left[8\bar\alpha\bar\omega^2\Sigma_0+4\bar\omega^2\left(1+\bar\alpha x\Sigma_0^\prime\right)+e^{\nu_0}\theta_0^n\left(4+x\nu_0^\prime\right) s_2
\right]\Sigma_p^\prime
\nonumber\\
&&+\frac{\bar\alpha}{2 x}e^{-\nu_0}\left\{
2\bar\omega^2 x s_1 \nu_0^\prime+2 s_2 e^{\nu_0} \theta_0^n \left[
e^{\lambda_0}\left(1-\bar\omega^2 x^2 e^{-\nu_0}+\frac{1}{2}x^2 \Sigma_0\right)-x\nu_0^\prime-1
\right]
\right\}\Sigma_p=0,
\end{eqnarray}
\begin{eqnarray}
\label{eq_R_pert}
&&\bar\alpha s_1 \Sigma_p^{\prime\prime}-\frac{\bar\alpha}{2x}\left\{
x\left(1+\bar\alpha x \Sigma_0^\prime\right)\lambda_0^\prime-x\nu_0^\prime+2\bar\alpha \Sigma_0\left[x\left(\lambda_0^\prime-\nu_0^\prime\right)-4\right]-4
\right\}\Sigma_p^\prime-\frac{\bar\alpha}{2}\Sigma_0^\prime s_1 \lambda_p^\prime \nonumber\\
&&+\frac{e^{-\nu_0}}{6x}\left\{
x e^{\lambda_0}\left(e^{\nu_0}+6\bar\alpha \bar\omega^2\right)+\bar\alpha x e^{\lambda_0}\Sigma_0\left[
2\left(e^{\nu_0}+6\bar\alpha\bar\omega^2\right)+3 \bar\alpha x e^{\nu_0}\Sigma_0^\prime
\right]+\bar\alpha e^{\nu_0}\Sigma_0^\prime\left[
-6\bar\alpha\left(1+x\nu_0^\prime\right)+e^{\lambda_0}\left(x^2+6\bar\alpha\right)
\right]
\right\}\Sigma_p\nonumber\\
&&+\frac{e^{\lambda_0}}{6}\theta_0^{n-1}\left\{
n\left(1+\bar\alpha x \Sigma_0^\prime\right)+2\bar\alpha \left[n+\sigma\left(n^2-2n-3\right)\theta_0
\right]\Sigma_0+\sigma(n+1)\theta_0\left[n-3+\bar\alpha n x\Sigma_0^\prime\right]
\right\}\theta_p\nonumber\\
&&+\frac{\bar\alpha}{2x}\left\{
\bar\alpha x^2\Sigma_0^{\prime 2}\lambda_0^\prime-2x\left(1+2\bar\alpha \Sigma_0\right)\Sigma_0^{\prime\prime}+
\Sigma_0^\prime\left[\left(x\lambda_0^\prime-3\right)\left(1+2\bar\alpha\Sigma_0\right)-2\bar\alpha x^2\Sigma_0^{\prime\prime}
\right]
\right\}\lambda_p=0,
\end{eqnarray}
\begin{eqnarray}
\label{eq_lambda_pert}
&&\bar\alpha \Sigma_p^{\prime\prime}+\frac{\bar\alpha}{x}\left(2-\frac{1}{2}x\lambda_0^\prime\right)\Sigma_p^\prime-
\bar\alpha\left(\frac{1}{2}e^{\lambda_0}\Sigma_0+\frac{x\lambda_0^\prime+e^{\lambda_0}-1}{x^2}\right)\Sigma_p-\frac{s_1}{2x}\lambda_p^\prime \nonumber\\
&&+\frac{1}{2x^2}\left[\left(x\lambda_0^\prime-1\right)\left(1+2\bar\alpha\Sigma_0\right)+\bar\alpha x \Sigma_0^\prime\left(x\lambda_0^\prime-4\right)-2\bar\alpha x^2\Sigma_0^{\prime\prime}
\right]\lambda_p+\frac{n}{2}e^{\lambda_0}\theta_0^{n-1}s_2\theta_p=0,
\end{eqnarray}
where $s_1=1+\bar\alpha\left(2 \Sigma_0+x \Sigma_0^\prime\right)$, $s_2=1+\sigma(n+1)\theta_0$, and the dimensionless frequency is $\bar\omega=L \omega$.
Here, Eq.~\eqref{eq_theta_pert} follows from the conservation law~\eqref{conserv_osc}, Eq.~\eqref{eq_R_pert} follows from the equation for the scalar curvature~\eqref{scal_cur_eq_gen1},
and Eq.~\eqref{eq_lambda_pert} follows from the $(^t_t)$ component~\eqref{mod_00_gen}.
In deriving Eq.~\eqref{eq_theta_pert}, we have used the expression (here $\psi=\zeta/L$ is the dimensionless Lagrangian displacement)
$$
\lambda_p=-\frac{x}{1+\bar \alpha \left(2 \Sigma_0+x \Sigma_0^\prime\right)}\left\{
e^{\lambda_0}\theta_0^n\left[1+\sigma (n+1)\theta_0\right]\psi+\bar\alpha\left(\Sigma_p\nu_0^\prime-2\Sigma_p^\prime\right)
\right\}
$$
[which follows from the $(^r_t)$ component \eqref{mod_10_gen}],
solving it for $\psi$ and eliminating $\psi$ from~\eqref{eq_theta_pert}.
For this set of equations, we choose
the following boundary conditions near the center $x=0$:
\begin{equation}
\label{bound_cond_pert}
\theta_p\approx \theta_{pc}+\frac{1}{2}\theta_{p2} x^2, \quad \Sigma_p\approx \Sigma_{pc}+\frac{1}{2}\Sigma_{p2} x^2, \quad
\lambda_p\approx \frac{1}{2}\lambda_{p2} x^2,
\end{equation}
where the expansion coefficients $\theta_{pc}$ and $\Sigma_{pc}$ are arbitrary and
$\theta_{p2} , \lambda_{p2}$, and $\Sigma_{p2}$ can be found from Eqs.~\eqref{eq_theta_pert}-\eqref{eq_lambda_pert}.
The set of equations~\eqref{eq_theta_pert}-\eqref{eq_lambda_pert}
together with the boundary conditions \eqref{bound_cond_pert}
defines an eigenvalue problem for $\bar\omega^2$.
The question of stability is therefore reduced to a study of the possible
values of $\bar\omega^2$.
If any of the values of $\bar\omega^2$ are found to be negative,
then the perturbations will increase and the
configurations in question will be unstable against radial oscillations.
The choice of eigenvalues of $\bar \omega^2$ is carried out such that we have asymptotically decaying solutions for the perturbations $\Sigma_p$ and $\lambda_p$.
In doing so, it is necessary to ensure the following properties of the solutions:
(i)~The function $\theta_p$ must be finite (though not necessarily zero) at the boundary of the star.
This is sufficient to ensure that the perturbation of the fluid pressure $p_p\sim \theta_0^n \theta_p$ meets the condition $p_p=0$
at the edge of the star where $\theta_0= 0$ [see, e.g., Eq.~(60) in Ref.~\cite{Chandrasekhar:1964zz}].
(ii)~The function $\lambda_p$ must be nodeless; this corresponds to a zero mode of the solution
(on this point, see, e.g., Ref.~\cite{Gleiser:1988ih}).
In this connection,
it is useful to write out the asymptotic behavior of the solutions.
\noindent (A) {\it Static solutions}:
$$v \to v_{\infty}-C_{\Sigma_0} \sqrt{\frac{2 |\bar \alpha|}{3}}\,x \exp{\left(-x/\sqrt{6|\bar \alpha|}\right)}, \quad
\Sigma_0 \to -C_{\Sigma_0} \exp{\left(-x/\sqrt{6|\bar \alpha|}\right)}\Big/x, \quad
e^{\nu_0}\to 1-v_{\infty}/x.$$
\noindent (B) {\it Perturbations}:
$$\Sigma_p\to C_{\Sigma_p} \exp{\left(-\sqrt{\frac{1+6|\bar \alpha|\bar\omega^2}{6|\bar \alpha|}}\,x\right)}\Big/x, \quad
\lambda_p \to v_{\infty}/x.$$
Here, $v_{\infty}$ is an asymptotic value of the mass function $v$,
$C_{\Sigma_0}>0$ and $C_{\Sigma_p}$ are integration constants.
\begin{figure}[t]
\centering
\includegraphics[height=4cm]{perturbs.eps}
\caption{The typical behavior of the perturbations within GR (solid curves) and $R^2$ gravity (dashed curves).
The graphs are plotted for the case of $\sigma\approx 0.151$ (GR) and of $\sigma= 0.17$ ($R^2$ gravity)
that correspond to the maxima of the mass curves (cf. Fig.~\ref{fig_M_sigma}).
The thin vertical lines denote the boundaries of the fluid $x=x_b$.
}
\label{fig_pertubs}
\end{figure}
\begin{table}[h!]
\caption{The computed values of the square of the lowest eigenfrequency
$\bar \omega^2$ and of the eigenparameter $\theta_{pc}$ for the configurations within $R^2$ gravity. For all the cases $\Sigma_{pc}=-10^{-4}$.
The eigenparameter $\Sigma_c$ is the central value of the scalar curvature from Eq.~\eqref{bound_mod_Ein}.}
\vspace{.3cm}
\begin{tabular}{|c|c|c|c|}
\hline
$\sigma$ &$\Sigma_c$ & $\bar \omega^2$ & $\theta_{pc}$\\
\hline
0.1&-0.494418253625 & 0.0185 & 0.00020166833\\
\hline
0.17&-0.3646950715 & $\approx 0$ & 0.00034320286\\
\hline
0.25&-0.26573801601 &-0.0118 & 0.000644452\\
\hline
0.3&-0.2202699654 & -0.0175 & 0.000993096\\
\hline
\end{tabular}
\label{tab1}
\end{table}
Examples of solutions for the perturbations $\theta_p, \Sigma_p$, and $\lambda_p$ are shown in Fig.~\ref{fig_pertubs}.
The procedure for determining the eigenvalues of $\bar \omega^2$ is as follows:
\begin{enumerate}
\itemsep=-0.2pt
\item[(1)]
In the case of GR (when $\bar\alpha=0$), there are only two equations, \eqref{eq_theta_pert} and \eqref{eq_lambda_pert},
for the functions $\theta_p$ and $\lambda_p$. Since these equations are linear, rescaling the central value
$\theta_{pc}\to \beta \theta_{pc}$ results in the corresponding rescaling of the metric perturbation
$\lambda_p\to \beta \lambda_p$, but the qualitative behavior of the solutions remains unchanged. That is, it is possible to take any central value
$\theta_{pc}\ll 1$, and the eigenfrequency $\bar \omega^2$ will not change.
\item[(2)] In the case of $R^2$ gravity (when $\bar\alpha\neq0$),
all three equations~\eqref{eq_theta_pert}-\eqref{eq_lambda_pert} need to be solved. In doing so, there are two arbitrary central values
$\theta_{pc}$ and $\Sigma_{pc}$.
Numerical calculations indicate that to ensure regular asymptotically decaying solutions for the perturbations
$\Sigma_p$ and $\lambda_p$ one has to adjust the values both of the eigenfrequency
$\bar \omega^2$ and of one of these two arbitrary parameters. That is, either $\theta_{pc}$ or $\Sigma_{pc}$ is an eigenparameter of the problem.
The corresponding numerical values of these parameters are given in Table~\ref{tab1} for several values of $\sigma$.
It is seen from the table and Fig.~\ref{fig_M_sigma} that the square of the lowest eigenfrequency $\bar \omega^2$
is positive to the left of the maximum and negative to the right of it. That is, as in GR,
in $R^2$ gravity under consideration the transition from stable to unstable systems occurs strictly at the maximum of the mass.
\end{enumerate}
Summarizing the results obtained:
within $R^2$ gravity, we have examined the question of the stability of compact configurations supported by a polytropic fluid against linear radial perturbations. In contrast to
studies performed earlier in the literature, here the calculations have been carried out in the Jordan frame to avoid the potential difficulties related to the conformal transformation
to the Einstein frame (see the Introduction). In doing so, we regard the scalar curvature as a dynamical variable for which the behavior of the corresponding perturbation modes is studied as well.
As a result, it is shown that, as in GR, within the framework of $R^2$ gravity the transition from stable to unstable configurations takes place at the maximum of the mass versus central density curve of the fluid.
One may expect that similar results will also be obtained for other, more realistic EoSs of matter,
including those used in constructing models of neutron stars (see, e.g., Refs.~\cite{Astashenok:2017dpo,Folomeev:2018ioy,Astashenok:2020isy}).
\section*{Acknowledgements}
The authors are very grateful to S.~Odintsov for fruitful discussions and comments.
We gratefully acknowledge support provided by Grant No.~BR05236322
in Fundamental Research in Natural Sciences by the Ministry of Education and Science of the Republic of Kazakhstan.
We are also grateful to the Research Group Linkage Programme of the Alexander von Humboldt Foundation for the support of this research.
\section{\textbf{Introduction}}
We know that, according to the Feit-Thompson Theorem, every group of odd order is solvable. As a consequence of this theorem, one can say that every finite group $G$ that has a normal Sylow $2$-subgroup, i.e., with $v_2(G) = 1$, is a solvable group, where $v_p(G)$ is the number of Sylow $p$-subgroups of $G$. Also, $G$ is nilpotent if and only if $v_p(G)=1$ for each prime $p$. These results show that the number of Sylow $p$-subgroups for a prime $p$ is restricted arithmetically by the properties of a group $G$. Recently, the author of \cite{Rob} proved that if $v_p(G)\leq p^2-p+1$ for each prime $p$, then $G$ is solvable. Here, we first show that it is not necessary to bound $v_p(G)$ for every prime number $p$, and we also improve the upper bound to $p^2$. In fact, we give a substantial generalization as follows:
\begin{thm}
Every finite group $G$ containing at most $4$ Sylow $2$-subgroups, is a solvable group.
\end{thm}
Also, the author raised the following conjecture.
\begin{con}
Let G be a finite group. If $v_p(G)\leq p^2-p+1$ for each odd prime number $p$, then $G$ is solvable.
\end{con}
Finally, we give the positive answer to this conjecture and improve it as follows:
\begin{thm}
Every finite group $G$ containing at most $p^2$ Sylow $p$-subgroups for each odd prime number $p$, is solvable.
\end{thm}
\section{\textbf{The Proofs}}
$\mathbf{Proof~of~Theorem~1.1}.$
Suppose, on the contrary, that there exists a non-solvable finite group $G$ of the least possible order with $v_2(G)\leq 4$. In this case $G$ must be a simple group. Otherwise, if there exists a non-trivial proper normal subgroup $M$ of $G$, then as $v_2(M)\leq v_2(G)\leq 4$ and $v_2(G/M)\leq v_2(G)\leq 4$, both $M$ and $G/M$ are solvable by the minimality of $G$ (note that if $M$ or $G/M$ is a group of odd order, then by the Feit-Thompson Theorem it is solvable). It follows that $G$ is solvable, which is a contradiction. Therefore $G$ is a minimal simple group with $v_2(G)\leq 4$. By Thompson's classification of minimal simple groups \cite{Tho}, $G$ is isomorphic to one of the following simple groups: $A_5$, the alternating group of degree $5$; $L_2(2^p)$, where $p$ is an odd prime;
$L_2(3^p)$, where $p$ is an odd prime;
$L_2(p)$, where $p > 5$ is a prime and $p \equiv 2 \pmod 5$;
$L_3(3)$; and $^2B_2(q)$, where $q=2^{2m+1}\geq 8$.\\
Now we show that in each case we obtain a contradiction. This completes the proof.
Clearly $v_2(A_5)=5$ and $v_2(L_3(3))=351$, a contradiction.
If $G$ is isomorphic to $L_2(2^p)$, then by Case 2 of the proof of Proposition 2.4 of \cite{Shi}, we get that $v_2(G)=2^p+1\geq 9$, a contradiction.
If $G$ is isomorphic to $L_2(3^p)$, then one can again deduce from Proposition 2.4 of \cite{Shi} that $5 < v_2(G)=3^{2p}-1$ or $(3^{3p}- 3^{p})/24$, a contradiction.
If $G$ is isomorphic to $L_2(p)$, where $p > 5$ is a prime and $p \equiv 2 \pmod 5$, then by an argument similar to that for $L_2(3^p)$ we obtain that $5 < v_2(G)=p^{2}-1$ or $(p^{3}- p)/24$, a contradiction.
If $G$ is isomorphic to $^2B_2(q)$, with $q=2^{p}$ and $p$ an odd prime, then by Theorem 3.10 (and its proof) of Chapter XI of \cite{Hup}, we have $|G| = (q-1)q^{2}(q^{2} +1)$ and $v_2(G) = q^{2} + 1> 65$, a contradiction. \\
We note that the bound 4 in Theorem 1.1 is the best possible, as $v_2(A_5)=5$.
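The value $v_2(A_5)=5$ can be verified directly by a brute-force computation; a short Python sketch (using the fact that every subgroup of order 4 in $A_5$ is a Klein four-group generated by a pair of commuting double transpositions) is:
\begin{verbatim}
from itertools import permutations

def compose(p, q):   # (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def parity(p):       # +1 for even permutations, -1 for odd
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

e = tuple(range(5))
A5 = [p for p in permutations(range(5)) if parity(p) == 1]
invs = [p for p in A5 if p != e and compose(p, p) == e]

sylow2 = set()
for a in invs:
    for b in invs:
        if b != a and compose(a, b) == compose(b, a):
            sylow2.add(frozenset([e, a, b, compose(a, b)]))
print(len(sylow2))   # prints 5 = v_2(A_5)
\end{verbatim}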
Now, by a similar argument, we prove Theorem 1.3.\\\\
$\mathbf{Proof~of~Theorem~1.3}.$
Suppose, on the contrary, that there exists a non-solvable finite group $G$ of the least possible order with $v_p(G)\leq p^2$ for all odd prime divisors $p$ of its order. By an argument similar to the proof of Theorem 1.1, it is enough to consider the following groups (note that $v_3(A_5)=10$ and $v_3(L_3(3))=52$): \\
If $G$ is isomorphic to $L_2(q)$ with $q=2^p$, then we consider an odd prime divisor of $|G|$, say $r$. It is easy to see that $r$ divides either $q+1$ or $q-1$. Now if $R$ is a Sylow $r$-subgroup of $G$, then $R$ is cyclic and $N_G(R)=D_{q-1}$ or $D_{q+1}$, where $D_m$ is the dihedral group of order $m$. Therefore, the number of Sylow $r$-subgroups is $q(q + 1)/2$ or $q(q - 1)/2$, and so $v_r(G)> r^2$, a contradiction.
If $G$ is isomorphic to $L_2(q)$ with $q=3^p$ and $p$ an odd prime, then it is easy to see that $$v_3(L_2(q))=v_3(SL(2,q)/Z(SL(2,q)))=v_3(SL(2,q)),$$ where $Z(SL(2,q))$ is the center of the group $SL(2,q)$. Assume that $R$ is a Sylow $3$-subgroup of $SL(2,q)$; then
its normalizer is the group of upper triangular matrices with determinant 1, of order $q(q-1)$. Thus $v_3(G)=q(q^2 -1)/q(q-1)=q +1 > 3^2$, a contradiction.
If $G$ is isomorphic to $L_2(p)$, where $p > 5$ is a prime and $p\equiv 2 \pmod 5$, then by an argument similar to that for $L_2(2^p)$ we obtain, for an odd prime divisor $r$ of $|G|$, that $v_r(G)=p(p+1)/2$ or $p(p-1)/2$, and so $v_r(G)> r^2$, a contradiction.
If $G=\,^2B_2(q)$, where $q=2^{2m+1}\geq 8$, then it is well known that the Suzuki group $^2B_2(q)$ contains maximal subgroups $T$ of order $4(q \pm r+1)$, where $r=2^{m+1}$, each having a normal cyclic subgroup $C$ with $|C|=q \pm r+1$. Since $q^2+1=(q-r+1)(q+r+1)\equiv 0 \pmod 5$ and the two factors are coprime, $5$ is a prime divisor of $|G|$ and some such $C$ contains a Sylow $5$-subgroup $P$ of $G$. It follows that $T \leq N_G(P)$ and so $T=N_G(P)$, as $T$ is maximal. Since $|G| = (q-1)q^{2}(q^{2} +1)$, the number of conjugates of $P$ in $G$ is $$v_5(G)=|G|/|T|=\frac{(q-1)q^{2}(q^{2} +1)}{4(q\pm r+1)}> 25.$$ Thus in each case we obtain a contradiction. This completes the proof.\\
Finally, it is well known that the only nonabelian finite simple groups whose order is not divisible by 3 are the Suzuki groups. From this one can show, by induction on the order, that if $H$ is a group such that $v_3(H) = 1$ and $H$ has no composition factor isomorphic to $^2B_2(q)$, then $H$ is a solvable group. As a result, one can see that some odd prime numbers (for instance, 3) have a stronger influence on the solvability of groups. In fact, for the solvability of finite groups in terms of the number of Sylow $p$-subgroups, we most probably do not need to consider all odd prime numbers. Therefore, it seems reasonable to pose the following question:
\begin{que}
What is the smallest positive integer $n$ such that every finite group $G$ satisfying $v_{p_i}(G)\leq p_i^2$, where $p_i$ is an odd prime and $i\in \{1,\dots, n\}$, is guaranteed to be solvable?
\end{que}
\section{Introduction}
\textit{Introduction.|} Photonic topological insulators host protected boundary modes that are robust against a range of defects and imperfections~\cite{Ozawa2019}. While the paradigmatic case of two-dimensional (2D) topological photonic crystals (PhCs) hosting one-dimensional (1D) edge modes immune to back-scattering has been extensively studied~\cite{lu2014topological}, a hierarchy of protected boundary states of lower dimensionality is possible in higher-order topological insulators (HOTIs)~\cite{benalcazar2017quantized}. For instance, quantized quadrupole insulators in 2D, which were introduced in a generalization of the Su-Schrieffer-Heeger (SSH) model to a square lattice with a flux~\cite{benalcazar2017quantized}, host 1D edge states, as well as zero-dimensional (0D) corner modes. These higher-order topological modes (HOTMs) localized at the 0D corners of a 2D lattice benefit from topological protection. Just as HOTIs in condensed matter systems are characterized by charge fractionalization due to a filling anomaly of the bulk states~\cite{Kempkes2019,wieder2020strong,benalcazar2018quantization,zhu2020identifying}, classical wave HOTIs reveal an analogous fractional corner anomaly of the density of states~\cite{peterson2020fractional}. In systems with short-range hoppings and approximate chiral symmetry, these corner modes are mid-gap states~\cite{ssh1979, Asboth2016}.
HOTMs have been realised in a variety of classical systems including PhCs~\cite{ota2018photonic,xie2019visualization,chen2019direct,Li2020}, coupled photonic waveguides~\cite{Noh2018,Mittal2019,ElHassan2019},
phononic crystals~\cite{serra2018observation}, acoustic systems~\cite{Ni_2017,Ni2019,qi2020acoustic}, elastic systems~\cite{Fan2019elastic} and microwave circuits~\cite{peterson2018quantized}, and their robustness has been exploited for stable lasing~\cite{kim2020lasing, han2020lasing, gong2020topological}.
However,
a rigorous study of the effect of long-range interactions (the coupling between elements), which are unavoidable in many
photonic systems \cite{koenderink2006complex,Pocock2018, Pocock2019, Li2020}, as well as a detailed analysis of the robustness of the HOTMs, has not been undertaken.
Here we consider a PhC with a $C_6$-symmetric lattice~\cite{Wu2015, Noh2018}, and fill the aforementioned gap by taking advantage of a semi-analytical model with long-range interactions~\cite{abajo2007colloquium}, that is, interactions beyond nearest neighbours between all the lattice elements. This allows us to perform an extensive study of the robustness of these modes against defects and imperfections. Crucially, we show that the HOTMs are protected by lattice symmetries; we quantify their degree of robustness against chiral-symmetry breaking long-range interactions, as well as to strong defects.
\begin{figure}
\includegraphics[width=\columnwidth]{fig1_final.png}
\caption{(a) Unit cells of the bulk lattice in the contracted and expanded phases, characterised by a contraction/expansion parameter, $\delta$. The relevant Wyckoff positions are labelled 1a (black circle) and 3c (red star). (b) Band structure of the silicon photonic crystal in the expanded phase for the TM modes.
The expansion coefficient $\delta$ is 0.11, and the radius of the cylinders is $0.12 a_0$, $a_0$ being the lattice constant of the crystal.
(c) Wilson loops for bands 4 to 6 (Wilson loops for bands 1 to 3 are similar, see~\cite{supplemental}). }
\label{fig:photonic_crystal_bulk}
\end{figure}
\textit{Photonic crystal.|}
We consider the breathing honeycomb PhC
introduced in Ref.~\cite{Wu2015}, Fig.~\ref{fig:photonic_crystal_bulk}(a). Each unit cell in the triangular lattice consists of six silicon rods ($\varepsilon=11.7$) of radius $r=0.12a_0$ in vacuum, located at a distance $R = R_0(1 \pm \delta)$ from the origin of the unit cell. Here, $a_0$ is the lattice parameter, and $R_0=a_0/3$ is the location of the rods in the unperturbed honeycomb arrangement. Perturbing the honeycomb lattice of rods by $+\delta$ or $-\delta$ yields expanded or contracted phases, respectively, where the doubly degenerate Dirac point at $\Gamma$ splits and a bulk band gap opens between $\omega a/(2\pi c) = 0.4$ and $0.5$.
Although this band gap hosts 1D edge states, as measured in several photonic experiments~\cite{barik2018topological,gorlach2018far,peng2019,Smirnova2019,parappurath2020topological,Liu2020photonic,Yang2020SpinMomentum}, we now discuss why it is not an instance of a $\mathbb{Z}_2$ topological insulator~\cite{Fragile2017}.
\begin{figure*}
\includegraphics[width=\textwidth]{fig2_final.png}
\caption{
(a) Scheme for the topological particle supercell. (b) Particle lattice, with sublattices $a$ (green) and $b$ (purple). (c,d) Modes of a photonic crystal particle: Frequency ($\omega$) of topological corner (red), edge (cyan) and bulk (grey) states (c), and displacement field plots, showing $D_z$ (d). (e,f) Quasistatic model of the topological particle. (e) Frequency of topological corner, edge and bulk states, for silver nanoparticles with radius $10$~nm and height $40$~nm. (f) Dipole moments of the six corner eigenmodes. In the color scale used in (d) and (f) red (blue) represents positive (negative) values. In both cases, $\delta = 0.11$. }
\label{fig:photonic_crystal_particle}
\end{figure*}
Figure~\ref{fig:photonic_crystal_bulk}(b) presents the band structure of the expanded phase for $\delta=0.11$. We first determine the topological properties of the system through the application of topological quantum chemistry~\cite{NaturePaper,dePaz2019}.
The irreducible representations of the eigenfields at the high symmetry points (irrep labels), displayed in the band structure, are calculated using GTPack~\cite{gtpack1,gtpack2}. Using the catalogue of Elementary Band Representations (EBRs) in the Bilbao Crystallographic Server~\cite{Bilbao1,Bilbao2,Bilbao3, NaturePaper,GraphDataPaper,GroupTheoryPaper}, along with the irrep labels we can identify the topological properties of each set of connected bands of our PhC.
Counting from $\omega = 0$, bands 4-6 are all interconnected and their irrep labels are consistent with Wannier functions centered at the $3c$ Wyckoff position, transforming in the $(E_1\uparrow G)_{3c}$ band representation.
Since these bands can be identified with an EBR, we can conclude that the system presents a trivial $\mathbb{Z}_2$ topological invariant. Nevertheless, the $3c$~Wyckoff position of the band representation indicates that the Wannier functions of this set of bands are not centered around the origin of the unit cell, but at its edges. This situation can be understood as a 2D analog of the topological hybridization of eigenstates of a 1D SSH chain. This topological phase was labeled in the past, in analogy with solid-state systems, as the photonic obstructed atomic limit (OAL), because although an atomic limit exists it is `obstructed' since the Wannier centers are not located at the position where the photonic ``atoms'' sit~\cite{dePaz2020}. Note that here the photonic atom is the collection of the six contracted/expanded cylinders inside the unit cell. Moreover,
we characterize our system through the calculation of the eigenvalues of the Wilson loop~\cite{dePaz2020} for this set of connected bands, Fig.~\ref{fig:photonic_crystal_bulk}(c).
The resulting Wilson loops present no windings (which are characteristic of $\mathbb{Z}_2$ or Chern insulators), but the Wannier centers are not only localized at the origin of the unit cell ($W = 0$), as in a trivial system, but also at its edges ($W = \pm \pi$), indicating that the system presents an obstruction similar to that of the 1D SSH chain~\cite{vanderbilt2018}. On the other hand, the PhC in the contracted phase is a trivial photonic insulator. This can be seen from the Wannier centers of the EBRs being located at the origin of the unit cell ($1a$~Wyckoff position), and by looking at the eigenvalues of the Wilson loop
It should be emphasized here that in 2D systems, there is a subtle relationship between OAL and HOTIs. In toy models with nearest neighbour interactions, it is often possible to define a chiral symmetry, which forces the spectrum to be symmetric about a fixed energy (often taken to be zero in the literature). If an OAL model has chiral symmetry, then it is sometimes possible to define a bulk topological invariant which counts the number of 0D corner modes in a finite-sized system preserving the crystal symmetries~\cite{miert2020topological}. Systems with non-zero values of this invariant are properly termed HOTIs. In the absence of chiral symmetry, as is the case in photonic systems with long-range interactions, however, there is no guarantee that a finite-sized system will have corner modes pinned to a special frequency. These systems are regarded as OAL systems, and can be characterized by the centers of their Wannier functions (as above), by real-space invariants
(see~\cite{supplemental} and Ref.~\cite{song2020twisted}), or a filling anomaly (see ~\cite{supplemental} and Refs.~\cite{wieder2020strong,benalcazar2018quantization}).
In order to make semi-analytical predictions about the presence and robustness of corner modes, for the remainder of this work we will exploit the fact that our model is deformable to a chiral-symmetric limit although this symmetry is strictly broken by unavoidable long-range interactions.
The 0D corner modes in 2D SSH-like PhC particles (that is, finite size crystals containing several unit cells) with $C_4$ symmetry have been extensively explored~\cite{ota2018photonic,xie2019visualization,chen2019direct,kim2020lasing, han2020lasing}, whereas in photonic crystal particles with $C_6$ symmetry only the 1D edge states have been studied~\cite{Siroki2017, Jalali2020,barik2019}.
Firstly, we analyze the emergence of 0D photonic corner states in this system by looking at 2D particles made of cells in the expanded phase and surrounded by cells in the contracted phase, see Figs.~\ref{fig:photonic_crystal_particle}(a) and~\ref{fig:photonic_crystal_particle}(b)~\footnote{We build supercells of 21 unit cells in the $\mathbf{a}_1$ and $\mathbf{a}_2$ lattice directions, filling a central hexagonal portion of the supercell with 5 lattice constants in the expanded phase ($\delta=0.11$). To prevent the leakage of energy
into the vacuum, we surround the central hexagon by cells in the contracted phase ($\delta=-0.11$), which behaves as a trivial photonic insulator with a matched band gap.}.
Results of MPB supercell calculations~\cite{johnson2001block} are shown in Figs.~\ref{fig:photonic_crystal_particle}(c) and~\ref{fig:photonic_crystal_particle}(d). The frequency eigenvalues in Fig.~\ref{fig:photonic_crystal_particle}(c) show a clear band gap with 6 mid-gap states.
The real part of the displacement field eigenvectors $D_z$ for these 6 states is shown in Fig.~\ref{fig:photonic_crystal_particle}(d). These states are concentrated at the corners of the particle, which classifies them as corner modes, marked in red in Fig.~\ref{fig:photonic_crystal_particle}(c). States $2$A,B and $3$A,B are degenerate pairs. The states immediately above and below the band gap can be classified as edge states (cyan) \cite{Siroki2017,barik2019,Jalali2020}, followed by bulk eigenstates (gray). Thus, the 0D corner states in this PhC particle are hosted within the gapped 1D edge states, in contrast to HOTMs in $C_{3}$- and $C_4$-symmetric PhCs~\cite{xie2019visualization,chen2019direct,Li2020}.
\textit{Coupled dipole model.|} Since the spectrum of the PhC particle is determined by lattice symmetries together with long-range interactions, we now exploit a semi-analytical model to unveil the properties of corner modes in a closely related nanophotonic system. The coupled dipole model is a versatile method for investigating the optical response of arrays of subwavelength elements such as cold atoms or plasmonic nanoparticles (NPs)~\cite{abajo2007colloquium}. Within this model we can reproduce all the relevant features found in full field simulations of the PhC topological particle. Then, we use it to shed further light on the properties of the corner modes, particularly on their robustness against disorder. This model goes beyond tight-binding, nearest neighbour models by including interactions between all the lattice elements (excluding self-interactions) with the appropriate propagator.
In this formalism, the modes can be found by solving a generalised eigenvalue equation,
\begin{align}
\left( \sum_{i\neq j} \hat{\mathbf{I}}\frac{1}{\alpha(\omega)} - \hat{\textbf{G}}(\textbf{d}_{ij}, \omega)\right) \cdot \mathbf{p}_j = 0,
\label{eqn:CDA}
\end{align}
where $\mathbf{p}_j$ are the dipole moments, $\hat{\textbf{G}}$ is the dyadic Green's function that describes dipole-dipole interactions, $\alpha$ is the polarizability of the subwavelength elements, $\omega$ is the frequency and the separation between NPs is $\mathbf{d}_{ij} = \mathbf{d}_i - \mathbf{d}_j$.
The specifics of the physical dipolar elements enter through the polarizability, from which the resonance frequencies of the modes can be extracted~\cite{supplemental}.
In Figs.~\ref{fig:photonic_crystal_particle}(e) and~\ref{fig:photonic_crystal_particle}(f) we present results of the dipole model for the OAL particle with the same geometry as the PhC in Figs.~\ref{fig:photonic_crystal_particle}(c) and~\ref{fig:photonic_crystal_particle}(d). Here we particularise the system to the out-of-plane modes of subwavelength spheroidal metallic NPs, which correspond to the TM modes in the PhC~\footnote{We take silver NPs with parameters $\epsilon_\infty = 5$, $\omega_p = 8.9$~eV~\cite{Yang2015}, radius $r = 10$~nm and height $h = 40$~nm.}. We take a quasistatic approximation, and only include the near-field interaction term in the Green's function ($\propto 1/d^3$), which is accurate for these subwavelength NPs. In this approximation, the eigenvalues of Eq.~\eqref{eqn:CDA}, $E=1/\alpha(\omega)$, only depend on the particular geometrical arrangement of the dipoles. Figure~\ref{fig:photonic_crystal_particle}(e) shows the frequency spectrum around the band gap with corner modes within the gapped edge and bulk bands~\footnote{For plasmonic NPs the energy ordering of the modes is opposite to that of dielectric cylinders.
This is because the bonding mode of out-of-plane dipoles which minimises energy corresponds to the hexapole, while the monopole is the antibonding mode and lies at the highest energy.}.
For the plasmonic system, zero eigenvalue ($E=0$) maps to $\omega_\text{LSP}$, the localized surface plasmon frequency of the NPs. We see that the center of the band gap is located close to but not exactly at $\omega_\text{LSP}$, and that the spectrum is not exactly symmetric around that point. This is a consequence of chiral-symmetry breaking due to long-range interactions, as we discuss below in detail.
In Fig.~\ref{fig:photonic_crystal_particle}(f) we plot the real space dipole moments of the first mid-gap corner eigenmode, which reproduce well the $D_z$ field distributions of the PhC.
Importantly, the corner modes are localized on a particular sublattice, while the dipole moments on the opposite sublattice remain virtually zero, as shown in Fig.~\ref{fig:photonic_crystal_particle}(b). A similar sublattice localization of corner modes is present in the PhC, though weaker due to the fully retarded interactions. Nevertheless, this shows that both systems are approximately chiral-symmetric despite the long-range interactions,
which has implications for the robustness of these 0D modes.
In addition, these modes are well separated from the gapped bulk and edge states and are tightly confined to the corners.
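To illustrate the structure of Eq.~\eqref{eqn:CDA} in the quasistatic, near-field limit, a minimal Python sketch for a single expanded hexamer is given below (our illustration: prefactors such as $1/4\pi\varepsilon_0$ are absorbed into the eigenvalue, and the full topological particle is obtained by tiling such cells over the supercell):
\begin{verbatim}
import numpy as np

def hexamer(delta, a0=1.0):
    # six sites at distance R0*(1 + delta) from the cell center
    R = (a0 / 3.0) * (1.0 + delta)
    ang = np.arange(6) * np.pi / 3.0
    return R * np.column_stack((np.cos(ang), np.sin(ang)))

def coupling(sites):
    # out-of-plane dipoles in the near field couple as G_zz ~ -1/d^3
    n = len(sites)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(sites[i] - sites[j])
            M[i, j] = M[j, i] = -1.0 / d**3
    return M

E = np.linalg.eigvalsh(coupling(hexamer(delta=0.11)))
# E plays the role of 1/alpha(omega); the NP polarizability maps E -> omega
\end{verbatim}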
We now use the coupled dipole model to better characterise the properties of the corner modes. First, we study the behaviour of the system as a function of $\delta$, the deviation of the lattice of NPs away from a perfect honeycomb. In Fig.~\ref{fig:quasistatic}(a), we plot the eigenvalue spectrum as a function of $\delta$,
such that the symmetry properties of the spectrum around zero eigenvalue are clearer.
Starting from the unperturbed honeycomb lattice ($\delta=0$), we see how increasing $\delta$ controls the size of the bulk band-gap. At the same time, the corner modes (red) stay at approximately constant eigenvalue, only slightly shifted away from zero due to the inherent breaking of chiral symmetry.
In addition, edge modes (cyan) appear at the edges of the bulk bands. As $\delta$ increases, the corner modes are more isolated in the band structure, and hence more strongly confined to corners of the particle.
For $\delta \gtrsim 0.12$ new sets of corner modes (magenta) emerge from the bulk for positive and negative eigenvalues.
In contrast to the corner modes discussed here, these modes do not lie at the middle of the gap, and
they are not localized only on one of the sublattices.
\begin{figure}
\includegraphics[width=\columnwidth]{fig3_final.png}
\caption{Topological particle eigenvalues. (a) Evolution with increasing unit cell perturbation $\delta$, with topological corner modes (red) well separated from the edge (cyan) and bulk (grey) modes. Other corner modes are shown in magenta. (b) Dependence on interaction length between lattice sites, $\gamma$, for $\delta = 0.2$. As interactions go from nearest neighbour ($\gamma=0.1$) to long range, chiral symmetry is broken and the spectrum is no longer symmetric about $E=0$.}
\label{fig:quasistatic}
\end{figure}
The coupled dipole model also enables us to analyze the photonic corner modes analytically, as detailed in the SM~\cite{supplemental}.
We find that when interactions are short-range, the eigenvalue problem for the coupled dipoles maps onto a tight-binding Schr\"odinger equation for a system with six $s$-orbitals at the $6d$~Wyckoff position in the unit cell (there is one $s$-orbital at the position of each NP). As $\delta$ increases, the model undergoes a transition from an atomic limit phase with Wannier centers at the $1a$ position to an OAL phase with Wannier centers at the $3c$ position; in the short-range limit these Wannier functions are compactly supported, and can be found exactly. For a finite-sized system, the two atomic limits are distinguished by the $p6mm$ real space invariants of Ref.~\cite{song2020twisted}, which confirms that HOTMs are protected by lattice symmetries. Furthermore, we can solve for the corner modes in a topological particle in the long-wavelength approximation. We find that the low-energy theory of the domain between trivial and OAL particle naively resembles the edge of a quantum-spin Hall (QSH) insulator if only the lowest-order terms are considered.
However, when we include crystalline- and chiral-symmetric perturbations, we find that the QSH edge states gap to yield six corner modes pinned to mirror lines and related by sixfold rotational symmetry. Since the corner modes are eigenstates of the chiral symmetry, they must be localized to a single sublattice. We can then include chiral symmetry breaking perturbatively to find that the corner modes are lifted from zero eigenvalue (or $\omega = \omega_{\mathrm{LSP}}$), consistent with calculations as we discuss next.
We study the effect of long-range interactions by introducing an artificial cut-off in the coupled dipole model. We introduce an exponential decay to the dipole-dipole interactions,
$f_{\mathrm{c.o.}}(d_{ij}) = \exp[-(d_{ij} - d_{ij}^0)/(d_{ij}^0\gamma)]$, where $d_{ij}^0$ is the nearest neighbour separation for each dipole and $\gamma$ is a cut-off parameter to control the interaction range~\cite{supplemental}.
This allows us to continuously tune the interaction range from nearest neighbours ($\gamma = 0.1$), to electronic-like exponentially suppressed ones, all the way to full dipolar interactions ($\gamma \approx 5$),
as we show in Fig.~\ref{fig:quasistatic}(b) for fixed $\delta = 0.2$. For small values of $\gamma$, interactions in practice are only between nearest neighbours, such that there is no coupling between dipoles of the same sublattice.
This preserves chiral symmetry and results in a spectrum that is symmetric about zero eigenvalue, with six degenerate topological corner modes (red) that are pinned at zero.
Increasing the range of the interaction breaks chiral symmetry through coupling of elements in the same sublattice. This shifts the corner modes away from zero eigenvalue, lifts their degeneracy (from six degenerate states to 1+2+2+1, as in Fig.~\ref{fig:photonic_crystal_particle}), and removes the symmetry of the spectrum about zero eigenvalue [or $\omega = \omega_{\mathrm{LSP}}$ in Fig.~\ref{fig:photonic_crystal_particle}(e)].
Finally, it is interesting to note that the other set of corner modes (magenta) are
not pinned at zero even for nearest neighbour interactions.
This is different from the type II corner states identified in Ref.~[\onlinecite{Li2020}] for the breathing kagome lattice, which emerge due to long-range interactions.
\textit{Robustness against defects and disorder.---} We now take advantage of the coupled dipole model to test the degree of protection of the corner modes against defects.
We quantify protection by evaluating whether the number of states within the band gap, together with the symmetries and degeneracies they satisfy, is left invariant.
First, we create a strong defect in the crystal by removing one lattice site next to the corner of the particle,
Fig.~\ref{fig:disorder}(a). Since this breaks the $C_6$ and mirror symmetries that protect the corner modes, one of them disappears and the remaining five satisfy new symmetry relations and degeneracies, see field plots and eigenvalue spectrum in Fig.~\ref{fig:disorder}(a). Next, we consider removing one lattice site at exactly the corner, Fig.~\ref{fig:disorder}(b), breaking the $C_6$ symmetry but respecting one mirror symmetry. Remarkably, the corner states are robust against this defect: there are 6 mid-gap states and they satisfy the same symmetries and degeneracies as before the perturbation.
This is a consequence of the system being deformable to a chirally-symmetric system. Despite the presence of long-range interactions, the modes still sit on alternate sublattices, and the mode intensity is virtually zero at the removed lattice site.
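In this picture, the vacancy defects considered above are particularly transparent: removing a lattice site simply deletes the corresponding row and column of the interaction matrix before re-diagonalising. A minimal sketch of our own (with $G$ the full interaction matrix of the finite particle, e.g.\ built as in the earlier sketch):
\begin{verbatim}
import numpy as np

def eigenvalues_with_vacancy(G, k):
    # G: (N, N) interaction matrix; k: index of the removed site.
    keep = np.delete(np.arange(G.shape[0]), k)
    return np.linalg.eigvalsh(G[np.ix_(keep, keep)])
\end{verbatim}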
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig4_final.png}
\caption{Robustness of corner states against defects and disorder in the quasistatic model, for $\delta=0.2$. (a) A $C_6$-symmetry breaking defect in the lattice affects the mid-gap corner modes. (b) The corner states are robust against another kind of $C_6$-symmetry breaking defect due to the corner modes being close to chiral-symmetric. (c) The degeneracy of the corner modes is lifted by random disorder: the position of the lattice sites is shifted randomly up to 5\%. (d) Eigenvalue spectrum for increasing positional disorder.}
\label{fig:disorder}
\end{figure}
Finally, we test robustness against random positional disorder. In Fig.~\ref{fig:disorder}(c) we consider a system with up to $5\%$ random disorder in the lattice site positions. Crucially, this breaks the $C_6$ symmetry across the whole lattice, such that the degeneracies of the corner modes are lifted, and each of the six mid-gap states localizes at one of the corners. On the other hand, we see in the spectrum how, despite the other corner modes and edge modes being lost to the bulk,
the mid-gap corner modes remain well isolated at mid-gap energies. For practical purposes they are robust against random spatial perturbations. This is confirmed in Fig.~\ref{fig:disorder}(d), where
we plot a close-up of the band gap and the HOTMs for increasing random positional disorder, up to a maximum of $10\%$.
\textit{Conclusions.---}
We have studied the emergence of topologically protected corner modes in breathing honeycomb PhC particles. By analyzing the lattice through topological quantum chemistry, Wilson loops and the calculation of real space topological invariants, we conclude that the topological properties emerge from an obstructed atomic limit phase, which in 2D is reminiscent of higher-order topology. Finally, we quantify the robustness of topological corner modes in PhCs to different kinds of perturbations. We conclude that, while long-range interactions inevitably break chiral symmetry, the corner modes are still protected by lattice symmetries. Although we have focused here on the breathing honeycomb lattice PhC, our analysis applies to all classical wave systems.
\smallskip
\begin{acknowledgments}
M.P. and P.A.H. acknowledge funding from the Leverhulme Trust. P.A.H. acknowledges funding from Funda\c c\~ao para a Ci\^encia e a Tecnologia and Instituto de Telecomunica\c c\~oes under projects CEECIND/03866/2017 and UID/EEA/50008/2020. B.B. acknowledges support of the Alfred P. Sloan foundation. M.G.V. acknowledges support from DFG INCIEN2019-000356 from Gipuzkoako Foru Aldundia and the Spanish Ministerio de Ciencia e Innovaci\'on (grant number PID2019-109905GB-C21). D.B. acknowledges support from the Spanish Ministerio de Ciencia, Innovaci\'on y Universidades (MICINN) through the project FIS2017-82804-P, and by the Transnational Common Laboratory \emph{Quantum-ChemPhys}.
\end{acknowledgments}
\section{Bulk Band Structures and Wilson Loops}
Here we present the unit cell arrangements, as well as the band structures and the Wilson loop characterization of the lowest bands of the expanded and contracted lattices.
The Wilson loops for the expanded lattice show that the Wannier functions are centered at the 3c position, whereas for the contracted lattice they are centered at 1a.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{delta_bands_irreps_wl_exp_cont.png}
\caption{(a) Unit cell arrangements for the honeycomb, contracted and expanded lattices. Band structures and Wilson loops for expanded lattice, $R = R_0 + 0.11$, in panels (b, d) and contracted lattice, $R = R_0-0.11$, in panels (c, e).}
\label{fig:bands_irreps_wl_exp_cont}
\end{figure}
\section{Coupled Dipole Model}
In the quasistatic (QS) approximation, we consider an array of point dipoles and model interactions between them using the coupled dipole method \cite{abajo2007colloquium}. In the absence of an external electric field, the (electric) dipole moment at position $\mathbf{d}_i$ due to a dipole at position $\mathbf{d}_j$ is given by,
\begin{align}\label{eqn:cda}
\frac{1}{\alpha(\omega)}\mathbf{p}_i = \hat{\textbf{G}}(\textbf{d}_{ij}, \omega) \cdot \mathbf{p}_j,
\end{align}
where $\omega$ is the frequency and the separation between dipoles is $\mathbf{d}_{ij} = \mathbf{d}_i - \mathbf{d}_j$. The dyadic Green's function, which describes the dipole-dipole interaction, can be written as:
\begin{align}\label{eqn:dyadic_gf}
\hat{\textbf{G}}(\textbf{d}_{ij}, \omega) &= k^2\frac{e^{ikd}}{d} \biggl[
\biggl(
1 + \frac{i}{kd} - \frac{1}{k^2d^2}
\biggr)\hat{\textbf{I}} \,-
\biggl(
1 + \frac{3i}{kd} - \frac{3}{k^2d^2}
\biggr)\textbf{n}\otimes\textbf{n}
\biggr],
\end{align}
where $d = |\mathbf{d}_{ij}|$, $\mathbf{n}=\mathbf{d}_{ij}/d$ and wavenumber $k = \sqrt{\epsilon_m}\omega/c$; we assume the permittivity of the medium $\epsilon_m = 1$.
In the QS approximation, we retain only the quickly decaying $1/d^3$ terms in the Green's function by letting $k\rightarrow0$. Then, for a periodic array of dipoles, we can write the following eigenvalue equation,
\begin{align}\label{eqn:eigenvalue}
\left(\hat{\mathbf{I}}\frac{1}{\alpha(\omega)} - \hat{\textbf{H}}(\textbf{k}_B, \omega)\right)\cdot\textbf{p} = 0,
\end{align}
where $\mathbf{p}$ is a vector which contains all dipole moments in the unit cell. The interaction matrix $\hat{\textbf{H}}(\textbf{k}_B, \omega)$ has elements,
\begin{align}\label{eqn:interaction_matrix}
H_{ij}=
\begin{cases}
\sum\limits_{\textbf{R}} \hat{\textbf{G}}(\textbf{d}_i - \textbf{d}_j + \textbf{R}, \omega) \hspace{2px} e^{i\textbf{k}_B\cdot\textbf{R}} & i \neq j\\
\sum\limits_{|\textbf{R}|\neq0} \hat{\textbf{G}}(\textbf{R}, \omega) \hspace{2px} e^{i\textbf{k}_B\cdot\textbf{R}} & i = j
\end{cases},
\end{align}
with Bloch wavevector $\mathbf{k}_B$ and lattice sites $\mathbf{R} = n\mathbf{a}_1 + m\mathbf{a}_2$, where the lattice vectors are defined in the main text.
The dipole model accurately describes a nanophotonic system of resonators such as metallic nanoparticles (NPs), provided the nearest neighbour spacing $R$ satisfies $R > 3r$, with $r$ the NP radius. The optical response of an individual NP is given by the polarizability $\alpha(\omega)$. In the following, we assume a static polarizability,
\begin{align}
\alpha(\omega) = \frac{V}{4\pi}\frac{\epsilon(\omega) - 1}{L\,[\epsilon(\omega) + 2]},
\end{align}
where $V$ is the NP volume, $L$ is a geometrical factor and $\epsilon(\omega)$ is the Drude permittivity \cite{Moroz2009}. The quasistatic Drude permittivity is written,
\begin{align}
\epsilon(\omega) = \epsilon_\infty - \frac{\omega_p^2}{\omega^2}.
\end{align}
In this manuscript, we use silver spheroidal NPs with material parameters $\epsilon_\infty = 5$, $\omega_p = 8.9$~eV and size parameters radius $r = 10$~nm, height $h = 40$~nm~\cite{Yang2015}. The spheroidal shape causes the in-plane and out-of-plane resonances of the NP to split in frequency and become completely decoupled, meaning we can consider them separately. To make comparisons with the 2D photonic crystal, we only consider the out-of-plane interactions and take the $\hat{z}\hat{z}$ component of the dyadic in Eq.~\eqref{eqn:dyadic_gf}, $\hat{\textbf{G}}(\textbf{d}_{ij}, \omega) = -1/d^3$. The size of the interaction matrix in Eq.~\eqref{eqn:interaction_matrix} will then be $N\times N$, where $N$ is the number of elements in the supercell.
Additionally, to model a finite system we only consider normal incidence and solve the eigenvalue problem at $\Gamma$, $\mathbf{k}_B = (0, 0)$.
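To convert eigenvalues into mode frequencies, one inverts $E=1/\alpha(\omega)$ using the two expressions above. The following sketch (ours, not the authors' code; lossless Drude model with the parameters quoted in the text) performs this inversion:
\begin{verbatim}
import numpy as np

def mode_frequency(E, V, L, eps_inf=5.0, omega_p=8.9):
    # Invert E = 1/alpha(omega) for the static polarizability
    # alpha = (V/4pi)(eps - 1)/(L(eps + 2)), with the Drude
    # permittivity eps(w) = eps_inf - wp^2/w^2 (lossless).
    eps = (E * V + 8 * np.pi * L) / (E * V - 4 * np.pi * L)
    return omega_p / np.sqrt(eps_inf - eps)  # in eV, like omega_p

# Check: E = 0 gives eps = -2 and omega = omega_p/sqrt(eps_inf + 2),
# i.e. the localized surface plasmon frequency, as in the main text.
\end{verbatim}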
\section{Topological analysis of the quasistatic model: Real Space Invariants}
To analyze the topological properties of our nanophotonic resonator (coupled dipole) system, we can reinterpret the interaction matrix $H_{ij}$ of the quasistatic model as a (long-ranged) Hamiltonian for a topological phase transition. While $H_{ij}$ is in general long range (it has power-law decaying matrix elements in position space) which can lead to cusp singularities in the band structure (which are removed when a fully retarded Green's function is used), we can nevertheless probe the presence and topological protection of edge and corner modes originating from analytic regions in the band structure. To this end, we can truncate the interaction matrix at the nearest neighbor level. Doing so, we can reinterpret $H_{ij}$ as a tight-binding model for dipolar resonators at the $6d$ Wyckoff position in space group $p6mm$. In reduced coordinates, the positions of the dipoles are $q_0=(s,0),q_1=(s,-s),q_2=(0,-s),q_3=(-s,0),q_4=(-s,s),q_5=(0,s)$. In the basis of these six orbitals, the $C_6$ symmetry is represented by
\begin{equation}
\rho(C_6)=\left(\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0
\end{array}\right),
\end{equation}
mirror symmetry about the $x$-axis is represented by
\begin{equation}
\rho(m_x)=\left(\begin{array}{cccccc}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0
\end{array}\right),
\end{equation}
and time-reversal symmetry is represented by complex conjugation. We can write the interaction matrix $H_{ij}$ as the sum of two terms
\begin{subequations}
\begin{equation}
H(\mathbf{k},s)=(1-t(s))M + t(s) N(\mathbf{k})
\end{equation}
where
\begin{equation}
M=\left(\begin{array}{cccccc}
0 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 0
\end{array}\right),
\end{equation}
\begin{equation}
N(\mathbf{k}) = \left(\begin{array}{cccccc}
0 & 0 & 0 & e^{ik_1} & 0 & 0\\
0 & 0 & 0 & 0 & e^{i(k_1-k_2)} & 0 \\
0 & 0 & 0 & 0 & 0 & e^{-ik_2} \\
e^{-ik_1} & 0 & 0 & 0 & 0 & 0 \\
0 & e^{-i(k_1-k_2)} & 0 & 0 & 0 & 0\\
0 & 0 & e^{ik_2} & 0 & 0 & 0
\end{array}\right)
\end{equation}
\end{subequations}
Here $M$ is the intra-cell hopping matrix, and $N$ is the inter-cell hopping matrix. Note that this is written in an embedding where we keep the positions of the dipoles fixed at $s=0$, while we vary the hoppings $t(s)$ in accordance with the analysis of the main text (c.f. the treatment of the Peierls transition in the Su-Schrieffer-Heeger model~[\onlinecite{ssh1979}]). The function $t(s)$ smoothly and monotonically interpolates between $t(0)=0$ in the maximally contracted (triangular) lattice, and $t(1)=1$ in the maximally expanded (kagome) lattice. There is a critical point $t(s_*)=1/2$, where the intra- and inter-cell hopping amplitudes are equal. $H(\mathbf{k},s)$ has a gap at zero energy for all $\mathbf{k}$ and all $s\neq s_*$, with three negative and three positive energy bands.
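As a quick numerical consistency check (our own sketch, not part of the original analysis), $M$ and $N(\mathbf{k})$ can be assembled directly and the spectral gap at zero energy verified for $t\neq 1/2$:
\begin{verbatim}
import numpy as np

def M():
    m = np.zeros((6, 6))
    for i in range(6):                  # intra-cell ring of hoppings
        m[i, (i + 1) % 6] = m[(i + 1) % 6, i] = 1.0
    return m

def N(k1, k2):
    n = np.zeros((6, 6), dtype=complex)  # inter-cell hoppings
    for i, ph in enumerate([np.exp(1j * k1),
                            np.exp(1j * (k1 - k2)),
                            np.exp(-1j * k2)]):
        n[i, i + 3] = ph
        n[i + 3, i] = np.conj(ph)
    return n

def H(k1, k2, t):
    return (1 - t) * M() + t * N(k1, k2)

ks = np.linspace(-np.pi, np.pi, 61)      # minimum |E| over a k-grid
gap = min(np.abs(np.linalg.eigvalsh(H(a, b, 0.3))).min()
          for a in ks for b in ks)       # nonzero away from t = 1/2
\end{verbatim}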
We will now proceed to show that the critical point separates the trivial and obstructed atomic limit phases of our model. First, we will compute the band representations carried by the occupied (negative energy) states in both gapped phases, and show that there is a transition between a phase with Wannier centers at the $1a$ position, and a phase with Wannier centers at the $3c$ position. Furthermore, we will show that the little group representations in these phases are consistent with what is found in the photonic crystal model. Then we will compute the ``real space invariants''~\cite{song2020twisted} for the trivial and OAL phases, and show that point group symmetric topological particles in the two phases are topologically distinct. Finally, by analyzing the low-energy theory of the critical point $H(\mathbf{k},s_*+\delta s)$ we will show that the interface between the trivial and topological phase must host a set of six corner states of topological origin.
\subsection{Band representation analysis}
Here we will establish that the Hamiltonians $H(\mathbf{k},s<s_*)$ and $H(\mathbf{k},s>s_*)$ describe topologically distinct atomic limits. To do so, let us note that, since $H(\mathbf{k},s)$ is gapped for all $s\neq s_*$, we can always adiabatically deform the Hamiltonian either to $s=0$ or $s=1$. It is thus sufficient to determine the topology of the bands when $s=0,1$.
Let us focus first on $s=0$, where we have
\begin{equation}
H(\mathbf{k},0)=M.
\end{equation}
We can easily diagonalize the $\mathbf{k}$-independent matrix to find that the three occupied $(E<0)$ states have eigenvectors
\begin{subequations}
\begin{align}
\mathbf{v}_1&=\frac{1}{\sqrt{6}}(1,-1,1,-1,1,-1)^T, \\
\mathbf{v}_2&=\frac{1}{\sqrt{12}}(2,-1,-1,2,-1,-1)^T, \\
\mathbf{v}_3&=\frac{1}{2}(0,1,-1,0,1,-1)^T,
\end{align}
\end{subequations}
with corresponding energies
\begin{equation}
E_1=-2,\;\; E_2=E_3=-1
\end{equation}
Since these eigenvectors give us $\mathbf{k}$-independent linear combinations of our basis orbitals, they can be Fourier transformed to yield exponentially localized (in fact, delta-function localized) Wannier functions at the $1a$ position of the unit cell. To determine the band representation under which these Wannier functions transform, we project the symmetry operations into the space of occupied states to obtain the sewing matrices
\begin{subequations}
\begin{align}
\mathcal{B}^{(0)}(C_6)_{ij}\equiv \langle \mathbf{v}_i | \rho(C_6) | \mathbf{v}_j\rangle &= \begin{pmatrix} -1 & 0 & 0 \\ 0 & -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{1}{2} \end{pmatrix}, \\
\mathcal{B}^{(0)}(m_x)_{ij}\equiv \langle \mathbf{v}_i | \rho(m_x) | \mathbf{v}_j\rangle &= \begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & -1
\end{pmatrix}.
\end{align}
\end{subequations}
Comparing with the character tables on the Bilbao Crystallographic server, we see that this is the $B_2\oplus E_2$ representation of the site-symmetry group $G_{1a}\approx \mathrm{p}6\mathrm{mm}$ of the $1a$ Wyckoff position. Hence, when $s=0$ the occupied bands transform in the $(B_2\oplus E_2)_{1a}\uparrow G$ band representation\cite{NaturePaper,EBRTheoryPaper,GroupTheoryPaper}.
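These sewing matrices are easy to verify numerically within the same sketch (again ours); note that, because $E_2=E_3$ are degenerate, a numerical eigensolver returns the corresponding eigenvectors only up to an arbitrary rotation within the degenerate subspace:
\begin{verbatim}
import numpy as np

rho_C6 = np.zeros((6, 6))
rho_C6[np.arange(6), (np.arange(6) + 1) % 6] = 1.0  # cyclic shift
rho_mx = np.zeros((6, 6))
rho_mx[np.arange(6), (-np.arange(6)) % 6] = 1.0     # mirror m_x

M = np.zeros((6, 6))
for i in range(6):
    M[i, (i + 1) % 6] = M[(i + 1) % 6, i] = 1.0

w, v = np.linalg.eigh(M)
occ = v[:, w < 0]               # the three occupied eigenvectors
B_C6 = occ.T @ rho_C6 @ occ     # sewing matrix for C_6
B_mx = occ.T @ rho_mx @ occ     # sewing matrix for m_x
\end{verbatim}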
Next, let us analyze the case when $s=1$, where the Hamiltonian takes the form
\begin{equation}
H(\mathbf{k},1)=N(\mathbf{k})
\end{equation}
We can diagonalize $N(\mathbf{k})$ to obtain the three occupied-band eigenvectors, which now have energies $E_1=E_2=E_3=-1$,
\begin{subequations}
\begin{align}
\mathbf{w}_1&=\frac{1}{\sqrt{2}}(0,0,-e^{-ik_2},0,0,1)^T \\
\mathbf{w}_2&=\frac{1}{\sqrt{2}}(0,-e^{i(k_1+k_2)},0,0,1,0)^T \\
\mathbf{w}_3&=\frac{1}{\sqrt{2}}(-e^{ik_1},0,0,1,0,0)^T
\end{align}
\end{subequations}
Although these eigenvectors are $\mathbf{k}$-dependent, they are periodic and analytic, and hence can be Fourier transformed to yield compactly-supported Wannier functions. In this case, we can see from computing the position matrix elements
\begin{equation}
\langle \mathbf{w}_i | \mathbf{x} | \mathbf{w}_j \rangle = -i \langle \mathbf{w}_i | \nabla_\mathbf{k} \mathbf{w}_j \rangle
\end{equation}
that these Wannier functions will be centered at the $3c$ Wyckoff position, with reduced coordinates $(1/2,0)$, $(0, 1/2)$, $(1/2,1/2)$. To determine under which band representation these Wannier functions transform, we can again compute the sewing matrices for the symmetry operations, yielding
\begin{subequations}
\begin{align}
\mathcal{B}^{(1)}(C_6)_{ij}&\equiv \langle \mathbf{w}_i(C_6\mathbf{k}) | \rho(C_6) | \mathbf{w}_j(\mathbf{k})\rangle = \begin{pmatrix}
0 & 0 & -e^{ik_1} \\
1 & 0 & 0 \\
0 & 1 & 0
\end{pmatrix}, \\
\mathcal{B}^{(1)}(m_x)_{ij}&\equiv \langle \mathbf{w}_i(m_x\mathbf{k}) | \rho(m_x) | \mathbf{w}_j(\mathbf{k})\rangle = \begin{pmatrix}
0 & -e^{i(k_1-k_2)} & 0 \\
-e^{-ik_2} & 0 & 0 \\
0 & 0 & 1
\end{pmatrix}.
\end{align}
\end{subequations}
Specializing to the high-symmetry points, we can verify that these are the sewing matrices obtained via induction from the $B_1$ representation of the site symmetry group $G_{3c}\approx \mathrm{p}2\mathrm{mm}$ of the $3c$ Wyckoff position. Thus, when $s=1$ the occupied bands transform in the $(B_1)_{3c}\uparrow G$ band representation. We have thus verified that, as the parameter $s$ is tuned, the Hamiltonian $H(\mathbf{k},s)$ describes an obstructed atomic limit transition between the $1a$ and $3c$ Wyckoff positions.
\subsection{Real Space Invariants}
Having established the presence of a bulk OAL transition for the Hamiltonian $H(\mathbf{k},s)$, we know that bulk systems with $s<s_*$ are topologically distinct from bulk systems with $s>s_*$. We would like to extend this analysis, however, to the case of finite-sized topological particles, and hence establish that the topological particles for the two different bulk phases are topologically distinguishable. To do this, we will employ the method of Real Space Invariants (RSIs) presented in Ref.~\cite{song2020twisted}. In that work, it was shown that there exist point group invariants which distinguish the classes of occupied states of a topological particle that can be deformed into each other through point group symmetric deformations of the Hamiltonian, as well as point-group symmetric addition of states from outside the topological particle. While these invariants are most generally formulated in terms of real-space point group irreps, in many cases they can be calculated from the momentum-space irreps of a band structure. In p6mm, there are seven invariants which can be computed in terms of the multiplicities of momentum-space irreps: they are
\begin{subequations}
\begin{align}
\delta_{1,1a}&=n(M_3)-n(K_1)-n(\Gamma_2) \\
\delta_{2,1a}&=n(\Gamma_3)+n(\Gamma_5)-n(\Gamma_2)-n(K_1) \\
\delta_{3,1a}&=n(\Gamma_3)-2n(\Gamma_2)-n(\Gamma_6)-n(K_1)+n(K_2)+n(M_3) \\
\delta_{1,2b}&=n(K_1)-n(\Gamma_1)-n(\Gamma_3) \\
\delta_{1,3c}&=n(\Gamma_3)+n(\Gamma_6)-n(M_3) \\
\delta_{1,6d}&=n(K_2)-n(K_1) \\
\delta_{1,6e}&=2n(\Gamma_2)-2n(\Gamma_1)+n(K_1)-n(K_2)
\end{align}
\end{subequations}
where $n(\rho)$ is the multiplicity of the little group representation $\rho$ in the set of occupied bands. Note that each RSI is labelled by a Wyckoff position, indicating that it is an invariant computed from the set of orbitals localized to that Wyckoff position in the topological particle.
For the case at hand, as $s$ is tuned from $0$ to $1$, our Hamiltonian undergoes a band inversion at the $\Gamma$ point. From the sewing matrices computed above, we find that as we tune from the trivial to the OAL phase, $n(\Gamma_5)$ decreases by $1$, while $n(\Gamma_6)$ increases by $1$. This implies that the real space invariants $\delta_{2,1a}, \delta_{3,1a},$ and $(-)\delta_{1,3c}$ each differ by $(-)1$ between the trivial and the OAL phases. Hence, even in a finite-sized topological particle, the trivial and OAL phases can be distinguished by their transformation properties under the point group 6mm. We now analyze the consequences of this distinguishability in terms of corner states.
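Since each RSI above is a linear combination of irrep multiplicities, evaluating them is immediate once the multiplicities are known; the following is a minimal helper of our own (irrep labels abbreviated, input values purely illustrative):
\begin{verbatim}
def real_space_invariants(n):
    # n: dict of occupied-band irrep multiplicities, with keys
    # 'G1'..'G6' (Gamma), 'K1', 'K2', 'M3'.
    return {
        'd1_1a': n['M3'] - n['K1'] - n['G2'],
        'd2_1a': n['G3'] + n['G5'] - n['G2'] - n['K1'],
        'd3_1a': (n['G3'] - 2 * n['G2'] - n['G6']
                  - n['K1'] + n['K2'] + n['M3']),
        'd1_2b': n['K1'] - n['G1'] - n['G3'],
        'd1_3c': n['G3'] + n['G6'] - n['M3'],
        'd1_6d': n['K2'] - n['K1'],
        'd1_6e': 2 * n['G2'] - 2 * n['G1'] + n['K1'] - n['K2'],
    }
\end{verbatim}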
\subsection{Corner states}
The key consequence of the topological distinction between the trivial and obstructed topological particles is the presence of protected corner states at a point-group symmetric boundary between the two phases. To see that the corner states are an inevitable consequence of the bulk topology, we will here adapt the method of Ref.~\cite{wieder2020strong} to analyze the low-energy theory of the topological particle system. To begin, we replace the matrix $M$ with the spectrally flattened
\begin{equation}
\tilde{M}=\mathbb{I}-2\sum_{i=1}^{3}\mathbf{v}_i\otimes\mathbf{v}_i,
\end{equation}
which shares the same negative energy eigenspace as the matrix $M$, but moves all occupied states to the same eigenvalue $E_1=E_2=E_3=-1$. This does not alter the topological properties of the Hamiltonian, but simplifies the analysis below. We can then focus on the deformed Hamiltonian
\begin{equation}
\tilde{H}(\mathbf{k},s)= (1-t(s))\tilde{M}+t(s)N(\mathbf{k})
\end{equation}
Our strategy here is to expand the Hamiltonian about the band-inversion point $(\Gamma,s_*)$, Fourier transform to position space, and allow the mass parameter $s$ to be spatially varying with $s(R)=s_*$, where $R\gg 1$. We will then perform a Jackiw-Rebbi analysis of the boundary states near $\mathbf{r}\approx R$, and analyze their stability to perturbations of the bulk Hamiltonian. Following this procedure, we will establish the existence of corner modes and a filling anomaly for our OAL topological particles even in the absence of chiral symmetry.
Let us project the Hamiltonian near $\mathbf{k}=0$, $s=s_*$ into the low-energy subspace of the topological band inversion. We find that at the gap-closing point, there is a fourfold band degeneracy at the $\Gamma$ point. This fourfold degeneracy is the critical point between the trivial and OAL phases. Diagonalizing the critical Hamiltonian $\tilde{H}(0,s_*)=\tilde{M}+N(0)$ at the $\Gamma$ point, we find that the space of states at the critical point is spanned by the four zero-energy eigenvectors
\begin{align}
\mathbf{u}_1&=\frac{1}{\sqrt{2}}(0,-1,0,0,0,1)^T, \\
\mathbf{u}_2&=\frac{1}{\sqrt{2}}(-1,0,0,0,1,0)^T, \\
\mathbf{u}_3&=\frac{1}{\sqrt{6}}(0,-1,0,2,0,-1)^T, \\
\mathbf{u}_4&=\frac{1}{\sqrt{6}}(-1,0,2,0,-1,0)^T
\end{align}
After a suitable transformation to Cartesian coordinates, we can expand the Hamiltonian to first order in $\mathbf{k}$ and $m=t(s)-t(s_*)$ to find the Dirac-like Hamiltonian
\begin{equation}
\tilde{H}(\mathbf{k},\delta s)\approx \frac{1}{4} (k_x\Gamma_x - k_y\Gamma_y -8m\Gamma_z) \label{eq:criticalham}
\end{equation}
where we have introduced anticommuting $4\times 4$ gamma matrices
\begin{subequations}
\begin{align}
\Gamma_x&=\frac{1}{2}(\tau_z-\sqrt{3}\tau_x)\sigma_y=\sigma_y\tau_z', \\
\Gamma_y&=\frac{1}{2}(\tau_x+\sqrt{3}\tau_z)\sigma_y=\sigma_y\tau_x',\\
\Gamma_z&=\frac{1}{2}(\sigma_x\tau_0+\sqrt{3}\sigma_y\tau_y) \\
\Gamma_4&=\frac{1}{2}(\sigma_y\tau_y - \sqrt{3}\sigma_x\tau_0)
\end{align}
\end{subequations}
where the $\bm{\tau}$ Pauli matrices act in the block subspace of $\{(\mathbf{u}_1,\mathbf{u}_2),(\mathbf{u_3},\mathbf{u}_4)\}$, while the $\bm{\sigma}$ Pauli matrices act within the blocks.
We will now let $m\rightarrow m(\mathbf{r})$ depend on position. To be concrete, we assume that $m(\mathbf{r}\rightarrow 0) = -t_0, m(\mathbf{r}\rightarrow\infty) = t_0, m(\mathbf{r}=\mathbf{R})=0$, and we furthermore assume that $m(\mathbf{r})=m(r)$ is circularly symmetric. We will look for zero-energy states localized near the domain wall $r=R$ by solving the eigenvalue equation~\cite{jackiw1976solitons}:
\begin{equation}
(-i\partial_x\Gamma_x + i \partial_y\Gamma_y - 2m(r)\Gamma_z) f(r)|\phi\rangle = Ef(r)|\phi\rangle
\end{equation}
Re-expressing this in polar coordinates, we have
\begin{align}
\left[-2m(r)\Gamma_z -i\sigma_y\tau_1(\theta)\partial_r +i\sigma_y\frac{1}{r}\tau_2(\theta)\partial_\theta\right]f(r)|\phi\rangle = Ef(r)|\phi\rangle
\end{align}
where we have introduced
\begin{subequations}
\begin{align}
\tau_1(\theta)&=\tau_z'\cos\theta -\tau_x'\sin\theta, \\
\tau_2(\theta)&=\tau_z'\sin\theta + \tau_x'\cos\theta
\end{align}
\end{subequations}
We would like to look for solutions to this equation near $r=R$, where the mass changes sign. For $R$ sufficiently large, we can then treat the angular dispersion term $1/r\partial_\theta\approx 1/R\partial_\theta$ as a small perturbation. We will then find the spectrum of edge states by first solving
\begin{equation}
\left[-2m(r)\Gamma_z -i\sigma_y\tau_1(\theta)\partial_r \right]f(r)|\phi\rangle = 0,\label{eq:radial}
\end{equation}
from which we will derive a low-energy edge Hamiltonian by projecting the angular velocity into this eigenbasis. Equation~\eqref{eq:radial} is solved by functions of the form
\begin{align}
f(r)&\propto e^{-\int_R^r 2m(r')\, dr'}, &
-i\Gamma_z\sigma_y\tau_1(\theta)|\phi_i\rangle &= -|\phi_i\rangle.
\end{align}
Indeed, substituting $\partial_r f = -2m(r) f$ into Eq.~\eqref{eq:radial} reduces it to the above chirality condition on the spinors $|\phi_i\rangle$.
We can write $|\phi_1\rangle, |\phi_2\rangle$ explicitly as
\begin{align}
|\phi_1\rangle&=\frac{e^{i\theta/2}}{\sqrt{2}}\biggl(i\sin(\frac{\pi}{6}-\frac{\theta}{2}), \cos(\frac{\pi}{6}+\frac{\theta}{2}),-i\cos(\frac{\pi}{6}-\frac{\theta}{2}),-\sin(\frac{\pi}{6}+\frac{\theta}{2})\biggr)^T \\
|\phi_2\rangle&=\frac{e^{-i\theta/2}}{\sqrt{2}}\biggl(-i\sin(\frac{\pi}{6}-\frac{\theta}{2}), \cos(\frac{\pi}{6}+\frac{\theta}{2}),i\cos(\frac{\pi}{6}-\frac{\theta}{2}),-\sin(\frac{\pi}{6}+\frac{\theta}{2})\biggr)^T.
\end{align}
We have chosen this basis because it yields particularly simple projections of the symmetry operations:
\begin{align}
\langle\phi_i(\theta) | TR |\phi_j(\theta)\rangle &= s_x \\
\langle\phi_i(\theta+\pi/3) | C_6 |\phi_j(\theta)\rangle &= \exp(i\pi s_z/3) \\
\langle\phi_i(-\theta) | m_x |\phi_j(\theta)\rangle &= -s_x,
\end{align}
where we have introduced Pauli matrices $s_i$ acting in the space of $|\phi_i\rangle$. Using this basis, we can project the angular dispersion into the space of low-lying edge states to find the effective Hamiltonian
\begin{equation}
\frac{1}{R}\langle\phi_i|i\sigma_y\tau_2(\theta)\partial_\theta|\phi_j\rangle=\frac{1}{R}(is_z\partial_\theta-\frac{1}{2}s_0),\label{eq:edgeham}
\end{equation}
which is the Hamiltonian for a pair of counter-propagating edge excitations. The term proportional to the identity accounts for the fact that our topological particle geometry has a constant-curvature edge\cite{wieder2020strong}; we will neglect it in the following as it does not contribute to our topological analysis.
At first glance, Eq.~\eqref{eq:edgeham} resembles the edge theory for the helical states of a two-dimensional topological insulator. In fact, the low-energy critical point Eq.~\eqref{eq:criticalham} coincides with the critical theory of a 2D TI. This observation led Wu and Hu to predict that topological particles such as ours should have a $\mathbb{Z}_2$ invariant with gapless counterpropagating edge states~\cite{Wu2015}. However, there is a fundamental distinction between our model and a two-dimensional TI due to the symmetries we require. To analyze the edge of our topological particle system, we should include higher-order terms in the bulk that preserve the 6mm point group symmetry, and ask what effect they have on the edge dispersion. Here, we will focus only on terms that cannot close a bulk gap, and that simultaneously gap the edge theory (\ref{eq:edgeham}). This means we look for potentials $V(\theta)$ that anticommute with both the bulk mass $m\Gamma_z$ and the edge kinetic term $\sigma_y\tau_2(\theta)$. However, we also require that $V(\theta)$ commute with $\Gamma_z\sigma_y\tau_1(\theta)$, in order that $\langle \phi_i | V(\theta) | \phi_j\rangle\neq 0$. We find that this restricts the form of $V(\theta)$ to
\begin{equation}
V(\theta)=m_4(\theta)\Gamma_4 + m_5(\theta)\Gamma_5,
\end{equation}
where we have introduced $\Gamma_5=i\Gamma_x\Gamma_y\Gamma_z\Gamma_4$. Crucially, both $\Gamma_4$ and $\Gamma_5$ anticommute with the sewing matrices for $C_6$ and $m_x$, which we find to be:
\begin{subequations}
\begin{align}
\langle \mathbf{u}_i | C_6 | \mathbf{u}_j\rangle &= \frac{1}{4}\left(\tau_0(\sigma_x-3i\sigma_y) + \sqrt{3}\tau_y(\sigma_y-i\sigma_x)\right),\\
\langle \mathbf{u}_i | m_x | \mathbf{u}_j\rangle &= -\frac{1}{2}(\sigma_0\tau_z' - \sqrt{3}\sigma_z\tau_x').
\end{align}
\end{subequations}
Accounting for the action of the symmetries on the angular coordinate $\theta$, we can thus write a Fourier expansion
\begin{equation}
V(\theta)=\sum_n m_{4n}\sin((3+6n)\theta)\Gamma_4 + m_{5n}\cos((3+6n)\theta)\Gamma_5,
\end{equation}
where $n$ indexes the different Fourier harmonics. Projecting these onto the edge, we find that the edge Hamiltonian becomes
\begin{align}
\nonumber H_\mathrm{edge} &= \frac{1}{R}(is_z\partial_\theta - 1/2s_0) + \sum_n \left[m_{4n}\sin((3+6n)\theta)\begin{pmatrix} 0 & ie^{-i\theta} \\ -ie^{i\theta} & 0 \end{pmatrix} + m_{5n}\cos((3+6n)\theta)\begin{pmatrix} 0 & e^{-i\theta} \\ e^{i\theta} & 0 \end{pmatrix}\right].
\end{align}
Let us focus on the case when only the $n=0$ masses are nonzero. To analyze this, we will without loss of generality take $m_{40}\neq 0$, $m_{50}=0$ to start, and then we will perturbatively reintroduce $m_{50}$: the mass term $m_{40}\sin(3\theta)$ vanishes at the special values
\begin{equation}
\theta_m = \frac{m\pi}{3}.
\end{equation}
Near each zero we have corner states which satisfy
\begin{equation}
\frac{1}{R}\left[\partial_\theta + 3im_{40}(-1)^ms_z(\theta-\theta_m)\begin{pmatrix} 0 & ie^{-i\theta_m} \\ -ie^{i\theta_m} & 0 \end{pmatrix}\right]|\Theta_m\rangle=0,
\end{equation}
and so repeating our Jackiw-Rebbi analysis we find a zero-energy corner state satisfying
\begin{align}
im_{40}(-1)^ms_z(\theta-\theta_m)\left(\begin{array}{cc} 0 & ie^{-i\theta_m} \\ -ie^{i\theta_m} & 0 \end{array}\right)|\Theta_m\rangle = (-1)^{m+1}|\Theta_m\rangle,\nonumber
\end{align}
yielding a total of six zero-energy corner states. We thus see that symmetry-allowed mass terms gap the counterpropagating edge states of Ref.~[\onlinecite{Wu2015}], yielding corner states consistent with our MPB and coupled dipole simulations.
To complete the analysis, we next perturbatively restore $m_{50}$. Projecting into the space of corner modes for each $m$, we find
\begin{equation}
m_{50}\cos(3\theta_m)\langle\Theta_m | \left(\begin{array}{cc} 0 & e^{-i\theta} \\ e^{i\theta} & 0 \end{array}\right) | \Theta_m\rangle = +m_{50}.
\end{equation}
This means that although $m_{50}$ breaks chiral symmetry and shifts the corner modes away from zero energy, it does not break the degeneracy of the corner modes. This leads to the so-called ``filling anomaly'': when both $m_{4}$ and $m_5$ are nonzero, the difference between the number of states in the positive and negative energy subspaces of the model is six.
\begin{figure*}[!t]
\subfloat[]{\includegraphics[width=0.4\textwidth]{critical-bands.pdf}
}
\subfloat[]{\includegraphics[width=0.4\textwidth]{nano-corners-c2_cropped.pdf}
}
\caption{(a) Band structure for the nanophotonic tight-binding model at the transition point between trivial and OAL phases. (b) Corner states for a $C_2$ symmetric topological particle in the OAL phase. The blue (red) circles represent the probability densities for the first (second) corner state.}\label{fig:tbcalc}
\end{figure*}
Note that we could have performed our same analysis with $m_5$ initially nonzero instead, which would result in corner modes localized at $\theta'_m = (2m+1)\pi/3$ (the other conjugacy class of mirror lines in the point group). Additionally, we could have considered higher Fourier harmonics in the mass term, which would yield additional sets of $12$ corner modes at generic points along the boundary, which gap non-anomalously. Finally, our analysis holds as well for a $C_2$-symmetric topological particle, in which case we can add mass terms of the form $\Gamma_5\cos 2\theta$ and $\Gamma_4\sin 2\theta$, which gap all but one pair of corner modes, yielding a filling anomaly of 2. We can see an example of this in the topological particle pictures in Fig.~\ref{fig:tbcalc}.
To conclude, let us comment on the applicability of our tight-binding calculation to the nanophotonic calculation. Because the full interaction matrix contains power-law decaying terms in position space, we cannot guarantee a priori that the Bloch Hamiltonian will permit a series expansion near the $\Gamma$ point in the Brillouin zone. However, for our model we find that the cusp singularities arising in the band structure due to the long-range hopping appear only in the highest positive and lowest negative energy bands in the band structure (one of which maps to the cusp singularity at $\omega=0$ in the full photonic model). Crucially, however, we have seen that it is only the bands close to the mid-gap band inversion that contribute to the formation of corner states in this model. Thus, we expect that our analysis here is robust to the inclusion of long range hoppings. It is an interesting open problem for future work to consistently incorporate band structure singularities due to long-range hoppings into the general theory of topological photonic systems.
\section{Exponential cutoff}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{cutoff.png}
\caption{Example exponential cut off function, $f_{c.o.}$, for a nearest neighbour distance $d^0 = 40$~nm, for varying $\gamma$, from $0.1$ ($\log\gamma = -1$) (red) to $5.01$ ($\log\gamma = 0.7$) (blue). A cut off factor with $\gamma = 0.1$ for interactions $\propto -1/d^3$ is approximately nearest neighbour.}
\label{fig:exp_cutoff}
\end{figure}
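For reference, the cut-off factor itself is a one-line function; a sketch of our own with the parameters of Fig.~\ref{fig:exp_cutoff}:
\begin{verbatim}
import numpy as np

def f_co(d, d0=40.0, gamma=0.1):
    # Exponential cut-off applied to the dipole-dipole interaction:
    # nearest neighbours (d = d0) are unaffected, longer bonds are
    # suppressed on the scale gamma*d0 (distances in nm).
    return np.exp(-(d - d0) / (d0 * gamma))

# f_co(80.0) ~ 4.5e-5: next-nearest bonds effectively removed;
# f_co(80.0, gamma=5.0) ~ 0.82: the long-range tail is retained.
\end{verbatim}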
\section{Effect of disorder}
\begin{table*}[ht]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& $C_6$ breaking defects & Effect on topological corner states & Emergence of new states \\ \hline \hline
\multirow{5}{4em}{Corner defects}
& Remove 1 particle, “trivial” sublattice & 6 degenerate corner modes unaffected & one new localized state\\ \cline{2-4}
& Remove 1 particle, “topological” sublattice & 5 degenerate corner modes w/o C6 symmetry & no \\ \cline{2-4}
& Remove trimer at corner & 5 degenerate corner modes w/o C6 symmetry & no\\ \cline{2-4}
& Expanded cell at corner & 5 degenerate corner modes w/o C6 symmetry & yes, on both sublattices \\ \cline{2-4}
& Contracted cell at corner & 5 degenerate corner modes w/o C6 symmetry & yes, on both sublattices \\ \hline\hline
\multirow{4}{4em}{Edge defects}
& One expanded cell at edge, & 4+2 degenerate corner modes, & yes, on both sublattices \\
& preserving 1 mirror symmetry & w/ mirror symmetry & \\ \cline{2-4}
& One expanded cell at edge, & 4 degenerate corner modes w/ mirror symmetry, & yes, on both sublattices \\
& breaking all mirror symmetries & + 2 non-degenerate w/o mirror symmetry & \\ \hline\hline
\multirow{2}{4em}{Bulk defects}
& Random position disorder & 6 non-degenerate states, & no \\
& & on “topological” sublattice & \\ \hline
\end{tabular}
\caption{Effect of different $C_6$ breaking defects on the topological corner states for
particles with long range interactions. For a nearest neighbour model the corner states survive all perturbations except for the second corner defect type. }
\label{tab:disorder_summary}
\end{table*}
Table~\ref{tab:disorder_summary} summarizes the effect of different kinds of $C_6$ symmetry breaking defects on the topological corner modes of Types A and B particles: defects at corners, edges and random bulk disorder are considered. From the main text, the 6 degenerate corner modes survive when one particle belonging to the sublattice immediately at the corner is removed, even if $C_6$ symmetry is broken. All the other defects have an effect to some extent, as shown in the table. In contrast, in a nearest neighbour model the topological corner modes are robust against all the perturbations considered in the table (except if one of the particles at the corner where the mode resides is removed). The robustness of the topological corner modes thus emerges both from the spatial symmetries and the range of the interactions.
These results hold true regardless of the edge termination, provided the particle has the same lattice symmetries. It should be noted that corner modes in
particles with complete unit cells at the interface are more strongly affected by edge and bulk disorder, compared to the broken unit cell interface termination presented in the main text.
This is due to the longer localization length of these modes.
\clearpage
\section{Introduction}
\paragraph{Tilings and local rules.}
Assume that a finite family of polygons $\{P_1,\dots,P_k\}$, called \emph{proto-tiles}, is given.
Isometric images of those polygons are called \emph{tiles}.
A \emph{tiling} $T$ is a family of pair-wise non-overlapping tiles, which
means that the interiors of the tiles are disjoint.
\emph{A patch} is a finite tiling.
A patch $P$ is called a
\emph{fragment} of a tiling $T$, if $P$ is a subset of $T$.
If the diameter of a patch (the maximal distance between points of its tiles) is at most
$d$, then we call that patch a \emph{$d$-patch}.
In a similar way we define \emph{$d$-fragments}.
Local matching rules govern how tiles may be attached to each other in a tiling.
More specifically, a local rule is identified by a positive real $d$ and
by a set of $d$-patches, whose members are called \emph{illegal patches}.
A tiling \emph{satisfies} the local rule,
if it does not include illegal patches.
For instance, all polygons $P_1,\dots,P_k$ may be unit squares with colored sides, and the local rule may require that
tiles are attached side-to-side and the colors on the adjacent sides match (the so-called \emph{Wang tiles}).
\paragraph{Aperiodic tile sets.}
The pair (a set of proto-tiles, a local rule) is called \emph{aperiodic} if
all tilings of the plane satisfying the local rule are non-periodic and
such tilings exist.
Aperiodic sets of Wang tiles were used to prove
the undecidability of Berger's \emph{Domino problem}: decide, given a Wang tile set,
whether that set tiles the entire plane~\cite{rob}.
\paragraph{Substitutions.}
The usual scheme to prove non-periodicity is based on the notion of a \emph{substitution}.
A substitution $\sigma$ is defined by a similarity ratio $\psi<1$
and a way to cut every polygon $P_i$
into a finite number of parts where each part is congruent to some
polygon from the family $\{\psi P_1,\dots,\psi P_k\}$.
The substitution
acts on tilings as follows.
Given a tiling, we cut every tile of the tiling as prescribed by the substitution.
We obtain a tiling of the same set by tiles of smaller size.
Then we apply to the resulting tiling some fixed pre-chosen homothety $H$ with the
coefficient $1/\psi$ to obtain a tiling with initial tiles $P_1,\dots,P_k$.
We call the resulting tiling
\emph{the decomposition} of the initial tiling and denote
it by $\sigma T$.
The inverse operation is called \emph{composition}.
That is, a tiling $T$ is a composition of a tiling $T'$
if $T'=\sigma T$.
\emph{A supertile} is a tiling, which can be obtained from
an initial tile $P_i$ by applying decomposition several times.
A supertile of the form $\sigma^n P_i$ is called a \emph{supertile of order $n$}.
Thus each supertile of order $n$ consists of several supertiles of order $n-1$.
Assume that the substitution and the local rule have the following properties:
\begin{enumerate}\label{properties}
\item[P1.] All supertiles satisfy the local rule.
\item[P2.] For every tiling $T$ of the plane satisfying the local rule
there is a tiling $T'$ satisfying the local rule with $\sigma T'=T$, and such a tiling $T'$ is unique
(the ``unique composition property''). This tiling $T'$ is then denoted by $\sigma^{-1}T$.
\end{enumerate}
In this case it is not hard to show that
all tilings of the plane satisfying the local rule are non-periodic and
that such tilings exist. This can be shown as follows.
\emph{Existence.}
By P1, each supertile satisfies the local rule.
Obviously, the linear size of a supertile $\sigma^n P_i$
is $(1/\psi)^n$ times larger than that of $P_i$.
Hence there are tilings of arbitrarily large parts of
the plane satisfying the local rule. By compactness arguments this implies
that there are such tilings of the entire plane.
\emph{Non-periodicity.}
Assume that a tiling $T$
satisfying the local rule has a non-zero period $\mathbf a$, that is,
$T+\mathbf a=T$.
Then the vector $\psi\mathbf a$ is a period of $\sigma^{-1}T$. Indeed,
let $H$ denote the fixed homothety used in the definition of substitution.
The decomposition of the tiling
$\sigma^{-1}T +\psi\mathbf a$ is equal to the decomposition
of $\sigma^{-1}T$ shifted by the vector $H \psi\mathbf a=\mathbf a$, that is, to $T+\mathbf a$.
By our assumption, we have $T+\mathbf a=T$.
Thus both $\sigma^{-1}T +\psi\mathbf a$ and $\sigma^{-1}T$
are compositions of $T$ and they both satisfy the local rule. By P2
we then have $\sigma^{-1}T +\psi\mathbf a= \sigma^{-1}T$.
Repeating the argument, we can conclude that
the vector $\psi^2\mathbf a$ is a period of the tiling
$\sigma^{-2}T $.
In this way we can construct tilings
whose periods are much smaller than
the linear sizes of the tiles,
which is impossible.
This scheme was used to prove aperiodicity of many tile sets. Perhaps
the most famous example is the Penrose--Robinson P4 tiling,
where the set of proto-tiles consists
of two isosceles triangles (see \cite{penrose,GS}).
Other famous examples are Ammann tilings (two L-shaped hexagonal tiles) and
Ammann--Beenker tilings (a rhombus and a square).
For the definition of these tilings and for more examples we refer to the textbooks \cite{BG,GS}
and to the Tilings Encyclopedia~\cite{te}.
A similar approach was used to show non-periodicity of the famous Robinson tilings~\cite{rob} with Wang tiles.
Robinson's construction does not fit exactly the described framework, as in that construction supertiles of order $n$ are
built from 4 supertiles of order $n-1$ and several proto-tiles. However,
for the version of Robinson tilings from the paper~\cite{dls},
the proof of non-periodicity follows exactly the above pattern.
In the tiling of~\cite{dls}, there are $2^{14}$ proto-tiles, which are unit squares, and every tile
is cut into four smaller squares.
\paragraph{Substitution tilings.}
A tiling $T$ is called a \emph{substitution tiling}\footnote{We use here the terminology of~\cite{G}.
Another name for substitution tilings, \emph{self-affine tilings},
was used in~\cite{solomyak}.} associated with substitution $\sigma$, if
for each finite subset $P\subset T$ there is a
supertile including $P$. The property P1 implies that
every substitution tiling satisfies the local rule.
In some cases the reverse is also true. We will call this property
P3:
\begin{itemize}
\item[P3.] Every tiling of the plane satisfying local rule is a substitution tiling associated with the substitution $\sigma$.
\end{itemize}
For instance, it happens that the family of Penrose--Robinson tilings coincides with
the family of substitution tilings associated with the respective substitution.
The same happens for Ammann A2 tilings, see~\cite{dsv}.
\paragraph{From a substitution to local rule.}
Assume now that we are given only a substitution $\sigma$ acting on a set $\{ P_1,\dots, P_k\}$
of polygons and no local rule. Then
it is natural to ask whether there is a local rule such that the properties P1 and P2
hold. In many cases there is no such local rule. In such cases
we would like to find a decoration of the family $\{ P_1,\dots, P_k\}$
and a decoration of the substitution such that
a local rule exists for the decorated family of polygons and substitution.
This means the following:
\begin{itemize}
\item each proto-tile is replaced by a finite number of proto-tiles of the same shape (we think of
them as having different colors);
\item a substitution $\tilde \sigma$ for decorated proto-tiles is defined such that
each decorated proto-tile is cut exactly in the same way as prescribed by the initial
substitution $\sigma$
(see an example on Fig.~\ref{shen7});
\item a local rule is defined for decorated tiles.
\end{itemize}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{shen-7.pdf}
\end{center}
\caption{A substitution (a) and its decoration (b).}\label{shen7}
\end{figure}
We achieve our goal if the resulting substitution $\tilde \sigma$ and the local
rule have the above properties P1 and P2.
We could also desire that property P3 hold.
To achieve P1 and P2, we can try to use the general Goodman--Strauss theorem~\cite{G}, which claims that for every ``good''
substitution $\sigma$
there is a local rule for decorated tiles with properties P1 and P2.\footnote{Goodman-Strauss formulates property P2, as
``local rule enforce the hierarchical structure associated with $\sigma$'', which means
that every tiling satisfying local rule can be uniquely partitioned into supertiles of order $n$ for each $n$.}
However, the resulting tile sets are generally gigantic and not really explicit.
Besides, the Goodman--Strauss theorem does not achieve property P3.
Both shortcomings of the Goodman--Strauss theorem are inherited by its version due to Fernique and Ollinger~\cite{fo}.
Assume that there is a decorated substitution $\tilde \sigma$ and a local
rule for decorated tilings with properties P1, P2 and P3.
Let $L$ denote the family of substitution tilings of the plane associated with
the initial substitution $\sigma$, and $\tilde L$ the family of tilings of the plane with decorated tiles that satisfy the local rule.
Note that property P3 (together with P1) for $\tilde\sigma$ implies that $L$ coincides with the family of tilings obtained
from tilings $\tilde T\in\tilde L$ by removing colors.
In one direction this is trivial: assume that $T$ is obtained from a tiling $\tilde T\in \tilde L$ by removing colors and
assume that $P$ is a finite subset of $T$. Then $P$ is obtained from a fragment
$\tilde P\subset \tilde T$ by removing colors. By P3 the fragment $\tilde P$ occurs in a colored supertile $\tilde\sigma^n\tilde P_i$. Thus
$P$ is included in the supertile $\sigma^n P_i$.
In the reverse direction: let $T\in L$.
Then every finite $P\subset T$ occurs in a supertile $S$, and by property
P1 it has a \emph{correct} decoration, which means that the resulting decorated tiling $\tilde P$ is in $\tilde L$.
Those decorations for different fragments $P$ may be inconsistent.
Using compactness arguments, we can show that it is possible to choose such decorations consistently.\footnote{Here are more details. Consider the tree whose vertices are correct decorations of fragments
of the form $\{F_1,F_2,\dots, F_i\}$. Edges connect a decoration of a fragment
$\{F_1,F_2,\dots, F_i\}$ to a decoration of the
fragment $\{F_1,F_2,\dots, F_i, F_{i+1}\}$ whenever the decorations are consistent. This
tree has arbitrary long branches. Any vertex of the tree has finitely many neighbors.
By K\"onig lemma~\cite{konig}, the tree has an infinite branch, which provides
a correct decoration of the entire tiling.}
\paragraph{This paper.}
In this paper, we consider tilings with right ``golden'' triangles and the substitution introduced by Danzer and van Ophuysen~\cite{do}.
Danzer and van Ophuysen showed that
there is no local rule with properties P1 and P2 for that substitution and
defined a decoration of the substitution and a local rule for decorated tilings with properties P1, P2 and P3.
However, their decoration of the substitution is not intuitive
and the local rule is complicated. The local rule stipulates that every crown (= vertex star) in the tiling occurs in a supertile.
There are 65 such crowns (up to isometry) and the paper does not even provide their list.
The goal of the present paper is to provide a more intuitive
decoration of the substitution and a simpler
local rule for decorated tilings, also having properties P1, P2 and P3.
\rom{Although there exist some general techniques to show that tiling spaces defined by substitutions are sofic, they are not always easy to implement: on the one hand they are based on some non-trivial assumptions about the substitution itself (very roughly: "the image of a tile is big enough to be able to pass the information without any bottleneck"), on the other hand the resulting sets of tiles are generally gigantic and not really explicit.
the condition in Goodman-Strauss are more than only side-to-side (complicated notion of "good substitution"). Also, you should cite the other construction of Ollinger "Combinatorial substitutions and sofic tilings" and explain why it does not work (or at last why it does not easily work). I think it is because of the shape of triangular substitution: it is hard to make the required information to reach the corners of the triangle (tiles at corners make a sort of bottleneck).}
\section{Tilings with golden right triangles}
In this paper, we consider a specific substitution and the associated family of substitution tilings with right ``golden'' triangles, introduced by Danzer and van Ophuysen~\cite{do} and considered later in~\cite{ver}.
\paragraph{Golden right triangles and tilings.}
The altitude of any right triangle cuts it into two similar triangles.
Those triangles are denoted by $S,L$ on Fig.~\ref{pic2}(a).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pic-2.pdf}
\end{center}
\caption{Golden right triangles.}\label{pic2}
\end{figure}
If the angles of the original right triangle are chosen appropriately,
then the ratio of the size of the initial
triangle to the size of $L$ equals
the ratio of the size of $L$ to the size of $S$.
More specifically, the ratio of the legs
of the initial triangle should be equal to the square root of
the golden ratio $\psi=\sqrt{\frac{\sqrt5-1}2}$.
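Indeed, denote the legs of the initial triangle by $a<b$ and its hypotenuse by $c$; the triangles $L$ and $S$ are similar to the initial one with ratios $b/c$ and $a/c$, so the condition above reads
\begin{align*}
\frac{c}{b}=\frac{b}{a}
\;\Longrightarrow\;
b^2=ac,
\qquad
c^2=a^2+b^2
\;\Longrightarrow\;
\Bigl(\frac{b}{a}\Bigr)^4=\Bigl(\frac{b}{a}\Bigr)^2+1,
\end{align*}
whence $(b/a)^2=\frac{1+\sqrt5}{2}$ and $a/b=\sqrt{\frac{\sqrt5-1}{2}}=\psi$.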
Such a triangle is shown on Fig.~\ref{pic2}(b). The lengths of the sides
of triangles $S,L$ are shown on Fig.~\ref{pic2}(c).
We will call triangles of this shape
\emph{golden right triangles}.\footnote{The name ``golden'' was used in a similar context
for isosceles triangles all of whose angles are integer multiples of $36^{\circ}$ (Robinson triangles).
To avoid confusion, we add the attribute ``right''.}
We will use triangles
$L$ and $S$ as proto-tiles and their isometric images are called \emph{tiles}.
More specifically, isometric images of $L$ are called \emph{large tiles},
and isometric images of $S$ are called \emph{small tiles}.
A \emph{tiling} $T$ is a family of pairwise non-overlapping tiles, which
means that the interiors of the tiles are disjoint.
An example of a tiling is shown in Fig.~\ref{pic5}.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pic-5.pdf}
\end{center}
\caption{A tiling, which is a union of supertiles of orders $0,1,2,3,4$.}\label{pic5}
\end{figure}
We denote by $[T]$ the
union of all tiles from $T$ and say that
$T$ \emph{tiles} $[T]$, or that
$T$ \emph{is a tiling of} $[T]$.
\paragraph{The substitution, decomposition and composition of tilings.}
We consider the following substitution:
\begin{center}
\includegraphics[scale=.7]{dan-pic-52.pdf}
\end{center}
The result of applying this substitution to a tiling is defined as follows.
Given a tiling, we cut every large tile by its altitude into two triangles, while all small tiles remain intact.
We obtain a tiling of the same set by tiles of smaller size.
Then we apply to the resulting tiling some fixed pre-chosen homothety $H$ with
coefficient $1/\psi$; that homothety will be called \emph{the reference homothety} in the sequel.
The resulting tiling is called
\emph{the decomposition} of the initial tiling.
Each large tile produces a large and a small tile in
the decomposed tiling and each small tile becomes a large tile.
The decomposition of the tiling $T$ is denoted by $\sigma T$.
The inverse operation is called \emph{composition}.
That is, we call a tiling $T$ a composition of a tiling $T'$
if $T'$ is the decomposition of $T$.
There are tilings that have no composition, for instance,
the tiling consisting of a single small tile.
On the other hand, every tiling has at most one composition.
This property of our substitution is called \emph{the unique composition property}.
The composition of a tiling $T$ (if it exists) is denoted by $\sigma^{-1}T$.
It may happen that the composition of a tiling again has a composition.
In this case the initial tiling is called \emph{doubly composable}. If
a tiling can be composed any number of times, we call it \emph{infinitely composable}.
In terms of~\cite{G}, infinitely composable tilings are those that have a ``hierarchical structure''.
\paragraph{Supertiles.}
\emph{A supertile} is a tiling that can be obtained from
a small or a large tile by applying decomposition several times.
Since every large tile is the decomposition of a small tile,
every supertile can be obtained from a small tile by
applying decomposition some number $n$ of times.
The number $n-1$ is then called the \emph{order}
of the supertile. (In particular, the small tile
is a supertile of order $-1$.)
Supertiles of order $i$ are denoted by $S_{i}$. Fig.~\ref{pic5} shows supertiles of orders $0,1,2,3,4$.
\paragraph{Substitution tilings.}
A tiling $T$ is called a \emph{substitution tiling}
if for each finite $P\subset T$ there is a
supertile $S$ including $P$. For instance, all supertiles are substitution tilings.
There exist substitution tilings of the entire plane. This can be deduced by compactness
arguments from the existence of substitution tilings of arbitrarily large parts of the
plane. However, it is easier to prove this using the following argument.
There are supertiles of orders 0 and 8, $S_0,S_{8}$, such that
$S_0\subset S_{8}$ and
$[S_0]$ is included in the interior of $ [S_{8}]$.
Indeed, in Fig.~\ref{fs} we can see a supertile $T$ of order 8.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{flc-3.pdf}
\end{center}
\caption{The green triangle $A$ is strictly inside a supertile of order 8.}\label{fs}
\end{figure}
The interior of the triangle $[T]$
includes a large tile $A$ (shown in green color).
Applying 8 decompositions to the supertiles
$\{A\}$ and $T$ we get supertiles $S_{8}=\sigma^8\{A\}$ and $S_{16}=\sigma^8T$,
of orders 8 and 16, respectively. Since $A\in T$, we have
$S_{8}=\sigma^8\{A\}\subset \sigma^8T=S_{16}$.
In this way we can construct
a tower of supertiles
$$
S_{0}\subset S_8\subset S_{16}\subset S_{24}\subset\dots
$$
where each set $[S_{8n}]$ extends the previous set $[S_{8(n-1)}]$
in all directions.
Therefore the tiling $S_0\cup S_{8}\cup S_{16}\cup \dots$
tiles the entire plane and is a substitution tiling by construction.
It is not hard to see
that every substitution tiling of the plane has a composition,
which is again a substitution tiling.
Thus every substitution tiling of the plane
is infinitely composable. In particular,
every substitution tiling of the plane contains supertiles of all orders.
(To find a
supertile of order $n$ in a substitution tiling $T$
of the plane, we can compose it $n$ times and then pick any
large tile in the resulting tiling. The $n$-fold decomposition
of that tile is a supertile of order $n$ and is included in the original
tiling $T$.) As mentioned in the Introduction,
the unique composition property implies
that any infinitely composable (and hence any substitution) tiling
of the plane is non-periodic.
In~\cite{do} and later in~\cite{ver}, it was shown that there is
no local rule such that the family of tilings of the plane
satisfying that local rule coincides with
the family of substitution tilings.
More specifically, it was proved that for any positive
$d$ there is a periodic
(and hence not substitution) tiling $T_d$ of the plane,
all of whose $d$-fragments occur in supertiles.
For any local rule consisting of $d$-patches,
either all $d$-patches from $T_d$ are declared legal,
or a $d$-patch from $T_d$ is declared illegal.
In the first case, the tiling $T_d$ satisfies the local rule.
In the second case,
some supertile has an illegal patch, and since all
substitution tilings include that supertile, all
substitution tilings do not satisfy the local rule.
\section{Tilings with decorated triangles}
However, there is such a local rule provided the tiles of substitution tilings are colored with a finite number of colors. In this section
we explain how to do that.
\subsection{The local rule of Danzer and van Ophuysen}
We color both proto-tiles with five colors $0,1,2,3,4$.
The substitution $\tilde\sigma$ acts on decorated tiles as follows:
\begin{center}
\includegraphics[scale=.7]{dan-pic-54.pdf}
\end{center}
Addition and multiplication refer to the respective operations modulo 5.
To define the local rule, we need the notion of a crown (= vertex star).
Let $T$ be a tiling and $A$ a vertex of a triangle from $T$.
\emph{The crown centered at vertex $A$ in $T$} is a fragment of $T$
consisting of all tiles from $T$ that include the point $A$ (not necessarily as a vertex).
A crown is called \emph{legal} if it is a crown in a supertile.
The local rule is the following:
\begin{quote}
\emph{All crowns in the tiling must be legal.}
\end{quote}
To make this local rule explicit, we need to list all legal crowns.
There are 65 of them (up to an isometry), thus the list is quite long.
However, we can reduce the list using the following observation.
Let us define the following operation on tilings, called a \emph{shift}.
To shift a tiling by $y=0,1,2,3,4$, we increment the markings of
all large tiles by $y$ and the markings of
all small tiles by $2y$ (modulo 5). It is not hard to see
that the shift of any legal crown is legal.
Indeed, let $C$ be a crown in a supertile $S_i$ of order $i$ which is obtained from
a large tile with color $k$. By induction on $i$ it is easy to prove the following:
for each tile $A$ from $S_i$, its color
is obtained from $k$ by applying a linear function of the form
$2^{i+1}x+c_A$ for small tiles and $2^{i}x+c_A$
for large tiles. Here $c_A$ denotes a number depending on the location of $A$ within $S_i$.
Thus, if we increase $k$ by $3^{i}y$,
the colors of all large tiles are increased by $y$ and the colors of all small tiles by $2y$, since $2\cdot 3\equiv1 \pmod 5$ and hence $2^{i}3^{i}\equiv1 \pmod 5$.
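For the skeptical reader, the following minimal Python sketch (ours, purely illustrative) checks this modular arithmetic directly.
\begin{verbatim}
# A sketch checking the arithmetic behind the shift: if the color of a
# tile in S_i is a linear function of the color k of the initial large
# tile -- 2^(i+1)*k + c mod 5 for small tiles, 2^i*k + c mod 5 for
# large tiles -- then replacing k by k + 3^i*y shifts large-tile colors
# by y and small-tile colors by 2y (mod 5), since 2*3 = 6 = 1 (mod 5).
for i in range(10):               # order of the supertile
    for k in range(5):            # color of the initial large tile
        for y in range(5):        # shift parameter
            for c in range(5):    # offset given by the tile location
                k2 = (k + 3**i * y) % 5
                assert (2**i * k2 + c) % 5 == (2**i * k + c + y) % 5
                assert (2**(i + 1) * k2 + c) % 5 == \
                       (2**(i + 1) * k + c + 2 * y) % 5
\end{verbatim}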
Let us call two legal crowns \emph{equivalent} if they can be obtained from each other by a shift.
Obviously, each equivalence class contains 5 legal crowns, and hence there are 13 equivalence classes,
denoted $C_1,C_2,C_3,C_4,C_5,C_6,C_7$ and $C'_1,C'_2,C'_3,C'_4,C'_5,C'_6$.
In Fig.~\ref{dan-crowns} we present
one legal crown from each equivalence class.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=.6]{dan-pic-3.pdf}
\end{center}
\caption{Legal crowns for Danzer and van Ophuysen substitution. Every crown
represents 5 crowns obtained by shifts from it.
Arrows indicate the action of substitution.}\label{dan-crowns}
\end{figure}%
This local rule guarantees all the above properties P1, P2 and P3.
\subsection{Our local rule}
We first define how we color the proto-tiles.
For every side of a tile we choose an orientation (depicted by an arrow).
In addition, every side is labeled by an integer from
0 to 3. The labels are subject to the following restriction: the
hypotenuse and the small leg of the large triangle,
as well as the large leg of the small triangle, have even labels, and the remaining sides have odd labels.
Tiles bearing orientations and digital labels on their sides are
called
\emph{colored tiles}. Each of the 2 proto-tiles produces $2^3\cdot 2^3=64$ colored proto-tiles.
Actually, only 22 of these 128 tiles can occur in supertiles, and hence we could reduce
the number of proto-tiles to 22. We will not prove this, since in any case
we obtain more proto-tiles than Danzer and van Ophuysen.
\paragraph{Decomposition and composition of colored tilings.}
The substitution is extended to colored tiles as follows:
\begin{itemize}
\item for small tiles: we increment
all digital labels by 1 modulo 4 and keep the orientations of all sides;
\item for large tiles: we first increment
all digital labels by 1 modulo 4, keeping the orientations of all sides, and then we
label the newly appeared altitude by 0 and orient it
from the foot to the vertex. The hypotenuse is divided by the foot of the altitude into two segments; those segments
keep their labels and orientations.
\end{itemize}
It is not hard to verify that
the requirement of evenness/oddness of labels is preserved
and thus we obtain again a tiling
by legally colored tiles.
In Fig.~\ref{pic6} we show a large colored tile, its decomposition, the decomposition of its decomposition,
and so on.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{pic-6.pdf}
\end{center}
\caption{Colored supertiles $S_0,S_1,S_2,S_3,S_4,S_5$.}\label{pic6}
\end{figure}
The digital labels are represented by colors: red is 0, yellow is 1, green is 2, blue is 3;
orientations are shown by arrows.
Long line segments of the same color represent identically oriented sides with the same
digital label. This orientation is shown by an arrow at an end of the segment.
The inverse operation is called \emph{the composition of colored tilings}.
\emph{A colored supertile of order $n$} is defined as the $n$-fold decomposition of
a large colored tile.
Colored supertiles of orders 0,1,2,3,4,5
are shown in Fig.~\ref{pic6}. A tiling with decorated tiles is called a \emph{substitution tiling}
if each of its fragments occurs in a (colored) supertile.
\paragraph{The intuition behind our decoration of supertiles.}
Each supertile $S_i$, $i>0$, sends a signal whose color equals $(i-1)\bmod 4$ along its altitude from its foot to the top.
If supertiles $S_{n-1}$ and $S_{n}$ form a supertile $S_{n+1}$,
then the vertex of the right angle of $[S_{n+1}]$ receives two signals, $(n-2)\bmod 4$ and $(n-1)\bmod 4$, and in turn sends the signal $n\bmod 4$.
The local rule will ensure that all these three signals are ``coherent'', that is, are equal to $(i-2)\bmod 4,(i-1)\bmod 4,i\bmod 4$ for some $i$.
If we were allowed infinitely many colors,
the signal sent by $S_i$ would be just $i-1$. In that case the proof would be much easier.
Each supertile $S_n$ has a hierarchical structure: for each $i<n$ it can be partitioned into supertiles $S_i$ and $S_{i-1}$. Hence $S_n$ hosts
many signals. The crucial point is that all those signals are sent along non-overlapping paths.
We will explain later why the number of colors is 4 (see Remark~\ref{rem4}).
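To illustrate, here is a minimal Python sketch (ours, not part of the construction) of this signal bookkeeping; it checks that the three signals meeting at the right-angle vertex of $S_{n+1}$ are consecutive modulo 4.
\begin{verbatim}
# Supertile S_i (i > 0) sends the signal (i-1) mod 4 along its altitude.
# When S_{n-1} and S_n form S_{n+1}, its right-angle vertex receives the
# signals (n-2) mod 4 and (n-1) mod 4 and sends n mod 4; the local rule
# requires the three signals to be consecutive modulo 4.
def signal(i):
    return (i - 1) % 4

for n in range(2, 20):
    received = (signal(n - 1), signal(n))   # from S_{n-1} and S_n
    sent = signal(n + 1)                    # from S_{n+1}
    assert received == ((n - 2) % 4, (n - 1) % 4) and sent == n % 4
\end{verbatim}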
\paragraph{Our local matching rule L.}
To define our local rule, we need a new notion,
similar to that of a crown (aka a vertex star). We call this notion \emph{a star}.
Let $A$ be a vertex of a tile from a tiling $T$.
Consider all non-decorated tiles from $T$ that include the point $A$
together with digital marks and orientations
of all the sides that \emph{include the point $A$}.
That information forms \emph{the star of $T$ centered at $A$}.
It is important that we forget orientations and digital labels
of the outer sides of tiles from a star.
A star may be \emph{incomplete}, which means that there is no
neighborhood of $A$ that is included in the union of tiles from the star. Incomplete stars appear on the borders
of tilings of parts of the plane.
Two examples of stars are shown in Fig.~\ref{pi101}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{pi-101.pdf}
\end{center}
\caption{A complete star and an incomplete star. The colors and orientations
of outer sides are not shown, as this information is not included in the star.}\label{pi101}
\end{figure}
\begin{definition}
A complete star is called \emph{legal} if it is one of the stars shown in Fig.~\ref{pic31}.
The black line segment on that figure is called
\emph{the axis of a legal star} and may have any orientation and any color.
The
digital labels and orientations of all sides lying on the axis must coincide.
An example of a legal star is shown in Fig.~\ref{pi101} on the left.
\end{definition}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{pic-31.pdf}
\end{center}
\caption{The list of legal stars.
} \label{pic31}
\end{figure}
\begin{definition}
A tiling of the plane with decorated tiles satisfies our local rule L
if (1) any two sides
that share a common interval have matching orientations and digital labels,
and (2) every one of its stars is legal. For a tiling of a part of the plane, the second item reads:
(2) every one of its complete stars is legal.
Tilings satisfying the local rule L are called \emph{L-tilings}.
A coloring of tiles in a tiling with non-decorated tiles is called \emph{correct}
if the resulting tiling is an L-tiling.
\end{definition}
\paragraph{Several remarks on legal stars.}
\begin{remark}
We will prove that a star is legal if and only if it is a complete star of a supertile.
\end{remark}
\begin{remark}
Observing the triangles in the legal stars we can easily conclude that
the parity of the digital label of the axis of a legal star must be equal to that of the
index of the star. Hence every star from Fig.~\ref{pic31} represents 4 legal stars: there are two ways
to choose the orientation of the axis and two ways to label it.
Thus there are $7\cdot2\cdot2=28$ different legal stars, up to an isometry.
\end{remark}
\begin{remark}
There are one or two outgoing arrows from the center of any legal star and
those arrows are orthogonal to the axis of the star. All the remaining arrows are directed towards the center of the star and form with the axis the acute angles $\arcsin\left((\sqrt5-1)/2\right)$ and $\arccos\left((\sqrt5-1)/2\right)$,
called the \emph{smaller} and the \emph{larger} ones, respectively.
Let $n$
denote the index of a legal star. Then the digital labels of the arrows that go into or out of
the center of a legal star are the following. On one side of the star
the arrow that goes into the center of the star and forms with the axis the smaller acute angle (if any) is labeled by $n+1$, the arrow that goes into the center of
the star and forms with the axis the larger acute angle
(if any) is labeled by $n+2$,
and the outgoing arrow (if any) is labeled by $n+3$ (addition modulo 4).
On the other side of the axis the digital labels are $n-1$, $n$ and $n+1$, respectively.
\end{remark}
\section{Results}
The following three theorems state that our decoration and local rule have the properties P1, P2 and P3.
\begin{theorem}\label{thm0}
(1) Decomposition of any L-tiling of the plane is again an L-tiling.
(2) Every supertile is an L-tiling. (3) Conversely, all legal stars occur in supertiles.
\end{theorem}
\begin{corollary}\label{c1}
There exists an L-tiling of the plane.
\end{corollary}
\begin{theorem}\label{l3}
Any $L$-tiling of the plane has a composition, which is again
an L-tiling.
\end{theorem}
\begin{theorem}\label{thm1}
A tiling of the plane with colored tiles is a
substitution tiling if and only if it is an L-tiling.
\end{theorem}
It follows from Theorem~\ref{l3} that all L-tilings of the plane are non-periodic.
Indeed, they are infinitely composable, and hence non-periodic, as explained above.
We first prove Theorem~\ref{thm0} and Corollary~\ref{c1}, then we derive Theorem~\ref{thm1} from Theorem~\ref{l3}
and then we prove the latter.
\begin{proof}[Proof of Theorem~\ref{thm0}]
Let us extend decomposition to stars: to decompose
a star, we decompose the respective tiling and then delete all the resulting tiles that do not include the center of the star.
It is not hard to verify that the family of legal stars
is closed under decomposition: see Fig.~\ref{pic3} where
the action of decomposition is shown by grey arrows.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{pic-3.pdf}
\end{center}
\caption{The action of decomposition on legal stars is shown by grey arrows.
The stars $C_6,C_7$ are mapped to themselves in the course of 4 decompositions, since decomposition
acts on the stars $C_6,C_7$ as a rotation by the right angle (ignoring colors), and
the digital labels are incremented modulo 4.
Note that the decomposition maps both stars $C_{5}$ and $C_7$
to $C_6$. This is not surprising, as
the stars $C_{5}$ and $C_7$ differ only in one tile and
after decomposition this difference disappears.}\label{pic3}
\end{figure}
(1)
Let $T$ be an L-tiling of the plane and $A$ a vertex of a tile from $\sigma T$.
We have to show that the star centered at $A$ in $\sigma T$ is legal. We consider two cases.
\emph{Case 1:} $A$ is also a vertex of a tile from $T$. Then the star centered at $A$ in $T$ is legal, as $T$ is an L-tiling.
The star centered at $A$ in $\sigma T$ is obtained by decomposition from that star and hence is legal as well.
\emph{Case 2: } $A$ is not a vertex of a tile from $T$.
Then $A$ is the foot of the altitude of a large tile $F$ from $T$ and hence lies on the hypotenuse of $F$.
Let $B,C$ denote the endpoints of that hypotenuse (see Fig.~\ref{pic54}). Consider the star of $T$ centered at $B$.
Observing Fig.~\ref{pic31}, we can see that such a situation (the center of the star is an endpoint of the hypotenuse of
a large tile and the foot of the altitude of that tile is not a vertex) occurs only in
stars $C_3$--$C_7$, and in all cases the star includes the triangle $\tilde F$ obtained by the central symmetry
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{pic-54.pdf}
\end{center}\caption{}\label{pic54}
\end{figure}
centered at the middle point of the hypotenuse: $F$ together with $\tilde F$ form a rectangle.
Decomposition of that rectangle produces two stars of type $C_1$, one of them centered at $A$; hence the star of $\sigma T$ centered at $A$ is legal.
(2) We use induction on the order of the supertile. The base of the induction is trivial, since supertiles of order less than 4
have no complete stars. To make the induction step,
we would like to extend item (1) to tilings of parts of the plane.
However this cannot be done, as there is a tiling of a part of the plane with no complete
stars such that its decomposition has a complete illegal star (see Fig.~\ref{pic55}).
\begin{figure}[t]
\begin{center}
\includegraphics[scale=1]{pic-55.pdf}
\end{center}
\caption{An L-tiling of a part of the plane whose decomposition is not an L-tiling}\label{pic55}
\end{figure}
Why can we not repeat the above arguments to show that the
decomposition of any L-tiling of a part of the plane is an L-tiling?
The only reason why the above arguments fail is that in Case 2 we
cannot claim that the star of $T$ centered at $B$ is legal, and hence that the tiling $T$ includes
the large tile $\tilde F$.
Indeed, it might happen that although $A$ is an inner
point of $[T]$, neither $B$ nor $C$ is an inner point of $[T]$.
In that case we have no information about the stars of $T$ centered at $B$ and $C$.
To handle this case, we will show that if $T$ is a supertile, then
\begin{itemize}
\item[(*)]
ignoring orientations and digital marks,
\emph{all} stars in $S_i$, including incomplete ones, can be extended to legal stars by adding some tiles.
\end{itemize}
This statement obviously holds for supertiles of order 0: the triangle $S_0$ has three stars,
and they can all be extended to legal stars, provided we ignore digital marks and orientations.\footnote{This ignoring
is important, as there are colored supertiles of order 0 (i.e. colored large tiles) which possess stars that
cannot be extended to legal stars.}
The inductive step is done exactly as in item (1). Indeed, if $A$ is a vertex in $S_i$, then
the incomplete star centered at $A$ in $S_{i+1}$ can be extended to the decomposition of the completion of
the star of $A$ in $S_i$. Otherwise
$A$ is the foot of the altitude of a large tile $F$ from $S_i$ and hence lies on the hypotenuse of $F$.
Again we
consider $B,C$, the endpoints of that hypotenuse, see Fig.~\ref{pic54}.
By our assumption, the star centered at $B$ in $S_i$ can be completed to a legal star (ignoring digital marks and orientation).
As verified above, that star contains the triangle $\tilde F$, as shown in Fig.~\ref{pic54}.
Hence the star centered at $A$ in the decomposition of $S_i$ can be completed to the star $C_1$.
As we have just seen, property (*) implies that
in the hard case ($T$ is a supertile, $A$ is an inner
point of $[T]$, and neither $B$ nor $C$ is an inner point of $[T]$)
the star of $T$ centered at $B$, or its completion, includes the triangle $\tilde F$ (see Fig.~\ref{pic54}).
The triangle $\tilde F$ must be in $T$, as otherwise $A$ would lie on the border of $[T]$, and we are done.
(3)
It suffices to prove the statement for stars $C_1$ only. Indeed,
for every $i>1$ the star $C_i$ is obtained from $C_1$ by $i-1$ decompositions.
Thus, if $C_1$ occurs in $S_n$, then $C_i$ occurs in $S_{n+i-1}$.
There are four stars of the type $C_1$: there are two ways to label the axis (yellow or blue)
and two ways to choose its orientation. The stars $C_1$ with yellow axis
of both orientations appear on the altitude
of the supertile $S_{10}$ (Fig.~\ref{pic1}).
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.7]{pic-1.pdf}
\end{center}
\caption{A colored supertile $S_{10}$ contains all four
stars of the type $C_1$.}\label{pic1}
\end{figure}
The stars $C_1$ with blue axis appear on the altitude of the supertile $S_{4}$ in
Fig.~\ref{pic6} (they appear also in Fig.~\ref{pic1}, inside the supertile $S_{10}$).
\end{proof}
\begin{remark}\label{rem4}
We can now explain why we use 4 digital marks. Assume that digital marks belong to $\mathbb Z_k=\{0,1,\dots, k-1\}$,
the substitution $\tilde\sigma$ works as before, and the local rule $L_k$ stipulates
that all stars occur in supertiles. Then we would have $\mathrm{lcm}(4,k)$
legal stars of the shape $C_6,C_7$. Indeed, geometrically, the substitution acts as a $90^\circ$
rotation on stars of this shape (see Fig.~\ref{pic3}), while on digital marks it acts as
adding 1. Thus we have two independent cycles
of lengths 4 and $k$, whose superposition is a cycle of length $\mathrm{lcm}(4,k)$.
One can show that the choices $k=1,2$ do not work, as for $k=1,2$ there is a periodic tiling
satisfying the local rule $L_k$.
The choice $k=3$ might work. However, there are $\mathrm{lcm}(4,3)=12$
legal stars of the shape $C_6,C_7$ for that $k$, thus the local rule becomes too complicated anyway.
So the choice $k=4$ seems to be optimal.
\end{remark}
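The cycle-length count in the remark can be checked with a small Python sketch (ours, purely illustrative): the superposition of an independent 4-cycle and a $k$-cycle has period $\mathrm{lcm}(4,k)$.
\begin{verbatim}
# The superposition of two independent cycles of lengths 4 (the
# geometric rotation) and k (incrementing the digital mark) visits
# lcm(4, k) distinct states before returning to the start.
from math import lcm

for k in range(1, 10):
    state, seen = (0, 0), set()
    while state not in seen:
        seen.add(state)
        state = ((state[0] + 1) % 4, (state[1] + 1) % k)
    assert len(seen) == lcm(4, k)
\end{verbatim}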
\begin{proof}[Proof of Corollary~\ref{c1}]
Consider the supertile of order 0 shown in Fig.~\ref{pic6}.
Decomposing that large tile 8 times,
we obtain a supertile of order 8 which is an L-tiling by Theorem~\ref{thm0} (see Fig.~\ref{pic141}). The green triangle $A$
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.5]{pic-14.pdf}
\end{center}
\caption{A colored supertile $S_{8}$ contains a large triangle of the same color as the initial tile $S_{0}$ which it was obtained from.}\label{pic141}
\end{figure}
has the same color as the initial large tile. Thus, if we decompose this supertile $S_8$ eight times the green triangle
will produce another supertile $S_8$. In this way we can construct
a tower of supertiles $S_0\subset S_8\subset S_{16}\subset \dots$ whose union is an L-tiling of the entire plane.
\end{proof}
\begin{remark}
One can show that there are only 7 different small tiles
and 15 different large tiles (up to isometry) that occur in supertiles as inner tiles (a tile is called \emph{an inner
tile of a supertile}
if none of its sides lies on the border of the supertile). Thus we can reduce the total number of proto-tiles to 22. Indeed,
to prove Corollary~\ref{c1}, we do not need the remaining tiles.
\end{remark}
\begin{proof}[A derivation of Theorem~\ref{thm1} from Theorem~\ref{l3}]
By definition,
every fragment of every substitution tiling of the plane occurs in a colored
supertile, hence is legal by Theorem~\ref{thm0}(2). Therefore every substitution tiling is an L-tiling.
To prove the converse, consider any fragment $P$
of an $L$-tiling $T$ of the plane. We have to show that $P$ occurs in a (colored) supertile. To this end, add to $P$ a finite number of tiles
from $T$ so that $P$ becomes an inner part of the resulting fragment $Q$.
By Theorem~\ref{l3} we can compose $T$ any number of times
and the resulting tiling is an L-tiling.
Consider the sets of the form $\sigma^k C$ where $C$ is a star of the tiling
$\sigma^{-k}T$.
As $k$ increases, these sets increase as well. If $k$ is large enough, then the set $Q$
is covered by a single such set, say by $\sigma^k C$,
that is, $Q\subset \sigma^k C$
(Lemma~\ref{l-bd} below). As $\sigma^{-k}T$
is an L-tiling, all its stars are legal. In particular, the star $C$
is legal. By Theorem~\ref{thm0}(3), the star $C$ appears in a supertile, say in $S_n$. Therefore
the tiling $\sigma^{k}C$ appears in the supertile $S_{n+k}$. Hence the patch $Q$
appears in that supertile provided we ignore labels and orientations of its
outer sides. Since no side of the patch $P$ is an outer side of $Q$,
we are done. It remains to prove Lemma~\ref{l-bd}.
\end{proof}
\begin{lemma}\label{l-bd}
If $k$ is large enough compared to $d$,
then for any substitution tiling $T$ of the plane
and any of its $d$-fragments
$P$ there is a star $C$ in the tiling $\sigma^{-k}T$ such that $\sigma^k C$
includes the entire patch $P$.
\end{lemma}
\begin{proof}
Consider supertiles of the form $\sigma^k \{A\}$ for $A\in\sigma^{-k}T$, call them \emph{$k$-supertiles}.
These supertiles partition $T$ and hence $P$ is covered by a finite
number of $k$-supertiles.
More specifically, $P$ is covered by those $k$-supertiles $\sigma^k \{A\}$ which
intersect
$[P]$. For small $k$, for instance for $k=0$,
the respective tiles $A$ might not belong to a single star of
the tiling $\sigma^{-k}T$. However, the sizes of $k$-supertiles increase as $k$ increases,
and for a large enough $k$
the respective tiles $A$ belong to a single star of
the tiling $\sigma^{-k}T$.
Indeed, cover the set $[P]$
by a disc $S$ of radius $d$ (centered at any point from $[P]$).
It suffices to show that if $k$ is large enough, then there is a star
$C$ in $\sigma^{-k}T$ such that $[\sigma^k C]$ covers disc $S$.
In other words, $[C]$ covers $H^{-k}S$, the inverse image of
$S$ under the $k$th power of the reference homothety $H$.
The radius of $H^{-k}S$ equals $\psi^kd$,
therefore the claim follows from the following
\begin{quote}
Geometrical observation:
\emph{
Let $ \alpha$ denote the minimal angle of the right golden triangle
and $h$ the length of the altitude of the small right golden triangle.
Let $S$ be a disc of diameter $D$.
If $h\ge D/\sin\alpha +D$, then every tiling of the plane has
a star $C$ such that $S\subset [C]$.}
\end{quote}
\begin{proof}[Proof of the observation]
We have to show that tiles intersecting the disc $S$ belong to a single star.
If there is a single such tile, then this is obvious.
If there are exactly two such tiles, $F_1$ and $F_2$, then they
must share a part of a side, and at least one endpoint of these two sides
belongs to both tiles. Then for the star $C$ centered at that endpoint
we have $[C]\supset F_1\cup F_2\supset S$.
Finally, if there are three or more such tiles,
then at least one of those tiles, call it $F$, has common points with $S$ lying on two different
sides of the tile $F$. Let $E$ denote the common point of those sides
and let $A,B$ denote the points from $S$ that belong to different sides of $F$.
The angle $\angle AEB$ is one of the angles of the right golden triangle
and the length of $AB$ is at most $D$. Hence $D\ge |AB|\ge |AE| \sin \alpha$.
Therefore $|AE|$ is at most $D/\sin\alpha$.
All the points from $S$ are at distance at most $D$ from $A$ and hence at distance at most
$D/\sin\alpha+D$ from $E$. That is, $S$ is covered by
the disc with center $E$ and radius $D/\sin\alpha+D$. That disc is covered by the star centered at $E$,
provided the length of the altitude $h$ of small tiles is at least its radius $D/\sin\alpha+D$.
\end{proof}
This observation provides the relation between $k$ and $d$ we need. Assume that $h\ge 2d\psi^k(1/\sin\alpha +1)$. Then
any $d$-fragment $P$ of the initial substitution tiling $T$ is covered
by a disc of diameter $2d$ and is included in $\sigma^k C$ for some star $C$ from the tiling $\sigma^{-k}T$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{l3}]
Let $T$ be an L-tiling of the plane. We have to show that it has a composition
and that its composition is again an L-tiling.
\emph{Why does $T$ have a composition?}
Let $S$ be any small tile from $T$. Consider the star of $T$ centered at the
vertex of the right angle of $S$. We know that this star is legal.
There are only two stars in the list of legal stars whose center is the vertex of the right angle of a small triangle, namely the stars $C_{1}$ and $C_{3}$.
\begin{center}
\includegraphics[scale=1]{pi-102.pdf}
\end{center}
In both of these stars the small triangle
$S$ has the complementary tile labeled $L$ in the picture.
Therefore the tiling $T$ has a composition which is obtained
by removing the common legs of all complementary pairs of
triangles $S,L$ and by decrementing all labels by 1 (and then applying $H^{-1}$, where $H$ is the reference homothety).
Hypotenuses of large triangles are built of two legs of different
tiles from $T$. Those legs have the same labels, since they belong to the axes of stars $C_{1}$ and $C_{3}$.
Hence the hypotenuses of all large triangles obtain well-defined digital labels.
Finally, the new labeling of sides
is
legal; that is, the hypotenuses and small legs of large tiles and
the large legs of small tiles have even colors, and all other sides have odd colors.
\emph{Why is the composition of $T$ an L-tiling?}
We now have to show that the resulting colored tiling is an $L$-tiling.
Since $T$ satisfies item (1) of the local rule, so does $\sigma^{-1}T$. Let us verify item (2) of the local rule.
Let $A$ be a vertex of a
triangle from $\sigma^{-1}T$.
We have to prove that the colored star centered at $A$ in $\sigma^{-1}T$ is legal.
First note that the star centered at the same vertex $A$ in the initial tiling $T$ is different from
$C_1$, as the centers of stars
$C_1$ become inner points of sides in $\sigma^{-1}T$. Thus that star is one of the stars
$C_2$--$C_7$.
We claim that the composition transforms
these stars by the inverse arrows from Fig.~\ref{pic3}. To prove this, we need the following
\begin{lemma}\label{l1}
If $T$ is an L-tiling of the plane, then each of its
stars of type $C_2$--$C_7$, depending on its index, includes all the tiles marked green in Fig.~\ref{star1}
and does not include the tiles marked red (the star itself is marked grey).
\end{lemma}
\begin{figure}
\begin{center}
\includegraphics[scale=1]{star-1.pdf}
\end{center}
\caption{}\label{star1}
\end{figure}
We will first finish the proof of the theorem assuming Lemma~\ref{l1}.
Lemma~\ref{l1} guarantees that under composition the tiles from each star $C_i$ except $C_6$ are transformed
into tiles from the star $C_{i-1}$. In the course of composition, we decrement the labels and do not change
the orientations of sides. Therefore, the digital labels and orientations of all sides
become as in the star $C_{i-1}$. Hence for $i>1$, $i\ne 6$, the star
$C_i$ is transformed into the star $C_{i-1}$.
For the star $C_6$, one of its large tiles can be transformed in two ways, depending on
whether the small yellow tile is in $T$ or not. If it is, then the tiles from $C_6$ form the star $C_7$, and otherwise $C_5$.
Hence in the course of composition, all stars
in the tiling $T$ are transformed by the inverse grey arrows from Fig.~\ref{pic3},
which implies the legality of all stars in $\sigma^{-1}T$.
It remains to prove Lemma~\ref{l1}.
\end{proof}
\section{Proof of Lemma~\ref{l1}}
We first show that in any L-tiling of the plane every star must have some fixed neighborhood,
called \emph{the neighborhood of the star}. Those neighborhoods include all green small tiles (Fig.~\ref{star1})
except one small tile near $C_7$.
In this analysis, we do not use labels and orientation of sides of triangles.
For the reader's convenience, it is instructive to print out all the legal stars and their neighborhoods (Fig.~\ref{pic32}, \ref{stars-neccessary10}
and \ref{stars-neccessary1} on page~\pageref{pic32})
and then to cut them out of paper. Matching the tiles of
those paper stars with the tiles in the figures below, it is easy to verify all the claims
that certain stars do not fit in certain places.
\paragraph{The neighborhoods of legal stars.}
The neighborhoods of the stars $C_1,C_{2},C_{3},C_{4},C_{5}$
are shown in Fig.~\ref{stars-neccessary}. They are all centrally symmetric.
The initial star is colored in grey, the added tiles are colored in light-grey. These neighborhoods are obtained
from each other by decomposition.
One can verify as follows that each of the first five stars indeed must have such a neighborhood.
\emph{The star $C_1$}. Look at the blue vertex inside the grey star $C_1$
(Fig.~\ref{stars-neccessary}). That vertex lies on the large leg of a large triangle.
One can easily verify that there is a unique legal star whose center
lies on the large leg of a large triangle, namely the star
$C_1$. Hence the star of $T$ centered at the blue vertex is again $C_1$ and we
get the sought neighborhood.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pi-104.pdf}
\end{center}
\caption{The neighborhoods of the first five stars.}\label{stars-neccessary}
\end{figure}
\emph{The star $C_2$}.
The argument is similar to the previous one. The star
$C_2$ has the following feature: it has a vertex (colored in blue)
that lies on the hypotenuse of its large
triangle. It is easy to verify that there is a unique star
whose center lies on the hypotenuse of its large
triangle, namely the star $C_2$.
\emph{The star $C_3$}.
The star $C_3$ is the unique star that has two
small triangles sharing the small leg. Hence the star centered at the blue vertex is again
$C_3$.
\emph{The star $C_4$}.
The star $C_4$ is the unique star
that has two large triangles sharing the small leg. Hence the star centered at the blue vertex is again
$C_4$. However this star does not complete the neighborhood:
the stars centered at yellow vertices must be
$C_1$ and the stars centered at red vertices again must be $C_1$.
\emph{The star $C_5$}.
The star $C_5$ is the unique star
that has two small triangles sharing the hypotenuse.
Hence the star centered at the blue vertex again must be
$C_5$. The stars centered at yellow and green
vertices must be
$C_1$. Furthermore, the stars centered at red and white vertices must be
$C_2$.
The neighborhoods of the stars
$C_6$, $C_7$ are shown in
Fig.~\ref{stars-neccessary2} (on the right).
One can verify in the following way that the stars $C_6$, $C_7$
indeed must have such neighborhoods.
\emph{The star $C_6$.}
The stars centered at yellow and green vertices
must be $C_1$ and the stars centered at red and black vertices must
be $C_2$ (on the left on Fig.~\ref{stars-neccessary2}).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pi-1061.pdf}
\end{center}
\caption{The neighborhoods of the stars $C_6,C_7$.}\label{stars-neccessary2}
\end{figure}
Now we see that the star centered at the brown vertex must be $C_3$,
which is added together with its neighborhood.
\emph{The star $C_7$}.
Since the star $C_7$ can be obtained from $C_6$ by rotation (ignoring labels and orientation),
the arguments are entirely similar to those for the star $C_6$.
Now we can start the proof of the lemma.
Assume that the star centered at a vertex
$A$ in an L-tiling $T$ is $C_i$, where $i>1$.
We have to prove that $T$ includes all the tiles marked green in Fig.~\ref{star11} (= Fig.~\ref{star1})
and does not include the tiles marked red (the star itself is marked grey).
\begin{figure}
\begin{center}
\includegraphics[scale=1]{star-1.pdf}
\end{center}
\caption{}\label{star11}
\end{figure}
We will treat all $i$'s separately.
We start with the simple cases $i=2,5,6$.
\paragraph{The star of
$A$ in $T$ is $C_{2}$ or $C_6$.}
It is easy to verify that in both cases the neighborhood of the star
includes all the tiles marked green in Fig.~\ref{star11}, and we are done.
\paragraph{The star of the vertex $A$ in $T$ is $C_{5}$.}
Fig.~\ref{pi510}(a,b) shows the star $C_{5}$
and its neighborhood. We can see that the neighborhood includes all 4 tiles marked green in Fig.~\ref{star11}.
We need to show that the small red triangle is not in $T$.
For the sake of contradiction assume
that the tiling $T$ includes the patch shown in Fig.~\ref{pi510}(c).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pi-510.pdf}
\end{center}
\caption{Composition of the star $C_5$}\label{pi510}
\end{figure}
We will
say that
\emph{a star $C_i$ fits a given patch at a given vertex of it}
if one can draw (an isometric copy of) $C_i$ centered at that vertex
so that each of its tiles either does not overlap any of the tiles from that patch, or belongs to that patch.
It is easy to verify that only the star $C_3$
fits this patch at the yellow vertex. Adding to the
patch that star and its neighborhood, we obtain the patch shown in Fig.~\ref{pi510}(d). Now, by a simple search, we can verify that no legal star
fits this patch at the blue vertex.
We now proceed to the hard cases $C_3,C_4,C_7$. The arguments
are very similar to those used in the case of $C_5$, but the analysis
is much more involved. Therefore we have moved most of the proof to the Appendix, as
the proof of Claim~\ref{lmain} (page~\pageref{lmain}).
\paragraph{The star of $A$ in $T$ is $C_{3}$.}
The vertex $A$ is colored green in
Fig.~\ref{pi3011}(a) and its star is colored in
dark-grey. In Fig.~\ref{pi3011}(b) we show the
neighborhood of that star (the added tiles are colored in light-grey).
We can see that the neighborhood includes the large tile marked green in Fig.~\ref{star11}.
Now we have to show that the tiling $T$ does not include
the small triangle marked red in Fig.~\ref{pi3011}(c).
This is the statement of Claim~\ref{lmain}(a).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pi-3011.pdf}
\end{center}
\caption{Composition of the star $C_3$.}\label{pi3011}
\end{figure}
\paragraph{The star of the vertex $A$ is $C_{4}$.}
The vertex $A$ is shown by the green point in Fig.~\ref{pi4011}(a).
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pi-4011.pdf}
\end{center}
\caption{Composition of the star $C_4$.}\label{pi4011}
\end{figure}
Fig.~\ref{pi4011}(b) shows the neighborhood of that star.
We can see that the neighborhood includes both tiles marked green in Fig.~\ref{star11}.
It remains to show that the tiling $T$ does not include
the small triangle marked red (Claim~\ref{lmain}(b)).
\paragraph{The star of $A$ in $T$ is $C_{7}$.}
Fig.~\ref{pi602}(a,b) shows the star $C_{7}$
and its neighborhood. We can see that the neighborhood includes 5 of the 6 tiles marked green in Fig.~\ref{star11};
the exception is the bottommost small tile.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pi-602.pdf}
\end{center}
\caption{Composition of the star $C_7$.
}\label{pi602}
\end{figure}
To prove that it also includes that tile,
consider the blue vertex.
Only the stars $C_2,C_3,C_4$
fit the patch at that vertex; they are shown in
Fig.~\ref{pi602}(c). If the star at the blue vertex is
$C_3$, we are done, as that star includes the sought tile.
It remains to show that neither of the stars
$C_2,C_4$ can stand at the blue vertex.
It is easy to show that
$C_4$ cannot be there. Indeed, adding
that star to the patch, we obtain the patch shown
in Fig.~\ref{pi602}(d). No legal star fits that patch
at the yellow vertex. Claim~\ref{lmain}(c) states that the star centered at the blue vertex cannot be
$C_2$ either.
It remains to prove
\begin{claim}\label{lmain}
The following patches (Fig.~\ref{pi1})
cannot occur in L-tilings of the plane.
\end{claim}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=1]{pi-1.pdf}
\end{center}
\caption{These patches
do not occur in L-tilings of the plane (Claim~\ref{lmain}).}\label{pi1}
\end{figure}
To prove the claim, we explore a small neighborhood
of the patches in a way similar to the one used in finding the neighborhoods of the stars.
The proof is deferred to the Appendix.
\section{Acknowledgments} The author is grateful to Daria Pchelina and Alexander Kozachinskii for verifying the proofs and reading the preliminary version of the paper,
and to anonymous referees for valuable comments.
\section{Introduction}
We consider a supervised learning problem from a data set $(x_n,y_n) \in \mathbb{R}^d\times \mathbb{R}$, $n=1,\dots,N$, with the data
being independent identically distributed (i.i.d.) samples from an unknown probability distribution $\DATAM(dxdy)$. The distribution $\DATAM$ is not known a priori, but it is accessible through samples of the data.
We assume that there exists a function $\FTG:\mathbb{R}^d\to \mathbb{R}$ such that
$y_n = \FTG(x_n) + \xi_n$, where the noise is represented by i.i.d. random variables
$\xi_n$ with $\mathbb{E}[\xi_n]=0$ and $\mathbb{E}[\xi_n^2]=\sigma_\xi^2$.
We assume that the target function $\FTG(x)$ can be approximated by a single layer neural network which defines an approximation $\FTA:\mathbb{R}^d\times\mathbb{R}^{Kd}\times\mathbb{C}^K\to\mathbb{R}$
\begin{equation}
\FTA(x;\boldsymbol{\omega},\boldsymbol{\hat\beta}) = \sum_{k=1}^K \hat\beta_k s(\omega_k,x)\,,
\end{equation}
where we use the notation for the parameters of the network $\boldsymbol{\hat\beta}=(\hat\beta_1,\dots,\hat\beta_K)\in\mathbb{C}^K$, $\boldsymbol{\omega}=(\omega_1,\dots,\omega_K)\in\mathbb{R}^{Kd}$. We consider a particular activation function that is also known as Fourier features
\[
s(\omega,x)= e^{{\mathrm{i}}\SP{\omega}{x}}\,,\;\;\mbox{for $\omega\in \mathbb{R}^d$, $x\in\mathbb{R}^d$.}
\]
Here $\SP{\omega}{x} = \sum_{i=1}^d \omega^i x^i$ is the Euclidean scalar product in $\mathbb{R}^d$. The goal of the neural network training is to minimize, over the set of parameters $(\boldsymbol{\hat\beta},\boldsymbol{\omega})\in\mathbb{C}^K\times\mathbb{R}^{Kd}$, the risk functional
\begin{equation}\label{eq:risk}
\RISK(\boldsymbol{\hat\beta},\boldsymbol{\omega}) = \mathbb{E}_{\DATAM} [\ell(\FTA(x;\BFBH,\BFO),y)]\equiv \int \ell(\FTA(x;\BFBH,\BFO),y) \,\DATAM(dxdy)
\,.
\end{equation}
Since the distribution $\DATAM$ is not known, in practice the minimization problem is solved for the empirical risk
\begin{equation}\label{eq:erisk}
\ERISK_N(\BFBH,\BFO) = \frac{1}{N}\sum_{n=1}^N \ell(\FTA(x_n;\BFBH,\BFO),y_n)\,.
\end{equation}
The network is said to be over-parametrized if the width $K$ is greater than the number $N$ of training points,
i.e., $K>N$. We shall assume a fixed width such that $N>K$ when we study the dependence on the size of training data sets.
We focus on the reconstruction with the regularized least squares type risk function
\[
\ell(\FTA(x_n;\BFBH,\BFO),y_n) = |y_n - \FTA(x_n;\BFBH,\BFO)|^2 + \lambda \sum_{k=1}^K |\hat\beta_k|^2\,.
\]
The least-squares functional
is augmented by a regularization term with the Tikhonov regularization parameter $\lambda\ge 0$. For the sake of brevity we often omit the arguments $\BFBH,\BFO$ and use the notation $\FTA(x)$ for $\FTA(x;\BFBH,\BFO)$. We also use
$|\boldsymbol{\hat\beta}|^2 := \sum_{k=1}^K |\hat\beta_k|^2$ for the Euclidean norm on $\mathbb{C}^K$.
Approximately reconstructing $\FTG$ from the data by the least squares method is a common task in statistics and machine learning, cf.~\cite{understand}; in a basic setting it
takes the form of the minimization problem
\begin{equation}\label{eq:minsquare}
\min_{\beta\in \mathcal N_K} \left\{ \mathbb{E}_\DATAM[|y-\beta(x)|^2] + \lambda \sum_{k=1}^K |\hat\beta_k|^2\right\}\,,
\end{equation}
where %
\begin{equation}\label{NN}
\mathcal N_K:=\Big\{\beta(x)=\sum_{k=1}^K\hat\beta_k s(\omega_k, x) \Big\}\,,
\end{equation}
represents an artificial neural network with one hidden layer.
Suppose now that the frequencies $\boldsymbol{\omega}$ are random, and denote by
$\mathbb{E}_{{\boldsymbol{\omega}}}[g({\boldsymbol{\omega}},x,y)]:=\mathbb{E}[g({\boldsymbol{\omega}},x,y)\, |\, x,y]$ the conditional expectation with respect to the distribution of ${\boldsymbol{\omega}}$ conditioned on the data $(x,y)$.
\begin{equation}\label{min_ett}
\min_{(\boldsymbol{\hat\beta},{\boldsymbol{\omega}})\in\mathbb{C}^K\times \mathbb{R}^{Kd}} \left\{ \mathbb{E}_\DATAM[|y-\beta(x)|^2]+\lambda|\boldsymbol{\hat\beta}|^2\right\}
\le \mathbb{E}_{{\boldsymbol{\omega}}} \Big[ \min_{\boldsymbol{\hat\beta}\in\mathbb{C}^{K}} \left\{ \mathbb{E}_\DATAM[|y-\beta(x)|^2]+\lambda|\boldsymbol{\hat\beta}|^2\right\} \Big]\,.
\end{equation}
The minimization on the right-hand side of \eqref{min_ett} is also known as the {\it random Fourier features problem}, see \cite{rahimi_recht,weinan_2,rudi}.
In order to obtain a better bound in \eqref{min_ett} we assume that $\omega_k$, $k=1,\dots,K$ are i.i.d. random variables with the common probability distribution $%
p(\omega) {\mathrm{d}}\omega$ and introduce a further minimization
\begin{equation}\label{E_min}
\min_{p}\mathbb{E}_{{\boldsymbol{\omega}}} \big[\min_{\boldsymbol{\hat\beta}\in\mathbb{C}^{K}}\big\{ \mathbb{E}_\DATAM[|y-\beta(x;\BFBH,\BFO)|^2]+\lambda|\boldsymbol{\hat\beta}|^2\big\}\big]\,.
\end{equation}
An advantage of this splitting into two minimizations is that the inner optimization is a convex problem, so that several robust solution methods are available.
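For concreteness, the following minimal sketch (ours, in Python with NumPy; an illustration rather than the algorithm of this paper) solves the empirical version of this inner convex problem via the normal equations, for fixed sampled frequencies.
\begin{verbatim}
# A sketch of the inner convex problem: for fixed frequencies omega_k,
# minimize over beta_hat the empirical risk
#   (1/N) sum_n |y_n - sum_k beta_hat_k exp(i <omega_k, x_n>)|^2
#   + lambda * |beta_hat|^2,
# whose minimizer solves the complex normal equations below.
import numpy as np

def fit_amplitudes(X, y, omegas, lam):
    # X: (N, d) data, y: (N,) targets, omegas: (K, d) frequencies
    N, K = X.shape[0], omegas.shape[0]
    S = np.exp(1j * X @ omegas.T)              # (N, K) feature matrix
    A = S.conj().T @ S / N + lam * np.eye(K)
    b = S.conj().T @ y / N
    return np.linalg.solve(A, b)               # beta_hat, shape (K,)

# usage: beta_hat = fit_amplitudes(X, y, omegas, lam=1e-3);
# predictions are np.real(np.exp(1j * X @ omegas.T) @ beta_hat).
\end{verbatim}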
The question is: how can the density $p$ in the outer minimization be determined?
The goal of this work is to formulate a systematic method to approximately sample from an optimal distribution $p_*$. The first step is to determine the optimal distribution. Following Barron's work \cite{barron} and \cite{jones}, we first derive in Section \ref{sec_p} the known error estimate
\begin{equation}\label{barron_est}
\mathbb{E}_{{\boldsymbol{\omega}}}\big[ \min_{\boldsymbol{\hat\beta}\in\mathbb{C}^K}\big\{\mathbb{E}_\DATAM[| \beta(x)-y|^2]+\lambda|\boldsymbol{\hat\beta}|^2\big\}\big]
\le \frac{1+\lambda}{K}\mathbb{E}_{\omega}[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}]+ \mathbb{E}_\DATAM[|y-f(x)|^2]\,,
\end{equation}
based on independent samples $\omega_k$ from the distribution $p$. Then, as in importance sampling, it is shown that the right-hand side is minimized by choosing $p(\omega)=p_*(\omega):= |\hat f(\omega)|/\|\hat f\|_{L^1(\mathbb{R}^d)}$, where $\hat f$ is the Fourier transform of $f$.
Our next step is to formulate an adaptive method that approximately generates independent samples from the density $p_*$, thereby retaining the general convergence estimate \eqref{barron_est}.
We propose to use the Metropolis sampler:
\begin{itemize}
\item given frequencies ${\boldsymbol{\omega}}=(\omega_1,\ldots,\omega_K)\in\mathbb{R}^{Kd}$ with corresponding amplitudes
$\boldsymbol{\hat\beta}=(\hat\beta_1,\ldots,\hat\beta_K)\in \mathbb{C}^{K}$,
a proposal ${\boldsymbol{\omega}}'\in \mathbb{R}^{Kd}$
is suggested and the corresponding amplitudes
$\boldsymbol{\hat\beta}'\in \mathbb{C}^{K}$ are determined by the minimum in \eqref{E_min}; then
\item the {\it Metropolis test} is for each $k$ to accept $\omega'_k$ with probability
$\min(1, |\hat\beta'_k|^\ALPHAEX/|\hat\beta_k|^\ALPHAEX)$.
\end{itemize}
The choice of the Metropolis criterion $\min(1, |\hat\beta'_k|^\ALPHAEX/|\hat\beta_k|^\ALPHAEX)$ and selection of $\ALPHAEX$ is explained in Remark~\ref{optimal_alpha}.
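The following sketch (ours, in Python) illustrates one such adaptive step; the Gaussian random-walk proposal and its step size are assumptions made here for illustration, since the precise choices, including the exponent $\ALPHAEX$ (below \texttt{alpha\_exp}), are specified in Algorithm~\ref{alg:ARFM} and Remark~\ref{optimal_alpha}.
\begin{verbatim}
# One adaptive Metropolis step for the frequencies. Note the
# simplification discussed below: ALL proposed amplitudes are computed
# in a single least squares solve rather than one solve per test.
import numpy as np

rng = np.random.default_rng(0)

def fit_amplitudes(X, y, omegas, lam):
    S = np.exp(1j * X @ omegas.T)
    A = S.conj().T @ S / len(y) + lam * np.eye(omegas.shape[0])
    return np.linalg.solve(A, S.conj().T @ y / len(y))

def metropolis_step(X, y, omegas, beta_hat, lam, alpha_exp, step=0.5):
    # symmetric random-walk proposal (an assumption of this sketch)
    proposal = omegas + step * rng.standard_normal(omegas.shape)
    beta_prop = fit_amplitudes(X, y, proposal, lam)
    # per-frequency Metropolis test on the amplitude ratios
    ratio = (np.abs(beta_prop) / np.abs(beta_hat)) ** alpha_exp
    accept = rng.uniform(size=len(ratio)) < np.minimum(1.0, ratio)
    omegas = omegas.copy()
    omegas[accept] = proposal[accept]
    # recompute the amplitudes for the accepted set of frequencies
    return omegas, fit_amplitudes(X, y, omegas, lam)
\end{verbatim}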
This adaptive algorithm (Algorithm~\ref{alg:ARFM}) is motivated mainly by two properties
based on the regularized empirical measure $\bar\beta(\omega):=\sum_{k=1}^K\hat\beta_k \phi_\varepsilon(\omega-\omega_k)$
related to the amplitudes $\boldsymbol{\hat\beta}$,
where
$\phi_\varepsilon(\omega)=
(2\pi\varepsilon^2)^{-d/2}e^{-|\omega|^2/(2\varepsilon^2)}$:
\begin{itemize}
\item[(a)] The quantities $K\,p\bar\beta$ converge to $\hat f$ in $L^1$
asymptotically, as $K\to\infty$ and $\varepsilon\to 0+$, as shown in
Proposition \ref{thm_improve_p}.
For the proof of Proposition \ref{thm_improve_p} we consider a simplified setting where the support of the $x$-data is all of $\mathbb{R}^d$.
\item[(b)] Property (a) implies that the optimal density $p_*$ will asymptotically
{\it equidistribute} $|\bar\beta|$, i.e., $|\bar\beta|$ becomes constant since $|\hat f|/p_*=\|\hat f\|_{L^1(\mathbb{R}^d)}$ is constant.
\end{itemize}
The proposed adaptive method aims to equidistribute the
amplitudes $|\hat\beta_k|$: if $|\hat\beta_k|$ is large, more frequencies will be sampled Metropolis-wise in the neighborhood of $\omega_k$, and if $|\hat\beta_k|$ is small,
then fewer frequencies will be sampled in that neighborhood.
Algorithm \ref{alg:ARFM} includes the dramatic simplification to compute
all amplitudes in one step for the proposed frequencies, so that the computationally costly step to solve the convex minimization problem for the amplitudes is not done
for each individual Metropolis test. A reason that this simplification works is the asymptotic
independence
$Kp|\bar\beta|\to |\hat f|$
shown in Proposition~\ref{thm_improve_p}.
We note that the regularized amplitude measure
$\bar\beta$ is impractical
to compute in high dimension $d\gg 1$. Therefore Algorithm \ref{alg:ARFM} uses the amplitudes $\hat\beta_k$ instead
and consequently Proposition~\ref{thm_improve_p}
serves only as a motivation that the algorithm can work.
In some sense, the adaptive random features Metropolis method is a stochastic generalization of deterministic adaptive computational methods for differential equations where the optimal efficiency is obtained for equidistributed error indicators, pioneered in \cite{babuska}. In the deterministic case, additional degrees of freedom are added where the error indicators are large, e.g., by subdividing finite elements or time steps. The random features Metropolis method analogously adds frequency samples where the indicators $|\hat\beta_k|$ are large.
A common setting is, for a fixed number of data points $N$, to find the number of Fourier features $K$ with approximation errors similar to those of kernel ridge regression.
Previous such results on the kernel learning improving the sampling for random Fourier features are presented, e.g., in \cite{bach}, \cite{Wilson2013}, \cite{Li2019TowardsAU} and \cite{pmlr-v70-avron17a}.
Our focus is somewhat different, namely,
for a fixed number of Fourier features $K$, to find an optimal method by adaptively adjusting the frequency sampling density for each data set.
In \cite{Wilson2013} the Fourier features are adaptively sampled based on a density parametrized as a linear combination of Gaussians. %
The works \cite{bach} and \cite{Li2019TowardsAU} determine the optimal density as a leverage score for sampling random features, based on a singular value decomposition of an integral operator related to the reproducing kernel Hilbert space, and formulate a method to optimally resample given samples.
Our adaptive random feature method, on the contrary, is not based on a parametric description or resampling, and
we are not aware of other non-parametric adaptive methods generating samples for random Fourier features for general kernels.
The work \cite{pmlr-v70-avron17a} studies how to optimally choose the number of Fourier features $K$ for a given number of data points $N$ and provides upper and lower error bounds. In addition, \cite{pmlr-v70-avron17a}
presents a method to effectively sample from the leverage score in the case of Gaussian kernels.
We demonstrate computational benefits of the proposed adaptive algorithm by including a simple example that provides
explicitly the computational complexity of the adaptive sampling Algorithm~\ref{alg:ARFM}. Numerical benchmarks in Section~\ref{sec:Benchmarks} then further document gains in
efficiency and accuracy in comparison
with the standard random Fourier features
that use a fixed distribution of frequencies.
\medskip
Although our analysis is carried out for the specific activation
function %
$s(\omega,x)=e^{{\mathrm{i}}\omega\cdot x}$,
thus directly related
to random Fourier features approximations, we note that in the numerical experiments (see Experiment 5 in Section~\ref{sec:Benchmarks}) we also tested the activation function
\[
s(\omega,x)=\frac{1}{1 + e^{-\omega\cdot x}}\,,
\]
often used in the definition of neural networks and called the
{\it sigmoid} activation. With such a change of the activation function the concept of sampling frequencies turns into sampling weights. Numerical results in Section~\ref{sec:Benchmarks} suggest that Algorithm~\ref{alg:ARFM} performs well also
in this case. A detailed study of a more general class of activation functions is the subject of ongoing work.
\medskip
Theoretical motivations of the algorithm are given in Sections~\ref{sec_p} and \ref{sec_amplitude}.
In Section~\ref{sec_amplitude} we formulate and prove the weak convergence of the scaled amplitudes $K\,\boldsymbol{\hat\beta}$.
In Section~\ref{sec_p} we derive the optimal density $p_*$ for sampling the frequencies, under the assumption that $\omega_k, k=1,\ldots, K$ are independent and $\hat f\in L^1(\mathbb{R}^d)$.
Section~\ref{sec:adaptive} describes the algorithms.
Practical consequences of the theoretical results and numerical tests with different data sets are described in
Section~\ref{sec:Benchmarks}.
\section{Optimal frequency distribution}\label{sec_p}
\subsection{Approximation rates using a Monte Carlo method}
The purpose of this section is to derive a bound for
\begin{equation}\label{E_omega_min_x}
\mathbb{E}_{\boldsymbol{\omega}}\big[ \min_{\boldsymbol{\hat\beta}\in\mathbb{C}^K}\big\{\mathbb{E}_\DATAM[| \beta(x)-y|^2]+\lambda|\boldsymbol{\hat\beta}|^2\big\}\big]
\end{equation}
and apply it to estimating the approximation rate for random Fourier features.
The {\it Fourier transform}
\[
\hat f(\omega):=(2\pi)^{-d/2}\int_{\mathbb{R}^d} f(x)e^{-{\mathrm{i}}\omega\cdot x}{\mathrm{d}} x
\]
has the inverse representation
\[
f(x)=(2\pi)^{-d/2}\int_{\mathbb{R}^d}\hat f(\omega)e^{{\mathrm{i}}\omega\cdot x}{\mathrm{d}} \omega
\]
provided $f$ and $\hat f$ are $L^1(\mathbb{R}^d)$ functions.
We assume that $\{\omega_1,\ldots, \omega_K\}$ are independent samples from a probability
density $p:\mathbb{R}^d\to [0,\infty)$. Then the
Monte Carlo approximation of this representation yields
the neural network approximation $f(x)\simeq \alpha(x,\boldsymbol{\omega})$ with the estimator defined
by the empirical average
\begin{equation}\label{estimator}
\alpha(x,\boldsymbol{\omega}) = \frac{1}{K}\sum_{k=1}^K \frac{1}{(2\pi)^{d/2}}\frac{\hat f(\omega_k)}{p(\omega_k)}e^{{\mathrm{i}}\omega_k\cdot x}\,.
\end{equation}
To assess the quality of this approximation we study the variance
of the estimator $\alpha(x,\boldsymbol{\omega})$. By construction and the i.i.d. sampling of $\omega_k$ the estimator is unbiased, that is,
\begin{equation}
\mathbb{E}_{\boldsymbol{\omega}}[\alpha(x,\boldsymbol{\omega})]=f(x)\,,
\end{equation}
and we define
\[
\hat\alpha_k :=\frac{1}{(2\pi)^{d/2}} \frac{\hat f(\omega_k)}{K\,p(\omega_k)}\,.
\]
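As a sanity check of the estimator \eqref{estimator} (a sketch of ours for $d=1$ and a Gaussian target, where $\hat f$ is known in closed form), the following Python code reproduces $f(x)$ by the importance-sampled empirical average.
\begin{verbatim}
# Check that alpha(x, omega) is an unbiased estimator of f(x) in d = 1
# for f(x) = exp(-x^2/2), whose Fourier transform (in the symmetric
# convention above) is fhat(w) = exp(-w^2/2); the frequencies are drawn
# from the standard normal density p.
import numpy as np

rng = np.random.default_rng(1)
K = 100_000
w = rng.standard_normal(K)                      # omega_k ~ p
fhat = np.exp(-w**2 / 2)
p = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)

x = 0.7
alpha = np.mean(fhat / p * np.exp(1j * w * x)) / np.sqrt(2 * np.pi)
print(alpha.real, np.exp(-x**2 / 2))            # both close to f(x)
\end{verbatim}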
Using this Monte Carlo approximation we obtain a bound on the error which reveals a rate of convergence with respect to the number of features $K$.
\begin{theorem}\label{lemma:bound}
Suppose that the frequencies $\{\omega_1,\dots,\omega_K\}$ are i.i.d. random variables with the common distribution $p(\omega){\mathrm{d}}\omega$. Then
\begin{equation}\label{variance_f}
\mathrm{Var}_{\boldsymbol{\omega}}[\alpha(x,\boldsymbol{\omega})] = \frac{1}{K}\mathbb{E}_\omega\left[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}-f^2(x)\right]\,,
\end{equation}
and
\begin{equation}\label{main_rate_merr}
\mathbb{E}_{{\boldsymbol{\omega}}}\big[ \min_{\boldsymbol{\hat\beta}\in\mathbb{C}^K}\big\{\mathbb{E}_\DATAM[| \beta(x)-y|^2]+\lambda|\boldsymbol{\hat\beta}|^2\big\}\big]
\le \frac{1+\lambda}{K}\mathbb{E}_{\omega}[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}]+ \mathbb{E}_\DATAM[|y-f(x)|^2]\,.
\end{equation}
If there is no measurement error, i.e., $\sigma_\xi^2=0$ and $y_n=f(x_n)$, then
\begin{equation}\label{main_rate}
\mathbb{E}_{\boldsymbol{\omega}}\big[\min_{\boldsymbol{\hat\beta}\in\mathbb{C}^K} \big\{\mathbb{E}_\DATAM[|\beta(x)-f(x)|^2]+\lambda|\boldsymbol{\hat\beta}|^2\big\}\big]
\le \frac{1+\lambda}{K}\mathbb{E}_\omega[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}]\,.
\end{equation}
\end{theorem}
\begin{proof}
Direct calculation shows that the variance of the Monte Carlo approximation satisfies
\begin{equation}
\begin{split}
\mathbb{E}_{\boldsymbol{\omega}}[|\alpha(x,\boldsymbol{\omega})-f(x)|^2]
&=K^{-2}\mathbb{E}_{\boldsymbol{\omega}}\left[
\sum_{k=1}^K\sum_{\ell=1}^K\Big(\frac{\hat f(\omega_k )e^{{\mathrm{i}}\omega_k\cdot x}}{(2\pi)^{d/2}p(\omega_k)} - f(x)\Big)^*
\Big(\frac{\hat f(\omega_\ell )e^{{\mathrm{i}}\omega_\ell\cdot x}}{(2\pi)^{d/2}p(\omega_\ell)} - f(x)\Big)\right]\\
&=K^{-1} \mathbb{E}_\omega\left[|\frac{\hat f(\omega )e^{{\mathrm{i}}\omega\cdot x}}{(2\pi)^{d/2}p(\omega)} - f(x)|^2\right]
=K^{-1}\mathbb{E}_\omega\left[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}-f^2(x)\right]\,,
\end{split}
\end{equation}
and since the minimum over $\boldsymbol{\hat\beta}$ is bounded by the value at the particular choice $\boldsymbol{\hat\beta}=\boldsymbol{\hat\alpha}$, we obtain the random feature error estimate in the case without a measurement error, i.e., $\sigma^2_\xi=0$ and $y_n=f(x_n)$,
\begin{equation*}
\begin{split}
\mathbb{E}_{\boldsymbol{\omega}}\big[\min_{\boldsymbol{\hat\beta}\in\mathbb{C}^K} \big\{\mathbb{E}_\DATAM[|\beta(x)-f(x)|^2]+\lambda|\boldsymbol{\hat\beta}|^2\big\}\big]
&\le \mathbb{E}_{\boldsymbol{\omega}}\big[ \mathbb{E}_\DATAM[|\alpha(x)-f(x)|^2]+\lambda|\boldsymbol{\hat\alpha}|^2\big]\\
& \le \frac{1}{K} \mathbb{E}[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}-f^2(x)]+\, \frac{\lambda}{K}
\mathbb{E}_\omega[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}]\\
&\le \frac{1+\lambda}{K}\mathbb{E}_\omega[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}]\,.
\end{split}
\end{equation*}
Including the measurement error yields after a straightforward calculation an additional term
\[
\mathbb{E}_{{\boldsymbol{\omega}}}\big[ \min_{\boldsymbol{\hat\beta}\in\mathbb{C}^K}\big\{\mathbb{E}_\DATAM[| \beta(x)-y|^2]+\lambda|\boldsymbol{\hat\beta}|^2\big\}\big]
\le \frac{1+\lambda}{K}\mathbb{E}_{\omega}[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p(\omega)^2}]+ \mathbb{E}_\DATAM[|y-f(x)|^2]\,.
\]
\end{proof}
\subsection{Comments on the convergence rate and its complexity}\label{complexity}%
The bounds \eqref{main_rate} and \eqref{main_rate_merr} reveal the rate of convergence with respect to $K$.
To demonstrate the computational complexity and importance of using the adaptive sampling of frequencies we fix the
approximated function to be a simple Gaussian
$$
f(x)=e^{-|x|^2{\sigma}^2/2}\,,\;\;\;\mbox{with}\;\;\;
\hat f(\omega)=\frac{1}{(2\pi{\sigma}^2)^{d/2}} e^{-|\omega|^2/(2{\sigma}^2)}\,,
$$
and we consider the two cases ${\sigma}>\sqrt{2}$ and $0<{\sigma}\ll 1$. Furthermore, we choose a particular distribution $p$ by assuming the frequencies $\omega_k$, $k=1,\dots,K$, are drawn from the standard normal distribution
$\omega_k\sim\mathcal{N}(0,1)$ (i.e., the Gaussian density $p$ with mean zero and variance one).
\smallskip
\noindent{\it Example I (large $\sigma$).}
In the first example we assume that ${\sigma}>\sqrt{2}$, thus the integral $\int_{\mathbb{R}^d}\frac{|\hat f(\omega)|^2}{p(\omega)}{\mathrm{d}} \omega$ is unbounded. The error estimate \eqref{main_rate} therefore indicates no convergence.
Algorithm~\ref{alg:ARFM} on the other hand has the optimal convergence rate for this example.
\smallskip
\noindent{\it Example II (small $\sigma$).}
In the second example we choose $0<{\sigma}\ll 1$ thus
the convergence rate $\frac{1}{K}\int_{\mathbb{R}^d}\frac{|\hat f(\omega)|^2}{p(\omega)}{\mathrm{d}} \omega$ in \eqref{main_rate} becomes $K^{-1}\mathcal O({\sigma}^{-d})$ while the rate is $K^{-1}\mathcal O(1)$ for the optimal distribution
$p=p_*=|\hat f|$, as ${\sigma}\to 0+$.
The purpose of the adaptive random feature algorithm is to avoid the large factor $\mathcal O({\sigma}^{-d})$.
\medskip
Bounding the loss function by a given tolerance
$\TOL$ therefore requires that the non-adaptive random feature method uses $K\ge \TOL^{-1} \int_{\mathbb{R}^d}\frac{|\hat f(\omega)|^2}{p(\omega)}{\mathrm{d}} \omega\simeq
\TOL^{-1}\mathcal O({\sigma}^{-d})$, and,
with $N\sim K$, the computational work to solve the linear least squares problem is proportional to $K^3\simeq \TOL^{-3}\mathcal O({\sigma}^{-3d})$.
In contrast, the proposed adaptive random features Metropolis method solves the least squares problem several times with a smaller $K$ to obtain the bound $\TOL$ for the loss. The number of Metropolis steps is asymptotically determined by the diffusion approximation in \cite{roberts_1997} and becomes proportional to $\ALPHAEX d\, {\sigma}^{-2}$.
Therefore the computational work is smaller
$ \TOL^{-3}\mathcal O(\ALPHAEX d\, {\sigma}^{-2})$
for the adaptive method.
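The following small quadrature sketch (in Python; the grid and the Gaussian example are choices made here) illustrates this behaviour in $d=1$: the constant $\int_{\mathbb{R}}|\hat f(\omega)|^2/p(\omega)\,{\mathrm{d}}\omega$ entering the bound \eqref{main_rate} grows like $\mathcal O({\sigma}^{-1})$ for the fixed Gaussian $p$, but equals $\|\hat f\|_{L^1}^2=\mathcal O(1)$ for the optimal $p_*$:
\begin{verbatim}
import numpy as np

# A sketch (d = 1) of the constant int |fhat|^2 / p dw in the bound, comparing
# the fixed density p = N(0,1) with the optimal p_* = |fhat| / ||fhat||_1,
# for fhat(w) = (2 pi s^2)^{-1/2} exp(-w^2 / (2 s^2)).
w = np.linspace(-10.0, 10.0, 200001)
dw = w[1] - w[0]
for s in (0.5, 0.1, 0.05):
    fhat = np.exp(-w**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)
    p_gauss = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)
    c_gauss = (fhat**2 / p_gauss).sum() * dw     # grows like O(1/s)
    c_star = (np.abs(fhat).sum() * dw) ** 2      # int |fhat|^2/p_* = ||fhat||_1^2
    print(s, c_gauss, c_star)
\end{verbatim}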
\subsection{Optimal Monte Carlo sampling}\label{sec_3.3}
This section determines the optimal density $p$ for independent Monte Carlo samples in \eqref{variance_f} by minimizing, with respect to $p$,
the right hand side in the variance estimate \eqref{variance_f}.
\begin{theorem}
The probability density
\begin{equation} \label{eq:optimal_density}
p_*(\omega)= \frac{ |\widehat f(\omega)|}{\int_{\mathbb{R}^d}|\widehat f(\omega')| {\mathrm{d}} \omega'}
\end{equation}
is the solution of the minimization problem
\begin{equation}\label{MC_loss222}
\min_{p,\int_{\mathbb{R}^d} p(\omega){\mathrm{d}}\omega = 1}\left\{
\frac{1}{(2\pi)^{d}}
\int_{\mathbb{R}^d}\frac{|\widehat f(\omega)|^2}{p(\omega)}
{\mathrm{d}} \omega\right\}\,.
\end{equation}
\end{theorem}
\begin{proof}
The change of variables $p(\omega)=q(\omega)/\int_{\mathbb{R}^d}q(\omega){\mathrm{d}} \omega$
implies $\int_{\mathbb{R}^d}p(\omega){\mathrm{d}} \omega=1$ for any $q:\mathbb{R}^d\to[0,\infty)$. Define for any $v:\mathbb{R}^d\to \mathbb{R}$ and $\varepsilon$ close to zero
\[
H(\varepsilon):=%
\int_{\mathbb{R}^d}\frac{|\widehat f(\omega)|^2}{q(\omega)+\varepsilon v(\omega)}
{\mathrm{d}} \omega \int_{\mathbb{R}^d}\big(q(\omega)+\varepsilon v(\omega)\big) {\mathrm{d}} \omega\,.
\]
At the optimum we have
\[
\begin{split}
H'(0)
= \int_{\mathbb{R}^d}\frac{|\widehat f(\omega)|^2v(\omega)}{-q^2(\omega)}{\mathrm{d}} \omega
\underbrace{\int_{\mathbb{R}^d}q(\omega') {\mathrm{d}} \omega' }_{=:c_1}
+ \underbrace{\int_{\mathbb{R}^d}\frac{|\widehat f(\omega')|^2}{q(\omega')} {\mathrm{d}} \omega'}_{=:c_2}
\int_{\mathbb{R}^d}v(\omega) {\mathrm{d}} \omega
=
\int_{\mathbb{R}^d}\big(c_2-c_1\frac{|\widehat f(\omega)|^2}{q^2(\omega)}\big)v(\omega) {\mathrm{d}} \omega
\end{split}
\]
and the optimality condition
$H'(0)=0$ implies
$q(\omega)=(\frac{c_1}{c_2})^{1/2} |\widehat f(\omega)|
$.
Consequently the optimal density becomes
\begin{equation*} %
p_*(\omega)= \frac{ |\widehat f(\omega)|}{\int_{\mathbb{R}^d}|\widehat f(\omega')| {\mathrm{d}} \omega'}
\,.
\end{equation*}
\end{proof}
We note that the optimal density does not depend on the number of Fourier features, $K$, and the number of data points, $N$, in contrast to the optimal density for the least squares problem \eqref{eq:num_disc_prob}
derived in \cite{bach}.
As mentioned at the beginning of this section sampling $\omega_k$ from the distribution $p_*(\omega){\mathrm{d}}\omega$ leads to the tight upper bound on the approximation error in \eqref{main_rate}.
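When $\hat f$ is known, as in synthetic tests, $p_*$ can be sampled directly; the following sketch in $d=1$ (the rational example spectrum and the grid are assumptions made here) uses the inverse-CDF method on a grid:
\begin{verbatim}
import numpy as np

# A sketch of drawing i.i.d. frequencies from p_* = |fhat| / ||fhat||_1 on a
# 1d grid via the inverse-CDF method; fhat can be any array of Fourier
# transform values on the grid (e.g. obtained with np.fft.fft).
rng = np.random.default_rng(1)
w = np.linspace(-50.0, 50.0, 20001)
fhat = 1.0 / (1.0 + w**2)           # assumed example spectrum
pdf = np.abs(fhat)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]
u = rng.random(512)
omega = np.interp(u, cdf, w)        # 512 samples approximately from p_*
\end{verbatim}
The adaptive algorithm of Section~\ref{sec:adaptive} targets the practically relevant case where $\hat f$ is not known.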
\section{Asymptotic behavior of amplitudes $\hat\beta_k$}\label{sec_amplitude}
The optimal density $p_*=|\hat f|/\|\hat f\|_{L^1(\mathbb{R}^d)}$ can be related to data as follows:
by considering the problem \eqref{E_omega_min_x}
and letting $\zeta(x)=\sum_{k=1}^K \hat\zeta_k e^{{\mathrm{i}}\omega_k\cdot x}$ be a least squares minimizer of
\begin{equation}\label{zeta_min}
\min_{\zeta\in\mathcal{N}_K}\left\{\mathbb{E}_\DATAM[|\zeta(x)-y|^2\ |\ \omega] +\lambda|\hat{\boldsymbol{\zeta}}|^2\right\}
\end{equation}
the vanishing gradient at a minimum yields the normal equations, for $\ell=1,\ldots, K$,
\[
\sum_{k=1}^K\mathbb{E}_\DATAM[e^{{\mathrm{i}} (\omega_k-\omega_\ell)\cdot x}\hat\zeta_k] +\lambda\hat\zeta_\ell
=\mathbb{E}_\DATAM[y\, e^{-{\mathrm{i}}\omega_\ell\cdot x}]
=\mathbb{E}_x[f(x)e^{-{\mathrm{i}}\omega_\ell\cdot x}]\,.
\]
Thus if the $x_n$-data points are distributed according to a distribution %
with a density $\rho:\mathbb{R}^d\to [0,\infty)$ we have
\begin{equation}\label{normaleq}
\sum_{k=1}^K\int_{\mathbb{R}^d} e^{{\mathrm{i}} (\omega_k-\omega_\ell)\cdot x}\hat\zeta_k \rho(x) {\mathrm{d}} x +\lambda\hat\zeta_\ell
=\int_{\mathbb{R}^d} f(x)e^{-{\mathrm{i}}\omega_\ell\cdot x} \rho(x){\mathrm{d}} x\,
\end{equation}
and the normal equations can be written in the Fourier space as
\begin{equation}\label{limit_Z_eq}
\sum_{k=1}^K \hat\rho(\omega_\ell-\omega_k)\hat\zeta_k +\lambda\hat\zeta_\ell
=\widehat{(f\rho)}(\omega_\ell)\,,\;\;\; \ell=1,\ldots, K\,.
\end{equation}
Given the solution $\boldsymbol{\hat\zeta}_K=(\hat\zeta_1,\dots,\hat\zeta_K)$ of the normal equation \eqref{limit_Z_eq} we define $\mathbf{\hat z}_K=(\hat z_1,\dots,\hat z_K)$
\begin{equation}\label{zhat}
\hat z_k := K\,p(\omega_k)\hat\zeta_k\,.
\end{equation}
Given a sequence of samples $\{\omega_k\}_{k=1}^\infty$ drawn independently from a density $p$ we impose the following assumptions:
\begin{itemize}
\item[(i)] there exists a constant $C$ such that
\begin{equation}\label{max}
\sum_{k=1}^K |\hat\zeta_k| \equiv \frac{1}{K}\sum_{k=1}^K\frac{|\hat z_k|}{ p(\omega_k)} \le C \tag{A1}
\end{equation}
for all $K>0$,
\item[(ii)] as $K\to\infty$ we have
\begin{equation}\label{unif_b}
\lim_{K\to\infty} \max_{k\in\{1,\ldots,K\}} |\hat\zeta_k|\equiv \lim_{K\to\infty} \max_{k\in\{1,\ldots,K\}}\frac{|\hat z_k|}{K\,p(\omega_k)}=0\,,
\tag{A2}
\end{equation}
\item[(iii)] there is a bounded open set $\mathcal{U}\subset\mathbb{R}^d$ such that
\begin{equation}\label{suppassume}
{\mathrm{supp}\,} \hat f\subset {\mathrm{supp}\,} p\subset \mathcal{U}\, ,\tag{A3}
\end{equation}
\item[(iv)] the sequence $\{\omega_k\}_{k=1}^\infty$ is dense in the support of $p$, i.e.
\begin{equation}\label{eq:dense}
\overline{\{\omega_k\}_{k=1}^\infty}={\mathrm{supp}\,} p\, .\tag{A4}
\end{equation}
\end{itemize}
We note that \eqref{eq:dense} almost follows from \eqref{suppassume}, since the latter implies that the density $p$ has a bounded first moment. Hence the law of large numbers implies that, with probability one, the sequence $\{\omega_k\}_{k=1}^\infty$ is dense in the support of $p$.
In order to treat the limiting behaviour of $\mathbf{\hat z}_K$ as $K\to\infty$
we introduce the empirical measure
\begin{equation}\label{empirical_def}
\hat Z_K(\omega):=\frac{1}{K}\sum_{k=1}^K\frac{\hat z_k}{p(\omega_k)}\delta(\omega-\omega_k)\,.
\end{equation}
Thus we have for $\ell=1,\ldots, K$
\begin{equation}\label{z_conv}
\sum_{k=1}^K \hat\rho(\omega_\ell-\omega_k)\hat\zeta_k
=\int_{\mathbb{R}^d}\hat\rho(\omega_\ell-\omega) \hat Z_K({\mathrm{d}} \omega)
\,,
\end{equation}
so that the normal equations \eqref{limit_Z_eq} take the form
\begin{equation}\label{normal_Z}
\int_{\mathbb{R}^d}\hat\rho(\omega_\ell-\omega) \hat Z_K({\mathrm{d}} \omega) +\lambda\hat\zeta_\ell=
\widehat{(f\rho)}(\omega_\ell)\,,\;\;\; \ell=1,\ldots, K\,.
\end{equation}
By the assumption \eqref{max} the empirical measures are uniformly bounded in the total variation norm
\[
\int_{\mathbb{R}^d}|\hat Z_K|({\mathrm{d}} \omega)= \frac{1}{K}\sum_{k=1}^K \frac{|\hat z_k|}{ p(\omega_k)}\le C\,.
\]
We note that by \eqref{suppassume} the measures $\hat Z_K$ in $\mathbb{R}^d$ have their support in $\mathcal{U}$. We obtain the weak convergence result stated as the following
Proposition.
\begin{proposition}\label{thm_improve_p}
Let $\boldsymbol{\hat\zeta}_K$ be the solution of the normal equation \eqref{limit_Z_eq} and $\hat Z_K$ the empirical measures defined by \eqref{empirical_def}.
Suppose that the assumptions \eqref{max}, \eqref{unif_b}, \eqref{suppassume}, and \eqref{eq:dense} hold, and that the density of $x$-data, $\rho$, has support on all of $\mathbb{R}^d$ and satisfies $\hat\rho\in C^1$,
then
\begin{equation}\label{thm_lim}
\lim_{\varepsilon\to 0+}\lim_{K\to\infty}
\int_{\mathbb{R}^d} \phi_\varepsilon(\cdot - \omega')\,\hat Z_K({\mathrm{d}} \omega')
=\hat f\,,\;\;\;\;\;\mbox{ in $L^1(\mathbb{R}^d)$}\,,
\end{equation}
where $\phi_\varepsilon:\mathbb{R}^d\to \mathbb{R}$ are non negative smooth functions with a support in the ball $
\mathcal{B}_\varepsilon=
\{\omega\in\mathbb{R}^d\,\big|\, |\omega|\le \varepsilon\}$ and
satisfying $\int_{\mathbb{R}^d}\phi_\varepsilon(\omega){\mathrm{d}} \omega=1$.
\end{proposition}
\begin{proof}%
To simplify the presentation we introduce
\[
\hat\chi_{K,\varepsilon}(\omega) := \hat Z_K * \phi_\varepsilon(\omega) = \int_{\mathbb{R}^d} \phi_\varepsilon(\omega - \omega') \hat Z_K({\mathrm{d}} \omega')\,.
\]
The proof consists of three steps:
\begin{itemize}
\item[1.] compactness yields an $L^1$ convergent subsequence of the regularized empirical measures $\{\hat\chi_{K_j,\varepsilon}\}_{j=1}^\infty$,
\item[2.] the normal equation \eqref{limit_Z_eq} implies a related equation for the subsequence limit, and
\item[3.] a subsequence of the empirical measures converges weakly and as $\varepsilon\to 0+$ the limit normal equation establishes \eqref{thm_lim}.
\end{itemize}
{\it Step 1.} As $\phi_\varepsilon$ are standard mollifiers, we have, for a fixed $\varepsilon >0$, that the smooth functions (we omit $\varepsilon$ in the notation $\hat\chi_{K,\varepsilon}$)
\[
\hat\chi_{K} \equiv \hat Z_K*\phi_\varepsilon:\mathbb{R}^d\to \mathbb{C}
\]
have derivatives that are uniformly bounded with respect to $K$, namely
$\|\nabla \hat\chi_K\|_{L^1(\mathbb{R}^d)}=\mathcal O(\varepsilon^{-1})$.
Let $\mathcal{V}$ be the Minkowski sum
$\mathcal{V}=\mathcal{U}+\mathcal{B}_\varepsilon = \{a+b : a\in\mathcal{U}, b\in\mathcal{B}_\varepsilon\}$.
By compactness, see \cite{evans}, there is a $L^1(\mathcal{V})$ converging subsequence of functions $\{\hat \chi_K\}$, i.e., $\hat\chi_K\to \hat\chi$ in $L^1(\mathcal{V})$
as $K\rightarrow\infty$.
Since as a consequence of the assumption \eqref{suppassume} we have
that ${\mathrm{supp}\,} \hat Z_K \subset \mathcal{U}$ for all $\hat Z_K$, and hence ${\mathrm{supp}\,} \hat\chi_K \subset \mathcal{V}$ for all $\hat\chi_K$, then the limit
$\hat \chi$ has its support in $\mathcal{V}$. Hence $\hat\chi$ can be extended to zero on $\mathbb{R}^d\setminus \mathcal{V}$. Thus we obtain
\begin{equation}\label{g_lim}
\lim_{K\to\infty}\int_{\mathbb{R}^d} g(\omega)\hat\chi_K({\mathrm{d}} \omega)=\int_{\mathbb{R}^d} g(\omega) \hat \chi(\omega) {\mathrm{d}}\omega
\end{equation}
for all $g\in C^1(\mathbb{R}^d)$.
{\it Step 2.} The normal equations %
\eqref{normal_Z} can be written as a perturbation of the convergence \eqref{g_lim} using that
we have
\[
\int_{\mathbb{R}^d} g(\omega)\hat Z_K({\mathrm{d}} \omega)
-\int_{\mathbb{R}^d} g(\omega)\hat\chi_K({\mathrm{d}} \omega)
=\int_{\mathbb{R}^d} \big(g(\omega)-g*\phi_\varepsilon(\omega)\big)
\hat Z_K({\mathrm{d}} \omega)=\mathcal O(\varepsilon)\,.
\]
Thus we re-write the term $\int \hat\rho(\omega_\ell - \omega') \hat Z_K({\mathrm{d}}\omega')$ in
\eqref{normal_Z} as
\[
\int_{\mathbb{R}^d} \hat\rho(\omega-\omega') \hat Z_K({\mathrm{d}}\omega') = \int_{\mathbb{R}^d} \hat\rho(\omega - \omega') \hat \chi_K(\omega'){\mathrm{d}}\omega' + \mathcal{O}(\varepsilon)\,,
\]
now considering a general point $\omega$ instead of $\omega_\ell$ and the change of measure from $\hat Z_K$ to $\hat \chi_K$,
and by Taylor's theorem%
\[
\hat\rho(\omega-\omega')=
\hat\rho(\omega_p-\omega')
+\hat\rho(\omega-\omega')-\hat\rho(\omega_p-\omega')
=\hat\rho(\omega_p-\omega')
+\int_0^1 \nabla\hat\rho\big(s\omega +(1-s)\omega_p-\omega'\big){\mathrm{d}} s \cdot(\omega-\omega_p)
\]
where
\[
\min_{p\in \{1,\ldots, K\}}|\int_0^1 \nabla\hat\rho\big(s\omega +(1-s)\omega_p-\omega'\big){\mathrm{d}} s \cdot(\omega-\omega_p)|\to 0\,,\;\;\;
\mbox{as $K\to \infty$,}
\]
since by assumption the set $\{\omega_k\}_{k=1}^\infty$ is dense in the support of $p$.
Since $\lambda\hat\zeta_\ell \to 0$, as $K\to\infty$ by assumption %
\eqref{unif_b},
the normal equation %
\eqref{normal_Z} implies that
the limit is determined by
\begin{equation}\label{fp_eq}
\widehat{(f\rho)}(\omega)=\int_{\mathbb{R}^d}\hat\rho(\omega-\omega') \hat \chi(\omega'){\mathrm{d}} \omega'+\mathcal O(\varepsilon)\,,\quad \omega\in\mathbb{R}^d\,.
\end{equation}
We have here used that the function $\widehat{(f\rho)}$ is continuous as $\hat\rho\in C^1$, and the denseness of the sequence $\{\omega_k\}_{k=1}^\infty$.
{\it Step 3.} From the assumption \eqref{max} all $\hat Z_K$ are uniformly bounded in the total variation norm and supported on a compact set, therefore there is a weakly converging subsequence $\hat Z_K\rightharpoonup \hat Z$, i.e., for all $g\in C^1(\mathcal{V})$
\[
\lim_{K\to\infty}\int_{\mathcal{V}}g(\omega)\hat Z_K({\mathrm{d}} \omega)=
\int_{\mathcal{V}}g(\omega)\hat Z({\mathrm{d}} \omega)\,.
\]
This subsequence of $\hat Z_K$ can be chosen as a subsequence of the converging sequence $\hat\chi_K$.
Consequently we have
\[
\lim_{K\to\infty} \hat Z_K*\phi_\varepsilon=\hat Z*\phi_\varepsilon=\hat\chi\,.
\]
As $\varepsilon\to 0_+$ in $\hat Z*\phi_\varepsilon$ we obtain by \eqref{fp_eq}
\[
\widehat{(f\rho)}(\omega)=\int_{\mathbb{R}^d}\hat\rho(\omega-\omega') \hat Z({\mathrm{d}} \omega')\,,\;\;\;\; \omega\in\mathbb{R}^d\,,
\]
and we conclude, by the inverse Fourier transform, that
\[
Z(x)\rho(x)=f(x)\rho(x)\,,\;\;\;\; x\in\mathbb{R}^d\,,
\]
for $f\in C_0(\mathbb{R}^d)$ and $\rho$ in the Schwartz class.
If the support of $\rho$ is $\mathbb{R}^d$ we obtain that $\hat Z=\hat f\in L^1(\mathbb{R}^d)$.
\end{proof}
\medskip
The approximation in Proposition~\ref{thm_improve_p} is in the sense of the large data set limit, $N\to\infty$, which implies $\boldsymbol{\hat\beta}= \boldsymbol{\hat\zeta}_K$.
Then by the result of the proposition the regularized empirical measure for $\boldsymbol{\hat\zeta}_K$, namely $\hat Z_K*\phi_\varepsilon$, satisfies
\[\hat Z_K*\phi_\varepsilon\underbrace{\to}_{K\to\infty} \hat f*\phi_\varepsilon\underbrace{\to}_{\varepsilon\to 0+} \hat f\,,
\mbox{ in $L^1(\mathbb{R}^d)$}\,,
\]
which shows that $Kp(\omega_k)\hat\beta_k$ converges weakly to $\hat f(\omega_k)$ as $K\to\infty$ and we have $|\hat f(\omega_k)|=p_*(\omega_k) \|\hat f\|_{L^1(\mathbb{R}^d)}$.
We remark that this argument gives a heuristic justification for the
proposed adaptive algorithm; in particular, it explains the idea behind the choice of the likelihood ratio in the Metropolis accept-reject criterion.
\begin{remark}\label{optimal_alpha}
By Proposition~\ref{thm_improve_p} $K\, p(\omega_k)\hat\beta_k$ converges weakly
to $\hat f(\omega_k)$ as $K\to\infty$.
If it also converged strongly,
the asymptotic sampling density for $\omega$ in the random feature Metropolis method would satisfy $p=(C|\hat f|)^\ALPHAEX/p^\ALPHAEX$
which has the fixed point solution
$p=(C|\hat f|)^{\frac{\ALPHAEX}{\ALPHAEX+1}}$.
As $\ALPHAEX\to \infty$ this density $p$
approaches the optimal $|\hat f|/\|\hat f\|_{L^1(\mathbb{R}^d)}$.
On the other hand the computational work increases with larger $\ALPHAEX$, in particular the number of Metropolis steps is asymptotically determined by the diffusion approximation in \cite{roberts_1997} and becomes inversely proportional to the variance of the target density, which now depends on $\ALPHAEX$. If the target density is Gaussian with the standard deviation ${\sigma}$, the number of Metropolis steps are then approximately $\mathcal O(\ALPHAEX d {\sigma}^{-2})$ while the density is
asymptotically proportional to $|\hat f|^{\ALPHAEX/(\ALPHAEX +1)}\sim e^{-\frac{|\omega|^2\ALPHAEX}{2{\sigma}^2(\ALPHAEX +1)}}$ which yields $\int_{\mathbb{R}^d}\frac{|\hat f(\omega)|^2}{p(\omega)}{\mathrm{d}} \omega=\mathcal O\big((1+2\ALPHAEX^{-1})^{d/2}\big)$. Thus the work for loss $\epsilon$ is roughly proportional to $\epsilon^{-3}(1+2\ALPHAEX^{-1})^{3d/2} d\ALPHAEX{\sigma}^{-2}$ which is minimal for $\ALPHAEX=3d-2$.
\end{remark}
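The stated minimizer can be checked numerically; the following small sketch minimizes the $\ALPHAEX$-dependent factor $(1+2\ALPHAEX^{-1})^{3d/2}\,\ALPHAEX$ of the work estimate over a grid (the grid bounds are arbitrary choices made here):
\begin{verbatim}
import numpy as np

# Numerical check that alpha = 3d - 2 minimizes
# W(alpha) = (1 + 2/alpha)^(3d/2) * alpha, up to alpha-independent factors.
for d in (1, 2, 5, 10):
    alpha = np.linspace(0.1, 10.0 * d, 100000)
    work = (1 + 2 / alpha) ** (3 * d / 2) * alpha
    print(d, 3 * d - 2, alpha[np.argmin(work)])
\end{verbatim}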
\begin{remark}[How to remove the assumptions \eqref{max} and \eqref{unif_b}]
Assumption \eqref{max} will hold if we replace the minimization \eqref{zeta_min} by
\begin{equation*}\label{zeta_min_2}
\min_{\zeta\in\mathcal N_K}\Big(\mathbb{E}[|\zeta(x)-f(x)|^2\ |\ \omega] +\lambda|\hat{\boldsymbol{\zeta}}|^2
+ \lambda_1 \sum_{k=1}^K\max(0,|\hat \zeta_k|-\frac{\lambda_2}{K})\Big)\,,
\end{equation*}
where $\lambda_1$ and $\lambda_2$ are positive constants with
$\lambda_2> \|\frac{\hat f}{p}\|_{L^\infty(\mathbb{R}^d)}$, and this additional penalty yields a least squares problem
with the same accuracy as \eqref{main_rate}, since for the optimal solution the penalty vanishes.
The other assumption \eqref{unif_b}, which is used to
obtain $\lambda|\hat\zeta_\ell |\to 0$ as $K\to\infty$,
can be removed by letting $\lambda$ tend to zero slowly as $K\to\infty$,
since then by \eqref{max} we obtain
\[
\lambda|\hat\zeta_\ell | \le \lambda \sum_{k=1}^K|\hat\zeta_k|\le \lambda C\to 0\ \mbox{ as $K\to\infty$}.
\]
\end{remark}
\section{Description of algorithms}\label{sec:adaptive}
\begin{algorithm}[tb]
\caption{Adaptive random Fourier features with Metropolis sampling}\label{alg:ARFM}
\begin{algorithmic}
\STATE {\bfseries Input:} $\{(x_n, y_n)\}_{n=1}^N$\COMMENT{data}
\STATE {\bfseries Output:} $x\mapsto\sum_{k=1}^K\hat\beta_ke^{{\mathrm{i}}\omega_k\cdot x}$\COMMENT{random features}
\STATE Choose a sampling time $T$, a proposal step length $\delta$, an exponent $\ALPHAEX$ (see Remark \ref{optimal_alpha}), a Tikhonov parameter $\lambda$ and a frequency $m$ of $\boldsymbol{\hat{\beta}}$ updates
\STATE $M \gets \mbox{ integer part}\, (T/\delta^2)$
\STATE ${\boldsymbol{\omega}} \gets \textit{the zero vector in $\mathbb{R}^{Kd}$}$
\STATE $\boldsymbol{\hat{\beta}} \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} given } \boldsymbol{\omega}$
\FOR{$i = 1$ {\bfseries to} $M$}
\STATE $r_{\mathcal{N}} \gets \textit{standard normal random vector in $\mathbb{R}^{Kd}$}$
%
\STATE $\boldsymbol{\omega}' \gets \boldsymbol{\omega} + \delta r_{\mathcal{N}}$ \COMMENT{random walk Metropolis proposal}
\STATE $\boldsymbol{\hat{\beta}}' \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} given } \boldsymbol{\omega}'$
\FOR{$k = 1$ {\bfseries to} $K$}
\STATE $r_{\mathcal{U}} \gets \textit{sample from uniform distr. on $[0,1]$}$ %
\IF {$|\hat{\beta}'_k|^\ALPHAEX/|\hat{\beta}_k|^\ALPHAEX>r_{\mathcal{U}}$\COMMENT{Metropolis test}}
\STATE $\omega_{k} \gets \omega'_k$
\STATE $\hat{\beta}_{k} \gets \hat{\beta}'_k$
\ENDIF
\ENDFOR
\IF {$i \mod m = 0$}
\STATE $\boldsymbol{\hat{\beta}} \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} with adaptive } \boldsymbol{\omega}$
\ENDIF
\ENDFOR
\STATE $\boldsymbol{\hat{\beta}} \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} with adaptive } \boldsymbol{\omega}$
\STATE $x\mapsto\sum_{k=1}^K\hat\beta_ke^{{\mathrm{i}}\omega_k\cdot x}$
\end{algorithmic}
\end{algorithm}
In this section we formulate the adaptive random features Algorithm~\ref{alg:ARFM}, and its extension, Algorithm~\ref{alg:ARFM_acov}, which adaptively updates the covariance matrix when sampling frequencies $\boldsymbol{\omega}$. Both algorithms are tested on different data sets and the tests are described in Section \ref{sec:Benchmarks}.
Before running Algorithm \ref{alg:ARFM} or \ref{alg:ARFM_acov} we normalize all training data to have mean zero and component-wise standard deviation one. The normalization procedure is described in Algorithm \ref{alg:normalization_of_data}.
A discrete version of problem \eqref{eq:minsquare} can be formulated, for training data $\{(x_n, y_n)\}_{n=1}^N$, as the standard least squares problem
\begin{equation}\label{eq:num_disc_prob}
\min_{\boldsymbol{\hat{\beta}}\in \mathbb{C}^K}\left\{N^{-1}|\BARS\boldsymbol{\hat{\beta}}-\mathbf y|^2 + \lambda|\boldsymbol{\hat{\beta}}|^2\right\}
\end{equation}
where $\BARS\in\mathbb{C}^{N\times K}$ is the matrix with elements $\BARS_{n,k} = e^{{\mathrm{i}}\omega_k\cdot x_n}$, $n = 1,...,N$, $k = 1,...,K$ and $\mathbf y=(y_1,\ldots,y_N)\in \mathbb{R}^N$. Problem \eqref{eq:num_disc_prob} has the corresponding linear normal equations
\begin{equation}\label{eq:num_normal_eq}
(\BARS^T\BARS+\lambda N \mathbf{I})\boldsymbol{\hat{\beta}} = \BARS^T\mathbf y
\end{equation}
which can be solved e.g. by singular value decomposition if $N\sim K$ or by the stochastic gradient method if $N\gg K$, cf. \cite{trefethen_bau_1997} and \cite{understand}. Other alternatives when $N\gg K$ are to sample the data points
using information about the kernel in the random features, see e.g.\,\cite{bach}.
Here we do not focus on the interesting and important question of how to optimally sample data points. Instead we focus on how to sample random features, i.e., the frequencies $\omega_k$.
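For concreteness, a minimal sketch of the solve (in Python with NumPy; the conjugate transpose, the natural interpretation of $\BARS^T$ for complex features, and the direct dense solver, one of the options mentioned above, are choices made here):
\begin{verbatim}
import numpy as np

# A sketch of solving the Tikhonov-regularised least squares problem
# (eq:num_disc_prob) for fixed frequencies omega: beta solves
# (S^* S + lambda N I) beta = S^* y with S_{n,k} = exp(i omega_k . x_n).
def fit_amplitudes(x, y, omega, lam):
    # x: (N, d), y: (N,), omega: (K, d)
    S = np.exp(1j * x @ omega.T)              # (N, K) feature matrix
    N, K = S.shape
    A = S.conj().T @ S + lam * N * np.eye(K)
    return np.linalg.solve(A, S.conj().T @ y) # beta_hat in C^K
\end{verbatim}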
In the random walk Metropolis proposal step in Algorithm \ref{alg:ARFM} the vector $r_{\mathcal{N}}$ is a sample from the standard multivariate normal distribution. The choice of distribution to sample $r_{\mathcal{N}}$ from is somewhat arbitrary and not always optimal. Consider for example a target distribution in two dimensions that has elliptically shaped level surfaces. From a computational complexity point of view one would like to take different step lengths in different directions.
The computational complexity reasoning leads us to consider to sample $r_{\mathcal{N}}$ from a multivariate normal distribution with a covariance matrix $C_t$ adaptively updated during the $M$ iterations. The general idea of adaptively updating a covariance matrix during Metropolis iterations is not novel. A recursive algorithm for adaptively updating the covariance matrix is proposed and analysed in \cite{haario2001} and further test results are presented in \cite{roberts_examples}.
In Algorithm~\ref{alg:ARFM} we choose the initial step length value $\delta = 2.4^2/d$ which is motivated for general Metropolis sampling in \cite{roberts2001}. When running Algorithm~\ref{alg:ARFM} the value of $\delta$ can be adjusted as a hyperparameter depending on the data.
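A condensed sketch of Algorithm~\ref{alg:ARFM} itself may then read as follows (with \texttt{fit\_amplitudes} from the previous sketch; the argument values below are only indicative, the choices actually used in the experiments are listed in Table~\ref{table:parameter_comparison}):
\begin{verbatim}
import numpy as np

# A sketch of Algorithm 1: random walk Metropolis on the frequencies with the
# acceptance ratio |beta'_k|^alpha / |beta_k|^alpha, re-fitting the amplitudes
# by the least squares solve every m-th iteration.
def arfm(x, y, K, T, delta, alpha_ex, lam=0.1, m=10, seed=0):
    rng = np.random.default_rng(seed)
    N, d = x.shape
    M = int(T / delta**2)
    omega = np.zeros((K, d))
    beta = fit_amplitudes(x, y, omega, lam)
    for i in range(1, M + 1):
        omega_p = omega + delta * rng.standard_normal((K, d))  # proposal
        beta_p = fit_amplitudes(x, y, omega_p, lam)
        # Metropolis test, written division-free to avoid 0/0:
        accept = np.abs(beta_p)**alpha_ex > rng.random(K) * np.abs(beta)**alpha_ex
        omega[accept] = omega_p[accept]
        beta[accept] = beta_p[accept]
        if i % m == 0:
            beta = fit_amplitudes(x, y, omega, lam)
    return omega, fit_amplitudes(x, y, omega, lam)
\end{verbatim}
For example, a call such as \texttt{arfm(x, y, K=256, T=1.0, delta=0.1, alpha\_ex=3*d-2)} gives $M=100$ Metropolis iterations.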
In Algorithm~\ref{alg:ARFM_acov} there is the hyperparameter $\omega_{\text{max}}$ which defines the maximum radius of frequencies $\omega$ that can be sampled by Algorithm~\ref{alg:ARFM_acov}. In some problems the sampling of frequencies will start to diverge unless $\omega_{\text{max}}$ is finite. This will typically happen if the frequency distribution decays slowly as $|\omega|\to\infty$. %
In the convergence proof of \cite{haario2001} the probability density function of the distribution to be sampled from is required to have compact support. In practice, though, when applying Algorithm~\ref{alg:ARFM_acov} we notice that a normal distribution approximates compactness sufficiently well to not require $\omega_{\text{max}}$ to be finite. Another approach is to let $\omega_{\text{max}}$ be infinity and adjust the hyperparameters $\delta$ and $M$ so that the iterations do not diverge from the minima.
All hyperparameters are adjusted to minimize the error computed on a validation set disjoint from the training and test data sets.
With adaptive covariance combined into Algorithm \ref{alg:ARFM} we sample $r_{\mathcal{N}}$ from $ \mathcal{N}(\boldsymbol{0}, \Bar{C})$ where initially $\Bar{C}$ is the identity matrix in $\mathbb{R}^{d\times d}$. After each iteration $i = 1,2,...,M$
the covariance $\Bar{C}'$ of all previous frequencies $\omega_k^j$, $k = 1,2,...,K$, $j < i$ is computed. After $t_0$ iterations we update $\Bar{C}$ with the value of $\Bar{C}'$.
We present adaptive covariance applied to Algorithm \ref{alg:ARFM} in Algorithm \ref{alg:ARFM_acov}.
The initial value of $\boldsymbol{\omega}$ is set to the zero vector in $\mathbb{R}^{Kd}$ so at iteration $i = 1$, the proposal frequency $\boldsymbol{\omega}'$ will be a sample from $\mathcal{N}(\boldsymbol{0}, \text{diag}([\delta^2, \delta^2,..., \delta^2]))$ where $[\delta^2, \delta^2,..., \delta^2]$ is a vector in $\mathbb{R}^d$.
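A self-contained sketch of this running mean and covariance update (the random stand-ins for the current frequencies are for illustration only):
\begin{verbatim}
import numpy as np

# A sketch of the running covariance update in Algorithm 2: accumulate the sum
# and the sum of outer products of all frequencies seen so far, then set
# C' = S_C / (i K) - mean mean^T and use it as proposal covariance after t0.
rng = np.random.default_rng(2)
d, K, M = 3, 8, 100
S_w = np.zeros(d)
S_C = np.zeros((d, d))
C_bar = np.eye(d)                    # proposal covariance, identity until burn-in
t0 = M // 10
n = 0
for i in range(1, M + 1):
    for k in range(K):
        w_k = rng.standard_normal(d) # stands in for the current k-th frequency
        S_w += w_k
        S_C += np.outer(w_k, w_k)
        n += 1
    w_mean = S_w / n
    C_prime = S_C / n - np.outer(w_mean, w_mean)
    if i > t0:
        C_bar = C_prime              # used for the next proposal N(0, C_bar)
\end{verbatim}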
\begin{algorithm}[tb]
\caption{Adaptive random Fourier features with Metropolis sampling and adaptive covariance}\label{alg:ARFM_acov}
\begin{algorithmic}
\STATE {\bfseries Input:} $\{(x_n, y_n)\}_{n=1}^N$\COMMENT{data}
\STATE {\bfseries Output:} $x\mapsto\sum_{k=1}^K\hat\beta_ke^{{\mathrm{i}}\omega_k\cdot x}$\COMMENT{random features}
\STATE Choose a sampling time $T$, a proposal step length $\delta$, an exponent $\ALPHAEX$ (see Remark \ref{optimal_alpha}), a Tikhonov parameter $\lambda$, a burn in time $t_0$ for the adaptive covariance, a maximum frequency radius $\omega_{\text{max}}$ and a frequency $m$ of $\boldsymbol{\hat{\beta}}$ updates
\STATE $M \gets \mbox{ integer part}\, (T/\delta^2)$
\STATE ${\boldsymbol{\omega}} \gets \textit{the zero vector in $\mathbb{R}^{Kd}$}$
\STATE $\boldsymbol{\hat{\beta}} \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} given } \boldsymbol{\omega}$
\STATE $S_{\omega} \gets 0$
\STATE $S_{C} \gets \textit{the zero matrix in } \mathbb{R}^{d\times d}$
\STATE $\Bar{C} \gets \textit{identity matrix in }\mathbb{R}^{d\times d}$
\FOR{$i = 1$ {\bfseries to} $M$}
\STATE $r_{\mathcal{N}} \gets \textit{sample from $\mathcal{N}(\boldsymbol{0}, \Bar{C})$}$
%
\STATE $\boldsymbol{\omega}' \gets \boldsymbol{\omega} + \delta r_{\mathcal{N}}$ \COMMENT{random walk Metropolis proposal}
\STATE $\boldsymbol{\hat{\beta}}' \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} given } \boldsymbol{\omega}'$
\FOR{$k = 1$ {\bfseries to} $K$}
\STATE $r_{\mathcal{U}} \gets \textit{sample from uniform distr. on $[0,1]$}$ %
\IF {$|\hat{\beta}'_k|^\ALPHAEX/|\hat{\beta}_k|^\ALPHAEX>r_{\mathcal{U}}$ \AND $|\omega_k'| < \omega_{\text{max}}$\COMMENT{Metropolis test}}
\STATE $\omega_{k} \gets \omega'_k$
\STATE $\hat{\beta}_{k} \gets \hat{\beta}'_k$
\ENDIF
\STATE $S_{\omega} \gets S_{\omega} + \omega_k$
\STATE $S_{C} \gets S_{C} + \omega_k^T\omega_k$
\ENDFOR
\STATE $\Bar{\omega'} \gets S_{\omega}/(iK)$
\STATE $\Bar{C'} \gets S_{C}/(iK) - \Bar{\omega'}^T\Bar{\omega'}$
\IF {$i > t_0$}
\STATE $\Bar{C} \gets \Bar{C'}$
\ENDIF
\IF {$i \mod m = 0$}
\STATE $\boldsymbol{\hat{\beta}} \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} with adaptive } \boldsymbol{\omega}$
\ENDIF
\ENDFOR
\STATE $\boldsymbol{\hat{\beta}} \gets \textit{minimizer of the problem \eqref{eq:num_disc_prob} with adaptive } \boldsymbol{\omega}$
\STATE $x\mapsto\sum_{k=1}^K\hat\beta_ke^{{\mathrm{i}}\omega_k\cdot x}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[tb]
\caption{Normalization of data}\label{alg:normalization_of_data}
\begin{algorithmic}
\STATE {\bfseries Input:} $\{(x_n, y_n)\}_{n=1}^N$\COMMENT{data}
\STATE {\bfseries Output:} $\{(x_n, y_n)\}_{n=1}^N$\COMMENT{normalized data}
\STATE $\bar y \gets \frac{1}{N}\sum_{n=1}^N y_n$
\STATE $\bar x^j\gets \frac{1}{N}\sum_{n=1}^N x_n^j,\, j = 1,2,...,d$
\STATE $\sigma_y \gets \sqrt{\frac{\sum_{n=1}^N(y_n-\bar y)^2}{N-1}}$
\STATE $\sigma_{x^j} \gets \sqrt{\frac{\sum_{n=1}^N(x_n^j-\bar x^j)^2}{N-1}},\, j = 1,2,...,d$
\STATE $\{(x_n, y_n)\}_{n=1}^N \gets \{(\frac{x_n^1-\bar{x}^1}{\sigma_{x^1}}, \frac{x_n^2-\bar{x}^2}{\sigma_{x^2}},...,\frac{x_n^d-\bar{x}^d}{\sigma_{x^d}}; \frac{y_n - \bar y}{\sigma_y})\}_{n=1}^N$
\end{algorithmic}
\end{algorithm}
\section{Numerical tests}\label{sec:Benchmarks}
We demonstrate different capabilities of the proposed algorithms with four numerical case studies. The first three cases are regression problems and the fourth case is a classification problem. For the regression problems we also show comparisons with the stochastic gradient method.
The motivation for the first case is to compare the results of the algorithms to the estimate \eqref{main_rate} based on the constant $\mathbb{E}_\omega[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p^2(\omega)}]$ which is minimized for $p= |\hat f|/\|\hat f\|_{L^1(\mathbb{R}^d)}$. Both Algorithm~\ref{alg:ARFM} and Algorithm~\ref{alg:ARFM_acov} approximately sample the optimal distribution but for example a standard random Fourier features approach with $p \sim \mathcal{N}(0, 1)$ does not.
Another benefit of Algorithm~\ref{alg:ARFM} and especially Algorithm~\ref{alg:ARFM_acov} is the efficiency in the sense of computational complexity. The purpose of the second case is to study the ability of Algorithm~\ref{alg:ARFM} to train a neural network for a target function in a higher dimension. The purpose of the third case is to study the development of the generalization error over actual time in comparison with a standard method, which in this case is an implementation of the stochastic gradient method. This problem is in two dimensions, which imposes more difficulty in finding the optimal distribution compared to a problem in one dimension.
In addition to the regression problems in the first three cases we present a classification problem in the fourth case. It is the classification problem of handwritten digits, with labels, found in the MNIST database. For training the neural network we use Algorithm~\ref{alg:ARFM} and compare with using naive random Fourier features. The purpose of the fourth case is to demonstrate the ability of Algorithm~\ref{alg:ARFM} to handle non-synthetic data.
In the simulations for the first case we perform five experiments:
\begin{itemize}
\item Experiment 1: The distribution of the frequencies $\boldsymbol{\omega}\in\mathbb{R}^{Kd}$ is obtained adaptively by Algorithm~\ref{alg:ARFM}.
\item Experiment 2: The distribution of the frequencies $\boldsymbol{\omega}\in\mathbb{R}^{Kd}$ is obtained adaptively by Algorithm~\ref{alg:ARFM_acov}.
\item Experiment 3: The distribution of the frequencies $\boldsymbol{\omega}\in\mathbb{R}^{Kd}$ is fixed and the independent components $\omega_k$ are sampled from a normal distribution. %
\item Experiment 4: Both the frequencies $\boldsymbol{\omega}\in\mathbb{R}^{Kd}$ and the amplitudes $\boldsymbol{\hat{\beta}}\in \mathbb{C}^K$ are trained by the stochastic gradient method.
\item Experiment 5: The $\boldsymbol{\omega}\in\mathbb{R}^{Kd}$ weight distribution is obtained adaptively by Algorithm~\ref{alg:ARFM} but using the sigmoid activation function.
%
%
%
%
%
%
%
\end{itemize}
For the second case we perform Experiments 1, 3 and 4, for the third case Experiments 1, 2 and 4, and for the fourth case Experiments 1 and 3. All chosen parameter values are presented in Table \ref{table:parameter_comparison}.
We denote by $\BARS_{\text{test}}\in\mathbb{C}^{\tilde{N}\times K}$ the matrix with elements %
$e^{{\mathrm{i}}\omega_k\cdot \tilde x_n}$. The test data $\{(\tilde x_n, \tilde y_n) \, |\, n = 1,...,\tilde{N}\}$ are i.i.d. samples from the same probability distribution and normalized by the same empirical mean and standard deviation as the training data $\{(x_n, y_n) \,|\, n=1,\ldots, N$\}.
In the computational experiments we compute the \emph{generalization error} as
\[e_K := \sqrt{\sum_{n=1}^{\tilde{N}}|(\BARS_{\mathrm{test}}\boldsymbol{\hat\beta})_n-\tilde y_n|^2}\,.\]
We denote by ${\sigma}_K$ the empirical standard deviation of the generalization error, based on $\bar M=10$ independent realizations for each fixed $K$, and let an \emph{error bar} be the closed interval
\begin{equation*}
[e_{K} - 2{\sigma}_K, e_{K} + 2{\sigma}_K].
\end{equation*}
The purpose of Experiment 5 is to demonstrate the possibility of changing the activation function $x\mapsto e^{{\mathrm{i}}\omega\cdot x}$ to the sigmoid activation function $x \mapsto \frac{1}{1 + e^{-\omega\cdot x}}$ when running Algorithm~\ref{alg:ARFM}. With such a change of activation function the concept of sampling frequencies turns into sampling weights. In practice we add one dimension to each $x$-point to add a bias, compensating for using a real valued activation function and we set the value of the additional component to one. Moreover, we change $\BARS_{n,k} = e^{{\mathrm{i}}\omega_k\cdot x_n}$ in \eqref{eq:num_disc_prob} to $\BARS_{n,k} = \frac{1}{1 + e^{-\omega_k\cdot x_n}}$.
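A sketch of the corresponding change of the feature matrix (the appended constant component implementing the bias is the construction described above):
\begin{verbatim}
import numpy as np

# A sketch of the sigmoid feature matrix for Experiment 5: append a constant
# one to each x-point (the bias) and replace exp(i w.x) by 1/(1+exp(-w.x)).
def sigmoid_features(x, omega):
    # x: (N, d), omega: (K, d+1); the last input component acts as the bias
    xb = np.hstack([x, np.ones((x.shape[0], 1))])
    return 1.0 / (1.0 + np.exp(-xb @ omega.T))   # real (N, K) matrix
\end{verbatim}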
\medskip\noindent
{\it Case 1: Target function with a regularised discontinuity.}
This case tests the capability of Algorithm~\ref{alg:ARFM} and Algorithm~\ref{alg:ARFM_acov} to approximately find and sample frequencies $\boldsymbol{\omega}$ from the optimal distribution $p_* = |\hat f|/\|\hat f\|_{L^1(\mathbb{R})}$. The target function \[f(x) = \mathrm{Si}\left(\frac{x}{a}\right)e^{-\frac{x^2}{2}}\,,\] where $a = 10^{-3}$ and \[\mathrm{Si}(x) := \int_0^x \frac{\sin(t)}{t}\mathrm{d}t\] is the so-called \emph{sine integral}, has a Fourier transform that decays slowly as $\omega^{-1}$ up to $|\omega| = 1/a = 1000$. The target function $f$ is plotted in Figure \ref{fig:f_zoomed}, together with $f$ evaluated in $N$ $x$-points from a standard normal distribution over an interval chosen to emphasize that $N$ points are enough to resolve the local oscillations near $x = 0$.
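A sketch of this data generation (using \texttt{scipy.special.sici}; the normalization by Algorithm~\ref{alg:normalization_of_data} is omitted here):
\begin{verbatim}
import numpy as np
from scipy.special import sici

# A sketch of generating the Case 1 data y = Si(x/a) exp(-x^2/2) at normally
# distributed x-points; sici returns the pair (Si(x), Ci(x)).
rng = np.random.default_rng(3)
a, N = 1e-3, 10**4
x = rng.standard_normal(N)
y = sici(x / a)[0] * np.exp(-x**2 / 2)
\end{verbatim}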
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{sin_fun.eps}
\caption{Case 1: Graph of the target function $f$ with sampled data set points $(x_n,y_n)$ marked (red on-line).
The inset shows the absolute value $|\hat f|$ of its Fourier transform
and a detail of its behaviour at the origin. }\label{fig:f_zoomed}
\end{figure}
The Fourier transform of $f$ is approximated by computing the fast Fourier transform of $f$ evaluated in $2N$ equidistributed $x$-points in the interval $[-2\pi, 2\pi]$. The inset in Figure~\ref{fig:f_zoomed} presents the absolute value of the fast Fourier transform of $f$ where we can see that the frequencies drop to zero at approximately $|\omega| = 1/a = 10^3$.
We generate training data and test data as follows. First sample $N$ $x$-points from $\mathcal{N}(0,1)$. Then evaluate the target function in each $x$-point to get the $y$-points and run Algorithm~\ref{alg:normalization_of_data} on the generated points to get the normalized training data $\{(x_n, y_n)\}_{n=1}^N$ and analogously the normalized test data $\{(\tilde{x}_n, \tilde{y}_n)\}_{n=1}^{\tilde{N}}$.
We run Experiment 1--5 on the generated data for different values of $K$ and present the resulting generalization error dependence on $K$, with error bars, in Figure~\ref{fig:almost_disc}. The triangles pointing to the left represent generalization errors produced from a neural network trained by Algorithm~\ref{alg:ARFM} and the diamonds by Algorithm~\ref{alg:ARFM_acov}. The stars correspond to the stochastic gradient descent with initial frequencies from
$\mathcal{N}(0, 50^2)$ while the circles also correspond to the stochastic gradient descent but with initial frequencies from $\mathcal{N}(0, 1)$. The squares correspond to the standard random Fourier features approach sampling frequencies $\omega_k$ from $\mathcal{N}(0, 1)$.
The triangles pointing down represent Algorithm~\ref{alg:ARFM} but for the sigmoid activation function. Algorithm~\ref{alg:ARFM} shows a constant slope with respect to $K$. Although the generalization error becomes smaller for the stochastic gradient descent as the variance of the initial frequencies increases to $50^2$ from $1$, it stagnates as $K$ increases.
For a given $K$ one could fine tune the initial frequency distribution for the stochastic gradient descent but for Algorithm~\ref{alg:ARFM} and Algorithm~\ref{alg:ARFM_acov} no such tuning is needed.
The specific parameter choices for each experiment are presented in Table~\ref{table:parameter_comparison}.
\begin{figure}[ht]
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{t1.eps}
\caption{Experiment 4, Stochastic gradient method with a large variance on initial components of $\boldsymbol{\omega}$}
\label{fig:almost_disc_t1}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{t2.eps}
\caption{Experiment 3, Random Fourier Features}
\label{fig:almost_disc_t2}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{t3.eps}
\caption{Experiment 4, Stochastic gradient method with initial components of $\boldsymbol{\omega}$ from $\mathcal{N}(0,1)$}
\label{fig:almost_disc_t3}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{t4.eps}
\caption{Experiment 1, Adaptive Metropolis sampling}
\label{fig:almost_disc_t4}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{t5.eps}
\caption{Experiment 5, Adaptive Metropolis sampling with the sigmoid activation function}
\label{fig:almost_disc_t5}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{t6.eps}
\caption{Experiment 2, Adaptive Metropolis sampling with adaptive covariance matrix}
\label{fig:almost_disc_t6}
\end{subfigure}
\caption{Case 1: The same data are shown in all figures with each of the six different experiments highlighted (blue on-line).}\label{fig:almost_disc}
\end{figure}
\medskip\noindent
{\it Case 2: A high dimensional target function.}
The purpose of this case is to test the ability of Algorithm~\ref{alg:ARFM} to train a neural network in a higher dimension. Therefore we set $d = 5$. The data are generated analogously to Case 1, but we now use the target function $f:\mathbb{R}^5\to \mathbb{R}$
\begin{equation*}
f(x) = \text{Si}\left(\frac{x_1}{a}\right)e^{-\frac{|x|^2}{2}}
\end{equation*}
where $a = 10^{-1}$. We run Experiment 1, 3 and 4 and the resulting convergence plot with respect to the number of frequencies $K$ is presented in Figure \ref{fig:hd_ge}.
In Figure \ref{fig:hd_ge} we can see the expected convergence rate of $\mathcal{O}(K^{-1/2})$ for Algorithm~\ref{alg:ARFM}. The stochastic gradient method gives an error that converges as $\mathcal{O}(K^{-1/2})$ for smaller values of $K$ but stops improving for approximately $K>128$ for the chosen number of iterations.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{hd_ge.eps}
\caption{Case 2: The figure illustrates the generalization error with respect to $K$ for a target function in dimension $d = 5$.}\label{fig:hd_ge}
\end{figure}
\medskip\noindent
{\it Case 3: Anisotropic Gaussian target function.}
Now we consider the target function
\begin{equation}\label{eq:elliptic_target_fcn}
f(x) = e^{-(32 x_1)^2/2}e^{-(32^{-1} x_2)^2/2}\,,
\end{equation}
which, as well as $\hat{f}$, has elliptically shaped level surfaces. Finding the optimal distribution $p_*$, which is
$\mathcal{N}(\boldsymbol{0}, \mathrm{diag}([32^{2}, 32^{-2}]))$, thus requires finding the covariance matrix $\text{diag}([32^{2}, 32^{-2}])$.
The generation of data is done as in Case 1 except that the non normalized $x$-points are independent random vectors in $\mathbb{R}^2$ with independent $\mathcal{N}(0,1)$ components. We fix the number of nodes in the neural network to $K = 256$ and compute an approximate solution to the problem \eqref{eq:num_normal_eq} by running Experiment 1, 2 and 4.
Convergence of the generalization error with respect to time is presented in Figure \ref{fig:eta_comparisons} where we note that both Algorithm~\ref{alg:ARFM} and Algorithm~\ref{alg:ARFM_acov} produce faster convergence than the stochastic gradient descent. For the stochastic gradient method the learning rate has been tuned to benefit the convergence rate of the generalization error, while the initial distribution of the components of $\boldsymbol{\omega}$ is simply chosen as the standard normal distribution.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{all_case_2.eps}
\caption{Case 3: The figure illustrates the generalization error over time when the approximate problem solution is computed using Algorithm 1, Algorithm 2 and stochastic gradient descent.}\label{fig:eta_comparisons}
\end{figure}
\medskip\noindent
{\it Case 4: The MNIST data set.}
The previously presented numerical tests have dealt with problems of a pure regression character. In contrast, we now turn our focus to a classification problem of handwritten digits.
The MNIST data set consists of a training set of $60000$ handwritten digits with corresponding labels and a test set of $10000$ handwritten digits with corresponding labels.
We consider the ten least squares problems
\begin{equation}\label{eq:mnist_num_disc_prob}
\min_{\boldsymbol{\hat{\beta}}^i\in \mathbb{C}^{K}}\left(N^{-1}|\BARS\boldsymbol{\hat{\beta}}^i-\mathbf y^i|^2 + \lambda|\boldsymbol{\hat{\beta}}^i|^2\right), \; i = 0,1,\dots,9\,,
\end{equation}
where $\BARS\in\mathbb{C}^{N\times K}$ is the matrix with elements $\BARS_{n,k} = e^{\mathrm{i}\omega_k\cdot x_n}$, $n = 1,...,N$, $k = 1,...,K$ and
$\mathbf{y}^i=(y_1^i,\ldots,y_N^i)\in \mathbb{R}^N$.
The training data $\{(x_n; (y_n^0, y_n^1,...,y_n^9))\}_{n=1}^N$ consist of handwritten digits $x_n\in \mathbb{R}^{784}$ with corresponding vector labels $(y_n^0, y_n^1,...,y_n^9)$.
Each vector label $(y_n^0, y_n^1,...,y_n^9)$ has one component equal to one and the other components equal to zero. The index $i$ of the component $y_n^i$ that is equal to $1$ is the number that the handwritten digit $x_n$ represents. The problems \eqref{eq:mnist_num_disc_prob}
have the corresponding linear normal equations
\begin{equation}\label{eq:num_normal_eq_mnist}
(\BARS^T\BARS+\lambda N \mathbf{I})
\boldsymbol{\hat{\beta}}^i = \BARS^T\mathbf y^i\,, \;\;\; i = 0,1,...,9\,,
\end{equation}
which we solve using the \texttt{MATLAB}
backslash operator for each $i = 0,1,...,9$.
The regularization parameter $\lambda$ adjusts the bias-variance trade-off.
We compute an approximate solution to the problem \eqref{eq:mnist_num_disc_prob} by using
Algorithm~\ref{alg:ARFM} but in the Metropolis test step evaluate
$||(\hat{\beta}^0_k, \hat{\beta}^1_k,...,\hat{\beta}^9_k)'||_2^\ALPHAEX/||(\hat{\beta}^0_k, \hat{\beta}^1_k,...,\hat{\beta}^9_k)||_2^\ALPHAEX>r_{\mathcal{U}}$ where $||\cdot||_{2}$ denotes the Euclidean norm $||\hat{\beta}_k||_{2} = \sqrt{\sum_{i = 0}^9 |\hat{\beta}_k^i|^2}$.
We evaluate the trained artificial neural network
for each handwritten test digit
$\tilde{x}_n, \; n=1,2,...,\tilde{N}$ and classify the handwritten test digit as the number
\begin{equation*}
\argmax_i \{|\sum_{k=1}^{K}\hat{\beta}_k^{i}s(\omega_k\cdot \Tilde{x}_n)|\}_{i=0}^9\,,
\end{equation*}
where $\{\omega_k; \hat{\beta}_k^{0}, \hat{\beta}_k^{1},...,\hat{\beta}_k^{9}\}_{k=1}^K$ are the trained frequencies and amplitudes resulting from running Algorithm~\ref{alg:ARFM}.
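For illustration, a sketch of this classification rule (the array shapes and the dense evaluation of the feature matrix are choices made here for exposition):
\begin{verbatim}
import numpy as np

# A sketch of the classification rule: evaluate the ten trained outputs on the
# test digits and pick the index with the largest magnitude.
def classify(x_test, omega, beta):
    # x_test: (Nt, 784), omega: (K, 784), beta: (K, 10) trained amplitudes
    S = np.exp(1j * x_test @ omega.T)      # (Nt, K) feature matrix
    scores = np.abs(S @ beta)              # (Nt, 10) magnitudes
    return scores.argmax(axis=1)           # predicted digit per test point
\end{verbatim}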
As a comparison we also use frequencies from the standard normal distribution and from the normal distribution $\mathcal{N}(0, 0.1^2)$ but otherwise solve the problem the same way as described in this case.
The error is computed as the percentage of misclassified digits over the test set $\{(\Tilde{x}_n; (\Tilde{y}_n^0, \Tilde{y}_n^1,...,\Tilde{y}_n^9))\}_{n=1}^{\tilde{N}}$. We present the results in
Table~\ref{table:MNIST_percent_mistaken} and
Figure~\ref{fig:mnist_ge} where we note that the smallest error is achieved when frequencies are sampled by using Algorithm~\ref{alg:ARFM}. When sampling frequencies from the standard normal distribution, i.e., $\mathcal{N}(0, 1)$, we do not observe any convergence with respect to $K$.
\begin{table}[ht]
\begin{tabular}{l|l|l|l}
\hline
$K$ & Fixed $\omega\sim\mathcal{N}(0, 1)$ & Fixed $\omega\sim\mathcal{N}(0, 0.1^2)$ & Adaptive \\ \hline
2 & 89.97\% & 81.65\% & 80.03\% \\ \hline
4 & 89.2\% & 70.65\% & 65.25\% \\ \hline
8 & 88.98\% & 55.35\% & 53.44\% \\ \hline
16 & 88.69\% & 46.77\% & 36.42\% \\ \hline
32 & 88.9\% & 30.43\% & 23.52\% \\ \hline
64 & 88.59\% & 19.73\% & 16.98\% \\ \hline
128 & 88.7\% & 13.6\% & 11.13\% \\ \hline
256 & 88.09\% & 10.12\% & 7.99\% \\ \hline
512 & 88.01\% & 8.16\% & 5.93\% \\ \hline
1024 & 87.34\% & 6.29\% & 4.57\% \\ \hline
2048 & 86.5\% & 4.94\% & 3.5\% \\ \hline
4096 & 85.21\% & 3.76\% & 2.74\% \\ \hline
8192 & 83.98\% & 3.16\% & 1.98\% \\ \hline
\end{tabular}
\caption{Case 4: The table shows the percentage of misclassified digits
in the MNIST test data set for different values of $K$. Comparison between adaptively computed distribution of frequencies $\omega_k$ and sampling a fixed (normal) distribution.}\label{table:MNIST_percent_mistaken}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{mnistZero_all.eps}
\caption{Case 4: Dependence on $K$ of the misclassification percentage in the MNIST.}\label{fig:mnist_ge}
\end{figure}
\begin{table}[ht]%
\tiny{
\bgroup
\def\arraystretch{1.2}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
& \multicolumn{10}{l|}{\textbf{Regression}} & \multicolumn{2}{l|}{\textbf{Classification}} \\ \hline
Case & \multicolumn{5}{l|}{\begin{tabular}[c]{@{}l@{}}Case 1:\\ a regularised step function\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Case 2:\\ a high dimensional\\ function\end{tabular}} & \multicolumn{3}{l|}{\begin{tabular}[c]{@{}l@{}}Case 3:\\ anisotropic Gaussian\\ function\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Case 4:\\ The MNIST\\ data set\end{tabular}} \\ \hline
Purpose & \multicolumn{5}{l|}{\begin{tabular}[c]{@{}l@{}}Find $p$ such that the constant\\ $\mathbb{E}_\omega[\frac{|\hat f(\omega)|^2}{(2\pi)^{d}p^2(\omega)}]$\\ does not become large\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Study the ability\\ of Alg. 1 to find a\\ high dimensional\\ function\end{tabular}} & \multicolumn{3}{l|}{\begin{tabular}[c]{@{}l@{}}Computational\\ complexity comparison\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Study the ability\\ of Alg. 1 to work\\ with non\\ synthetic data\end{tabular}} \\ \hline
\begin{tabular}[c]{@{}l@{}}Target\\ $f(x)$\end{tabular} & \multicolumn{5}{l|}{\begin{tabular}[c]{@{}l@{}}$\text{Si}\left(\frac{x}{a}\right)e^{-\frac{x^2}{2}}$\\ $a = 10^{-3}$\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}$\text{Si}\left(\frac{x_1}{a}\right)e^{-\frac{|x|^2}{2}}$\\ $a = 10^{-1}$\end{tabular}} & \multicolumn{3}{l|}{$e^{-(32 x_1)^2/2}e^{-(32^{-1} x_2)^2/2}$} & \multicolumn{2}{l|}{} \\ \hline
$d$ & \multicolumn{5}{l|}{$1$} & \multicolumn{2}{l|}{$5$} & \multicolumn{3}{l|}{$2$} & \multicolumn{2}{l|}{$784$} \\ \hline
$K$ & \multicolumn{5}{l|}{$2^i, \, i = 1,2,...,11$} & \multicolumn{2}{l|}{$2^i, \, i = 1,2,...,10$} & \multicolumn{3}{l|}{$256$} & \multicolumn{2}{l|}{$2^i, \, i = 1,2,...,13$} \\ \hline
Experiment & Exp. 1 & Exp. 2 & Exp. 3 & Exp. 4 & Exp. 5 & Exp. 1 & Exp. 4 & Exp. 1 & Exp. 2 & Exp. 4 & Exp. 1 & Exp. 3 \\ \hline
Method & Alg. 1 & Alg. 2 & RFF$^{1}$ & SGM & \begin{tabular}[c]{@{}l@{}}Alg. 1\\ sigmoid\end{tabular} & Alg. 1 & SGM & Alg. 1 & Alg. 2 & SGM & Alg. 1 & RFF$^{2}$ \\ \hline
$N$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $3\times 10^7$ & $6\times 10^4$ & $6 \times 10^4$ \\ \hline
$\tilde{N}$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ & $10^4$ \\ \hline
$\ALPHAEX$ & $3d-2$ & $3d-2$ & & & $3d-2$ & $3d-2$ & & $3d-2$ & $3d-2$ & & $3d-2$ & \\ \hline
$\lambda$ & $0.1$ & $0.1$ & $0.1$ & $0$ & $0.1$ & $0.1$ & 0 & $0.1$ & $0.1$ & $0$ & $0.1$ & $0.1$ \\ \hline
$M$ & $10^3$ & $5000$ & N/A & $10^7$ & $10^4$ & $2.5\times 10^3$ & $10^7$ & $10^4$ & $10^4$ & $3\times 10^7$ & $10^2$ & \\ \hline
$\bar{M}$ & $10$ & $10$ & $10$ & $10$ & $10$ & $10$ & $10$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ \hline
$\delta$ & $2.4^2/d$ & $0.1$ & & & $2.4^2/d$ & $\frac{2.4^2}{10d}$ & & $0.5$ & $0.1$ & & $0.1$ & \\ \hline
$\Delta t$ & & & & $\mathtt{1.5e-4}$ & & & $\mathtt{3.0e-4}$ & & & $\mathtt{1.5e-3}$ & & \\ \hline
$t_0$ & & $M/10$ & & & & & & & $M/10$ & & & \\ \hline
$\boldsymbol{\omega}_{\text{max}}$ & & $\infty$ & & & & & & & $\infty$ & & & \\ \hline
$m$ & $10$ & $50$ & & & $100$ & $25$ & & $100$ & $100$ & & $M+1$ & \\ \hline
\end{tabular}
\egroup
\caption{Summary of numerical experiments together with their corresponding parameter choices. \\
$^1$ -- Random Fourier features with frequencies sampled from the fixed distribution $\mathcal{N}(0,1)$\\
$^2$ -- Random Fourier features with frequencies sampled from the fixed distribution $\mathcal{N}(0,1)$, or
$\mathcal{N}(0,0.1^2)$ }
\label{table:parameter_comparison}
}
\end{table}
\subsection{Optimal distribution $p_*$}
In this numerical test we demonstrate how the average generalization error depends on the distribution $p$ for a particular choice of data distribution.
We recall that the frequencies $\boldsymbol{\omega} = (\omega_1, \ldots, \omega_K)$ have components $\omega_k$ i.i.d. from the distribution $p$. %
In the experiment the data $\{(x_n, y_n)\}_{n=1}^N$ are given by $y_n = e^{-|x_n|^2/2} + \epsilon_n$. We let the components $\omega_k$ be sampled independently from $\mathcal{N}(0,\sigma_{\omega}^2)$ and monitor the results for different values of $\sigma_{\omega}$. Note that Algorithm~\ref{alg:ARFM} would approximately sample $\boldsymbol{\omega}$ from the optimal density $p_*$, which in this case corresponds to the standard normal, i.e., $\omega_k\sim\mathcal{N}(0,1)$, see \eqref{eq:optimal_density}.
In the simulations we choose $x_n$ from $\mathcal{N}(0,1)$, $\epsilon_n$ from $\mathcal{N}\big(0,0.1^2\big)$, $d = 7$, $K = 500$, $N = 10^5$ and $\lambda = 0.01$. The error bars are estimated by generating $\bar M=10$ independent realizations for each choice of $\sigma_{\omega}$.
The results are depicted
in Figure~\ref{fig:mean_sqr_vs_omega_vol} where we observe that the generalization error is minimized for
$\sigma_{\omega} \approx 1$, which is in agreement
with the theoretical optimum.
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\textwidth]{ERROR_VS_OMEGA_VOL_v2.eps}
\caption{The generalization error as a function of the standard deviation $\sigma_\omega$ of $p({\omega})$.}\label{fig:mean_sqr_vs_omega_vol}
\end{figure}
\subsection{Computing infrastructure}
The numerical experiments are computed on a desktop with an \texttt{Intel Core i9-9900K CPU @ 3.60GHz} and \texttt{32 GiB} of memory running \texttt{Matlab R2019a} under
\texttt{Windows 10 Home}.
\section{Proposed Approach}
As shown in Fig.~\ref{fig:teaser}, given a set of video clips, with only clip-level annotations of the actions taking place, our goal is to learn a model to recognise and localise these actions in space and time.
Our method is based on Multiple Instance Learning (MIL) which we briefly review in Sec.~\ref{sec:mil}. Thereafter, we show how we use it for weakly-supervised spatio-temporal action recognition in Sec.~\ref{sec:method_action_recognition_as_mil}.
We then describe how the standard MIL assumption
is often violated in our scenario, and describe a method to mitigate this by leveraging uncertainty estimates from our network in Sec.~\ref{sec:mil_noise}.
Finally, we discuss implementation details of our network in Sec.~\ref{sec:implementation_details}.
\subsection{Multiple Instance Learning}
\label{sec:mil}
In the standard Multiple Instance Learning (MIL)~\cite{diettrich_1997} formulation, one is given a bag of $N$ instances, denoted as $x = \{x_1, x_2, \ldots, x_N\}$.
The class labels for each of the instances is unknown, but the label for the entire bag, $x$, is known.
The standard MIL assumption is that a bag is assigned a class label if at least one instance in the bag is associated with this label.
More formally, we consider the multi-label classification case, where the label vector for the bag is $y \in \mathbb{R}^{C}$, and $y_l = 1$ if at least one instance with the $l^{th}$ label is present in the bag, and $y_l = 0$ otherwise.
Note that each bag can be labelled with multiple of the $C$ class labels.
Our goal is to train an instance-level classifier (parameterised as a neural network), that predicts $p(y_{l} = 1 | x_j)$, or the label probabilities for the $j^{th}$ instance.
However, as we only have the labels for the entire bag, and not each instance, MIL methods aggregate the set of instance-level probabilities, $\{p_{ij}\}$ for a bag $i$, to bag-level probabilities, $p_i$, using an aggregation function, $g(\cdot)$, where the probabilities are obtained from a suitable activation function (sigmoid or softmax) on the logits output by the neural network:
\begin{equation}
p(y_{il} = 1 | x_1, x_2, \ldots, x_N) = g(p_{i1}, p_{i2}, \ldots, p_{iN}).
\label{eq:mil_pooling}
\end{equation}
Once we have bag-level predictions, we can apply a standard classification loss between the bag-level probabilities and bag-level ground truth, and train a neural network with stochastic gradient descent.
Since we consider the multi-label classification case, we use the binary cross-entropy:
\begin{equation}
\mathcal{L}_{ce}(x, y) = -\sum_{i}^{N_b}\sum_{l}^{C} \left[ y_{il} \log{p_{il}} + (1 - y_{il})\log(1 - p_{il}) \right]
\label{eq:cross_entropy_mil}
\end{equation}
Note that we defined $p_{il}$ as the bag-level probability of the $i^{th}$ bag taking the $l^{th}$ label, which is obtained using Eq.~\ref{eq:mil_pooling}, and $N_b$ is the number of bags in the mini-batch.
\paragraph{Aggregation}
The aggregation function, $g(\cdot)$, can naturally be implemented in neural networks as a global pooling function over all outputs of the network.
Common permutation-invariant pooling functions include max-pooling, generalised mean-pooling and log-sum-exponential (LSE) pooling~\cite{boyd_2004} (a smooth, convex approximation of the maximum function), defined respectively as:
\begin{align}
g(\{p_j\}) &= \max_j p_j \\
g(\{p_j\}) &= \left(\frac{1}{N} \sum_j p_j^r\right) ^{\frac{1}{r}} \\
g(\{p_j\}) &= \frac{1}{r} \log \left(\frac{1}{N} \sum_j e^{r \cdot p_j} \right)
\end{align}
Max-pooling only considers the top-scoring instance in the bag, and thus naturally captures the MIL assumption that at least one instance in the bag has the specified, bag-level label.
Moreover, it can also be more robust to instances in the bag that do not have the bag-level label.
However, mean and LSE pooling have been employed in applications such as weakly-supervised
segmentation \cite{pinheiro_cvpr_2015}, object recognition~\cite{Sun_2016_CVPR} and medical imaging~\cite{kraus_arxiv_2015} where multiple instances in the bag do typically have the bag-level label.
Note that, for both of these functions, higher values of the hyperparameter $r$ increase their ``peakiness'', bringing them closer to the maximum function.
For our scenario, detailed in the next section, we found max-pooling to be the most appropriate.
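For concreteness, the three pooling functions above can be sketched in PyTorch as follows (an illustrative snippet; names and the data layout are assumptions, not our exact implementation):
\begin{verbatim}
import math
import torch

def max_pool(p):                        # captures the standard MIL assumption
    return p.max(dim=0).values

def generalised_mean_pool(p, r=1.0):    # r = 1 gives plain mean-pooling
    return p.pow(r).mean(dim=0).pow(1.0 / r)

def lse_pool(p, r=1.0):                 # smooth, convex approximation of max
    return (torch.logsumexp(r * p, dim=0) - math.log(p.shape[0])) / r

p = torch.rand(5, 3)                    # one bag: 5 instances, C = 3 labels
print(max_pool(p), generalised_mean_pool(p, r=4.0), lse_pool(p, r=4.0))
\end{verbatim}
In this sketch the pooling is applied per class over the instance dimension (\texttt{dim=0}), as in Eq.~\ref{eq:mil_pooling}.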
\subsection{Weakly-supervised spatio-temporal action recognition as multiple instance learning}
\label{sec:method_action_recognition_as_mil}
\input{figures/method_diagram}
Our goal is to learn a model to recognise and localise actions in space and time given only video-level annotations.
To facilitate this, we leverage a person detector that has been trained on a large image dataset, \emph{i.e.} Microsoft COCO~\cite{lin_eccv_2014}.
Concretely, we run a person detector on our training videos, and create person tubelets, which are person detections over $K$ consecutive frames in the video.
Our bag for multiple instance learning thus consists of all the tubelets within a video, and is annotated with the video-level labels that we have as supervision, as illustrated in Fig.~\ref{fig:method_diagram}.
Note that the size of the bag varies for every video clip, as the bag size is determined by the length of the video and the number of detected people.
As shown in Fig.~\ref{fig:method_diagram}, our network architecture for this task is a Fast-RCNN~\cite{girshick_iccv_2015} style detector that has been extended temporally.
Given a video clip of $K$ frames, and proposals which in our case are person detections, the network classifies the action(s) taking place at the centre frame of each proposal, given the temporal context of the $K-1$ frames around it.
Note that the spatio-temporal localisation task is effectively factorised: the spatial localisation capability of the model depends on the quality of the person detections.
Temporal localisation, on the other hand, is performed by linking person tubelets through the video as commonly done in the literature~\cite{kalogeiton_iccv_2017,singh_iccv_2017,zhao_cvpr_2019,cheron_neurips_2018}, since this method can scale to arbitrarily long videos.
We use the same algorithm as Kalogeiton~\emph{et al.} \cite{kalogeiton_iccv_2017} which links together detections within a small temporal window greedily based on the spatial intersection over union (IoU) between bounding boxes on consecutive frames.
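As an illustration, a simplified Python sketch of such greedy IoU-based linking follows; the threshold and data layout are assumptions, and we refer to \cite{kalogeiton_iccv_2017} for the exact algorithm:
\begin{verbatim}
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_tracks(frames, thresh=0.5):
    """frames: list over time of lists of boxes; returns a list of tracks."""
    tracks = [[b] for b in frames[0]]
    for boxes in frames[1:]:
        unused = list(boxes)
        for tr in tracks:
            if not unused:
                break
            best = max(unused, key=lambda b: iou(tr[-1], b))
            if iou(tr[-1], best) >= thresh:   # greedily extend the track
                tr.append(best)
                unused.remove(best)
        tracks += [[b] for b in unused]       # start new tracks
    return tracks
\end{verbatim}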
Finally, note that for a video consisting of $T$ frames, the bag could consist of $T - K + 1$ person tubelets if a person is detected on each frame of the video, and a tubelet is started from each frame.
Due to memory limitations, it is infeasible to fit an entire bag onto a GPU for training.
As a result, we uniformly sample instances from each bag during training, whilst still retaining the original bag-level label.
This introduces additional noise into the problem, as detailed next. %
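A minimal sketch of this sampling step is shown below (the cap of 4 tubelets per bag follows the implementation details in Sec.~\ref{sec:implementation_details}; names are illustrative):
\begin{verbatim}
import random

# Uniformly sample at most `max_instances` tubelets from a bag while
# retaining the (possibly now noisy) bag-level label.
def sample_bag(tubelets, label, max_instances=4):
    chosen = random.sample(tubelets, min(max_instances, len(tubelets)))
    return chosen, label
\end{verbatim}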
\subsection{Label noise and violation of the standard MIL assumption}
\label{sec:mil_noise}
\input{figures/loss_surface}
The standard MIL assumption, that at least one instance in the bag is assigned the bag-level label, is often violated in our scenario.
There are two primary factors for this:
Firstly, due to computational constraints, we cannot process a whole bag at a time, but must instead sample instances from a bag.
It is therefore possible to sample a bag that does not contain any tubelets with the labelled action.
The likelihood of this occurring increases as the ratio of the labelled action's duration to the total video length decreases.
Secondly, in a weakly-supervised scenario, we use person detectors that are not trained on the video dataset of interest.
Consequently, there can be failures in the detector, especially when there is a large domain gap between the detector's training distribution and the video dataset.
False negatives (missing detections for people in the scene) are a particular issue because it is possible that we do not have a single person tubelet in the bag that corresponds to the labelled action.
Therefore, there are cases when there is no tubelet which actually has the bag-level label.
To handle these cases, inspired by \cite{kendall_neurips_2017,novotny_cvpr_2018}, we modify the network to additionally predict the uncertainty $\sigma \in \mathbb{R}^{C}$ for each binary label for all tubelets in the bag.
Intuitively, to minimise the training error, the network can predict the bag-level label with low uncertainty or it can predict a high uncertainty to avoid being penalised heavily for noisy bags where the bag-level label is not present in any of the tubelets.
The final loss, in conjunction with the original cross entropy, is defined as:
\begin{equation}
\mathcal{L}(x, y, \sigma) = \frac{1}{\sigma^2}\mathcal{L}_{ce}(x, y) + \log \sigma^2
\label{eq:uncertainty_loss}
\end{equation}
As shown by \cite{kendall_cvpr_2018}, this corresponds to assuming a Boltzmann distribution on the output of the network with a temperature of $\sigma^2$, and approximately minimising its log-likelihood.
The loss surface of this probabilistic loss is visualised in Fig.~\ref{fig:loss_surface}. %
Note how the loss is the lowest when the predicted label is correct and there is low uncertainty.
However, the loss is not excessive if the incorrect label is predicted with a high uncertainty.
This is in contrast with the standard cross-entropy loss which penalises incorrect predictions heavily.
\subsection{Network architecture and implementation}
\label{sec:implementation_details}
Our action detector is similar to Fast-RCNN~\cite{girshick_iccv_2015} using the SlowFast~\cite{feichtenhofer_iccv_2019} video network architecture based on the ResNet-50 backbone~\cite{he_cvpr_2016} pretrained on Kinetics~\cite{kay2017kinetics}.
As described in Sec.~\ref{sec:method_action_recognition_as_mil}, we use region proposals obtained from a Faster-RCNN detection model trained with Detectron~\cite{Detectron2018}. %
Region-of-interest features~\cite{girshick_iccv_2015} are extracted from the last feature map of ``res5'' using RoIAlign~\cite{he_iccv_2017}.
Our choice for this architecture is motivated by the fact that it is simple and has achieved state-of-the-art results on the AVA dataset~\cite{gu2018ava} in a fully-supervised setting~\cite{feichtenhofer_iccv_2019}.
Note that our network does not use additional optical flow inputs (which can be considered as an additional source of supervision) as common in other video architectures~\cite{carreira_cvpr_2017,kalogeiton_iccv_2017,singh_iccv_2017,cheron_neurips_2018}.
We predict the uncertainty, $\sigma \in \mathbb{R}^C$ for each of the $C$ binary labels defined by the dataset for each tubelet.
As we use max-pooling to aggregate the tubelet predictions, we select the uncertainty prediction corresponding to the selected tubelet for computing the loss.
For numerical stability, we predict $v := \log \sigma^2$ with our network, using the activation function $f(x) = \log(1 + \exp(-x))$ (``softplus'' with a negated argument) to ensure positivity.
We then compute $\frac{1}{\sigma^2} = \exp(-v)$, and avoid the possibility of dividing by 0 which could be the case if we predicted $\sigma^2$ directly with the network.
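For concreteness, the loss of Eq.~\ref{eq:uncertainty_loss} with this stable parameterisation can be sketched in PyTorch as follows (an illustrative snippet, not our exact implementation; function and variable names are assumptions):
\begin{verbatim}
import torch
import torch.nn.functional as F

def uncertainty_bce(logits, raw_v, targets):
    # logits, raw_v, targets: shape (C,), for the max-pooled tubelet
    v = F.softplus(-raw_v)            # f(x) = log(1 + exp(-x)), keeps v >= 0
    ce = F.binary_cross_entropy_with_logits(logits, targets,
                                            reduction='none')
    # (1 / sigma^2) * L_ce + log sigma^2, with exp(-v) = 1 / sigma^2
    return (torch.exp(-v) * ce + v).sum()
\end{verbatim}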
We train our network with synchronous stochastic gradient descent (SGD), using 8 GPUs and a batch size of 4 bags per GPU, where each element of a batch is a bag from Multiple Instance Learning.
Each bag contains a maximum of 4 sampled tubelets, and each tubelet consists of 16 frames.
\section{Conclusion and Future Work}
We have proposed a weakly supervised spatio-temporal action detection method based on Multiple Instance Learning (MIL).
Our approach incorporates uncertainty predictions made by the network such that it can better handle noise in our bags and violations of the standard MIL assumption by predicting a high uncertainty for noisy bags which cannot be classified correctly.
We achieve state-of-the-art results among weakly supervised methods on the UCF101-24 dataset, and also report the first weakly-supervised results on AVA, currently the only large-scale dataset for spatio-temporal action recognition.
Our analysis of the accuracy trade-offs as the duration of the annotated sub-clips varies will also aid future dataset annotation efforts.
Future work will incorporate additional sources of noisy, weakly-labelled data, such as videos scraped from internet search engines.
\section{Related Work}
Most prior work on spatio-temporal action recognition has been fully-supervised.
Initial approaches in the area used 3D sliding window detectors in conjunction with handcrafted, volumetric features~\cite{ke_iccv_2005,laptev_iccv_2007}.
Current state-of-the-art approaches are temporal extensions of object detection architectures~\cite{kalogeiton_iccv_2017,singh_iccv_2017,zhao_cvpr_2019,peng_eccv_2016} such as Faster-RCNN~\cite{ren_neurips_2015} and SSD~\cite{liu_eccv_2016}.
These approaches predict bounding boxes around the action in a frame, using as input either a single frame along with optical flow to capture temporal information~\cite{singh_iccv_2017,saha_bmvc_2016} or multiple frames at the input to provide temporal context~\cite{kalogeiton_iccv_2017}.
The predicted bounding boxes are then linked over time using an online, greedy algorithm or dynamic programming to create spatio-temporal tracks.
Our work builds on these methods by also utilising a detection architecture and spatio-temporal linking.
However, these approaches all require bounding box annotations at each frame in the video whilst we only use video-level labels which are significantly cheaper to acquire.
Weakly supervised approaches to spatio-temporal action recognition have also been explored before as they enable a significant reduction in annotation time and cost.
Relevant to our approach is the work of \cite{cheron_neurips_2018}.
Ch\'eron \emph{et al.} \cite{cheron_neurips_2018} also use person detections, and infer their action labels using a formulation based on discriminative clustering~\cite{bach_neurips_2008}.
Although their approach allows them to incorporate different types of supervision, it effectively learns a linear classifier on top of pretrained, deep features.
Our method in contrast is learned fully end-to-end.
Mettes~\emph{et al.} \cite{mettes2016spot} also employed Multiple Instance Learning (MIL), but used action proposals~\cite{van_gemert_bmvc_2015} instead of the human detections used by our work and \cite{cheron_neurips_2018}.
However, \cite{mettes2016spot} relies on additional cheap ``point'' annotations (a single spatial point annotated for a subset of the frames which constitute the action), which also ensure that the standard MIL assumption is not violated.
In follow-up work \cite{mettes_bmvc_2017}, the authors removed the need for ``point'' annotations by incorporating biases instead (\emph{e.g.} the presence of objects in the video, and a bias that actions typically occur in the centre of a frame).
Finally, Weinzaepfel \emph{et al.} \cite{weinzaepfel_arxiv_2016} also used a Multiple Instance Learning framework in conjunction with human detections.
The authors, however, assumed that sparse spatial supervision was present (\emph{i.e.} bounding boxes for a small subset of frames in the action tube), unlike our method which requires video-level labels alone.
We also note that many approaches have addressed temporal action detection (localising actions in time but not space) with only video-level tags as supervision~\cite{wang2017untrimmednets,singh_hide_seek_iccv_2017,nguyen2018weakly,paul2018w}.
UntrimmedNets~\cite{wang2017untrimmednets} uses a network with two branches, a classification module to perform action classification and a selection module to select relevant frames.%
Hide-and-Seek~\cite{singh_hide_seek_iccv_2017} obtains more precise temporal boundaries by forcing the network to attend to more discriminative frames by randomly hiding parts of videos.
However, these methods are trained and evaluated on datasets such as ActivityNet~\cite{caba2015activitynet} and THUMOS14~\cite{jiang2014thumos}, which contain mostly one action per video, and are thus significantly less challenging than datasets such as AVA~\cite{gu2018ava} which we evaluate on.
Finally, we note that another approach to combat the effort of dataset annotation has been various forms of self-supervised learning, where discriminative feature representations can be learned with unlabelled data.
Examples in video include cross-modal self-supervision that learns correspondences between the audio and image streams readily available in videos~\cite{arandjelovic2017look,owens2016ambient,zhao2018sound} or transcribed speech~\cite{sun_iccv_2019}, and the use of meta-data such as hashtags~\cite{ghadiyaram2019large} as a form of weak labelling.
Self-supervised approaches, however, are complementary to our approach, as they still require a limited amount of fully-labelled data for the final task of interest.
In our weakly-supervised action detection scenario, we never have access to full, spatio-temporal ground-truth annotations for a single training example.
\section{Introduction}
\input{figures/teaser}
Video classification has witnessed great advances recently due to large datasets such as Kinetics~\cite{kay2017kinetics} and Moments in Time~\cite{monfort2018moments} which have enabled training of specialised neural network architectures for video \cite{carreira_cvpr_2017,feichtenhofer_iccv_2019}.
However, progress in other video understanding tasks, such as spatio-temporal action detection, has lagged
behind in comparison.
There are fewer datasets for spatio-temporal action detection, and they are significantly smaller than their video-classification counterparts.
A reason for this is the exorbitant cost of annotating videos with spatio-temporal labels -- each frame of an action has to be manually labelled with a bounding box.
Moreover, annotating temporal boundaries of actions is not only arduous, but often ambiguous with annotators failing to reach consensus about the start and end times of an action~\cite{cheron_neurips_2018,sigurdsson_cvpr_2017}.
In this paper, we propose a method to train spatio-temporal action detectors using only weak, video-level annotations as shown in Fig.~\ref{fig:teaser}.
To achieve this, we leverage image-based person detectors which have been trained on large image datasets such as Microsoft COCO~\cite{lin_eccv_2014} and are accurate across large variations in appearance, scene and pose.
We adopt a Multiple Instance Learning (MIL) framework, where a person tubelet is an instance, and all person tubelets in the video form a bag.
An important consideration in our approach is the presence of label noise: this is introduced from using off-the-shelf person detectors which have not been trained on the video-dataset of interest, and also the fact that we have to sample tubelets from large bags in long videos due to memory constraints.
In both of these scenarios, the standard Multiple Instance Learning assumption~\cite{diettrich_1997}, that each bag contains at least one instance with the bag-level label, may be violated.
We are not aware of previous work that has explicitly addressed this problem, and we do so with a probabilistic variant of MIL where we estimate the uncertainty of an instance-level prediction.
Using our approach, we obtain state-of-the-art results among weakly-supervised methods on the UCF101-24 dataset.
Furthermore, we report, to our knowledge, the first weakly-supervised results on the AVA dataset (the only large-scale dataset for spatio-temporal action recognition), where we also show the accuracy trade-offs when annotating video-clips for time intervals of varying durations.
\section{Experiments}
\subsection{Experimental set-up}
We evaluate our method on UCF101-24 and AVA, described in more detail below.
Note that other video datasets such as THUMOS~\cite{jiang2014thumos} and ActivityNet~\cite{caba2015activitynet} are not suitable for spatio-temporal localisation, as they lack bounding box annotations.
\paragraph{UCF101-24:}
UCF101-24 is a subset of the UCF101~\cite{soomro2012ucf101} dataset, consisting of 24 action classes with spatio-temporal localisation annotations provided as bounding boxes around humans.
Although each video contains only a single action class, it may contain multiple individuals performing the action with different spatial and temporal boundaries.
Moreover, there may also be people present in the video who are not performing any labelled action.
Following standard practice, we use the corrected annotations of \cite{singh_iccv_2017} and report the mean average precision at a video level (Video AP) for the first split of the dataset.
For evaluating the Video AP, we link tubelets together using the algorithm of \cite{kalogeiton_iccv_2017}.
\paragraph{AVA~\cite{gu2018ava}:}
This dataset consists of 430 15-minute video clips obtained from movies.
80 atomic visual actions are annotated exhaustively for all people in the video, where one person is often simultaneously performing multiple actions.
The dataset annotates keyframes at every second in the video.
Following standard practice, we report the Frame AP at an IoU threshold of 0.5 using v2.2 annotations. %
\subsection{Experiments on UCF101-24}
We first conduct ablation studies of our model on the UCF101-24 dataset.
We discard the spatio-temporal annotations for the whole untrimmed video, %
and so our bag in multiple instance learning contains tubelets from the whole video.
\input{tables/ucf_ablation}
\paragraph{Ablation study}
Table~\ref{tab:ucf_ablation} ablates different variants of our method:
The most na\"ive baseline is to not perform any multiple instance learning, and to simply train in a fully-supervised fashion assuming that the label of a tubelet is the video-level label.
As shown in the first row of Tab.~\ref{tab:ucf_ablation}, this method performs the worst as the assumed tubelet-level labels are often incorrect.
The use of multiple instance learning improves results, with the various aggregation functions performing similarly.
Max-pooling, however, performs the best, and we believe this is because the max operation is the most suitable for dealing with the noise present in our tubelets as described in Sec.~\ref{sec:mil_noise}.
Note that for mean and LSE-pooling, we set $r = 1$.
Finally, introducing our uncertainty-based loss function improves results even further, obtaining a Video mAP of 35.0 at a threshold of 0.5.
This is 80\% of the performance achieved by our fully-supervised baseline.
\paragraph{Person detections on UCF101-24}
Note that for our weakly-supervised experiments, the person tubelets for training are obtained from a Faster-RCNN~\cite{ren_neurips_2015} person detector that has only been trained on Microsoft COCO~\cite{lin_eccv_2014}.
There is a significant domain gap between COCO and UCF, and the annotation protocol for person boxes in UCF is also inconsistent with that of COCO (for example, the bounding box for a person riding a horse often includes the horse in UCF).
These discrepancies are reflected by the fact that our person detections used during training only have a recall of 46.9\% compared to the ground truth person boxes, when using an IoU threshold of 0.5 to signify a correct match.
Furthermore, the precision of our person tubelets on the training set is only 21.1\%.
A major contributing factor to this is that UCF action annotations are not exhaustive -- there may be people in the video who are not labelled at all as they are not performing an annotated action.
These people will, however, still be detected by a COCO-trained detector and considered as false positives during this evaluation.
The fact that we are able to train our model with these annotations demonstrates the ability of our multiple instance learning method to handle label noise in the training set.
The inconsistencies in the UCF101-24 dataset labelling are detailed further in the supplementary, and have also been noted previously by Ch\'eron \emph{et al.} \cite{cheron_neurips_2018}.
Noise in the person detections is not a problem for the training of our fully-supervised baseline, as it is trained with ground-truth boxes in addition to predicted boxes.
As we have box-level supervision in this case, predicted detections which have an IoU of more than 0.5 with a ground-truth detection are assigned the label of the ground-truth box, or the negative label otherwise, during fully-supervised training.
As the goal of this paper is not to develop a better human detector or tracker for building the person tubelets, we use the Faster-RCNN detector released publicly by Ch\'eron~\emph{et al.} \cite{cheron_neurips_2018} for all our evaluations on the UCF101-24 validation set.
This detector was originally trained on COCO and then finetuned on the UCF101-24 training set using Detectron~\cite{Detectron2018}.
\paragraph{The effect of tubelet sampling}
For the tubelets of length $K = 16$ that we use, there is a mean of 33.1 tubelets per video in the UCF101-24 dataset. In computing this, we only consider tubelets which have a spatio-temporal IoU of less than 0.5 with each other; more tubelets would be obtained if we counted one starting from each frame of the video.
As we can fit a maximum of 16 tubelets onto a 16GB Nvidia V100 GPU, it is clear that it is necessary to sample the tubelets in each bag.
Note that UCF videos often have a high number of tubelets, as there are often many people in the video who are not labelled as performing an action.
As described in the previous subsection, this is also a significant source of noise.
\input{tables/ucf_batch_size}
Table~\ref{tab:ucf_batch_size} shows the effect of changing the batch size (number of bags), and the number of tubelets sampled per bag, such that GPU memory usage is maximised.
We can see that the uncertainty loss helps in all cases, and that accuracy decreases with low batch sizes. We believe this is because batch normalisation statistics become too correlated when more tubelets are drawn from the same video.
\paragraph{Comparison to state-of-the-art}
\input{tables/ucf101_24_sota_comparison.tex}
Table~\ref{tab:ucf_sota_comparison} compares our results to the state-of-the-art.
The bottom-half of the table shows that we outperform previous weakly-supervised methods by a large margin.
The top-half shows that our fully-supervised baseline is also competitive with the fully-supervised state-of-the-art, although that is not the main goal of this work.
The fully-supervised methods which outperform our method are based on action detectors which directly predict the person proposals with the network, and are thus able to handle the person annotation peculiarities of the UCF101-24 dataset more effectively.
We do not observe any issues with person detections for our experiments on AVA in the next section. %
\paragraph{Qualitative Results}
\input{figures/ucf_qualitative}
Figure~\ref{fig:ucf_qualitative} presents qualitative results of our method.
The first two rows show success cases of our method where the tubelet detection and linking have performed well.
The third row shows a failure case, since the basketball player represented by the green track is not actually performing the ``Basketball Dunk'' action.
According to the UCF101-24 annotations, only the player represented with the blue track is performing this action.
This video clip is thus an example of a video where there are many people not performing the action annotated for the video, and is especially challenging for our weakly-supervised method.
The fourth row shows a different failure case, where an error by the online tubelet linking algorithm (we use the same method as~\cite{kalogeiton_iccv_2017}) causes the identities of the two cyclists to swap after they occlude each other.
\subsection{Experiments on AVA}
In this section, we report what to our knowledge are the first weakly-supervised action detection experiments on AVA~\cite{gu2018ava}.
The AVA dataset labels keyframes in a 15-minute video clip, where each keyframe is sampled every second (\emph{i.e.} at 1 Hz).
The evaluation protocol of the AVA dataset measures the ability of an action detection model to classify the actions occurring in a keyframe given the temporal context around it.
We control the difficulty of the weakly-supervised action recognition problem by combining the annotations from $N$ consecutive keyframes into a single, clip-level annotation.
This effectively means that we are obtaining clip-level annotations for sub-clips of $N$ seconds from the original AVA video.
The weakly-supervised problem gets more difficult as $N$ increases, as the sub-clips get longer and the number of observed labels within each sub-clip increases.
Note that when $N = 1$, only the spatial localisation ability of the model is being tested, as during training, it is unknown which of the subclip-level labels correspond to each person tubelet in the MIL bag.
When $N > 1$, the subclip-level labels can correspond to zero, one or many of the person tubelets at different keyframes in the clip, and it is thus a more difficult task.
As an AVA video clip consists of 900 seconds, $N = 900$ represents the most extreme case when spatio-temporal annotations are discarded for the entire 15-minute video.
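A minimal Python sketch of this label-merging step follows (the data layout is an assumption):
\begin{verbatim}
# Build clip-level supervision by merging the labels of N consecutive
# AVA keyframes (annotated at 1 Hz) into one sub-clip annotation.
def make_subclip_labels(keyframe_labels, N):
    """keyframe_labels: list (1 Hz) of sets of action labels."""
    subclips = []
    for start in range(0, len(keyframe_labels), N):
        window = keyframe_labels[start:start + N]
        merged = set().union(*window)   # all actions seen in the sub-clip
        subclips.append((start, start + len(window), merged))
    return subclips
\end{verbatim}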
\input{tables/ava_results}
\input{tables/ava_sota_results}
\input{figures/ava_per_class_results}
Table~\ref{tab:ava_results} shows the results of our model in this setting.
As expected, the performance of our method improves the shorter the sub-clip.
For $N = 1$ and $N = 5$, our method obtains 90\% and 72\% of fully-supervised performance respectively, suggesting that bounding-box level annotations are not required for training action recognition models if the video clips are annotated over short temporal intervals.
Understandably, the results for $N = 900$, where we use the video-level annotations over the whole 15-minute clip, are the worst, as it is the most difficult setting.
Figure~\ref{fig:ava_per_class_results} further analyses the per-class results for the different levels of supervision presented in Tab.~\ref{tab:ava_results}.
As expected, stronger levels of supervision (shorter sub-clip durations) result in better per-class accuracy.
However, some action classes are affected more than others by weaker labels (longer sub-clips).
Examples of this include ``sing to'' and ``listen to'' which show a larger difference to the fully-supervised baseline than other classes.
Moreover, some classes such as ``watch (a person)'', ``get up'', ``close (e.g., a door, a box)'' and ``hand clap'' perform reasonably when trained with sub-clips ($N \leq 10$), but much more poorly when trained with longer sub-clips.
Finally, we compare our fully-supervised baseline to the state-of-the-art in Tab.~\ref{tab:ava_sota}.
Note that our weakly-supervised result from sub-clips of 10 seconds (Tab.~\ref{tab:ava_results}) outperforms the original fully-supervised baseline introduced with the AVA dataset~\cite{gu2018ava}, which uses both RGB and optical flow as inputs.
Our model, on the other hand, only uses RGB as its input modality.
Our SlowFast model performs similarly to the published results of the original authors~\cite{feichtenhofer_iccv_2019}.
Note that we have not used Non-local~\cite{wang_cvpr_2018}, test-time augmentation or ensembling which are all complementary methods to improve performance~\cite{feichtenhofer_iccv_2019}.
We can see that in contrast to the UCF dataset in the previous section, our person detector is accurate on AVA, and so a Fast-RCNN-style detector using person tubelets as proposals can achieve state-of-the-art results.
\subsection{Notations}
A finite automaton over an alphabet $\Sigma$ is a tuple $\mathcal{A}=(Q,\iota,\Sigma,$ $\Delta,F)$ where $Q$ is a finite set of locations s.t. $\iota \in Q$ is the initial location, $\Sigma$ is a finite alphabet of actions, $\Delta \subseteq (Q \times \Sigma \times Q)$ is a finite transition relation, $F \subseteq Q$ is the set of \emph{accepting} locations.
A word $w = \alpha_0 . \alpha_1. \cdots. \alpha_n$ is a finite sequence of letters from $\Sigma$; we let $w[i] = \alpha_i$ be the $i$-th letter of $w$,
and $|w| = n + 1$ be the length of $w$.
Let $\epsilon$ be the empty word and $|\epsilon|=0$, and let $\Sigma^\ast$ be the set of finite words over $\Sigma$.
The \emph{language}, ${\cal L}(\mathcal{A})$, accepted by $\mathcal{A}$ is defined in the usual manner as the set of words that can lead to $F$ from $\iota$.
Let $V$ be a finite set of real-valued variables.
A \emph{valuation} is a function $\nu: V \rightarrow \mathbb{R}$.
The set of valuations is $[V \rightarrow \mathbb{R}]$.
We denote by $\beta(V)$ the set of \emph{constraints} (or Boolean predicates) over $V$ and
given $\varphi \in \beta(V)$, we let $\textit{Vars}(\varphi)$ be the set of unconstrained variables in $\varphi$.
Given a valuation, we let the truth value of a constraint (Boolean predicate) $\varphi$ be denoted by $\varphi(\nu) \in \{ \mathit{True}, \mathit{False}\}$,
and write $\nu \models \varphi$ when $\varphi(\nu) = \mathit{True}$ and let $\semof{\varphi} = \{ \nu \mid \nu \models \varphi\}$.
An \emph{update} $\mu \subseteq [V \rightarrow \mathbb{R}] \times [V \rightarrow \mathbb{R}]$ is a binary relation over valuations.
Given an update $\mu$ and a set of valuations $\mathcal{V}$, we let $\mu(\mathcal{V}) = \{ \nu' \mid \exists \nu \in \mathcal{V} \text{ and } (\nu,\nu') \in \mu \}$.
We let ${\cal U}(V)$ be the set of updates on the variables in $V$.
Similar to the update relation, we define a \emph{rate} function $\rho:V\rightarrow\mathbb{R}$
(rates can be negative), {i.e.},~ a function from a variable to a real number\footnote{We can allow rates to be arbitrary terms but in this paper we restrict to deterministic rates or bounded intervals.}.
A rate is then a vector
$\rho\in \mathbb{R}^V$.
Given a valuation $\nu$ and a timestep $\delta \in \mathbb{R}_{\geq 0}$ the valuation $\nu + (\rho,\delta)$ is defined by: $(\nu + (\rho,\delta))(v) = \nu(v) + \rho(v) \times \delta$ for $v \in V$.
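As a small illustration, the continuous update can be computed directly from this definition (Python sketch; the dictionary representation of valuations and rates is an assumption):
\begin{verbatim}
# The continuous update nu + (rho, delta) on valuations over V.
def advance(nu, rho, delta):
    return {v: nu[v] + rho[v] * delta for v in nu}

nu  = {'x': 0.0, 'y': 1.5}
rho = {'x': 1.0, 'y': 0.0}        # a stopwatch-like rate vector
print(advance(nu, rho, 2.0))      # {'x': 2.0, 'y': 1.5}
\end{verbatim}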
\subsection{Real-Time Instructions}
Let $\Sigma = \beta(V) \times {\cal U}(V) \times {\cal R}(V)$, where ${\cal R}(V) \subseteq \mathbb{R}^V$ is a set of admissible rate vectors, be a countable set of instructions -- and intentionally also the alphabet of the CFG.
Each $\alpha \in \Sigma$ is a tuple $(\textit{guard, update, rates})$ denoted by $(\gamma_\alpha, \mu_\alpha, \rho_\alpha)$.
Let $\nu: V \rightarrow \mathbb{R}$ and $\nu' : V \rightarrow \mathbb{R}$ be two valuations.
For each pair $(\alpha,\delta) \in \Sigma \times \mathbb{R}_{\geq 0}$
we define the following transition relation $\xrightarrow{~ \alpha, \delta ~}$:
\[
\nu \xrightarrow{~ \alpha, \delta ~} \nu' \iff
\begin{cases}
1.\quad \nu \models \gamma_\alpha \text{ (guard of $\alpha$ is satisfied in $\nu$)}, \\
2.\quad \exists \nu'' \text{ s.t. } (\nu, \nu'') \in \mu_\alpha
\text{ (discrete update allowed by $\alpha$) and } \\
3. \quad \nu' = \nu'' + (\rho_\alpha,\delta)
\text{ (continuous update as defined by $\alpha$).}
\end{cases}
\]
The semantics of $\alpha \in \Sigma$ is a mapping $\fut{\alpha} : [V \rightarrow \mathbb{R}] \rightarrow 2^{ [V \rightarrow \mathbb{R}]}$ and for $\nu \in [V \rightarrow \mathbb{R}]$
\begin{equation}
\fut{\alpha}(\nu) = \{ \nu' \, | \, \exists \delta \geq 0, \nu\xrightarrow{~ \alpha, \delta ~} \nu' \}\mathpunct.
\end{equation}
%
It follows that:
\begin{fact}\label{fact-1}
$\exists \delta \geq 0, \nu \xrightarrow{~ \alpha, \delta ~} \nu' \iff \nu' \in \fut{\alpha}(\nu)$.
\end{fact}
This mapping can be straightforwardly extended to sets of valuations $K \subseteq [V \rightarrow \mathbb{R}]$ as follows:
\begin{equation}
\fut{\alpha}(K) = \underset{\nu \in K}{\bigcup} \fut{\alpha}(\nu)\mathpunct.
\end{equation}
\subsection{Post Operator}
Let $K$ be a set of valuations and $w \in \Sigma^\ast$.
We inductively define the \emph{(strongest) post operator} $\textit{Post}(K,w)$ as follows:
\begin{align*}
\textit{Post}(K, \epsilon) & = K \\
\textit{Post}(K, \alpha.w) & = \textit{Post}(\fut{\alpha}(K),w)
\end{align*}
The post operator extends to logical constraints $\varphi \in \beta(V)$ by defining $\textit{Post}(\varphi,w) = \textit{Post}(\semof{\varphi},w)$.
In the sequel, we assume that, for $\varphi \in \beta(V)$,
$\fut{\alpha}(\semof{\varphi})$ is also definable as a constraint in $\beta(V)$. This inductively implies that $\textit{Post}(\varphi,w)$ can also be expressed as a constraint in $\beta(V)$ for sequences of instructions $w \in \Sigma^\ast$.
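For instance, when $\beta(V)$ is expressible in linear real arithmetic, the $\textit{Post}$ of a single instruction can be computed by quantifier elimination. The following Python/\texttt{z3} sketch uses a made-up instruction $\alpha$ with guard $x \leq 3$, an update resetting $x$, and rates $(1,1)$:
\begin{verbatim}
from z3 import Reals, And, Exists, Tactic

x, y, x0, y0, d = Reals('x y x0 y0 d')

def post(phi0):
    # phi0 constrains the pre-state copies x0, y0
    body = And(phi0,
               x0 <= 3,            # guard of alpha, on the pre-state
               d >= 0,
               x == 0 + d,         # x is reset, then drifts at rate 1
               y == y0 + d)        # y is unchanged by the update, rate 1
    return Tactic('qe')(Exists([x0, y0, d], body)).as_expr()

print(post(And(x0 >= 0, y0 == x0)))   # Post(x >= 0 /\ y == x, alpha)
\end{verbatim}
Here \texttt{qe} eliminates the pre-state copies and the delay, returning $\textit{Post}(\varphi,\alpha)$ as a constraint in $\beta(V)$, in line with the assumption above.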
\subsection{Timed Words and Feasible Words}
A \emph{timed word} (over alphabet $\Sigma$) is a finite sequence $\sigma=(\alpha_0,\delta_0).(\alpha_1,\delta_1). \cdots.(\alpha_n,\delta_n)$ such that for each $0 \leq i \leq n$, $\delta_i \in \mathbb{R}_{\geq 0}$ and
$\alpha_i \in \Sigma$.
The timed word $\sigma$ is \emph{feasible} if and only if there exists a set of valuations $\{\nu_0,\dots,\nu_{n+1}\} \subseteq [V \rightarrow \mathbb{R}]$ such that:
\[
\nu_0 \xrightarrow{~\alpha_0, \delta_0 ~} \nu_1 \xrightarrow{~\alpha_1, \delta_1 ~} \nu_2 \quad \cdots \quad \nu_n \xrightarrow{~\alpha_n, \delta_n ~} \nu_{n+1} \mathpunct.
\]
We let $\textit{Unt}(\sigma) = \alpha_0.\alpha_1.\cdots.\alpha_n$ be the \emph{untimed} version of $\sigma$. We extend the notion \emph{feasible} to an untimed word $w \in \Sigma^\ast$: $w$ is feasible iff
$w = \textit{Unt}(\sigma)$ for some feasible timed word $\sigma$.
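When $\beta(V)$ is decidable, feasibility of a given untimed word can be checked by unrolling the transition relation and querying an SMT solver. The following Python/\texttt{z3} sketch is an illustrative encoding with two variables and unit rates, not the tool developed in this paper:
\begin{verbatim}
from z3 import Real, Solver, And, sat

def step(nu, nu_next, delta, guard, reset_x):
    (x, y), (x2, y2) = nu, nu_next
    return And(guard(x, y), delta >= 0,
               x2 == (0 if reset_x else x) + delta,  # update, then drift
               y2 == y + delta)

def feasible(word):
    s = Solver()
    vals = [(Real('x%d' % i), Real('y%d' % i))
            for i in range(len(word) + 1)]
    for i, (guard, reset_x) in enumerate(word):
        s.add(step(vals[i], vals[i + 1], Real('d%d' % i), guard, reset_x))
    return s.check() == sat

# sat: the initial valuation is unconstrained, so e.g. y0 = 1 works.
w = [(lambda x, y: x <= 2, True), (lambda x, y: y - x >= 1, False)]
print(feasible(w))
\end{verbatim}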
\begin{lemma}\label{lemma-2}
An untimed word $w \in \Sigma^\ast$ is \emph{feasible} iff $\textit{Post}(\mathit{True},w) \neq \mathit{False}$.
\end{lemma}
\begin{proof}
We prove this Lemma by induction on the length of $w$.
The induction hypothesis is:
\[
\nu_0 \xrightarrow{~\alpha_0, \delta_0 ~} \nu_1 \xrightarrow{~\alpha_1, \delta_1 ~} \nu_2 \quad \cdots \quad \nu_n \xrightarrow{~\alpha_n, \delta_n ~} \nu_{n+1} \iff \nu_{n+1} \in \textit{Post}(\{\nu_0\},\alpha_0.\alpha_1.\cdots.\alpha_n)
\]
which is enough to prove the Lemma.
\noindent{\it Base step.} If $w = \epsilon$, then $\textit{Post}(\{ \nu_0 \},\epsilon) = \{\nu_0\}$.
\noindent{\it Inductive step.}
Assume $\nu_0 \xrightarrow{~\alpha_0, \delta_0 ~} \nu_1 \xrightarrow{~\alpha_1, \delta_1 ~} \nu_2 \quad \cdots \quad \nu_n \xrightarrow{~\alpha_n, \delta_n ~} \nu_{n+1} \xrightarrow{~\alpha_{n+1}, \delta_{n+1} } \nu_{n+2}$.
By induction hypothesis, $\nu_{n+1} \in \textit{Post}(\{\nu_0\}, \alpha_0.\alpha_1.\cdots.\alpha_n)$, and $\nu_{n+2} \in \fut{\alpha_{n+1}}(\nu_{n+1})$.
By definition of $\textit{Post}$ this implies that $\nu_{n+2} \in \textit{Post}(\{\nu_0\}, \alpha_0.\alpha_1.\cdots.\alpha_n.\alpha_{n+1})$.
\end{proof}
\smallskip
\subsection{Real-Time Programs}
The specification of a real-time program decouples the \emph{control} ({e.g.},~ for Timed Automata, the locations) and the \emph{data} (the clocks or integer variables).
A \emph{real-time program} is a pair $P=(A_P,\semof{\cdot})$ where $A_P$ is a finite automaton $A_P=(Q,\iota,\Sigma, \Delta, F)$ over the alphabet\footnote{$\Sigma$ can be infinite but we require the control-flow graph $\Delta$ (transition relation) of $A_P$ to be finite.} $\Sigma$, $\Delta$ defines the control-flow graph of the program and $\semof{\cdot}$ provides the semantics of each instruction.
\noindent A timed word $\sigma$ is \emph{accepted} by $P$ if and only if:
\begin{enumerate}
\item $\textit{Unt}(\sigma)$ is accepted by $A_P$ and,
\item $\sigma$ is feasible.
\end{enumerate}
The \emph{timed language}, ${\cal L}^t(P)$, of a real-time program $P$ is the set of timed words accepted by $P$, {i.e.},~ $\sigma \in {\cal L}^t(P)$ if and only if $\textit{Unt}(\sigma) \in {\cal L}(A_P)$ and $\sigma$ is feasible.
\begin{remark}
We do not assume any particular values initially for the variables of a real-time program (the variables that appear in $I$).
This is reflected by the definition of \emph{feasibility}, which only requires the existence of suitable valuations and places no constraint on the initial one, $\nu_0$.
When specifying a real-time program, initial values can be explicitly set by regular instructions at the beginning of the program.
This is similar to standard programs where the first instructions can set the values of some variables.
\end{remark}
\subsection{Timed Language Emptiness Problem}
The \emph{(timed) language emptiness prob\-lem} asks the following:
\begin{quote}
Given a real-time program $P$, is ${\cal L}^t(P)$ empty?
\end{quote}
\begin{theorem}
${\cal L}^t(P) \neq \varnothing$ iff
$\exists w \in {\cal L}(A_P)$ such that $\textit{Post}(\mathit{True},w) \not\subseteq \mathit{False}$.
\end{theorem}
\begin{proof}
${\cal L}^t(P) \neq \varnothing$ iff there exists a feasible timed word $\sigma$ such that $\textit{Unt}(\sigma)$ is accepted by $A_P$. This is equivalent to
the existence of a feasible word $w \in {\cal L}(A_P)$, and
by Lemma~\ref{lemma-2}, feasibility of $w$ is equivalent to $\textit{Post}(\mathit{True},w) \not\subseteq \mathit{False}$.
\end{proof}
\subsection{Useful Classes of Real-Time Programs}
\emph{Timed Automata} are a special case of real-time programs.
The variables are called clocks.
$\beta(V)$ is restricted to constraints on individual clocks or \emph{difference constraints} generated by the grammar:
\begin{equation}\label{eq-grammar-ta}
b_1, b_2::= \mathit{True} \mid \mathit{False} \mid x - y \Join k \mid x \Join k \mid b_1\wedge b_2
\end{equation}
where $x, y\in V$, $k \in \mathbb{Q}_{\geq 0}$ and $\Join\,\in\{<,\leq,=,\geq,>\}$\footnote{
While difference constraints are strictly disallowed in most definitions of Timed Automata, the method we propose
retains its properties regardless of their presence.}.
We note that, w.l.o.g., we omit \emph{location invariants}: for the language emptiness problem, these can be implemented as guards.
An update $\mu \in {\cal U}(V)$ is defined by a set of clocks to be \emph{reset}.
Each pair $(\nu,\nu') \in \mu$ is such that $\nu'(x)=\nu(x)$ or $\nu'(x)=0$ for each $x \in V$.
The valid rates are fixed to 1, and thus ${\cal R}(V) = \{1\}^V$.
\smallskip
\emph{Stopwatch Automata} can also be defined as a special case of real-time programs.
As defined in~\cite{stopwatches}, Stopwatch Automata
are Timed Automata extended with \emph{stopwatches}, which are clocks that can be stopped. $\beta(V)$ and ${\cal U}(V)$ are the same as for Timed Automata, but the set of valid rates is ${\cal R}(V) = \{0,1\}^V$ (each clock rate is either $0$ or $1$). An example of a Stopwatch Automaton is given by the timed system $\mathcal{A}_1$ in \figref{fig-ex1}.
As there exist syntactic translations (preserving timed languages or reachability) that map hybrid automata to stopwatch automata~\cite{stopwatches}, and translations that map time Petri nets~\cite{DBLP:conf/formats/BerardCHLR05,cassez-jss-06} and extensions~\cite{tpn-13,BJJJMS:TCS:13} thereof to timed automata, it follows that time Petri nets and hybrid automata are also special cases of real-time programs.
This shows that the method we present in the next section is applicable to a wide range of timed systems.
Remarkably, the method is not restricted to timed systems with a finite number of discrete states but can also accommodate infinite discrete state spaces. For example, the real-time program $P_2$ in \figref{fig-ex2}, page~\pageref{fig-ex2}, has two clocks $x$ and $y$ and an unbounded integer variable $i$.
Even though $i$ is unbounded, our technique discovers the loop invariant $y \geq i$ of the $\iota$ and $\ell_0$ locations --
an invariant over the real-time clock $y$ and the integer variable $i$.
This allows us to prove that ${\cal L}^t(P_2) = \varnothing$, as the guard of $t_2$ ($y<i$) can never be satisfied.
\subsection{Verification of Timed and Stopwatch Automata}\label{exp:stopwatch}
The real-time programs, $P_1$ of \figref{fig-ex1} and $P_2$ of \figref{fig-ex2} can be analyzed with our technique.
The analysis (\textsc{rttar}\xspace algorithm~\ref{algo-1}) terminates in two iterations for the program $P_1$, a stopwatch automaton. As emphasized in the introduction, neither \textsc{Uppaal}\xspace (over-approximation with DBMs) nor \textsc{PHAver}\xspace can provide the correct answer to the reachability problem for $P_1$.
Proving that location $2$ is unreachable in program $P_2$ requires discovering an invariant that
mixes integers (the discrete part of the state) and clocks (the continuous part).
Our technique successfully discovers the program invariants. As a result, the refinement depicted in \figref{fig-mix-int-clock-interpol} is constructed,
and as it contains ${\cal L}(A_{P_2})$, the refinement algorithm \textsc{rttar}\xspace terminates and proves that location $2$ is not reachable. $A_{P_2}$ can only be analyzed in \textsc{Uppaal}\xspace with significant computational effort and bounded integers.
\subsection{Parametric Stopwatch Automata}
We compare the \textsc{rttar}\xspace tool to \textsc{Imitator}\xspace \cite{imitator} -- the state-of-the-art parameter synthesis tool for reachability\footnote{We compare with the \texttt{EFSynth}-algorithm in the \textsc{Imitator}\xspace tool as this yielded the lowest computation time in the two terminating instances.}.
We shall here use the semi-algorithm presented in Section \ref{sec:synth}.
For the test-cases we use the gadget presented initially in \figref{fig-ex1}, a few of the test-cases used in \cite{Andre2015}, as well as two modified versions of Fischer's protocol, shown in~\figref{fig:fischer}.
In the first version we replace the constants in the model with parameters.
In the second version (marked robust), we wish to compute an expression that, given arbitrary upper and lower bounds, yields the robustness of the system -- in the same style as the experiments presented in Section~\ref{exp:robust}, but here for arbitrary guard-values.
\begin{table}
\centering
\input{results/parametersynth}
\caption{Results for parameter-synthesis comparing \textsc{rttar}\xspace with \textsc{Imitator}\xspace. Time is given in seconds. DNF marks that the tool did not complete the computation within an hour.}
\label{tab:synth}
\end{table}
~
\begin{figure}
\centering
\includegraphics[width=5.2cm]{fischer.png}
\captionof{figure}{A \textsc{Uppaal}\xspace template for a single process in Fischer's algorithm. The variables \texttt{e}, \texttt{a} and \texttt{b} are parameters for $\epsilon$ and the lower and upper bounds for clock-values, respectively.}
\label{fig:fischer}
\end{figure}
As illustrated by Table~\ref{tab:synth}, \textsc{rttar}\xspace is slower than \textsc{Imitator}\xspace when \textsc{Imitator}\xspace is able to compute the results. On the other hand, when using \textsc{Imitator}\xspace to verify our motivating example from \figref{fig-ex1}, we observe that \textsc{Imitator}\xspace never terminates, due to the divergence of the polyhedra-computation. This is the effect illustrated in Table~\ref{tab-sym-comp}.
When trying to synthesize the parameters for Fischer's algorithm, \textsc{Imitator}\xspace times out in all cases and never computes a result.
For both two and four processes in Fischer's algorithm, our tool detects that the system is safe if and only if $a < 0 \vee b < 0 \vee b - a > 0$. Notice that $a < 0 \vee b < 0$ is a trivial constraint preventing the system from doing anything; the constraint $b - a > 0$ is the only useful one. Our technique provides a formal proof that the algorithm is correct for $b - a > 0$.
In the same manner, our technique can compute the most general constraint ensuring that Fischers algorithm is robust.
The result of \textsc{rttar}\xspace algorithm is that the system is robust iff
$ \epsilon \leq 0 \vee a < 0 \vee b < 0\vee b - a - 2\epsilon > 0$
-- which for $\epsilon=0$ (modulo the initial non-zero constraint on $\epsilon$) reduces to the constraint-system obtained in the non-robust case.
\subsection{Robustness of Timed Automata}\label{exp:robust}
To address the robustness problem for a real-time program $P$, we use the semi-algorithm presented in Section~\ref{sec:synth} and reduce the robustness-checking problem to parameter synthesis.
Recall from Section~\ref{sec:robust} that the input problems are restricted to robust-only instances.
\begin{table}
\centering
\input{results/robustness}
\caption{Results for robustness analysis comparing \textsc{rttar}\xspace with \textsc{symrob}\xspace. Time is given in seconds. N/A indicates that \textsc{symrob}\xspace was unable to compute the robustness for the given model.}
\label{tab:robustness}
\end{table}
As Table~\ref{tab:robustness} demonstrates, \textsc{symrob}\xspace \cite{symrob} and \textsc{rttar}\xspace do not always agree on the results.
Notably, since the TA \texttt{M3} contains strict guards, \textsc{symrob}\xspace is unable to compute its robustness.
Furthermore, \textsc{symrob}\xspace under-approximates $\epsilon$, an artifact of the so-called ``loop-acceleration'' technique and the polyhedra-based algorithm.
This can be observed in the modified model \texttt{M3c}, which is analyzable by \textsc{symrob}\xspace but differs in results compared to \textsc{rttar}\xspace.
The same holds for the model denoted \texttt{a}.
We experimented with $\epsilon$-values to confirm that \texttt{M3} is safe for all the values tested -- while \texttt{a} is safe only for the tested values respecting $\epsilon<\frac{1}{2}$.
We can also see that our proposed method is significantly slower than the special-purpose algorithms deployed by \textsc{symrob}\xspace, but in contrast to \textsc{symrob}\xspace, it computes the maximal set of good parameters.
\section{Introduction} \label{sec:intro}
\input{intro}
\section{Motivations} \label{sec:example}
\input{example}
\section{Real-Time Programs}
\input{definitions-2}
\section{Trace Abstraction Refinement for Real-Time Programs}\label{sec:tar}
\input{refinement}
\section{Parameter Synthesis for Real-Time Programs}\label{sec:synth}
\input{synthesis}
\section{Experiments}\label{sec:experiments}
\input{experiments}
\section{Conclusion}
\input{conclusion}
\paragraph{Acknowledgments.}
The research was partially funded by Innovation Fund Denmark center DiCyPS and ERC Advanced Grant LASSO.
Furthermore, these results were made possible by an external stay partially funded by Otto M\o nsted Fonden.
\bibliographystyle{plain}
\subsection{Refinement of Trace Abstraction for Real-Time Programs}
We have already introduced our algorithm in~\figref{fig-tar}, page~\pageref{fig-tar}.
We now give a precise formulation of the TAR semi-algorithm for real-time programs
in~\algref{algo-1}.
It is essentially the same as the semi-algorithm introduced in~\cite{traceref} -- we therefore omit theorems of completeness and soundness, as these are equivalent to the theorems in~\cite{traceref} and are proved in exactly the same manner.
\begin{algorithm}
\small
\SetAlFnt{\small}
\SetAlFnt{\small}
\SetAlCapFnt{\small}
\SetAlCapNameFnt{\small}
\SetAlgoVlined
\SetNoFillComment
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Result}
\SetKwInOut{Data}{Var}
\newcommand{\myrcomment}[1]{\hfill \small \textcolor{gray}{\textit{/* #1 */}}\par}
\newcommand{\mycomment}[1]{\small \textcolor{gray}{\textit{/* #1 */}} \hfill\par}
\Input{A real-time program $P=(A_P, \semof{\cdot})$.}
\Output{$(\mathit{True}, -)$ if ${\cal L}^t(P) = \varnothing$, and otherwise $(\mathit{False}, w)$ if ${\cal L}^t(P)\neq \varnothing$ with $w \in {\cal L}(A_P)$ and $\textit{Post}(\mathit{True},w) \not\subseteq \mathit{False}$ -- or non-termination.}
\Data{$R$: a regular language, initially $R = \varnothing$.\\
$w$: a word in $ {\cal L}(A_P)$, initially $w = \epsilon$.\\
$T$: A finite automaton, initially empty.
}
\While{${\cal L}(A_P) \not\subseteq R$}{
Let $w \in {\cal L}(A_P) \setminus R$;\\
\eIf{
$\textit{Post}(\mathit{True},w) \not\subseteq \mathit{False} $}{
\tcc{$w$ is feasible and $w$ is a counter-example}
\Return $(\mathit{False}, w)$\;
}{%
\tcc{$w$ is infeasible, compute an interpolant automaton based on $w$}
Let $T = \textit{ITA}(w)$\;
\tcc{Add $T$ to refinement and continue}
Let $R := R \cup {\cal L}(T)$\;
}
}
\Return $(\mathit{True}, - )$\;
\caption{RTTAR -- Trace Abstraction Refinement for Real-Time Programs}
\label{algo-1}
\end{algorithm}
The input to the semi-algorithm RTTAR is a real-time program $P=(A_P, \semof{\cdot})$.
An invariant of the semi-algorithm is that the refinement $R$, which is subtracted from the initial set of traces, is either empty or contains infeasible traces only.
In the coarsest initial abstraction, all the words in ${\cal L}(A_P)$ are potentially feasible.
In each iteration of the algorithm, we then chip away infeasible behaviour of $A_P$ (via the set $R$), making the set difference ${\cal L}(A_P)\setminus R$ move closer to the set of feasible traces, thereby shrinking the over-approximation of feasible traces ${\cal L}(A_P)\setminus R$.
Initially the refinement $R$ is the empty set.
The semi-algorithm works as follows:
\begin{description}
\item[Step~1] (line~1): check whether all the (untimed) traces in ${\cal L}(A_P)$ are in $R$. If this is the case, ${\cal L}^t(P)$ is empty and the semi-algorithm terminates (line~8). Otherwise (line~2), there is a sequence $w \in {\cal L}(A_P) \setminus R$; goto Step~2;
\item[Step~2] if $w$ is feasible (line~3) {i.e.},~ there is a feasible timed word $\sigma$ such that $\textit{Unt}(\sigma)=w$, then $\sigma \in {\cal L}^t(P)$ and ${\cal L}^t(P) \neq \varnothing$ and the semi-algorithm terminates (line~4).
Otherwise $w$ is not feasible, goto Step~3;
\item[Step~3] $w$ is infeasible and given the reason for infeasibility we can construct (line~6) a finite \emph{interpolant automaton}, $\textit{ITA}(w)$, that accepts $w$ and other words that are infeasible for the same reason. How $\textit{ITA}(w)$ is computed is addressed in the sequel.
The automaton $\textit{ITA}(w)$ is added (line~7) to the previous refinement $R$ and the semi-algorithm starts a new round at Step~1 (line~1).
\end{description}
In the next paragraphs we explain the main steps of the algorithm: how to check the feasibility of a sequence of instructions and how to build $\textit{ITA}(w)$.
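To make the control flow concrete, the following Python sketch mirrors Steps~1--3, assuming for illustration that the (in general infinite, regular) languages ${\cal L}(A_P)$ and $R$ are given as finite sets of instruction tuples; the helpers \texttt{is\_feasible} and \texttt{ita} are hypothetical stand-ins for the feasibility check and the interpolant-automaton construction described below.
\begin{verbatim}
# Schematic rendering of the RTTAR loop (Algorithm 1). For illustration
# only: the languages L(A_P) and R are finite sets of instruction tuples.
def rttar(L_AP, is_feasible, ita):
    R = set()                        # refinement, initially empty
    while not L_AP <= R:             # Step 1: is L(A_P) a subset of R?
        w = next(iter(L_AP - R))     # pick some w in L(A_P) \ R
        if is_feasible(w):           # Step 2: Post(True, w) not in False
            return (False, w)        # w is a feasible counter-example
        R |= ita(w)                  # Step 3: add L(ITA(w)) to R
    return (True, None)              # L^t(P) is empty

# Toy usage: every trace is infeasible and generalises only to itself.
traces = {('i', 't0', 't2'), ('i', 't0', 't1', 't0', 't2')}
print(rttar(traces, lambda w: False, lambda w: {w}))  # (True, None)
\end{verbatim}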
\subsection{Checking Feasibility}
Given an arbitrary word $w \in \Sigma^\ast$, we can check whether $w$ is feasible by encoding the side-effects of each instruction in $w$ using linear arithmetic, as demonstrated in Examples~1 and~2.
We now define a function $\textit{Enc}$ for constructing such a constraint-system characterizing the feasibility of a given trace.
We first show how to encode the side-effects and feasibility of a single instruction
$\alpha \in \Sigma$.
Recall that $\alpha=(\gamma, \mu, \rho)$ where the three components are respectively the guard, the update, and the rates.
Assume that the variables\footnote{The union of the variables in $\gamma, \mu, \rho$.} in $\alpha$ are $X = \{x_1, x_2, \cdots, x_k\}$.
We can define the semantics of $\alpha$ using the standard unprimed\footnote{$\overline{x}$ denotes the vector of variables $\{x_1, x_2, \cdots, x_k\}$.} and primed variables ($X'$).
We assume that the guard and the updates can be defined by predicates and write $\alpha=(\varphi(\overline{x}),\mu(\overline{x}, \overline{x}'),\rho(\overline{x}))$ with:
\begin{itemize}
\item $\varphi(\overline{x}) \in \beta(X)$ is the guard of the instruction,
\item $\mu(\overline{x}, \overline{x}')$ a set of constraints in $\beta(X \cup X')$,
\item $\rho: X \rightarrow \mathbb{Q}$ defines the rates of the variables.
\end{itemize}
The effect of $\alpha$ from a valuation $\overline{x}''$ is composed of 1) a discrete step: if the guard holds, the updates lead to a new valuation $\overline{x}'$, and 2) a continuous step, {i.e.},~ time elapsing by $\delta$, leading to a new valuation $\overline{x}$. It can be encoded as follows:
\begin{equation}\label{eq-enco-inst}
\textit{Enc}(\alpha, \overline{x}'', \overline{x}', \overline{x}, \delta) = \varphi(\overline{x}'') \wedge \mu(\overline{x}'', \overline{x}') \wedge \overline{x} = \overline{x}' + (\rho, \delta) \wedge \delta \geq 0
\end{equation}
Let $K(\overline{x})$ be a set of valuations that can be defined as a constraint in $\beta(X)$.
It follows that $\fut{\alpha}(K(\overline{x}))$ is defined by:
\begin{equation}
\exists \delta , \overline{x}'',\overline{x}' \text{ such that } K(\overline{x}'') \wedge \textit{Enc}(\alpha, \overline{x}'', \overline{x}', \overline{x}, \delta) \label{eq-post-enc}
\end{equation}
In other terms, $\fut{\alpha}(K(\overline{x}))$ is not empty iff $K(\overline{x}'') \wedge \textit{Enc}(\alpha, \overline{x}'', \overline{x}', \overline{x}, \delta)$ is \emph{satisfiable}.
\smallskip
We can now define the encoding of a sequence of instructions $w = \alpha_0.\alpha_1. \cdots . \alpha_n \in \Sigma^\ast$.
Given a set of variables $W$, we define the corresponding set of super-scripted variables $W^k = \{ w^j, w \in W, 0 \leq j \leq k\}$.
Instead of using $x, x', x''$ we use super-scripted variables $\overline{x}^k$ (and $\overline{y}^k$ for the intermediate variables $x'$) to encode the side-effect of each instruction in the trace:
\[
\textit{Enc}(w) = \bigwedge_{i = 0}^n \textit{Enc}(\alpha_i, \overline{x}^i, \overline{y}^i, \overline{x}^{i+1}, \delta^i)
\]
It is straightforward to prove that the function
$\textit{Enc}:\Sigma^\ast\rightarrow\beta(X^{n+1} \cup Y^n \cup \{ \delta \}^n )$ constructs a constraint-system
characterizing exactly the feasibility of a word $w$:
\begin{fact}\label{lem:encoding}
For each $ w \in \Sigma^\ast$, $\textit{Post}(\mathit{True},w) \not\subseteq \mathit{False}$ iff $\textit{Enc}(w)$ is satisfiable.
\end{fact}
If the terms we build are in a logic supported by SMT-solvers ({e.g.},~ Linear Real Arithmetic), we can automatically check satisfiability.
If $\textit{Enc}(w)$ is satisfiable, we can even extract a \emph{model}, which provides witness values for the $\delta_k$.
Otherwise, if $\textit{Enc}(w)$ is unsatisfiable, there are several options for extracting \emph{reasons for unsatisfiability}: unsat cores or interpolants. The latter is discussed in the next section.
\smallskip
An example of an encoding for the real-time program $P_1$ (\figref{fig-ex1}) and the sequence $w_1 = i.t_0.t_2$ is given by the predicates in Equation \eqref{eq-1}--\eqref{eq-3}.
Hence the sequence $w_1 = i.t_0.t_2$ is feasible iff $\textit{Enc}(w_1) = C_0 \wedge C_1 \wedge C_2$ is satisfiable.
Using an SMT-solver, {e.g.},~ Z3, we can confirm that $\textit{Enc}(w_1)$ is unsatisfiable. The interpolating\footnote{The interpolating feature of Z3 has been phased out from version 4.6.x. However, there are alternative techniques to obtain inductive interpolants, {e.g.},~ using unsat cores~\cite{DBLP:conf/sigsoft/DietschHMNP17}.} solver Z3 can also generate a sequence of interpolants, $I_0 = x \leq y$ and $I_1 = x - y \leq z$, that provide a general reason for unsatisfiability and satisfy:
\begin{equation*}
\{ \mathit{True} \} \quad i \quad \{I_0\} \quad t_0 \quad \{I_1\} \quad t_2 \quad \{\mathit{False}\}\mathpunct.
\end{equation*}
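For readers who wish to experiment, the following Python sketch shows how such a feasibility check can be run with Z3 (the \texttt{z3-solver} package). The two-step trace encoded below is a hypothetical stand-in in the spirit of $\textit{Enc}(w_1)$, not the exact instructions of $P_1$: the final guard $y < x$ contradicts the fact that the encoding maintains $y - x = \delta^0 \geq 0$.
\begin{verbatim}
# Minimal trace-feasibility check with Z3 (hypothetical instructions).
from z3 import Reals, Real, Solver

x0, x1, x2 = Reals('x0 x1 x2')   # indexed copies x^0, x^1, x^2 of clock x
y0, y1, y2 = Reals('y0 y1 y2')   # indexed copies of clock y
d0, d1 = Real('d0'), Real('d1')  # time elapsing at each step

s = Solver()
# Step 0 (init): both clocks start at 0, then d0 time units elapse (rate 1).
s.add(x0 == 0, y0 == 0, d0 >= 0, x1 == x0 + d0, y1 == y0 + d0)
# Step 1: a transition guarded by x <= 1 that resets x, then d1 elapses.
s.add(x1 <= 1, d1 >= 0, x2 == 0 + d1, y2 == y1 + d1)
# Final guard y < x contradicts y - x = d0 >= 0.
s.add(y2 < x2)
print(s.check())  # unsat: the encoded trace is infeasible
\end{verbatim}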
We can use the interpolants to build interpolant automata as described in the next section.
\subsection{Construction of Interpolant Automata}\label{sec:construction}
\subsubsection{Inductive Interpolant}
When it is determined that a trace $w$ is infeasible, we could simply discard this single trace and continue searching for a different one.
However, the power of the TAR method is to generalize the infeasibility of a single trace $w$ into a family (regular set) of traces.
This regular set of infeasible traces is computed from \emph{a reason of infeasibility} of $w$ and is formally specified by an \emph{interpolant automaton}, $\textit{ITA}(w)$.
The reason for infeasibility itself can be the predicates obtained by computing strongest post-conditions or weakest pre-conditions, or anything in between, but it must be an \emph{inductive interpolant}\footnote{Strongest post-conditions and weakest pre-conditions provide inductive interpolants.}.
\begin{figure}[t]
\centering
\begin{tikzpicture}[node distance=1.5cm and 2.5cm,very thick]
%
\small
\node[initial left,state,rectangle] (in) {$\mathit{True}$};
\node[state, right of=in, above of=in,yshift=-1.0cm] (s2) {$I_0$};
\node[state, right of=in, below of=in,yshift=1.0cm] (s4) {$I'_0$};
\node[state, right of=s4] (s5) {$I'_1$};
\node[state, right of=s5] (s6) {$I'_2$};
\node[state, right of=s6] (s7) {$I'_3$};
\node[state, rectangle,right of=s7, above of=s7,yshift=-1.0cm, accepting] (s0) {$\mathit{False}$};
\node[state, right of=s5, above of=s6,yshift=-0.5cm] (s3) {$I'_1$};
\path[->]
(s2) edge node {$t_0$} (s3)
(s3) edge node {$t_2$} (s0)
(s4) edge[swap] node {$t_0$} (s5)
(s5) edge[bend left] node {$t_1$} (s6)
(s6) edge[bend left] node {$t_0$} (s5)
(s6) edge[swap] node {$t_0$} (s7)
(s7) edge[swap] node {$t_2$} (s0)
(in) edge node {$i$} (s2)
(in) edge[swap] node {$i$} (s4)
;
\end{tikzpicture}
\vspace*{-.3cm}
\caption{Interpolant automaton for ${\cal L}(\textit{ITA}(w_1)) \cup {\cal L}(\textit{ITA}(w_2))$.}
\label{fig-mix-int-clock-interpol}
\end{figure}
Given a conjunctive formula $f = C_0\wedge\cdots\wedge C_m$, if $f$ is unsatisfiable,
an \emph{inductive interpolant} is a sequence of predicates $I_0,\dots,I_{m-1}$ s.t.:
\begin{itemize}
\item $\mathit{True} \wedge C_0 \implies I_0$,
\item $I_{m-1} \wedge C_m \implies \mathit{False}$,
\item For each $0 \leq n < m-1$, $I_n \wedge C_{n+1} \implies I_{n+1}$, and the variables in $I_n$ appear in both $C_n$ and $C_{n+1}$ {i.e.},~ $\textit{Vars}(I_n) \subseteq \textit{Vars}(C_n) \cap \textit{Vars}(C_{n+1})$.
\end{itemize}
If the predicates $C_0, C_1, \cdots, C_m$ encode the side effects of a sequence of instructions $\alpha_0.\alpha_1.\cdots.\alpha_m$, then one can intuitively think of each interpolant as a \emph{sufficient} condition for infeasibility of the postfix of the trace, and this can be represented by a sequence of valid Hoare triples of the form $\{C\} \ a \ \{D\}$:
\begin{align*}
\{ \mathit{True} \} & \quad \alpha_0 \quad \{I_0\} \quad \alpha_1 \quad \{I_1\} \quad \cdots \quad \{ I_{m-1} \} \quad \alpha_m \quad \{\mathit{False}\}
\end{align*}
Consider the real-time program $P_2$ of \figref{fig-ex2} and the two infeasible untimed words $w_1=i.t_0.t_2$ and $w_2=i.t_0.t_1.t_0.t_2$.
Some inductive interpolants for $w_1$ and $w_2$ can be given by:
$I_0= y_0 \geq x_0 \wedge (k_0=0)$,
$I_1= y_1 \geq k_1$ for $w_1$ and
$I'_0=y_0 \geq x_0 \wedge k_0 \leq 0$,
$I'_1=y_1 \geq 1 \wedge k_1 \leq 0$,
$I'_2=y_2\geq k_2 + x_2$,
$I'_3=y_3\geq k_3 + 1$ for $w_2$.
From the inductive interpolants one can obtain valid Hoare triples by de-indexing the predicates in the inductive interpolants\footnote{This is a direct result of the encoding function $\textit{Enc}$. The interpolants can contain at most one version of each indexed variable.} as shown in Equations~\ref{eq-int1}--\ref{eq-int2}:
\begin{align}
\{ \mathit{True} \} & \quad i \quad \{\pi(I_0)\} \quad t_0 \quad \{ \pi(I_1) \} \quad t_2 \quad \{\mathit{False}\} \label{eq-int1} \\
\{ \mathit{True} \} & \quad i \quad \{ \pi(I'_0) \} \quad t_0 \quad \{\pi(I'_1) \} \quad t_1 \quad \{ \pi(I'_2)\} \quad t_0 \quad \{\pi(I'_3)\} \quad t_2 \quad \{\mathit{False}\} \label{eq-int2}
\end{align}
where $\pi(I_k)$ is the same as $I_k$ with each indexed variable $x_j$ replaced by $x$.
As can be seen in Equation~\ref{eq-int2}, the sequence contains two occurrences of $t_0$: this suggests that a loop occurs in the program, and this loop may be infeasible as well.
Formally, because $\textit{Post}(\pi(I'_2),t_0) \subseteq \pi(I'_1)$, any trace of the form $i.t_0.t_1.(t_0.t_1)^\ast.t_0.t_2$ is infeasible. This enables us to construct an interpolant automaton $\textit{ITA}(w_2)$ accepting the regular set of infeasible traces $i.t_0.t_1.(t_0.t_1)^\ast.t_0.t_2$.
Overall, because $w_1$ is also infeasible, the \emph{union} of the languages accepted by $\textit{ITA}(w_2)$ and $\textit{ITA}(w_1)$ is a set of infeasible traces as defined by the finite automaton in~\figref{fig-mix-int-clock-interpol}.
\smallskip
Given $w$ such that $\textit{Enc}(w)$ is unsatisfiable, we can always find an inductive interpolant: the strongest post-conditions $\textit{Post}(\mathit{True}, w[i])$ (or the weakest pre-conditions from $\mathit{False}$) define an inductive interpolant.
More generally, we have:
\begin{lemma}\label{lem-int-to-hoaretriple}
Let $w = \alpha_0.\alpha_1.\cdots.\alpha_m \in \Sigma^\ast$.
If $\textit{Enc}(w) = C_0 \wedge C_1 \wedge \cdots \wedge C_{m}$ is unsatisfiable and $I_0, \cdots, I_{m-1}$ is an inductive interpolant for $\textit{Enc}(w)$, the following sequence of Hoare triples
\[
\{\mathit{True}\} \quad \alpha_0 \quad \{ \pi(I_0)\} \quad \alpha_1 \quad \{ \pi(I_1)\} \quad \cdots \quad
\alpha_{m-1} \quad \{ \pi(I_{m-1}) \} \quad \alpha_m \quad \{\mathit{False}\}
\]
is valid.
\end{lemma}
\begin{proof}
The proof follows from the encoding $\textit{Enc}(w)$ and the fact that each $I_k$ is included in the weakest pre-condition $\textit{wp}(\mathit{False},\alpha_{k+1}.\cdots.\alpha_m)$, which can be proved by induction using the property of inductive interpolants.
\end{proof}
\smallskip
\subsubsection{Interpolant Automata}
Let us formalize the interpolant-automata construction.
Let $w =\alpha_0.\alpha_1. \cdots.\alpha_m \in \Sigma^\ast$, $\textit{Enc}(w) = C_0 \wedge \dots \wedge C_{m}$ and assume $\textit{Post}(\mathit{True},w) \subseteq \mathit{False}$ {i.e.},~ $\textit{Enc}(w)$ is unsatisfiable (Fact~\ref{lem:encoding}).
Let $I_0,\dots I_{m-1}$ be an inductive interpolant for $C_0 \wedge \dots \wedge C_{m}$.
We can construct an interpolant automaton for $w$, $\textit{ITA}(w)=(Q^w,q^w_0,\Sigma^w,\Delta^w,F^w)$ as follows:
\begin{itemize}
\item $Q^w=\{\mathit{True},\mathit{False},\pi(I_0),\cdots,\pi(I_{m-1})\}$, (note that if two de-indexed interpolants are the same they account for one state only),
\item $\Sigma^w = \{ \alpha_0,\alpha_1, \cdots,\alpha_m \}$,
\item $F^w = \{ \mathit{False} \}$,
\item $\Delta^w$ satisfies following conditions:
\begin{enumerate}
\item $(\mathit{True}, \alpha_0, \pi(I_0)) \in \Delta^w$,
\item $(\pi(I_{m-1}), \alpha_m, \mathit{False}) \in \Delta^w$,
\item $\forall a \in \Sigma^w, \forall 0 \leq k,j \leq m - 1$, if $\textit{Post}(\pi(I_k),a) \subseteq \pi(I_j)$ then $(\pi(I_k), a, \pi(I_{j})) \in \Delta^w$.
\end{enumerate}
\end{itemize}
Notice that, as $\textit{Post}(\pi(I_k),\alpha_{k+1}) \subseteq \pi(I_{k+1})$, the word $w$ itself is accepted by $\textit{ITA}(w)$, and hence $\textit{ITA}(w)$ is never empty.
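Rule~3 requires deciding inclusions of the form $\textit{Post}(\pi(I_k),a) \subseteq \pi(I_j)$. Once $\textit{Post}$ is expressed as a formula via the encoding \eqref{eq-post-enc}, the inclusion $F \subseteq G$ reduces to the unsatisfiability of $F \wedge \neg G$ and can be discharged by an SMT-solver. A small Python/Z3 sketch of this check, on hypothetical predicates:
\begin{verbatim}
# Inclusion F subseteq G as unsatisfiability of (F and not G).
from z3 import Reals, Solver, And, Not, unsat

def included(F, G):
    s = Solver()
    s.add(And(F, Not(G)))
    return s.check() == unsat

x, y = Reals('x y')
I_k = And(x >= 0, y >= x + 1)   # hypothetical de-indexed interpolant
I_j = (y >= 1)                  # hypothetical target predicate
print(included(I_k, I_j))       # True: every state in I_k satisfies I_j
\end{verbatim}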
\begin{theorem}[Interpolant Automata]\label{lem:interpolants}
Let $w$ be an infeasible word over $P$, then for all $w'\in{\cal L}(\textit{ITA}(w))$, $w'$ is infeasible.
\end{theorem}
\begin{proof}
This proof is essentially the same as the original one in~\cite{traceref}.
The proof uses rule~3 in the construction of $\textit{ITA}(w)$: every word accepted by $\textit{ITA}(w)$
goes through a sequence of states that forms a sequence of valid Hoare triples and ends up in $\mathit{False}$.
It follows that if $w' \in {\cal L}(\textit{ITA}(w))$, then $\textit{Post}(\mathit{True}, w') \subseteq \mathit{False}$.
\end{proof}
\subsection{Union of Interpolant Automata}\label{sec:union}
In the TAR algorithm we construct interpolant automata at each iteration and the current refinement $R$ is the \emph{union} of the regular languages ${\cal L}(\textit{ITA}(w_k))$ for each infeasible $w_k$.
The union can be computed using standard automata-theoretic operations.
This assumes that we somehow \emph{forget} the predicates associated with each state of an interpolant automaton.
In this section we introduce a new technique to re-use the information computed in each $\textit{ITA}(w_k)$ and obtain larger refinements.
\smallskip
Let $A=(Q,q_0, \Sigma, \Delta, F)$ be a finite automaton such that each $q \in Q$ is a predicate in $\varphi(X)$.
We say that $A$ is \emph{sound} if the transition relation $\Delta$ satisfies: $(I,\alpha,J) \in \Delta$ implies that
$\fut{\alpha}(I) \subseteq J$ (or $\textit{Post}(I,\alpha) \subseteq J$).
Let $R=(Q^R, \{\mathit{True}\}, \Sigma^R, \Delta^R, \{ \mathit{False}\})$ be a sound finite automaton that accepts only infeasible traces.
Let $w \in \Sigma^\ast$ with $w$ infeasible. The automaton $\textit{ITA}(w)=(Q^w, \{\mathit{True}\}, \Sigma^w, \Delta^w, \{ \mathit{False}\})$ built as described in Section~\ref{sec:construction} is sound. We can define an \emph{extended union}, $R \uplus \textit{ITA}(w) = ( Q^R \cup Q^w, \{ \mathit{True} \}, \Sigma^R \cup \Sigma^w, \Delta^{R \uplus \textit{ITA}(w)}, \{\mathit{False}\} )$ of $R$ and $\textit{ITA}(w)$ with:
\[
\Delta^{R \uplus \textit{ITA}(w)} = \{ (p, \alpha, p') \mid \exists (q,\alpha,q') \in \Delta^R \cup \Delta^w \text{ s.t. } p\subseteq q\text{ and }p'\supseteq q'\}.\label{union2}
\]
It is easy to see that ${\cal L}(R \uplus \textit{ITA}(w)) \supseteq {\cal L}(R) \cup {\cal L}(\textit{ITA}(w))$ but also:
\begin{theorem}
Let $w' \in {\cal L}(R \uplus \textit{ITA}(w))$. Then $\textit{Post}(\mathit{True},w') \subseteq \mathit{False}$.
\end{theorem}
\begin{proof}
Each transition $(p, \alpha, p')$ in $R \uplus \textit{ITA}(w)$ corresponds to a valid Hoare triple:
it is either in $\Delta^R$ or $\Delta^w$, and then it is valid by construction, or it is weaker than an established Hoare triple in $\Delta^R$ or $\Delta^w$.
\end{proof}
This theorem allows us to use the $\uplus$ operator in Algorithm~\ref{algo-1} instead of the standard union of regular languages. The advantage is that we re-use already established Hoare triples to build
a larger refinement at each iteration.
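A sketch of how $\Delta^{R \uplus \textit{ITA}(w)}$ can be computed from the established Hoare triples, reusing an SMT-backed inclusion test such as the hypothetical \texttt{included} helper sketched earlier (states are represented here by names that stand for their predicates):
\begin{verbatim}
# Extended union of transition relations: (p, a, p') is added whenever an
# established triple (q, a, q') exists with p subseteq q and q' subseteq p'.
def extended_union(delta_r, delta_w, included):
    established = delta_r | delta_w       # sets of (p, a, q) triples
    states = {s for (p, _, q) in established for s in (p, q)}
    result = set(established)
    for (q, a, q2) in established:
        for p in states:
            for p2 in states:
                if included(p, q) and included(q2, p2):
                    result.add((p, a, p2))
    return result
\end{verbatim}
Since \texttt{included} compares the predicates behind the state names, each added transition is a weakening of an already established Hoare triple, exactly as in the proof above.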
\iffalse
\smallskip
While Section~\ref{sec:construction} outlines the construction of a single interpolant automaton, the union operation $R \cup {\cal L}(\textit{ITA}(w))$ is left unspecified.
As $R$ is a regular language, we can assume that $R$ itself is represented as a NFA, and thus the union operation can be seen as the union of two finite automata.
Let $\mathcal{A}^R=(Q^R,q^R_0,\Sigma^R,\Delta_w^R,F^R)$ denote the current representation of the language $R$, then
clearly, for achieving the simple union $R \cup {\cal L}(\textit{ITA}(w))$, one can simply treat every location in $\mathcal{A}^R$ and $\textit{ITA}(w)$ as unique and via conventional finite automata-constructions, compute an interpolant automata capturing exactly the language $\mathcal{A}^R \cup {\cal L}(\textit{ITA}(w))$.
However, as the construction used in the $\textit{ITA}$ algorithm preserves some semantics via the naming of the states (namely the actually obtained interpolants), we can utilize this information to construct more elaborate unions.
The basic idea is that a given state $q\in Q^R$ of $\mathcal{A}^R$ in fact represents a trace-based invariant over the state-space of $P$; after any prefix $w$ leading to $q$ we know that any accepted postfix $u$ from $q$ is infeasible from any state satisfying the predicate used for the labeling of $q$.
As a matter of fact, we already know that the transitions of the automata constructed with the $\textit{ITA}$ algorithm represent Hoare triples, and thus we can simply let the automaton $\mathcal{A}^R$ construct a transition-function directly from said triples.
Let $\mathcal{A}_w^I=(Q_w^I,q^I_0,\Sigma^I,\Delta_w^I,F^I)=\textit{ITA}(w)$ the interpolant automaton for the word $w\in\Sigma^\ast$, then we can construct the unioned automata $\mathcal{A}^R_w=(Q^R_w,q^R_0,\Sigma^R,\Delta^R_w,F^R)$ s.t. ${\cal L}(\mathcal{A}^R_w)\supseteq{\cal L}(\mathcal{A}^R)\cup{\cal L}(\mathcal{A}^I_w)$.
\begin{enumerate}
\item Let $Q^R_w=Q^R\cup Q_w^I$, and
\item let $\Delta^I=\{(p,\alpha,p')\mid \exists (q,\alpha,q')\in\Delta_R\cup\Delta_w^I\text{ s.t. }p\subseteq q\text{ and }p'\supseteq q'\}$.\label{union2}
\end{enumerate}
The intuition of the construction is as follows; $(q,\alpha,q')\in\Delta_R\cup\Delta_w^I$ implies that we have already proven that from any state of our real-time program $P$ which is ``only'' in $p\subseteq q$, if we exhibit an $\alpha$ instruction, then we are bound to reach a state ``definitely" in $p'\supseteq q'$.
So clearly, what is invariant for $q$ is also invariant for any subset $p$.
In fact, what we here utilize is that any infeasible trace from $q$ surely also is infeasible from $p$ {i.e.},~ we can say that $p$ simulates $q$ in terms of infeasibility and should thus also in terms of the regular language accepted.
We can thus state the following results on language preservation.
\begin{theorem}[Language Preservation]
Let $\mathcal{A}^R$ and $\mathcal{A}^I_w$ be two interpolant automata then ${\cal L}(\mathcal{A}^R)\cup{\cal L}(\mathcal{A}^I_w)\subseteq {\cal L}(\mathcal{A}^R\cup\mathcal{A}^I_w)$.
\end{theorem}
\begin{proof}
We can easily prove the theorem, as the transition-relation constructed implies that $\Delta^R_w\supseteq\Delta_w^I\cup\Delta_u^I$, given that $p\subseteq q$ also holds if $p=q$ (and similarly for $p',q'$).
\end{proof}
Furthermore, we can state that all words accepted by the automata constructed via union are also infeasible.
\begin{theorem}[Soundness Preservation]
Let $\mathcal{A}^R$ and $\mathcal{A}^I_w$ be two sound interpolant automata then the infeasibility of all words $w\in{\cal L}(\mathcal{A}^R)\cup{\cal L}(\mathcal{A}^I_w)$ on $P$ implies the infeasibility of all words $u\in{\cal L}(\mathcal{A}^R\cup\mathcal{A}^I_w)$ on $P$.
\end{theorem}
\begin{proof}
Notice that soundness simply requires that $\textit{Post}(q,\alpha)\subseteq q'$ for all $(q,\alpha,q')\in\Delta^R_w$.
We know that this trivially holds for both $\Delta^R$ and $\Delta^I_w$, thus $\Delta^R\cup\Delta^I_w$ is also a sound transition relation.
Let us therefore focus on the transitions which fall into the set $\Delta^R_w\setminus(\Delta^R\cup\Delta^I_w)$.
Such a transition must stem from step \ref{union2} of our automata union where for some $p\neq q$ or $p'\neq q'$ s.t. $(q,\alpha,q')\in\Delta^R\cup\Delta^I_w$, $p\subseteq q$, $p'\supseteq q'$ and $(p,\alpha,p')\not\in\Delta^R\cup\Delta^I_w$.
As $p\subseteq q$ we know that $\textit{Post}(p,\alpha)\subseteq\textit{Post}(q,\alpha)$ and thus $\textit{Post}(p,\alpha)\subseteq q'$.
As we also know that $p'\supseteq q'$ then we have that $\textit{Post}(q,\alpha)\subseteq p'$ and $\textit{Post}(p,\alpha)\subseteq p'$.
\end{proof}
\fi
\subsection{Feasibility Beyond Timed Automata}
Satisfiability can be checked with an SMT-solver (decision procedures exist for many useful theories).
In the case of timed automata and stopwatch automata, the feasibility of a trace can be encoded in linear arithmetic. The corresponding theory, Linear Real Arithmetic (LRA), is decidable and supported by most SMT-solvers.
It is also possible to encode non-linear constraints (non-linear guards and assignments).
In the latter case, the SMT-solver may not be able to provide an answer to the satisfiability problem, as
non-linear theories are in general undecidable. However, we can still build on a semi-decision procedure of the SMT-solver and, if it provides an answer, obtain the status of a trace (feasible or not).
\subsection{Sufficient Conditions for Termination}
Let us now construct a set of criteria on a real-time program $P=((Q,q_0,\Sigma, \Delta, F),\semof{\cdot})$ s.t. our proposed method is guaranteed to terminate.
\begin{lemma}[Termination]
The algorithm presented in \figref{fig-tar} terminates if the following three conditions hold.
\begin{enumerate}
\item For any word $\sigma\in\Sigma^\ast$, $\semof{\sigma}$ is expressible within a decidable theory (supported by the solver), and\label{term1}
\item the statespace of $P$ has a finite representation, and\label{term2}
\item the solver used returns interpolants within the finite statespace representation.\label{term3}
\end{enumerate}
\end{lemma}
\begin{proof}
First consider the algorithm presented in \figref{fig-tar}: in each iteration of the loop, $R$ grows, and thus so does the NFA $\mathcal{A}^R$ representing $R$.
As per the construction presented in Section~\ref{sec:union}, the transition-function of $\mathcal{A}^R$ increases by at least one transition in each iteration in Step 3.
If not, either the selection of $\sigma$ between Step 1 and Step 2 is violated, or the construction of $\textit{ITA}$ in Step 3 is.
From Conditions \ref{term2} and \ref{term3} we have that the statespace is finitely representable and that these representatives are used by the solver.
Thus the interpolant automaton also has a finite set of states, as per the construction of Section~\ref{sec:union}.
Together with the finiteness of the set of instructions, this implies that the transition-function of the interpolant automaton must also be finite.
Hence, the algorithm can (at most) introduce a transition between each pair of states for each instruction, but must introduce at least one new transition in every iteration.
\end{proof}
As this termination condition relies on the solver, it is heavily dependent on the construction of the solver.
However, if we consider the class of real-time programs captured by Timed Automata, we know that condition \ref{term1} is satisfied (the theory is Linear Real Arithmetic), and condition \ref{term2} is satisfied via the region-graph construction.
This leaves the construction of a solver satisfying condition \ref{term3}, which should be feasible already from condition \ref{term2}, and is practically achievable for TA via extrapolation techniques and difference bound matrices (or, for systems with only non-strict guards, timed darts or integer representatives).
\subsection{\bfseries Maximal Safe Initial State Problem}
Given a real-time program $P$, the objective is to determine a set of \emph{initial valuations} $I \subseteq [V \rightarrow \mathbb{R}]$
such that, when we start the program in $I$, ${\cal L}^t(P)$ is empty.
Given a constraint $I \in \beta(V)$, we define the corresponding \emph{assume} instruction by:
$\textit{Assume}(I) = (I, \textit{Id}, \overline{0})$. This instruction leaves all the variables unchanged (the discrete update is the identity function and the rate vector is $\overline{0}$) and thus acts as a guard only.
Let $P=(Q, q_0, \Sigma, \Delta, F)$ be a real-time program and $I \in \beta(V)$. We define the
real-time program $\textit{Assume}(I).P=(Q, \{ \iota \}, \Sigma \cup \{ \textit{Assume}(I) \}, \Delta \cup \{(\iota,\textit{Assume}(I),q_0)\}, F)$.
\smallskip
The \emph{\bf \itshape maximal safe initial state problem} asks the following:
\begin{quote}
\bf Given a real-time program $P$, find a maximal $I \in \beta(V)$ s.t. ${\cal L}^t(\textit{Assume}(I).P) = \varnothing$.
\end{quote}
\subsection{\bfseries Semi-Algorithm for the Maximal Safe Initial State Problem}
Let $w \in {\cal L}(\textit{Assume}(I).P)$ be a feasible word. It follows that $\textit{Enc}(w)$ must be satisfiable.
We can define the set of initial values for which $\textit{Enc}(w)$ is satisfiable by projecting away all the variables in the encoding $\textit{Enc}(w)$ except the ones indexed by $0$. Let $I_0 = \exists (\textit{Vars}(\textit{Enc}(w)) \setminus X^0) .\textit{Enc}(w)$ be the resulting (existentially quantified) predicate and $\pi(I_0)$ be the corresponding constraint on the program variables without indices. We let $\exists_i(w) = \pi(I_0)$.
It follows that $\exists_i(w)$ is the maximal set of valuations for which $w$ is feasible.
Note that existential quantification for the theory of Linear Real Arithmetic is within the theory via Fourier–Motzkin-elimination -- hence the computation of $\exists_i(w)$ by an SMT-solver only needs support for Linear Real Arithmetic when $P$ encodes a linear hybrid, stopwatch or timed automaton.\footnote{This idea of using Fourier-Motzkin elimination has already been proposed~\cite{10.1007/3-540-48320-9_14} in the context of timed Petri nets.}
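The projection $\exists_i(w)$ can be computed directly with the quantifier-elimination tactic of an SMT-solver. The following Python/Z3 sketch projects a toy encoding (hypothetical constraints, not the encoding of a particular program from the paper) onto its $0$-indexed variables:
\begin{verbatim}
# Projecting a toy Enc(w) onto the 0-indexed variables with Z3's qe tactic.
from z3 import Reals, Real, Exists, Tactic, And

x0, y0, x1, y1 = Reals('x0 y0 x1 y1')
d0 = Real('d0')
enc = And(d0 >= 0, x1 == x0 + d0, y1 == y0 + d0, x1 <= 1, y1 > 2)
projected = Exists([x1, y1, d0], enc)   # keep only x0 and y0
print(Tactic('qe')(projected))          # quantifier-free formula on x0, y0
\end{verbatim}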
\smallskip
The TAR-based semi-algorithm for the maximal safe initial state problem is presented in \figref{fig-rob-emptiness}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1,node distance=1.4cm and 1.5cm, very thick, bend angle=20,bend angle=10]
\fontsize{10}{11}\selectfont
\node[module](0,0) (init) {\textbf{1:} ${\cal L}^t(\textit{Assume}(I).P) = \varnothing$?};
\node[below of=init,xshift=-0cm,green] (nobug) {Maximal safe init is $I$};
\node[above of=init,xshift=-2.9cm] (start) {$I := \mathit{True}$};
\node[module,right of=init, xshift=6cm] (step3) {\textbf{2:} $I := I \wedge \neg \exists_i(\textit{Unt}(\sigma))$};
\coordinate[above of=step3] (o3);
\coordinate[above of=init] (oi);
\path[->] (init.south) edge[draw=green,swap] node {\green{Yes}} (nobug);
\draw[->,draw=red] (init.north) node[xshift=-0.3cm,yshift=.5cm] {\red{No}} -- (oi) -- node[yshift=0cm] {
\begin{tabular}{c}
\red{Let $\sigma \in {\cal L}^t(\textit{Assume}(I).P)$}
\end{tabular}} (o3) -- (step3.north);
\draw[->] (step3) -- (init);
\draw[->] (start) |- ($(init.west)$);
\end{tikzpicture}
\caption{Semi-algorithm $\mathit{SafeInit}$.}
\label{fig-rob-emptiness}
\end{figure}
\noindent The semi-algorithm in~\figref{fig-rob-emptiness} works as follows:
\begin{enumerate}
\item initially, $I = \mathit{True}$;
\item using the semi-algorithm~\ref{algo-1}, check whether ${\cal L}^t(\textit{Assume}(I).P)$ is empty;
\item if so, $P$ does not accept any timed word when we start from $\semof{I}$;
\item otherwise, there is a witness word $\sigma \in {\cal L}^t(\textit{Assume}(I).P)$,
implying that $I\wedge\textit{Enc}(\textit{Unt}(\sigma))$ is satisfiable.
It follows that $\exists_i(\textit{Unt}(\sigma))$ cannot be part of the maximal
set; it is used to strengthen $I$, and we repeat from step~2 (see the sketch after this list).
\end{enumerate}
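Schematically, in Python, with the emptiness check, witness extraction, and projection passed in as the hypothetical helpers \texttt{check\_empty}, \texttt{witness}, and \texttt{exists\_i}:
\begin{verbatim}
# Schematic SafeInit loop; I is a symbolic constraint (a Z3 formula).
from z3 import BoolVal, And, Not

def safe_init(check_empty, witness, exists_i):
    I = BoolVal(True)
    while not check_empty(I):            # L^t(Assume(I).P) nonempty?
        sigma = witness(I)               # feasible untimed witness word
        I = And(I, Not(exists_i(sigma))) # strengthen I
    return I                             # maximal safe initial set
\end{verbatim}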
If the semi-algorithm terminates, it computes exactly \textbf{the} maximal set of values for which the system is safe ($I$), as captured formally by Theorem~\ref{thm-max-constraint}.
\begin{theorem}\label{thm-max-constraint}
If the semi-algorithm $\mathit{SafeInit}$ terminates and outputs $I$, then:
\begin{enumerate}
\item ${\cal L}^t(\textit{Assume}(I).P) = \varnothing$ and
\item for any $I'\in \beta(V)$, ${\cal L}^t(\textit{Assume}(I').P) = \varnothing$ implies $I' \subseteq I$.
\end{enumerate}
\end{theorem}
\begin{proof}
The fact that ${\cal L}^t(\textit{Assume}(I).P) = \varnothing$ follows from termination.
The maximality of $I$ is an invariant of the semi-algorithm: at the beginning, $I = \mathit{True}$, which is clearly maximal. At each iteration, we may subtract a set of valuations $K$ from the previously computed $I$, but these valuations are all such that ${\cal L}^t(\textit{Assume}(\nu).P) \neq \varnothing$ for any $\nu\in K$, by definition of existential quantification.
Hence, every time a set of valuations is removed by strengthening $I$, only unsafe initial valuations are removed.
It follows that if $\mathit{SafeInit}$ terminates, $I$ is maximal.
\end{proof}
\subsection{Parameter Synthesis}
Let $P=(Q, q_0, \Sigma, \Delta, F)$ be a real-time program over a set of variables $X \cup U$ such that: $\forall u \in U, \forall (g,\mu,\rho) \in \Delta, (\nu,
\nu') \in \mu \implies \nu(u) = \nu'(u)$ and $\rho(u) = 0$. In words, variables in $U$ are constant variables. Note that they can appear in the guard $g$.
\smallskip
The \emph{\bf \itshape parameter synthesis problem} asks the following:
\begin{quote}
\bf \hskip-.9em Given a real-time program $P$, find a maximal set $I \in \beta(U)$ s.t. ${\cal L}^t(\textit{Assume}(I).P) = \varnothing$.
\end{quote}
The \emph{parameter synthesis problem} is a special case of
the maximal safe initial state problem.
Indeed, solving the maximal safe initial state problem allows us to find the maximal set of parameters such that ${\cal L}^t(P) = \varnothing$.
Let $I$ be a solution\footnote{For now assume there is a unique maximal solution.} to the maximal safe initial state problem. Then $\exists (\textit{Vars}(P) \setminus U).I$ is a maximal set of parameter values such that ${\cal L}^t(P) = \varnothing$.
\subsection{Robustness Checking}\label{sec:robust}
Another remarkable feature of our technique is that it can readily be used to check the \emph{robustness} of real-time programs, and hence of timed automata.
In essence, checking robustness amounts to enlarging the guards of a real-time program $P$ by some $\epsilon > 0$.
The resulting program is $P_\epsilon$.
\smallskip
The \emph{\bf \itshape robustness problem} asks the following:
\begin{quote}
Given a real-time program $P$, is there some $\epsilon > 0$, s.t. ${\cal L}^t(P_\epsilon) = \varnothing$.
\end{quote}
Using our method we can solve the \emph{\bf \itshape robustness synthesis problem} which asks the following:
\begin{quote}
Given a real-time program $P$, find a maximal $\epsilon > 0$, s.t. ${\cal L}^t(P_\epsilon) = \varnothing$.
\end{quote}
This problem asks for a witness (maximal) value for $\epsilon$.
The robustness synthesis is a special case of the parameter synthesis problem where $\epsilon$ is a parameter of the program $P$.
Note that in our experiments (next section), we assume that $P$ is robust and in this case we can compute a maximal value for $\epsilon$.
Proving that a program is non-robust requires proving the \textit{feasibility} of infinite traces for ever-decreasing $\epsilon$.
We have developed techniques to do so (similar to proving termination for standard programs), but this is still work in progress.
\section{Introduction}\noindent
\nocite{ForbesEtAl_IP_MBR}
In a recent paper in this journal, Forbes, Wright, Markon and Krueger \citeyearpar[][henceforth FWMK]{ForbesEtAl_IP_MBR} voiced the concern that Gaussian graphical models (GGMs) that are estimated from partial correlations wrongfully remove crucial information from the data: The variance that is shared by its constituents. This concern is fundamental to their evaluation of the use of network models in psychopathology \citep[see, for instance;][]{ForbesEtAl_2017, ForbesEtAl_2019_WorldPsychiatry}. FWMK are under the impression that if an edge between two variables is estimated using a partial correlation, ``the edge is based on the variance shared by each pair of [variables] \emph{after removing the variance they share with all other [variables] in the network} (p. 13, their italics).\footnote{Even though their concern is only about the use of network models in the context of psychopathology, their critique is about a statistical concept or method and not about context. We have therefore replaced the word "symptom" with "variable" in this quote to express the generality of their concern.}
When the network comprises many variables, a large part of the covariance between the two focal variables is shared with other variables in the network. As a result, the region of unique covariance between the focal variables shrinks with the size of the network, and estimated edges become ``unreliable'' and primarily made up of ``random and systematic error'' (p. 14).
Here we show that the concerns of FWMK are wrong.
We illustrate the concern of FWMK for a three-variable network in Figure \ref{fig:explained-variance}, which we will also use later in Section \ref{sec:partial-correlation-regression}. We aim to obtain the relation between the variables $X_1$ and $X_3$ at the population level. Their covariance comprises two parts: one part that embodies the variance that is shared only by the two variables $X_1$ and $X_3$, and one part that embodies the variance that is shared by $X_1$ and $X_3$ in conjunction with the other variable $X_2$. In Figure \ref{fig:explained-variance}(a), we see a representation of the overlap in variance between all three variables.
In Figure \ref{fig:explained-variance}(b), we see the complete covariance between $X_1$ and $X_3$. In Figure \ref{fig:explained-variance}(c), on the other hand, the overlap between $X_1$ and $X_2$ has been removed (partialled out), and only the unique contributions of $X_1$ and $X_2$ to $X_3$ are considered. The concern of FWMK is that partial correlations remove the overlap between $X_1$ and $X_2$ in estimating the relation between $X_1$ and $X_3$. We will demonstrate, however, that this view is incorrect. That is, all the variance of $X_3$ that $X_1$ and $X_2$ \textit{could} explain \textit{is} explained using partial correlations, and so no information is lost.
We provide two arguments that show that no information is lost when using partial covariances (or regression coefficients) instead of covariances. The first is that the partial covariance matrix is in one-to-one correspondence (i.e., a bijection) with the covariance matrix. This implies that one can go back and forth between the two worlds, and that the partial covariance and covariance worlds are essentially the same. The second argument uses the regression perspective and shows that the explained variance ($R^{2}$) contains all shared variance from the predictors, and so nothing is lost.
\def(0,0) circle (1cm){(0,0) circle (1cm)}
\def(45:1.5cm) circle (1cm){(45:1.5cm) circle (1cm)}
\def(0:1.5cm) circle (1cm){(0:1.5cm) circle (1cm)}
\begin{figure}[t]\centering
\begin{tabular}{ c @{\hspace{5em}} c }
&
\begin{tikzpicture}[scale = 1.2]
\draw (0,0) circle (1cm) node[below] {$X_{1}$};
\draw (45:1.5cm) circle (1cm) node [above] {$X_{3}$};
\draw (0:1.5cm) circle (1cm) node [below] {$X_{2}$};
\draw (-1.5,0) node [left,above] {(b)};
\begin{scope}
\clip (0,0) circle (1cm);
\fill[gray] (45:1.5cm) circle (1cm);
\end{scope}
\end{tikzpicture}
\\[-4em]
\begin{tikzpicture}[scale = 1.2]
\draw (0,0) circle (1cm) node[below] {$X_{1}$};
\draw (45:1.5cm) circle (1cm) node [above] {$X_{3}$};
\draw (0:1.5cm) circle (1cm) node [below] {$X_{2}$};
\draw (-1.5,0) node [left, above] {(a)};
\begin{scope}
\clip (0,0) circle (1cm);
\fill[gray] (45:1.5cm) circle (1cm);
\end{scope}
\begin{scope}
\clip (45:1.5cm) circle (1cm);
\fill[gray] (0:1.5cm) circle (1cm);
\end{scope}
\begin{scope}
\clip (0,0) circle (1cm);
\clip (45:1.5cm) circle (1cm);
\fill[gray] (0:1.5cm) circle (1cm);
\end{scope}
\end{tikzpicture}
&\\[-4em]
&
\begin{tikzpicture}[scale = 1.2]
\draw (0,0) circle (1cm) node[below] {$X_{1}$};
\draw (45:1.5cm) circle (1cm) node [above] {$X_{3}$};
\draw (0:1.5cm) circle (1cm) node [below] {$X_{2}$};
\draw (-1.5,0) node [left, above] {(c)};
\begin{scope}
\clip (0,0) circle (1cm);
\fill[gray] (45:1.5cm) circle (1cm);
\end{scope}
\begin{scope}
\clip (45:1.5cm) circle (1cm);
\fill[gray] (0:1.5cm) circle (1cm);
\end{scope}
\begin{scope}
\clip (0,0) circle (1cm);
\clip (45:1.5cm) circle (1cm);
\fill[white] (0:1.5cm) circle (1cm);
\end{scope}
\end{tikzpicture}
\end{tabular}
\caption{A network of three variables (variables are nodes) or the regression of node $X_{3}$ on the predictors $X_{1}$ and $X_{2}$. Part (a) shows all variance that the predictors share with the dependent variable. Part (b) shows the contribution of $X_1$ to the explained variance in regression (i.e., $R^{2}$). Part (c) illustrates the variance that comes from each regressor separately. The shared variance is removed from the contribution of the regressors to prevent bias in the associated coefficients.}
\label{fig:explained-variance}
\end{figure}
FWMK's conviction is that partial correlations only capture the unique covariances (the shared variance between $X_1$ and $X_3$ that does not overlap with $X_2$) but not the shared covariances (the shared variance between $X_1$, $X_2$, and $X_3$).
If the partial correlation indeed excludes the shared covariances, then the data that come from a unidimensional latent variable model (ULVM) should be associated with an empty network (no edges), as there are no unique covariances in a ULVM. This result is crucial since, in psychology, we often use instruments that are consistent with low-dimensional or unidimensional latent variable models, such as IQ-tests. If partial correlations indeed only use the unique parts of the covariances, networks based on the ULVM -- data of IQ tests, for example -- would at the population level be empty and contain no edges. As a result, the GGM would be useless in the case of the ULVM, even at a descriptive level, as it would be unable to convey the most basic observation in intelligence research, the positive manifold. We agree that if this view were correct, the future of GGMs applied to psychological data would be dire.
It is hard to overstate the severity of the above conclusion about GGMs. However, it also suggests that its premises must be wrong, as it is well-known that if the data come from a unidimensional latent variable model, the estimated network is going to be fully-connected and not empty. That these networks are fully-connected was theoretically and empirically shown in the case of binary variables, using a connection between binary latent variable models and binary network models \citep{EpskampEtAl_2018_HoP, MarsmanEtAL_2018_MBR, MarsmanEtAl2015SciRep}, and was also proven in the general case by, for example, \citet{HollandRosenbaum1986}. At a minimum, this implies that there is an essential element missing in the understanding of GGMs and partial correlations, and this paper aims to fill that gap.
The remainder of this paper comprises three parts. In the first part, we formally introduce the ULVM and GGM and consider the role that partial correlations play in the estimation of a GGM. In the second part, we will analyze the theoretical relationship between the GGM and the ULVM and show that one indeed does expect to obtain a fully-connected network from data that come from the ULVM. In the third part, we revisit the relationship between linear regression, the GGM, and partial correlation to prove that the GGM estimated from partial correlations indeed conveys the shared variance.
\section{Models}\noindent
In this section, we introduce the unidimensional latent variable model (ULVM) and the GGM. We show how the assumptions about the ULVM's regression from the latent variable to the observed variables lead to a particularly simple form of the population covariance matrix. We will use the expression that we obtain for the covariance matrix to relate the ULVM to the GGM in the next section. There, we will also show that for the ULVM the observed partial correlations are all positive. That this proves our first point, namely that a GGM applied to data coming from a ULVM will be fully-connected and not empty, relies on the fact that estimating the edges in a GGM is equivalent to determining the nonzero elements of the matrix of partial correlations. In this section, we show that determining the matrix of partial correlations is equivalent to obtaining the independence relations between variables in the network, and we provide a small example to illustrate the principle.
\subsection{The Unidimensional Latent Variable Model}
\label{sec:latent-variable-model}
The ULVM assumes that there is a single latent variable $\eta$ (a random variable) that can explain away the correlations between the observed random variables $X_{1},X_{2},\ldots,X_{p}$. In other words, we have that $X_{i}$ and $X_{j}$ are independent conditional on the latent variable $\eta$. This conditional independence implies that the correlation between $X_{i}$ and $X_{j}$ is 0 given $\eta$. This assumption is often called local independence and is written as $X_{i}\indepen X_{j}\mid \eta$, where the symbol $\indepen$ stands for statistical independence \citep{Dawid1979}. The relation between each observed variable $X_{i}$ and the latent variable $\eta$ is often assumed linear, that is,
\begin{align}
X_{i} = \lambda_{i}\eta + e_{i}
\end{align}
where $\lambda_{i}$ is the loading (regression coefficient) for the relation between the observed- and latent variables, and $e_{i}$ is the error (or residual if there is misspecification). See Figure \ref{fig:latent-network}, left panel, for a graphical illustration of the model. We assume that both the observed- and latent variables are continuous and have a joint Gaussian distribution.
\begin{figure}[t]
\hspace{2em}
\begin{tikzpicture}[scale=1.1,
transform shape, node distance=2cm,
roundnode/.style={circle, draw=black, thick, minimum size=7mm},
squarednode/.style={rectangle, draw=black, thick, minimum size=5mm},
arrow/.style = {semithick,-Stealth},
dotnode/.style={fill,inner sep=0pt,minimum size=2pt,circle}
]
\node[roundnode] (latent) {$\eta$};
\node[squarednode, left=1.2cm of latent,yshift=0.5cm] (X2) {$X_2$};
\node[squarednode, above=0.5cm of X2] (X1) {$X_1$};
\node[squarednode, below=0.5cm of X2] (X3) {$X_3$};
\node[squarednode, below=0.5cm of X3] (X4) {$X_4$};
\foreach \i/\Yshift in {1/0,2/-3pt,3/-2pt,4/0}
{
\node[left=1cm of X\i] (e\i) {$e_\i$};
\draw[arrow] (latent) -- node["$\lambda_\i$"{inner sep=1pt,yshift=\Yshift}]{} (X\i.east);
\draw[arrow] (e\i) -- (X\i);
}
\end{tikzpicture}
\hspace{5em}~
\raisebox{2em}{
\begin{tikzpicture}[scale=1.1,
transform shape, node distance=2cm,
roundnode/.style={circle, draw=black, thick, minimum size=7mm},
squarednode/.style={rectangle, draw=black, thick, minimum size=5mm},
arrow/.style = {semithick,-Stealth},
dotnode/.style={fill,inner sep=0pt,minimum size=2pt,circle}
]
\node[roundnode] (X1) {$X_{1}$};
\node[roundnode, right=1.2cm of X1] (X2) {$X_{2}$};
\node[roundnode, below=1.2cm of X1] (X3) {$X_{3}$};
\node[roundnode, below=1.2cm of X2] (X4) {$X_{4}$};
\draw[-, thick] (X1) -- node[above]{$\alpha\lambda_{1}\lambda_{2}$} (X2);
\draw[-, thick] (X1) -- node[left]{$\alpha\lambda_{1}\lambda_{3}$} (X3);
\draw[-, thick] (X1) -- node[left]{\tiny $\alpha\lambda_{1}\lambda_{4}$} (X4);
\draw[-, thick] (X2) -- node[right]{\tiny $\alpha\lambda_{2}\lambda_{3}$} (X3);
\draw[-, thick] (X2) -- node[right]{$\alpha\lambda_{2}\lambda_{4}$} (X4);
\draw[-, thick] (X3) -- node[below]{$\alpha\lambda_{3}\lambda_{4}$} (X4);
\end{tikzpicture}}
\caption{The left panel shows a unidimensional latent variable model with observed variables $X_{1}, X_2, X_3$ and $X_{4}$, a scalar latent variable $\eta$, and loadings $\lambda_{1}, \lambda_2, \lambda_3$ and $\lambda_{4}$. The $e_{i}$ on the left are the error terms for the observed variables.
The right panel shows the associated network model. All observed variables are connected to each other with parameter $\alpha\lambda_{i}\lambda_{j}$, where $-\alpha^{-1}=\lambda^{\sf T}\lambda+1$ (see (\ref{eq:inverse-covariance}) and further for details on the weights).}
\label{fig:latent-network}
\end{figure}
By considering covariances between the variables given the linear model above we find intuitive notions about what to expect from such a model. Suppose that the mean and variance of the latent variable $\eta$ are $\mu_{\eta}$ and $1$, respectively, and that the mean and variance of the error $e_{i}$ are $0$ and $1$, respectively.
We assume that the errors of different variables are uncorrelated and that the errors are also uncorrelated with the latent variable. These assumptions are
\begin{itemize}
\item[(a)] $\mathbb{E}(e_{i})=0$, $\mathbb{E}(e_{i}e_{j})=0$ for $i\neq j$, and $\mathbb{E}(e_{i}^{2})=1$
\item[(b)] $\mathbb{E}[e_{i}(\eta-\mu_{\eta})]=0$ \hfill \refstepcounter{equation}\textup{(\theequation)}
\item[(c)] $\mathbb{E}(\eta)=\mu_{\eta}$ and $\mathbb{E}(\eta-\mu_{\eta})^{2}=1$
\end{itemize}
With these assumptions we find the following expression for the marginal covariance of variables $X_i$ and $X_j$, $i\ne j$,
\begin{align}
\text{cov}(X_{i},X_{j})&=\mathbb{E}(X_{i}-\lambda_{i}\mu_{\eta})(X_{j}-\lambda_{j}\mu_{\eta}) \notag\\
&=\mathbb{E}(\lambda_{i}(\eta-\mu_{\eta})+e_{i})(\lambda_{j}(\eta-\mu_{\eta}) +e_{j})=\lambda_{i}\lambda_{j}.
\end{align}
If $i=j$, then we obtain $\text{var}(X_{i})=\lambda^{2}_{i}+1$. In other words, the covariance matrix of the random variables in the $p$ vector $\mathbf{x}=(X_{1},X_{2},\ldots,X_{p})^{\sf T}$ is equal to the $p\times p$ matrix
\begin{align}
\boldsymbol{\Sigma}=\boldsymbol{\lambda}\boldsymbol{\lambda}^{\sf T}\mathbb{E}(\eta-\mu_{\eta})^{2}+\mathbb{E}(\mathbf{e}\mathbf{e}^{\sf T})=\boldsymbol{\lambda}\boldsymbol{\lambda}^{\sf T}+\mathbf{I}_{p}
\end{align}
where $\mathbf{I}_{p}$ is the $p\times p$ identity matrix and $\boldsymbol{\lambda}=(\lambda_{1},\ldots,\lambda_{p})^{\sf T}$ and $\mathbf{e}=(e_{1},\ldots, e_{p})^{\sf T}$ are $p$ vectors. In an empirical analysis the interest is in estimating the parameters $\boldsymbol{\lambda}$ by fitting this expected variance matrix to the sample variance matrix.
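As an illustration, the following Python snippet simulates data from the ULVM with hypothetical loadings and confirms that the sample covariance matrix approaches $\boldsymbol{\lambda}\boldsymbol{\lambda}^{\sf T}+\mathbf{I}_{p}$:
\begin{verbatim}
# Simulation check of Sigma = lambda lambda^T + I (hypothetical loadings).
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.8, 0.7, 0.6, 0.5])     # loadings lambda
n = 200_000
eta = rng.standard_normal(n)             # latent variable, variance 1
e = rng.standard_normal((n, 4))          # uncorrelated errors, variance 1
X = eta[:, None] * lam + e               # X_i = lambda_i * eta + e_i
print(np.cov(X, rowvar=False).round(2))  # close to lam lam^T + I
\end{verbatim}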
When we condition on the latent variable $\eta$ we obviously obtain a different covariance matrix.
We fix $\eta$ to any particular value (conditioning) and then determine expectations.
We find the following expression for the conditional covariance of variables $X_i$ and $X_j$, $i\ne j$,
\begin{align}
\text{cov}(X_{i},X_{j}\mid \eta)=\mathbb{E}[(X_{i}-\lambda_{i}\eta)(X_{j}-\lambda_{j}\eta)\mid \eta]=\mathbb{E}(e_{i}e_{j}\mid \eta)=0
\end{align}
and the value $1$ if $i=j$.
This shows that conditional on the latent variable $\eta$ the correlations between any of the observed variables is indeed equal to $0$.
\subsection{The Gaussian Graphical Model}
\label{sec:gaussian-graphical-model}
What do we mean by a network or graphical model? In the case where all variables have a joint Gaussian (multivariate normal) density, we speak of a GGM. A GGM refers to the correspondence between a picture of a network and conditional independence relations. In particular, the nodes of the network $G=(V,E)$ in $V=\{1,2,\ldots,p\}$ are associated with random variables $X_{1},X_{2},\ldots,X_{p}$, and the edges of the network in $E=\{(i,j)\in V\times V: i - j\}$ indicate that whenever variables $i$ and $j$ are neighbours (adjacent), i.e., $i - j$, then $X_{i}$ is dependent on $X_{j}$ given all remaining variables $X_{V\backslash \{i,j\}}$, where the set $V\backslash \{i,j\}$ is the set of nodes $1,2,\ldots,p$ with the nodes $i$ and $j$ removed. For Gaussian random variables, it turns out that determining that two variables are independent given all other variables, is the same as checking if the partial correlation between these two variables is equal to $0$ \citep[][Section 5.1.3]{Lauritzen96}.
It turns out that the matrix of partial covariances of all variables corresponds exactly to the inverse of the (co)variance matrix $\boldsymbol{\Sigma}$ of all variables $X_{1},X_{2},\ldots,X_{p}$. The partial correlations can be obtained from this inverse by dividing the negative of each off-diagonal element by the square root of the product of the corresponding diagonal elements. The inverse $\boldsymbol{\Sigma}^{-1}=\boldsymbol{\Theta}$ is often referred to as the concentration matrix. So, in a multivariate Gaussian distribution all we need to do is determine the zeros in the concentration matrix and we have found our conditional independencies.
\citet[][Proposition 5.2]{Lauritzen96} showed that a zero in the concentration matrix corresponds to a conditional independence relation, i.e.,
\begin{align}
\Theta_{ij}=0 \quad\Longleftrightarrow\quad X_{i}\indepen X_{j}\mid X_{V\backslash\{i,j\}}.
\end{align}
Note that we condition on all remaining variables in $V\backslash\{i,j\}$. And so an edge $i-j$ will be in the network $G$ if and only if $X_{i}$ is dependent on $X_{j}$ conditional on all other variables in $V\backslash\{i,j\}$. We could also say that, given the set of variables in $V$, we can find no alternative explanation for the dependence between $X_{i}$ and $X_{j}$ \citep{Pearl:2001}.
To illustrate the role of partial covariance (and correlation) in the GGM, we consider a small example with three nodes $V=\{1,2,3\}$ and two edges $E=\{1-2,1-3\}$. Suppose that we have the following variance matrix $\Sigma$ and concentration matrix $\Theta=\Sigma^{-1}$
\begin{align*}
\boldsymbol{\Sigma}=
\begin{pmatrix}
2 &-1 &-1\\
-1 &1.5 &0.5\\
-1 &0.5 &1.5
\end{pmatrix}
\quad
\text{and}
\quad
\boldsymbol{\Theta}=
\begin{pmatrix}
1 &0.5 &0.5\\
0.5 &1 &0\\
0.5 &0 &1
\end{pmatrix}
\end{align*}
We notice that the variables $X_{2}$ and $X_{3}$ have covariance 0.5 (correlation $\rho_{23}=0.5/1.5=\tfrac{1}{3}$) but are not correlated conditional on variable $X_{1}$ (partial correlation $\rho_{23\mid 1}=0$). The conditional independence can be interpreted as having found an alternative explanation for the correlation between variables $X_{2}$ and $X_{3}$, namely their relation to variable $X_{1}$.
Thus, a GGM provides information on possible alternative explanations for correlations. In other words, if we find a zero partial correlation, then we know that there is no unique connection between the variables; if we find a non-zero partial correlation, then we know that none of the other observed variables can explain away the obtained correlation.
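To make this concrete, the following sketch (Python with NumPy; a minimal illustration) inverts the example matrix $\boldsymbol{\Sigma}$ above and converts the concentration matrix to partial correlations via $\rho_{ij\mid \text{rest}}=-\Theta_{ij}/\sqrt{\Theta_{ii}\Theta_{jj}}$.
\begin{verbatim}
import numpy as np

Sigma = np.array([[ 2.0, -1.0, -1.0],
                  [-1.0,  1.5,  0.5],
                  [-1.0,  0.5,  1.5]])
Theta = np.linalg.inv(Sigma)            # concentration matrix

d = np.sqrt(np.diag(Theta))
pcor = -Theta / np.outer(d, d)          # off-diagonals: partial correlations
np.fill_diagonal(pcor, 1.0)
print(Theta.round(2))                   # zero at (2,3): no edge 2-3
print(pcor.round(2))                    # rho_{23|1} = 0
\end{verbatim}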
\section{The Relation Between the GGM and ULVM}
\label{sec:relations-networks-latent}
An obvious question for any researcher considering both networks and latent variable models is: What are the similarities and how can I characterise them? Here we consider the case of a ULVM and determine what network corresponds to such a model.
That is, if a ULVM holds for the observed variables, then what does this imply for a network of only observed variables?
The answer is that we would obtain a complete network
in which all nodes are connected to each other \citep[see][for binary observed variables]{MarsmanEtAL_2018_MBR}. The associated network is shown in Figure \ref{fig:latent-network}, right panel.
This result may seem counterintuitive, especially if FWMK are correct that partial correlations remove the variance that is shared among variables in the network. In the ULVM, there is, in principle, no unique variance, and all variance can be attributed to a single (latent) variable. We will review this idea later in more detail.
We only require the following standard assumptions about the latent variable model to obtain our result.
The random variables $\eta$ and $X$ are such that they satisfy
\begin{itemize}
\item[1.] {\em local independence}: $X_{i}\indep X_{j}\mid \eta$ for all $i\ne j\in V$,
\item[2.] {\em unidimensionality}: $\eta$ is a scalar, and
\item[3.] {\em monotonicity}: if $\eta_{1}>\eta_{2}$ then $\mathbb{P}(X_{j}\mid \eta_{1})>\mathbb{P}(X_{j}\mid \eta_{2})$ for all $j\in V.$
\end{itemize}
Using these assumptions we obtain a marginal distribution of the variables $X_1, X_2, \dots, X_p$ with variance matrix (see the Appendix for a proof)
\begin{align}\label{eq:covaraince-matrix}
\boldsymbol{\Sigma} = \boldsymbol{\lambda}\boldsymbol{\lambda}^{\sf T} + \mathbf{I}_{p}
\end{align}
which is exactly the same as the variance matrix that we observed for the marginal distribution under the ULVM in Section \ref{sec:latent-variable-model}.
As we saw in Section \ref{sec:gaussian-graphical-model}, a network is obtained by taking the inverse of $\Sigma$, that is $\Sigma^{-1}=\Theta$, which we refer to as the concentration matrix.
The concentration matrix is (see the Appendix)
\begin{align}\label{eq:inverse-covariance}
\boldsymbol{\Theta}=\mathbf{I}_{p} - \frac{1}{\boldsymbol{\lambda}^{\sf T}\boldsymbol{\lambda}+1}\boldsymbol{\lambda}\boldsymbol{\lambda}^{\sf T}.
\end{align}
We now see that an off-diagonal element $\Theta_{ij}$ for $i\ne j$ is $\alpha\lambda_{i}\lambda_{j}$, where $-\alpha^{-1}=\boldsymbol{\lambda}^{\sf T}\boldsymbol{\lambda}+1$. Hence, $\Theta_{ij}$ is in general non-zero. If $\Theta_{ij}=0$, then $\lambda_{i}=0$ or $\lambda_{j}=0$, and the corresponding variable cannot be an indicator variable for the latent variable. Hence, we do not have a ULVM.
We illustrate (\ref{eq:covaraince-matrix}) and (\ref{eq:inverse-covariance}) using $\boldsymbol{\lambda}=(1,0.5,0.5)^{\sf T}$. Then we obtain
\begin{align*}
\boldsymbol{\Sigma}=
\begin{pmatrix}
2 &0.5 &0.5\\
0.5 &1.25 &0.25\\
0.5 &0.25 &1.25
\end{pmatrix}
\quad \text{and} \quad
\boldsymbol{\Theta}=
\begin{pmatrix}
0.6 &-0.2 &-0.2\\
-0.2 &0.9 &-0.1\\
-0.2 &-0.1 &0.9
\end{pmatrix}
\end{align*}
Computing the element $\Theta_{12}$ using (\ref{eq:inverse-covariance}) with $\boldsymbol{\lambda}^{\sf T}\boldsymbol{\lambda}=1^{2}+0.5^{2}+0.5^{2}=1.5$ gives
\begin{align*}
\alpha\lambda_{1}\lambda_{2} = -\frac{1}{1.5+1}1\cdot 0.5 = -\frac{0.5}{2.5}=-0.2
\end{align*}
which matches the element $\Theta_{12}=-0.2$ in the inverse covariance matrix above. This also shows that for any of the partial covariances $\Theta_{ij}$ to be $0$, one of $\lambda_{i}$ or $\lambda_{j}$ has to be $0$. But, obviously, then that indicator is not part of the ULVM.
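A direct numerical check of (\ref{eq:covaraince-matrix}) and (\ref{eq:inverse-covariance}) for this example (a minimal sketch in Python with NumPy):
\begin{verbatim}
import numpy as np

lam = np.array([1.0, 0.5, 0.5])
p = lam.size
Sigma = np.outer(lam, lam) + np.eye(p)
Theta = np.eye(p) - np.outer(lam, lam) / (lam @ lam + 1.0)

print(np.allclose(Theta, np.linalg.inv(Sigma)))   # True
print(Theta.round(2))                             # Theta_12 = -0.2, as above
\end{verbatim}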
This result is in line with that of \citet[][Thm 6]{HollandRosenbaum1986}, who showed that a ULVM induces non-zero partial correlations. Suppose that a latent variable model satisfies assumptions 1--3 above; then Theorem 6 of \citet{HollandRosenbaum1986} shows that, for any partition of the variables into two sets, any two variables are conditionally associated given the variables in the other part of the partition. This implies that the partial correlations are all non-zero. \citet{Junker:1997} explain this by saying that the monotone and unidimensional latent variable $\eta$ induces so much `internal coherence' among the observed variables that the covariation must be larger than $0$. These results underscore our concerns with the ideas of FWMK about partial correlation networks.
Another result, given in \citet{Junker:1997}, shows that when the number of variables conditioned on is countably infinite, the covariation vanishes (vanishing conditional dependence). This is because an infinite set of highly related variables is an exact (almost sure, in fact) representation of the unidimensional latent variable (or the sigma-field associated with the set of variables conditioned on). In other words, the latent variable $\eta$ can be represented by an {\em infinite} set of variables that are on equal footing with all other variables (i.e., variables that have a similar relation to the latent variable as all others). This result implies that only a network with an infinite number of variables, where all variables fit the ULVM, will be empty, since in that case the conditioning variables become a representation of the latent variable. This can be seen from the matrix $\boldsymbol{\Theta}$ above: if there are infinitely many observed variables and $\sum_{i}\lambda_{i}^{2}$ does not converge, then $\boldsymbol{\lambda}^{\sf T}\boldsymbol{\lambda}\to \infty$, and so $\boldsymbol{\Theta}$ tends to $\mathbf{I}_{p}$ as $p$ grows \citep[see also][equation (7), for a similar result]{Guttman:1953}.
\section{The GGM as a series of regressions}
\label{sec:networks-regression}
A GGM can be estimated by a series of regressions, because the regression coefficients can be written in terms of the concentration matrix (inverse covariance matrix) of the nodes. Recall that $\Theta_{ij}$ denotes the partial covariance between variables $X_{i}$ and $X_{j}$ with all other variables partialled out, and that $\Theta_{ij}=0$ implies that $X_{i}$ and $X_{j}$ are independent conditional on all other variables under consideration. The regression coefficient $\beta_{ij}$ can be written in terms of the concentration matrix as \citep[][Section 5.1.3]{Lauritzen96}
\begin{align}
\beta_{ij}=-\frac{\Theta_{ij}}{\Theta_{ii}}
\end{align}
Clearly, if $\Theta_{ij}=0$, then $\beta_{ij}=0$ as well. And so, by inspecting the regression coefficients we can determine the conditional independencies that also hold for the concentration matrix $\Theta$. In the Appendix we provide a small example with three nodes to show that these relations hold.
Here, we use these relations to show that the regression coefficients indeed explain the dependent variable, which implies that the partial correlations do use the shared variance.
The procedure of using a series of regressions to obtain a GGM was first shown to lead to correct networks in \citet{Meinshausen:2006}.
We start at any node $i$, and use the associated random variable $X_{i}$ and then call this node $Y$. Then we estimate the non-zero regression coefficients $\beta_{ij}$ for all other remaining nodes in $V\backslash \{i\}$. The notation $\beta_{ij}$ means we are thinking of the connection $i \leftarrow j$ in the network. So, we have a multiple regression, where $Y$ is variable $X_{i}$ and the other variables $X_{V\backslash\{i\}}$ are the predictors
\begin{align}
Y = \beta_{0} + \sum_{j\in V\backslash\{i\}}\beta_{ij}X_{j} + e_{i}
\end{align}
where the predictor $X_{i}$ is excluded from the sum because we have made that node the dependent variable $Y$.
The non-zero coefficients $\beta_{ij}$ tell us which nodes $j$ are in the neighbourhood of variable $i$, i.e., to which other nodes variable $i$ is connected.
We do this for all nodes in $V$ and then combine the results, because each pair of nodes has been considered twice: once via $\beta_{ij}$, with variable $j$ as the predictor, and once via $\beta_{ji}$, with variable $j$ as the dependent variable.
We can use the {\em and} rule or the {\em or} rule.
In the {\em and} rule we say that the edge $i-j$ is present in the network whenever both $\beta_{ij}\ne 0$ and $\beta_{ji}\ne 0$. In the {\em or} rule we identify the edge $i-j$ whenever either $\beta_{ij}\ne 0$ or $\beta_{ji}\ne 0$.
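A minimal sketch of this neighbourhood-selection procedure (Python with NumPy; plain least squares with a hypothetical threshold standing in for a formal test or penalty, and assuming standardised data):
\begin{verbatim}
import numpy as np

def nodewise_network(X, tol=0.1, rule="and"):
    """Estimate edges by regressing each node on all others."""
    n, p = X.shape
    B = np.zeros((p, p))                  # B[i, j] = beta_ij (i <- j)
    for i in range(p):
        others = [j for j in range(p) if j != i]
        coef, *_ = np.linalg.lstsq(X[:, others], X[:, i], rcond=None)
        B[i, others] = coef
    nz = np.abs(B) > tol                  # crude call of "non-zero"
    return (nz & nz.T) if rule == "and" else (nz | nz.T)
\end{verbatim}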
The idea of estimating the inverse covariance matrix can also be motivated by looking to identify the joint probability distribution of the variables $X_{1},X_{2},\ldots,X_{p}$.
This requires aggregating across all configurations of the random variables, which is computationally difficult.
One way to make this easier is by reducing the joint distribution into smaller parts and instead of considering all variables simultaneously we only have to consider joint distributions of a smaller number of variables at a time. In the extreme case we use a product of univariate conditional distributions.
\begin{align}
p(x) \propto p_{1}(x_{1}\mid x_{V\backslash\{1\}})p_{2}(x_{2}\mid x_{V\backslash\{2\}})\cdots p_{p}(x_{p}\mid x_{V\backslash\{p\}})
\end{align}
This is known as the pseudo-likelihood, which serves as a tractable approximation to the likelihood \citep{Hyvarinen:2006,Nguyen:2017}.
Each univariate conditional distribution then implies a multivariate regression. To see this, let $Y=X_{i}$ as before and consider the conditional expectation of $Y$ given all remaining variables $X_{V\backslash\{i\}}$
\begin{align}
\mathbb{E}(Y\mid X_{V\backslash \{i\}}) = \beta_{0} + \sum_{j\in V\backslash\{i\}}\beta_{ij}X_{j}
\end{align}
This is clearly the regression equation that we consider for each node $i\in V$. Hence, by considering all univariate conditional distributions we are in fact determining the pseudo-likelihood, which approximates the joint density. This idea is related to the factorisation of the joint distribution over the cliques of the graph, known as the Hammersley-Clifford theorem \citep[see, e.g.,][]{Lauritzen96,Wainwright:2019}.
\section{Partial correlations and explained variance in regression}
\label{sec:partial-correlation-regression}
Since the GGM coincides with a series of regressions, each node is explained by the remaining nodes in the network. Specifically, at the population level, each node is explained by its neighbours; the other variables are irrelevant in the sense that they are independent of the node given its neighbours. The reason that we can treat the series of regressions as equivalent to a Gaussian graphical model is the relation with the conditional covariances, as we saw in Section \ref{sec:networks-regression}.
We first make the relation between the regression coefficients and the partial correlations precise. We then decompose the $R^{2}$ measure and show, with a small example on simulated data, how the explained variance can be (re)distributed among the predictors.
In regression the coefficients are often obtained by ordinary least squares (see the Appendix) and $R^{2}$ is calculated using these coefficients. Suppose we have three variables $X_{1}$, $X_{2}$ and $X_{3}$, as in the example from the introduction corresponding to Figure \ref{fig:explained-variance}. We consider $X_{3}$ as the dependent variable in a regression, so that $X_{1}$ and $X_{2}$ are predictors. If we assume that all three variables have mean 0 and variance 1, then we obtain the regression coefficient (see the Appendix)
\begin{align}\label{eq:beta-3nodes}
\beta_{31}=
\frac{
\text{cor}(X_{1},X_{3})-
\text{cor}(X_{1},X_{2})\text{cor}(X_{3},X_{2})
}
{
1-\text{cor}(X_{1},X_{2})^{2}
}
\end{align}
where $\text{cor}()$ is the correlation and $1-\text{cor}(X_{1},X_{2})^{2}$ is the conditional variance of $X_{1}$ given $X_{2}$. This gives the relation between the regression coefficient and the partial correlation (see the Appendix and \citet{Anderson:1958})
\begin{align}
\beta_{31} \frac{\sqrt{1-\text{cor}(X_{1},X_{2})^{2}}}{\sqrt{1-\text{cor}(X_{3},X_{2})^{2}}} =
\frac{
\text{cor}(X_{1},X_{3})-
\text{cor}(X_{1},X_{2})\text{cor}(X_{3},X_{2})
}
{
\sqrt{1-\text{cor}(X_{1},X_{2})^{2}}\sqrt{1-\text{cor}(X_{3},X_{2})^{2}}
}
=\rho_{31\mid 2}
\end{align}
So the regression coefficient is a rescaling of the partial correlation, where it is clear that both the regression coefficient and the partial correlation use the conditional covariance between $X_{1}$ and $X_{3}$ given $X_{2}$. It is also clear from this formulation that in the coefficient the part of $X_{2}$ is taken out of the correlation between $X_{1}$ and $X_{3}$.
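This rescaling is easy to verify numerically (a minimal sketch in Python with NumPy; the correlations below are illustrative):
\begin{verbatim}
import numpy as np

r13, r12, r32 = 0.5, 0.2, 0.6     # illustrative correlations
beta_31 = (r13 - r12 * r32) / (1 - r12**2)
rho_31_2 = (r13 - r12 * r32) / np.sqrt((1 - r12**2) * (1 - r32**2))

lhs = beta_31 * np.sqrt(1 - r12**2) / np.sqrt(1 - r32**2)
print(np.isclose(lhs, rho_31_2))  # True
\end{verbatim}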
The fact that the partial covariances are obtained from the covariances by taking its inverse provides the first argument that shows that no information is lost when using the partial covariances (or correlations) to describe the relations between the variables. This is because the covariance and partial covariance are in one-to-one correspondence with each other. That is, for each pair of variables with partial covariance $a$ (point in the space of the partial covariances) there is a unique pair of variables with covariance $b$ (point in the space of covariances). Hence, we can go back and forth from the space of partial covariances and covariances (see the Appendix for a more formal discussion of this).
The second argument that shows that no information is lost by considering the partial covariances (or partial correlations) comes from considering a GGM as a series of regressions, and the associated multiple correlation measure $R^{2}$ used in regression and in networks \citep{Haslbeck:2018}. The definition of $R^2$ is (see the Appendix)
\begin{align}\label{eq:r2-decomposed}
R^2=\frac{\text{var}(\hat{Y})}{\text{var}(Y)}=\sum_{i=1}^{p}\beta_{Yi}\frac{\text{cov}(X_{i},Y)}{\text{var}(Y)}
\end{align}
In other words, we can decompose the explained variance $R^{2}$ into a term for each predictor separately. From this decomposition and (\ref{eq:beta-3nodes}) it is clear that the coefficient represents the unique contribution of the predictor, but that the covariance between the predictor and the dependent variable (not a partial covariance) co-determines the explained variance in regression.
We consider the three node example of Figure \ref{fig:explained-variance}. Suppose that we take $X_{3}$ as $Y$, the dependent variable, with the predictors $X_{1}$ and $X_{2}$. Then we see that $R^{2}$ is made up of the scaled covariance between each of the predictors and the dependent variable, $\text{cov}(X_{i},X_{3})/\text{var}(X_{3})$, multiplied by its respective regression coefficient $\beta_{3i}$. The explained variance part of $X_{1}$ is therefore composed of the complete overlap between $Y$ and $X_{1}$ (scaled by $\text{var}(X_{3})$, cf. Figure \ref{fig:explained-variance}(b)), which does not involve $X_{2}$, and the coefficient $\beta_{31}$, from which the contribution of $X_{2}$ has been partialled out. The contribution to $R^{2}$ of each predictor is therefore proportional to its covariance (overlap) with the dependent variable. Hence, if we were to partial the part of $X_{1}$ out of $X_{2}$, we would not change $R^{2}$ but only redistribute the contribution to $R^{2}$ of each of the predictors. We will show this in a simulated dataset, after we briefly discuss the interpretation of the regression coefficients and their relation to the partial correlation.
We illustrate the principle of $R^{2}$ and its decomposition in (\ref{eq:r2-decomposed}) further with a small simulation. We generate data according to
\begin{align*}
X_{3}=\beta_{31}X_{1} +\beta_{32}X_{2} + e
\end{align*}
where we set the coefficients to $\beta_{31}=1$ and $\beta_{32}=2$, respectively, and the error variance to $1$.
To introduce a correlation (i.e., overlap) between the regressors $X_{1}$ and $X_{2}$, we express $X_2$ in terms of $X_1$ and an additional error term
\begin{align*}
X_{2} = 0.2X_{1} + e_{2}
\end{align*}
where the second error's variance is also set to $1$, so that $\text{cov}(X_{1},X_{2})=\text{cov}(X_{1},0.2X_{1}+e_{2})=0.2\,\text{var}(X_{1})=0.2$, since $\text{var}(X_{1})=1$. We have simulated $n = 100$ cases from this model.
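A minimal sketch of this simulation and of the decomposition in (\ref{eq:r2-decomposed}) follows (Python with NumPy; the R-syntax is in the Appendix, and exact values depend on the seed, so they will differ from those in Table \ref{tab:regression-simulations}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = 0.2 * x1 + rng.normal(size=n)
x3 = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, x3, rcond=None)[0]    # intercept, beta_31, beta_32

parts = [b[k + 1] * np.cov(x, x3)[0, 1] / x3.var(ddof=1)
         for k, x in enumerate([x1, x2])]
print(b[1:].round(3), sum(parts).round(3))   # roughly (1, 2); R^2 ~ 0.86
\end{verbatim}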
We start with standard regression, which is the default in most statistical packages. The results for the standard regression of $X_3$ on $X_1$ and $X_2$ are shown on the left side of Table \ref{tab:regression-simulations} (the R-syntax for the simulation is in the Appendix).
From Table \ref{tab:regression-simulations} (left column), we see that the coefficients approximate the population values, and that the predictors explain $87.39\%$ of the variance of the response variable. We can now verify the decomposition in (\ref{eq:r2-decomposed}). For this example with three variables we have the decomposition
\begin{align*}
\hat{R}^{2}=\hat{\beta}_{31}\frac{\widehat{\text{cov}}(X_{1},X_{3})}{\widehat{\text{var}}(X_{3})}
+ \hat{\beta}_{32}\frac{\widehat{\text{cov}}(X_{2},X_{3})}{\widehat{\text{var}}(X_{3})}
\end{align*}
With the values in Table \ref{tab:regression-simulations} and $\widehat{\text{var}}(X_{3})=7.63438$, we obtain
\begin{align*}
\hat{R}^{2}=1.05327\frac{1.62573}{7.63438} + 2.07496\frac{2.39005}{7.63438}=0.22429+0.64960=0.8739
\end{align*}
We see the different contributions of each of the predictors to $R^{2}$, which depends on the combination of the covariance (size of the overlap between predictor and dependent variable) and the regression coefficient. Each regression coefficient has the effect of other variables partialled out, and the contribution of the predictor to $R^{2}$ is determined by the overlap (without anything partialled out) between $X_{1}$ and $X_{3}$ (and scaled by the variance of $X_{3}$ in this example).
\begin{table}[b]\centering
\caption{Regression output of the small simulation with three random variables.}
\label{tab:regression-simulations}
\begin{tabular}{l l l l @{\hspace{4em}} l l l}
\midrule
&\multicolumn{3}{c}{standard (type II)} &\multicolumn{3}{c}{projected (type I)}\\
&estimate &std. error &$\widehat{\text{cov}}(X_{i},X_{3})$ &estimate &std. error &$\widehat{\text{cov}}(X_{i},X_{3})$\\
\midrule
$X_{1}$ &1.05327 &0.09482 &1.62573 &1.43143 &0.09310 &1.62573\\
$X_{2}$ &2.07496 &0.09863 &2.39005 &2.07496 &0.09863 &2.09377\\
\midrule
&\multicolumn{3}{c}{\hspace{-4em}$\widehat{\text{cov}}(X_{1},X_{2})=0.21633$, $\hat{R}^{2}=0.8739$} &\multicolumn{3}{c}{\hspace{-1em}$\widehat{\text{cov}}(X_{1},X_{2}^{p})=0.01290$, $\hat{R}^{2}=0.8739$}\\
\midrule
\end{tabular}
\end{table}
Next, we do the same but now we first partial out the variance (overlap) of $X_{1}$ from $X_{2}$ before we enter it in the regression. This corresponds to Figure \ref{fig:explained-variance}(b). We consider the regression of $X_3$ on $X_1$ and $X_2^{p}$, where the projected variable ensures that $\text{cor}(X_{1}, X_{2}^{p})=0$. If $R^{2}$ truly left out the overlap between $X_{1}$ and $X_{2}$ (Figure \ref{fig:explained-variance}(c)), then using the projected predictor would increase the percentage of explained variance (or leave it unchanged if there were no overlap), since $X_{1}$ now contains this overlap. This type of regression is sometimes referred to as type I sum of squares, while the former (standard) regression is referred to as type II sum of squares \citep{Ip:2001,Kennedy:2002}. The results for the projected regression are shown on the right of Table \ref{tab:regression-simulations} and reveal a higher coefficient for the first predictor but the same percentage of explained variance.
The coefficient for $X_{1}$ is higher because we removed any overlap between $X_{1}$ and $X_{2}$ from $X_{2}$, and so we now allow all variance of $X_{1}$ to be explained by $X_{1}$, as in Figure \ref{fig:explained-variance}(b). From (\ref{eq:beta-3nodes}) we clearly see that because $\text{cor}(X_{1},X_{2}^{p})=0$, the coefficient (with the specific settings of means of 0 and variances of 1) is the same as the correlation between $X_{1}$ and $X_{3}$; nothing of $X_{2}^{p}$ is left to subtract from $\text{cor}(X_{1},X_{3})$.
We verify the decomposition of the explained variance of (\ref{eq:r2-decomposed})
\begin{align*}
\hat{R}^{2}=1.43143\frac{1.62573}{7.63438} + 2.07496\frac{2.09377}{7.63438}=0.30482+0.56910=0.8739
\end{align*}
From this decomposition with $X_{2}^{p}$ instead of $X_{2}$ we notice two things. First, the coefficient $\hat{\beta}_{31}$ increased because the covariance (overlap) between $X_{1}$ and $X_{2}^{p}$ is approximately 0 (see Table \ref{tab:regression-simulations}). From (\ref{eq:beta-3nodes}) this implies that (almost) nothing is subtracted from the correlation between $X_{1}$ and $X_{3}$, because $\widehat{\text{cov}}(X_{1},X_{2}^{p})=0.01290$; the coefficient therefore increased from 1.05327 to 1.43143, and $X_{1}$ is allowed to explain more of the variance of $X_{3}$. The second difference in the $R^{2}$ decomposition is that the covariance $\widehat{\text{cov}}(X_{2}^{p},X_{3})$ is reduced from 2.39005 to 2.09377, because the common part with $X_{1}$ is taken out of $X_{2}$, giving the variable $X_{2}^{p}$. These two changes lead to different decompositions of $R^{2}$. But, obviously, we have not changed the total variance (area) of $X_{3}$ explained by the predictors $X_{1}$ and either $X_{2}$ or $X_{2}^{p}$. The only thing that has changed is which predictor gets to explain the variance of $X_{3}$.
Since in the projected regression we took out of $X_{2}$ anything that was in common with $X_{1}$, and $R^{2}$ is exactly the same, we must conclude that a standard regression indeed explains all the variance that can be explained by the predictors. That is, no shared variance is taken out.
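The projection step itself is a one-line residualisation; the following sketch (Python with NumPy, self-contained with illustrative data as above) confirms that $R^{2}$ is identical whether $X_{2}$ or $X_{2}^{p}$ enters the regression:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = 0.2 * x1 + rng.normal(size=n)
x3 = x1 + 2.0 * x2 + rng.normal(size=n)

g = np.cov(x1, x2)[0, 1] / x1.var(ddof=1)  # slope of X2 on X1
x2p = x2 - g * x1                          # projected X2: cov(x1, x2p) = 0

def r2(preds):
    Z = np.column_stack([np.ones(n)] + preds)
    b = np.linalg.lstsq(Z, x3, rcond=None)[0]
    return 1 - ((x3 - Z @ b)**2).sum() / ((x3 - x3.mean())**2).sum()

print(np.isclose(r2([x1, x2]), r2([x1, x2p])))  # True: same R^2
\end{verbatim}
This holds exactly because $(1,X_{1},X_{2}^{p})$ spans the same space of predictors as $(1,X_{1},X_{2})$, so the fitted values are unchanged.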
\section{Regularised regression and the GGM}
Although the relation between regression, the GGM and networks is clear from the previous sections, in practice regression is often performed in some alternative way that may change the relation with the original network. Here we focus on the least absolute shrinkage and selection operator (lasso, or $\ell_{1}$-norm) in regression. This regularisation technique adds the sum of the absolute values of the parameters $\beta_{ij}$ as a penalty, i.e., $\sum_{j=1}^{p}|\beta_{ij}|$. Because this penalty is minimised together with the loss, the lasso shrinks the parameter values towards zero, or sets them to zero, depending on the regularisation parameter \citep{Tibshirani:1996}. It has been shown that, given a set of assumptions, the lasso is consistent, meaning that the correct parameters are obtained in a specific asymptotic framework \citep[e.g.,][]{Meinshausen:2006,Wainwright:2009,Buhlmann:2011,Waldorp:2019}. One of the assumptions of the lasso is that the network is sparse, i.e., in the situation of a network where nodes are added at each step, the number of edges remains bounded (the number of edges is of the order of the number of nodes). For a dense network, however, the parameters will be poorly estimated \citep{Waldorp:2019}. As a consequence, for dense networks the regression parameters of the lasso are inappropriate to use as scaled partial correlations, because many of the edges will have been set to 0 while they should be part of the network. In the extreme case discussed in this manuscript, the ULVM corresponds to a fully-connected network, and so the lasso will incorrectly set several edges to 0, as the sketch below illustrates. Although this does not change the results of the previous sections, it does warrant careful consideration whether the network to be estimated is sparse or dense.
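To illustrate how the lasso enters neighbourhood selection, a minimal sketch (assuming scikit-learn is available; the regularisation strength \texttt{alpha} is illustrative and would be tuned in practice):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def lasso_neighbourhoods(X, alpha=0.1):
    """Neighbourhood selection in the style of Meinshausen (2006)."""
    n, p = X.shape
    B = np.zeros((p, p))
    for i in range(p):
        others = [j for j in range(p) if j != i]
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, i])
        B[i, others] = fit.coef_
    nz = B != 0.0
    return nz & nz.T    # "and" rule; use nz | nz.T for the "or" rule
\end{verbatim}
In the dense ULVM case discussed above, this procedure would incorrectly set several of these edges to $0$.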
\section{Concluding Comments}\noindent
In this paper, we have refuted the belief that partial correlations remove the shared variance in estimating GGMs, as recently voiced by FWMK, and have shown that all variance of the focal node that can be explained by other nodes is explained. First, we showed that if the data come from a ULVM, and there is no unique variance, the estimated network is fully-connected, and not empty, as FWMK would make us believe. Secondly, we have revisited the relation between GGMs, partial correlations, and regression to show that partial correlations indeed do not remove shared variance from the explained variance.
We have also established a formal connection between the latent variable model and the GGM, which is further evidence for broad connections that exist between graphical models and latent variable models. A particular consequence of these relations is that reliability and replication issues for one model, an unrestricted GGM, say, are also likely to be an issue for the other. It is interesting to observe that the critique of FWMK has focused on one of the two models while advocating the other, which seems contradictory given these formal results.
This incongruity leaves us with what we believe is the most relevant issue, not mentioned by FWMK, but certainly present between the lines: The network model is wrong. The network model may indeed be wrong, and this is worth discussing and investigating scientifically. We believe that one of the most important ways to approach such a debate is by considering what predictions a model makes and how this can be verified or falsified empirically.
\section{Introduction}
Prediction algorithms use data, necessarily sampled under specific conditions, to learn correlations that extrapolate to new or related data. If successful, the performance gap between these two environments is small, and we say that algorithms \textit{generalize} beyond their training data. Doing so is difficult, however, because some form of uncertainty about the distribution of new data is unavoidable. The set of potential distributional changes that we may encounter is mostly unknown and in many cases may be large and varied. Some examples include covariate shifts \cite{bickel2009discriminative}, interventions in the underlying causal system \cite{pearl2009causality}, varying levels of noise \cite{fuller2009measurement} and confounding \cite{pearl1998there}. All of these feature in modern applications, and while learning systems are increasingly deployed in practice, generalization of predictions and their reliability in a broad sense remain open questions.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{Figures/Fig1.png}
\caption[.]{\textbf{The challenges of generalization}. Each panel plots testing performance under different shifts. The proposed approach, Derivative Invariant Risk Minimization (DIRM, described in section \ref{sec_gen}), is a relaxation of the causal solution that naturally interpolates (as a function of hyperparameter $\lambda$, see eq. (\ref{robust_objective})) between the causal solution and Ordinary Least Squares (OLS).}
\label{Fig1}
\end{figure*}
A common approach to formalize learning with uncertain data is, instead of optimizing for correlations in a \textit{fixed} distribution, to do so simultaneously for a \textit{range} of different distributions in an uncertainty set $\mathcal P$,
\begin{align}
\label{robust_pop}
\underset{f}{\text{minimize }} \underset{P \in \mathcal P}{\sup}\hspace{0.1cm} \mathbb E_{(x,y)\sim P} [ \mathcal L(f(x),y)],
\end{align}
for some measure of error $\mathcal L$ of the function $f$ that relates input and output examples $(x,y)\sim P$. Choosing different sets $\mathcal P$ leads to estimators with different properties. This formulation includes as special cases, for instance, many approaches in domain adaptation, covariate shift, robust statistics and optimization, see e.g. \cite{ben2009robust,kuhn2019wasserstein,bickel2009discriminative,duchi2016statistics,duchi2019distributionally,sinha2017certifying,wozabal2012framework,abadeh2015distributionally,duchi2018learning}. Robust solutions to problem (\ref{robust_pop}) are said to generalize if potential shifted test distributions are contained in $\mathcal P$, but larger sets $\mathcal P$ also result in more conservative solutions (i.e. with sub-optimal performance) on data sampled from distributions away from worst-case scenarios.
One formulation of causality is also a version of this problem: $\mathcal P$ defined as any distribution arising from interventions on observed covariates $x$ leading to shifts in their distribution $P_x$ (see e.g. sections 3.2 and 3.3 in \cite{meinshausen2018causality}). The invariance to changes in covariate distributions of causal solutions is powerful for generalization, but implicitly assumes that all covariates or other drivers of the outcome subject to change at test time are observed. Often shifts occur elsewhere, for example in the distribution of unobserved confounders, in which case also conditional distributions $P_{y|x}$ may shift. In the presence of unobserved confounders, the goals of achieving robustness and learning a causal model can be \textit{different} (and similar behaviour also occurs with varying measurement noise). There is, in general, an inherent \textit{trade-off} in generalization performance: in the presence of unobserved confounders, causal and correlation-based solutions are both optimal in different regimes, depending on the shift in the underlying generating mechanism from which new data is generated. We consider next a simple example, illustrated in Figure \ref{Fig1}, to show this explicitly.
\subsection{Introductory example}
Assume access to observations of variables $(X_1,X_2,Y)$ in two training datasets, each dataset sampled with different variances ($\sigma^2=1$ and $\sigma^2 = 2$) from the following structural model $\mathbb F$,
\begin{align*}
X_2 := -H + E_{X_2}, \quad Y := X_2 + 3H + E_{Y},\quad X_1 := Y + X_2 + E_{X_1}.
\end{align*}
$E_{X_1}, E_{X_2}\sim\mathcal N(0,\sigma^2)$, $E_Y\sim\mathcal N(0,1)$ are exogenous.
\begin{enumerate}[leftmargin=*, itemsep=0pt, topsep=0pt]
\item In a first scenario (\textbf{leftmost panel}) all data (training and testing) is generated \textit{without} unobserved confounders, $H:=0$.
\item In a second scenario (\textbf{remaining panels}) all data (training and testing) is generated \textit{with} unobserved confounders, $H:=E_H\sim\mathcal N(0,1)$.
\end{enumerate}
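A minimal sketch of sampling from this structural model (Python with NumPy; the flag \texttt{with\_confounding} switches between the two scenarios, and the function name is illustrative):
\begin{verbatim}
import numpy as np

def sample_env(n, sigma=1.0, with_confounding=True, rng=None):
    """Sample (X1, X2, Y) from the structural model F above."""
    rng = rng or np.random.default_rng()
    H = rng.normal(size=n) if with_confounding else np.zeros(n)
    X2 = -H + sigma * rng.normal(size=n)
    Y = X2 + 3 * H + rng.normal(size=n)
    X1 = Y + X2 + sigma * rng.normal(size=n)
    return np.column_stack([X1, X2]), Y

# two training environments with variances 1 and 2, as in the text
(Xa, ya), (Xb, yb) = sample_env(5000, 1.0), sample_env(5000, 2.0**0.5)
\end{verbatim}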
Each panel of Figure \ref{Fig1} shows performance on \textbf{new} data obtained after manipulating the underlying data generating system; the magnitude and type of intervention appears in the horizontal axis. We consider the minimum average error solution, Ordinary Least Squares (OLS), the causal solution, i.e. the linear model with coefficients $(0,1)$ for $(X_1,X_2)$, and Derivative Invariant Risk Minimization (DIRM, the proposed approach described in section \ref{sec_gen}) in different instantiations as a function of a hyperparameter $\lambda$, see eq. (\ref{robust_objective}). Three observations motivate this paper.
\begin{enumerate}[leftmargin=*,itemsep=0pt]
\item The presence of unobserved confounding hurts generalization performance in general with higher errors for all methods, e.g. contrast the $y$-axis of the leftmost and middle panel, but also leads to heterogeneous behaviour between optimization objectives depending on the nature of the shift in new data, e.g. contrast the two rightmost panels.
\item Minimum error solutions absorb spurious correlations by construction (due to $H$ and the fact that $X_1$ is caused by $Y$), giving unstable performance under shifts in $p(X_1,X_2)$ but, as a consequence, better performance under shifts in $p(H)$. Causal solutions, by contrast, are designed to be robust to shifts in observed covariates but completely abstract from variation in unobserved variables and are sub-optimal with moderate shifts in observed variables (e.g. middle panel).
\item Minimum average error and causal solutions can be interpreted as two extremes of a distributionally robust optimization problem (\ref{robust_pop}), with a range of intermediate solutions that DIRM seeks to exploit and that in practice may have a more desirable performance profile.
\end{enumerate}
\subsection{Our Contributions}
This work investigates generalization performance in the presence of unobserved confounding with data from multiple environments. Our first steps in section \ref{sec_2} emphasize a qualitative difference in the statistical invariances (which feature prominently in the field of domain generalization, see e.g. \cite{arjovsky2019invariant,krueger2020out,parascandolo2020learning,koyama2020out}) that can be expected in the presence of unobserved confounders while keeping in mind the trade-offs in performance illustrated in Figure \ref{Fig1}. This trade-off and new invariance principles suggest a new objective, Derivative Invariant Risk Minimization (described in section \ref{sec_gen}), that defines a range of intermediate solutions between the causal and minimum error extremes. These solutions are robust in a well-defined sense, as upperbounding a robust minimization problem (\ref{robust_pop}) that defines $\mathcal P$ as an \textit{affine} combination of training data distributions. This result, when $\mathcal P$ is interpreted as a set of distributions arising from shifts in the underlying causal model, confirms the interpolation behaviour found in Figure \ref{Fig1} but also defines robustness guarantees in a much broader sense, including robustness to interventions in unobserved and target variables that are only limited by the geometry of training environments (see section \ref{sec_rob_inter}). We conclude this paper with a discussion of related work and with performance comparisons on medical data and other benchmarks for domain generalization.
\section{Invariances with Unobserved Confounders}
\label{sec_2}
This section introduces the problem of out-of-distribution generalization. We describe in greater detail the reasons that learning principles, such as Empirical Risk Minimization (ERM), underperform in general, and define invariances across environments to recover more robust solutions.
We take the perspective that all potential distributions that may be observed over a system of variables arise from a structural causal model $\mathcal M = (\mathbb F, \mathbb V, \mathbb U)$, characterized by endogenous variables, $\mathbb V\in\mathcal V$, representing all variables determined by the system, either observed or not; exogenous variables, $\mathbb U\in\mathcal U$, representing independent sources of randomness, and a sequence of structural equations $\mathbb F: \mathcal U \rightarrow \mathcal V$, describing how endogenous variables are (deterministically) derived from the exogenous variables, see e.g. \cite{pearl2009causality}. An example is given in Figure \ref{Fig1}, $\mathbb V = (X_1,X_2,H,Y)$ are endogenous and $\mathbb U = (E_{X_1},E_{X_2},E_{H},E_{Y})$ are exogenous variables. Unseen test data is generated from such a system $\mathcal M$ after manipulating the distribution of exogenous variables $\mathbb U$, which propagates across the system shifting the joint distribution of all variables $\mathbb V$, whether observed or unobserved, but keeping the causal mechanisms $\mathbb F$ unchanged. Representative examples include changes in data collection conditions, such as due to different measurement devices, or new data sources, such as patients in different hospitals or countries.
\textbf{Objective.} Our goal is to learn a representation $Z = \phi(X)$ acting on the set of observed variables $X \subset \mathbb V$ with the ability to extrapolate to new unseen data, and doing so acknowledging that all relevant variables in $\mathbb V$ are likely not observed. Unobserved confounders (say for predicting $Y\in \mathbb V$) simultaneously cause $X$ and $Y$, confounding or biasing the causal association between $X$ and $Y$ giving rise to spurious correlations that do not reproduce in general, see e.g. \cite{pearl1998there} for an introduction.
\subsection{The biases of unobserved confounding}
Consider the following structural equation for observed variables $(X,Y)$,
\begin{align}
\label{nonlinear_model}
Y := f\circ\phi(X) + E,
\end{align}
where $f := f(\cdot;\beta_0)$ is a predictor acting on a representation $Z:=\phi(X)$ and $E$ stands for potential sources of mispecification and unexplained sources of variability. For a given sample of data $(x,y)$ and $z = \phi(x)$, the optimal prediction rule $\hat\beta$ is often taken to minimize squared residuals, with $\hat\beta$ the solution to the normal equations: $\nabla_{\beta} f(z;\hat\beta)y = \nabla_{\beta} f(z;\hat\beta)f(z;\hat\beta)$, where $\nabla_{\beta} f(z;\hat\beta)$ denotes the column vector of gradients of $f$ with respect to parameters $\beta$ evaluated at $\hat\beta$. Consider the Taylor expansion of $f(z;\beta_0)$ around an estimate $\hat\beta$ sufficiently close to $\beta_0$, $f(z;\beta_0) \approx f(z;\hat\beta) + \nabla_{\beta} f(z;\hat\beta)^T (\beta_0 - \hat\beta)$. Using this approximation in our first order optimality condition we find,
\begin{align}
\label{least_squares_consistency}
\nabla_{\beta} f(z;\hat\beta)\nabla_{\beta} f(z;\hat\beta)^T(\beta_0 - \hat\beta) + v = \nabla_{\beta} f(z;\hat\beta) \epsilon,
\end{align}
where $v$ is a scaled disturbance term that includes the rest of the linear approximation of $f$ and is small asymptotically; $\epsilon:= y - f(z;\hat\beta)$ is the residual. $\hat \beta$ is consistent for the true $\beta_0$ if and only if $\nabla_{\beta} f(z;\hat\beta) \epsilon \rightarrow 0$ in probability. Consistency is satisfied if $E$ (all sources of variation in $Y$ not captured by $X$) are independent of $X$ (i.e. exogenous) or in other words if all common causes or confounders to both $X$ and $Y$ have been observed. If this is not the case, conventional regression may assign significant associations to variables that are neither directly nor indirectly related to the outcome, and as a consequence we have no performance guarantees on new data with changes in the distribution of these variables.
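The bias is easy to demonstrate numerically. For the structural model of Figure \ref{Fig1}, a least-squares fit assigns substantial weight to $X_1$ even though the causal coefficients are $(0,1)$ (a minimal sketch in Python with NumPy, using a linear $f$ for simplicity):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
H = rng.normal(size=n)              # unobserved confounder
X2 = -H + rng.normal(size=n)
Y = X2 + 3 * H + rng.normal(size=n)
X1 = Y + X2 + rng.normal(size=n)    # X1 is caused by Y

beta = np.linalg.lstsq(np.column_stack([X1, X2]), Y, rcond=None)[0]
print(beta.round(2))                # far from the causal (0, 1)
\end{verbatim}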
\vspace{-0.1cm}
\subsection{Invariances with multiple environments}
\label{sec_invariances}
The underlying structural mechanism $\mathbb F$, that also relates unobserved with observed variables, even if unknown, is stable irrespective of manipulations in exogenous variables that may give rise to heterogeneous data sources. Under certain conditions, statistical footprints emerge from this structural invariance across different data sources that are testable from data, see e.g. \cite{peters2016causal,ghassami2017learning,rothenhausler2019causal}.
\textbf{Assumption 1}. We assume that we have access to input and output pairs $(X,Y)$ observed across heterogeneous data sources or environments $e$, defined as a probability distribution $P_e$ over an observation space $\mathcal X \times \mathcal Y$ that arises, just like new unseen data, from manipulations in the distribution of exogenous variables in an underlying model $\mathcal M$.
\textbf{Assumption 2}. For the remainder of this section \textit{only}, consider restricting ourselves to data sources emerging from manipulations in exogenous $E_X$ (i.e. manipulations of observed variables) in an underlying additive noise model with unobserved confounding.
It may be shown then, by considering the distributions of error terms $Y - f\circ\phi(X)$ and its correlation with any function of $X$, that the inner product $\nabla_{\beta} f(z;\beta_0) \epsilon$, even if \textit{non-zero} due to unobserved confounding as shown in (\ref{least_squares_consistency}), converges to a \textit{fixed unknown value equal across training environments}.
\textbf{Proposition 1} (Derivative invariance). \textit{For any two environment distributions $P_i$ and $P_j$ generated under assumption 2, it holds that, up to disturbance terms, the causal parameter $\beta_0$ satisfies,}
\begin{align}
\label{optimal_beta}
\underset{(x,y)\sim P_i}{\mathbb E}\nabla_{\beta} f(z;\beta_0)(y - f(z;\beta_0)) - \underset{(x,y)\sim P_j}{\mathbb E}\nabla_{\beta} f(z;\beta_0)(y - f(z;\beta_0)) = 0.
\end{align}
\textit{Proof}. All proofs are given in the Appendix.
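For the linear model of Figure \ref{Fig1}, where $\nabla_{\beta}f = x$, this invariance can be checked numerically: the moment $\mathbb E[x(y-x^{\sf T}\beta_0)]$ is non-zero but identical across environments that differ only in $\sigma^2$ (a minimal sketch in Python with NumPy):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
beta0 = np.array([0.0, 1.0])        # causal coefficients of Figure 1
for sigma in (1.0, 2.0**0.5):       # environments with var(E_X) = 1, 2
    n = 10**6
    H = rng.normal(size=n)
    X2 = -H + sigma * rng.normal(size=n)
    Y = X2 + 3 * H + rng.normal(size=n)
    X1 = Y + X2 + sigma * rng.normal(size=n)
    X = np.column_stack([X1, X2])
    print((X.T @ (Y - X @ beta0) / n).round(2))  # ~ (4, -3) both times
\end{verbatim}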
This \textit{invariance} across environments must hold for causal parameters (under certain conditions) \textit{even} in the presence of unobserved confounders. A few remarks are necessary concerning this relationship and its extrapolation properties.
\subsection{Remarks}
\begin{itemize}[leftmargin=*]
\item The first remark is based on the observation that, up to a constant, each inner product in (\ref{optimal_beta}) is the gradient of the squared error with respect to $\beta$. This reveals that the optimal predictor, in the presence of unobserved confounding, is not one that produces minimum loss but one that produces a \textit{non-zero} loss gradient \textit{equal} across environments. Therefore, seeking minimum error solutions, even in the population case, produces estimators with \textit{necessarily} unstable correlations because the variability due to unobserved confounders is not explainable from observed data. Forcing gradients to be zero then \textit{forces} models to utilize artifacts of the specific data collection process that are not related to the input-output relationship; and, for this reason, will not in general perform outside training data.
\item From (\ref{optimal_beta}) we may pose a sequence of moment conditions for each pair of available environments. We may then seek solutions $\beta$ that make all of them small simultaneously. Solutions are unique if the set of moments is sufficient to identify $\beta^{\star}$ exactly (and given our model assumptions may be interpreted as causal and robust to certain interventions). In the Appendix, we revisit our introductory example to show that indeed this is the case, and that other invariances exploited for causality and robustness (such as \cite{arjovsky2019invariant,krueger2020out}) do not hold in the presence of unobserved confounding and give biased results.
\item In practice, only a \textit{set} of solutions may be identified with the moment conditions in Proposition 1 with no performance guarantees for any individual solutions, and no guarantees if assumptions fail to hold. Moreover, even if accessible, we have seen in Figure \ref{Fig1} that causal solutions may not always be desirable under more general shifts (for example shifts in unobserved variables).
\end{itemize}
\section{A Robust Optimization Perspective}
\label{sec_gen}
In this section we motivate a relaxation of the ideas presented using the language of robust optimization. One strategy is to optimize for the worst case loss across environments which ensures accurate prediction on any convex mixture of training environments \cite{ben2009robust}. The space of convex mixtures, however, can be restrictive. For instance, in high-dimensional systems perturbed data is likely to occur at a new vertex not represented as a linear combination of training environments. We desire performance guarantees outside this convex hull.
We consider in this section problems of the form of (\ref{robust_pop}) over an \textit{affine} combination of training losses, similarly to \cite{krueger2020out}, and show that they relate closely to the invariances presented in Proposition 1. Let $\Delta_{\eta}:=\{\{\alpha_e\}_{e\in\mathcal E}: \alpha_e \geq -\eta, \sum_{e\in\mathcal E} \alpha_e = 1\}$ be a collection of scalars and consider the set of distributions defined by $\mathcal P := \{\sum_{e\in\mathcal E} \alpha_e P_e : \{\alpha_e\} \in\Delta_{\eta}\}$, all affine combinations of distributions defined by the available environments. $\eta \in \mathbb R$ defines the strength of the extrapolation, $\eta = 0$ corresponds to a convex hull of distributions but above that value the space of distributions is richer, going beyond what has been observed: affine combinations amplify the strength of manipulations that generated the observed training environments. The following theorem presents an interesting upperbound to the robust problem (\ref{robust_pop}) with affine combinations of errors.
\textbf{Theorem 1} \textit{Let $\{P_e\}_{e \in \mathcal E}$, be a set of available training environments. Further, let the parameter space of $\beta$ be open and bounded. Then, the following inequality holds,}
\begin{align*}
\underset{\{\alpha_e\} \in \Delta_{\eta}}{\sup}\hspace{0.1cm} &\sum_{e\in\mathcal E} \alpha_e \underset{(x,y)\sim P_e}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) \leq \underset{(x,y)\sim P_e, e\sim \mathcal E}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) \\
&+ (1 + n\eta) \cdot C \cdot
\Big|\Big| \hspace{0.1cm} \underset{e\in \mathcal E}{\sup}\hspace{0.1cm}\underset{(x,y)\sim P_e}{\mathbb E} \nabla_{\beta}\mathcal L\left(f \circ \phi(x),y \right) - \underset{(x,y)\sim P_e, e\sim \mathcal E}{\mathbb E} \nabla_{\beta}\mathcal L\left(f \circ \phi(x),y \right)\hspace{0.1cm} \Big |\Big|_{L_2},
\end{align*}
\textit{where $||\cdot||_{L_2}$ denotes the $L_2$-norm, $C$ is a constant that depends on the domain of $\beta$, $n:= |\mathcal E|$ is the number of available environments and $e\sim\mathcal E$ loosely denotes sampling indices with equal probability from $\mathcal E$.}
\textbf{Interpretation.} This bound illustrates the trade-off between the invariance of Proposition 1 (second term of the inequality above) and prediction in-sample (the first term). A combination of them upper-bounds a robust optimization problem over affine combinations of training environments, and depending how much we weight each objective (prediction versus invariance) we can expect solutions to be more or less robust. Specifically, for $\eta = -1/n$ the objective reduces to ERM, but otherwise the upperbound increasingly weights differences in loss derivatives (violations of the invariances of section \ref{sec_invariances}), and in the limit ($\eta\rightarrow\infty$) can be interpreted to be robust at least to \textit{any} affine combination of training losses.
\textbf{Remark on assumptions.} Note that the requirement that $\mathbb F$ be fixed, or Assumption 2, is not necessary for generalization guarantees. As long as new data distributions can be represented as affine combinations of training distributions, we can expect performance to be at least as good as that observed for the robust problem in Theorem 1.
\subsection{Proposed objective}
Our proposed learning objective is to guide the optimization of $\phi$ and $\beta$ towards solutions that minimize the upperbound in Theorem 1. Using Lagrange multipliers we define the general objective,
\begin{align}
\label{robust_objective}
\underset{\beta,\phi}{\text{minimize }}\underset{(x,y)\sim P_e, e\sim \mathcal E}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) + \lambda\cdot \underset{e\sim \mathcal E}{\text{Var}}\left(|| \underset{(x,y)\sim P_e}{\mathbb E}\nabla_{\beta}\mathcal L\left(f\circ \phi(x),y \right)||_{L_2}\right),
\end{align}
where $\lambda \geq 0$. We call this problem Derivative Invariant Risk Minimization (DIRM). This objective shares similarities with the objective proposed in \cite{krueger2020out}. The authors considered enforcing equality in environment-specific losses, rather than derivatives, as regularization, which can also be related to a robust optimization problem over an affine combination of errors. We have seen in section \ref{sec_invariances}, however, that equality in losses is not expected to hold in the presence of unobserved confounders.
\textbf{Remark on optimization.} The $L_2$ norm in the regularizer is an integral over the domain of values of $\beta$ and is in general intractable. We approximate this objective in practice with norms on functional evaluations at each step of the optimization rather than explicitly computing the integral. We give more details and show this approximation to be justified empirically in the Appendix.
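A minimal sketch of one such training step (assuming PyTorch; \texttt{model}, the per-environment batches and the penalty weight are placeholders, and for simplicity the gradient norm is taken over all model parameters rather than over the predictor parameters $\beta$ only):
\begin{verbatim}
import torch

def dirm_step(model, env_batches, optimiser, lam=1.0):
    """One update on the objective above: mean environment loss
    plus the variance of per-environment gradient norms."""
    params = list(model.parameters())
    losses, grad_norms = [], []
    for x, y in env_batches:          # one batch per environment
        loss = torch.nn.functional.mse_loss(model(x).squeeze(-1), y)
        g = torch.autograd.grad(loss, params, create_graph=True)
        grad_norms.append(torch.sqrt(sum((gi**2).sum() for gi in g)))
        losses.append(loss)
    objective = (torch.stack(losses).mean()
                 + lam * torch.stack(grad_norms).var())
    optimiser.zero_grad()
    objective.backward()
    optimiser.step()
    return float(objective)
\end{verbatim}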
\subsection{Robustness in terms of interventions}
\label{sec_rob_inter}
In this section we give a causal perspective on the robustness achieved by our objective in (\ref{robust_objective}). As is apparent in Theorem 1, performance guarantees on data from a new environment depend on the relationship of new distributions with those observed during training.
Let $f\circ\phi_{\lambda \rightarrow \infty}$ minimize $\mathcal L$ among all functions that satisfy all pairs of moment conditions defined in (\ref{optimal_beta}); that is, a solution to our proposed objective in (\ref{robust_objective}) with $\lambda\rightarrow\infty$. At optimality, it holds that gradients evaluated at this solution are equal across environments. As a consequence of Theorem 1, the loss evaluated at this solution with respect to \textit{any} affine combination of environments is bounded by the average loss computed in-sample (denoted $L$, say),
\begin{align}
\sum_{e\in\mathcal E} \alpha_e \underset{(x,y)\sim P_e}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) \leq L, \qquad\text{for any set of } \alpha_e \in \Delta_{\eta}.
\end{align}
From the perspective of interventions in the underlying causal mechanism, this can be seen as a form of data-driven predictive stability across a range of distributions whose perturbations occur in the same direction as those observed during training.
\textbf{Example.} Consider distributions $P$ of a univariate random variable $X$ given by affine combinations of training distributions $P_0$ with mean $0$ and $P_1$ which, due to intervention, has mean $1$, so that, using our notation, $\mathbb E_PX = \alpha_0\mathbb E_{P_0}X + \alpha_1\mathbb E_{P_1}X$ with $\alpha_0=1-\alpha_1\geq -\eta$ and $\alpha_1\geq-\eta$. Then $\mathbb E_PX=\alpha_1\in[-\eta,1+\eta]$, and thus we may expect DIRM to be robust to distributions subject to interventions of magnitude up to $\eta$ on $X$, and of any magnitude in the limit $\eta\rightarrow\infty$ (or equivalently $\lambda\rightarrow\infty$). With this reasoning, however, note that the ``diversity'' of training environments has a large influence on whether we can interpret solutions to be causal (for which we need interventions on all observed variables and unique minimizers) and on the robustness guarantees: for instance, with equal means in $P_0$ and $P_1$, affine combinations would not extrapolate to interventions in the mean of $X$. This is why we say that interventions in test data must have the same ``direction'' as interventions in training data (but interventions can occur on observed, unobserved or target variables).
\begin{minipage}{.6\textwidth}
Using our simple example in Figure \ref{Fig1} to verify this fact empirically, we consider 3 scenarios corresponding to interventions on exogenous variables of $X, H$ and $Y$. In each, training data from two environments is generated with means in the distribution of the concerned variables set to a value of 0 and 1 respectively (that is interventions occur on the same variables during training and testing), everything else being equal ($\sigma^2 := 1, H:= E_H\sim \mathcal N(0,1)$). Performance is evaluated on data generated by increasing the shift in the variable being studied up to a mean of 5. In all cases, we see in Figure \ref{stability} that the performance of $f\circ\phi_{\lambda \rightarrow \infty}$ is stable to increasing perturbations in the system. No other learning paradigm has this property.
\end{minipage}
\hfill
\begin{minipage}{.32\textwidth}
\begin{figure}[H]
\captionsetup{skip=1pt}
\centering
\includegraphics[width=0.9\textwidth]{Figures/stability.png}
\caption{Stability to general shifts.}
\label{stability}
\end{figure}
\end{minipage}
\subsection{Stability of certain optimal solutions}
\label{stability_section}
A special case may also be considered when the underlying system of variables and the available environments allow for optimal solutions $f\circ\phi_{\lambda \rightarrow \infty}$ and $f\circ\phi_{\lambda = 0}$ to coincide. In this case, the learned representation $\phi(x)$ results in a predictor $f$ optimal on average \textit{and} simultaneously with equal gradient in each environment, thus,
\begin{align*}
||\underset{(x,y)\sim P_e}{\mathbb E}\nabla_{\beta}\mathcal L\left(f\circ \phi(x),y \right)||_{L_2} = 0, \qquad \text{for all } e\in\mathcal E.
\end{align*}
For this representation $\phi$, it follows that optimal solutions $f$ learned on any new dataset sampled from an affine combination of training distributions coincide with this special solution. This gives us a sense of reproducibility of learning: if a specific feature is significant for predictions on the whole range of $\lambda$ with the available data, then it will likely be significant on new (related) data. We explore this further in section \ref{sec_reproducibility}.
\textbf{Contrast with IRM} \cite{arjovsky2019invariant}. The above special case where all solutions in our hyperparameter range agree has important parallels with IRM. The authors proposed a learning objective enforcing representations of data with minimum error on average and across environments, such that at optimum $\mathbb E_{P_i} Y|\phi^{\star}(X) = \mathbb E_{P_j} Y|\phi^{\star}(X)$ for any pair $(i,j)\in\mathcal E$. \textit{Without} unobserved confounding, our proposal and IRM agree. But, \textit{with} unobserved confounding, minimum error solutions of IRM by design converge to spurious associations (see remarks after Proposition 1) and are not guaranteed to generalize to more general environments. For example, in the presence of additive unobserved confounding $H$, irrespective of $\phi$, we may have $\mathbb E_{P_i} Y|\phi^{\star}(X) = \phi^{\star}(X) + \mathbb E_{P_i} H \neq \phi^{\star}(X) + \mathbb E_{P_j} H = \mathbb E_{P_j} Y|\phi^{\star}(X)$ if the means of $H$ differ. The sought invariance then does not hold.
\section{Related work}
There has been a growing interest in interpreting shifts in distribution to fundamentally arise from interventions in the causal mechanisms of data. Peters et al. \cite{peters2016causal} exploited this link for causal inference: causal relationships by definition being invariant to the observational regime. Invariant solutions, as a result of this connection, may be interpreted also as robust to certain interventions \cite{meinshausen2018causality}, and recent work has explored learning invariances in various problem settings from a causal perspective \cite{arjovsky2019invariant,rothenhausler2019causal,krueger2020out,gimenez2020identifying}. Among those, we note the invariance proposed in \cite{rothenhausler2019causal}, the authors seek to recover causal solutions with unobserved confounding. Generalization properties of these solutions were rarely studied, with one exception being Anchor regression \cite{rothenhausler2018anchor}. The authors proposed to interpolate between empirical risk minimization and causal solutions with explicit robustness to certain interventions in a linear model. The present work may be interpreted as a non-linear formulation of this principle with a more general study of generalization.
Notions of invariance have been found useful in the broader field of domain generalization without necessarily referring to an underlying causal model. For instance, recent work has included the use of data augmentation \cite{volpi2018generalizing,shankar2018generalizing}, meta-learning to simulate domain shift \cite{li2018learning,zhang2020adaptive}, contrastive learning \cite{kim2021selfreg}, adversarial learning of representations invariant to the environment \cite{ganin2016domain, albuquerque2019adversarial}, and applications in structured medical domains \cite{jin2020enforcing}. Closest to DIRM are \cite{koyama2020out} and recently \cite{shi2021gradient}, which explicitly use loss derivatives with respect to model parameters to regularize ERM solutions, without however deriving their objectives with respect to shifts in an underlying causal model or with respect to an underlying robust optimization problem.
A further line of research, instead of appealing explicitly to invariances between environments, proposes to directly solve a worst-case optimization problem (\ref{robust_pop}). One popular approach is to define $\mathcal P$ as a ball around the empirical distribution $\hat P$, for example using $f$-divergences or Wasserstein balls of a defined radius, see e.g. \cite{kuhn2019wasserstein,duchi2016statistics,duchi2019distributionally,sinha2017certifying,wozabal2012framework,abadeh2015distributionally,duchi2018learning}. These approaches are general and do not require multiple environments, but this also means that the uncertainty sets are defined agnostically to the geometry of plausible shifted distributions, and may therefore lead to solutions, when tractable, that are overly conservative or do not satisfy generalization requirements \cite{duchi2019distributionally}.
\section{Experiments}
In this section, we conduct an analysis of generalization performance on shifted image, speech and tabular data from the medical domain.
Data linkages, electronic health records, and bio-repositories are increasingly being collected to inform medical practice. As a result, prediction models derived from healthcare data are being put forward as potentially revolutionizing decision-making in hospitals. Recent studies \cite{cabitza2017unintended,venugopalan2019s},
however, suggest that their performance may reflect not only their ability to identify disease-specific
features, but also their ability to exploit spurious correlations due to unobserved confounding (such as
varying data collection practices): a major challenge for the reliability of decision support systems.
In our comparisons we consider the following baseline algorithms (a schematic sketch of the penalty-based baselines follows the list):
\begin{itemize}[leftmargin=*, itemsep=0pt]
\item Empirical Risk Minimization (\textbf{ERM}) that optimizes for minimum loss agnostic of data source.
\item Group Distributionally Robust Optimization (\textbf{DRO}) \cite{sagawa2019distributionally} that optimizes for minimum loss across the worst convex mixture of training environments.
\item Domain Adversarial Neural Networks (\textbf{DANN}) \cite{ganin2016domain} that use domain adversarial training to facilitate transfer by augmenting the neural network architecture with an additional domain classifier to enforce the distribution of $\phi(X)$ to be the same across training environments.
\item Invariant Risk Minimization (\textbf{IRM}) \cite{arjovsky2019invariant} that regularizes ERM to ensure that the representation $\phi(X)$ is simultaneously optimal in every observed environment.
\item Risk Extrapolation (\textbf{REx}) \cite{krueger2020out} that regularizes for equality in environment losses instead of considering their derivatives.
\end{itemize}
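As referenced above, the following is a schematic sketch of the penalty-based baselines (REx and an IRM-style objective); this is illustrative rather than the authors' implementations, and the per-environment gradient norms for IRM are assumed to be supplied by automatic differentiation:
\begin{verbatim}
import numpy as np

def rex_objective(env_losses, lam=1.0):
    """REx: mean environment loss plus a penalty on their spread."""
    env_losses = np.asarray(env_losses)   # one empirical risk per env
    return env_losses.mean() + lam * env_losses.var()

def irm_objective(env_losses, grad_norms, lam=1.0):
    """IRM-style: mean loss plus squared per-environment gradient
    norms of a dummy classifier (supplied by autodiff in practice)."""
    return np.mean(env_losses) + lam * np.sum(np.square(grad_norms))
\end{verbatim}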
\textbf{Appendix.} We make additional comparisons in the Appendix on domain generalization benchmarks including VLCS \cite{fang2013unbiased}, PACS \cite{li2017deeper} and Office-Home \cite{venkateswara2017deep} using the DomainBed platform \cite{gulrajani2020search}. All experimental details are standardized across experiments and algorithms (equal network architectures and hyperparameter optimization techniques), and all specifications can be found in the Appendix.
\begin{table*}[t]
\fontsize{9.5}{9.5}\selectfont
\centering
\begin{tabular}{ |p{1.2cm}|C{1.6cm}|C{1.6cm}||C{1.6cm}|C{1.6cm}||C{1.6cm}|C{1.6cm}| }
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{2}{c||}{\textbf{Pneumonia Prediction}} & \multicolumn{2}{c||}{\textbf{Parkinson Prediction}}&\multicolumn{2}{c|}{\textbf{Survival Prediction}}\\
\cline{2-7}
\multicolumn{1}{c|}{} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing}&\textbf{Training} & \textbf{Testing}\\
\hline
ERM & 91.6 ($\pm$ .7) & 52.7 ($\pm$ 1) & 95.5 ($\pm$ .5) & 62.8 ($\pm$ 1) & 93.2 ($\pm$ .4) & 75.4 ($\pm$ .9)\\
\hline
DRO & 91.2 ($\pm$ .5) & 53.0 ($\pm$ .6) & 94.0 ($\pm$ .3) & 69.9 ($\pm$ 2)& 90.4 ($\pm$ .4) & 75.2 ($\pm$ .8) \\
\hline
DANN & 91.3 ($\pm$ 1) & 57.7 ($\pm$ 2) & 91.6 ($\pm$ 2) & 51.4 ($\pm$ 5) & 89.0 ($\pm$ .8) & 73.8 ($\pm$ .9)\\
\hline
IRM & 89.3 ($\pm$ 1) & 58.6 ($\pm$ 2) & 93.7 ($\pm$ 1) & 71.4 ($\pm$ 2)& 91.7 ($\pm$ .6) & 75.6 ($\pm$ .8)\\
\hline
REx & 87.6 ($\pm$ 1) & 57.7 ($\pm$ 2) & 92.1 ($\pm$ 1) & 72.5 ($\pm$ 2)& 91.1 ($\pm$ .5) & 75.1 ($\pm$ .9)\\
\hline
\textbf{DIRM} & 84.4 ($\pm$ 1) & 63.1 ($\pm$ 3) & 93.0 ($\pm$ 2)& 72.4 ($\pm$ 2) & 91.2 ($\pm$ .6) & 77.6 ($\pm$ 1) \\
\hline
\end{tabular}
\caption{Accuracy of predictions in percentages ($\%$). Uncertainty intervals are standard deviations. All datasets are approximately balanced, so $50\%$ accuracy is as good as random guessing.}
\label{perf}
\end{table*}
\subsection{Diagnosis of Pneumonia with Chest X-ray Data}
In this section, we attempt to replicate the study in \cite{zech2018confounding}. The authors observed a tendency of image models to exploit spurious correlations for the diagnosis of pneumonia from patient chest X-rays, correlations that do not reproduce outside of the training data. We use publicly available data from the National Institutes of Health (NIH) \cite{wang2017chestx} and the Guangzhou Women and Children’s Medical Center (GMC) \cite{kermany2018identifying}. Differences in distribution are manifest, and can be seen for example in the top edge of the mean pneumonia-diagnosed X-rays shown in Figure \ref{x_ray}. In this experiment, we exploit this spurious correlation between site and pathology to demonstrate the need for solutions robust to changes in site-specific features.
\begin{minipage}{.6\textwidth}
\textbf{Experiment design.} We construct two training sets that serve as training environments. In the first environment, $90\%$ of the pneumonia-diagnosed patients were drawn from the NIH dataset and the remaining $10\%$ from the GMC dataset; in the second, the proportions were $80\%$ and $20\%$. The reverse logic ($10\%$ NIH / $90\%$ GMC split) was followed for the test set, as sketched below. This encourages algorithms to use NIH-specific correlations for prediction during training, which are not expected to extrapolate during testing.
\end{minipage}
\hfill
\begin{minipage}{.32\textwidth}
\begin{figure}[H]
\vspace{-0.4cm}
\captionsetup{skip=5pt}
\centering
\includegraphics[width=1\textwidth]{Figures/x_ray.png}
\caption{Mean pneumonia X-ray.}
\label{x_ray}
\end{figure}
\end{minipage}
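A minimal sketch of this environment construction (the arrays \texttt{nih\_pos} and \texttt{gmc\_pos} are hypothetical collections of pneumonia-positive cases):
\begin{verbatim}
import numpy as np

def mix_environment(nih_pos, gmc_pos, frac_nih, n, rng):
    """Draw n pneumonia-positive cases with a given NIH/GMC mix."""
    n_nih = int(frac_nih * n)
    idx_nih = rng.choice(len(nih_pos), n_nih, replace=False)
    idx_gmc = rng.choice(len(gmc_pos), n - n_nih, replace=False)
    return np.concatenate([nih_pos[idx_nih], gmc_pos[idx_gmc]])

# env1 = mix_environment(nih_pos, gmc_pos, 0.9, n, rng)
# env2 = mix_environment(nih_pos, gmc_pos, 0.8, n, rng)
# test = mix_environment(nih_pos, gmc_pos, 0.1, n, rng)
\end{verbatim}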
Our results (Table \ref{perf}) show that DIRM significantly outperforms all baselines, suggesting that the proposed invariance guides the algorithm towards better solutions in the presence of changes due to unobserved factors.
\subsection{Diagnosis of Parkinson's Disease with Speech}
Parkinson's disease is a progressive nervous system disorder that affects movement. Symptoms start gradually, sometimes with a barely noticeable tremor in a patient's voice. This section investigates the performance of predictive models for the detection of Parkinson's disease, trained on voice recordings of vowels, numbers and individual words and tested on vowel recordings of unseen patients.
\textbf{Experiment design.} We used the UCI Parkinson Speech Dataset with given training and testing splits \cite{sakar2013collection}. Even though the distributions of features will differ in different types of recordings and patients, we would expect the underlying patterns in speech to reproduce across different samples. However, this is not the case for correlations learned with baseline training paradigms (Table \ref{perf}). This suggests that spurious correlations due to the specific type of recording (e.g. different vowels or numbers), or even chance associations emphasized due to low sample sizes (120 examples), may be responsible for poor generalization performance. Our results show that correcting for spurious differences between recording types (DIRM, IRM, REx) can improve performance substantially over ERM although the gain of DIRM over competing methods is less pronounced.
\subsection{Survival Prediction with Health Records}
This section investigates whether predictive models transfer across data from different medical studies \cite{meta2012survival}, all containing patients that experienced heart failure. The problem is to predict survival within 3 years of experiencing heart failure from a total of 33 demographic variables. We introduce a twist, however: we explicitly introduce unobserved confounding by omitting certain predictive variables. The objective is to test performance on new studies with \textit{shifted} distributions, while knowing that these shifts occur predominantly due to variability in unobserved variables.
\textbf{Experiment design.} Confounded data is constructed by omitting a patient's age, found in a preliminary correlation analysis to be associated with the outcome as well as with other significant predictors such as blood pressure and body mass index (that is, it confounds the association between blood pressure, body mass index, and survival). This example introduces unobserved confounding explicitly, but similar situations are expected across many settings and application domains. For instance, such a shift might occur if a prediction model is deployed on patients in a different hospital or country than it was trained on. Often the distributions of highly relevant variables (e.g. socio-economic status, ethnicity, diet, etc.) will differ, even though this information is rarely recorded in the data. We consider the 5 studies in MAGGIC with over 500 patients and balanced death rates. Performance results are averages over 5 experiments; in each case, one study is used for testing and the remaining four are used for training. DIRM's performance in this case is competitive with the best-performing baselines, which serves to confirm its desirable performance profile.
\subsubsection{Reproducibility of variable selection}
\label{sec_reproducibility}
Prediction algorithms are often used to infer influential features in outcome prediction. It is important that this inference be consistent across environments, even if some variables are perturbed or shifted. Healthcare is challenging in this respect because patient heterogeneity is high. We showed in section \ref{stability_section} that if the optimal predictor is invariant as a function of $\lambda\in[0,\infty)$, then optimal predictors estimated on \textit{every} new dataset in the span of observed distributions should be \textit{stable}. We test this aspect in this section, considering a diluted form of stability for feature selection ($\lambda\in[0,1]$ instead of $\lambda\in[0,\infty)$).
\begin{minipage}{.6\textwidth}
\textbf{Experiment design.} For a single layer network, we consider significant those covariates with estimated parameters bounded away from zero in all solutions in the range $\lambda\in[0,1]$. Comparisons are made with ERM (conventional logistic regression), and both methods are trained separately on 100 different random pairs of the 33 MAGGIC studies, that is, 100 different environments on which the algorithms may select different relevant features. Figure \ref{maggic} shows how many features (among the top 10 discovered features) intersect across the 100 experiments. For instance, approximately $6$ features intersect across $80/100$ runs for DIRM, compared to only $4$ for ERM. DIRM thus recovers influential features more consistently than ERM.
\end{minipage}
\hfill
\begin{minipage}{.37\textwidth}
\begin{figure}[H]
\vspace{-1.5em}
\captionsetup{font=small,skip=0pt}
\centering
\includegraphics[width=0.7\textwidth]{Figures/stability_maggic.png}
\caption{Reproducibility of variable selection.}
\label{maggic}
\end{figure}
\end{minipage}
\section{Conclusions}
We have studied the problem of out-of-sample generalization from a new perspective, grounded in the underlying causal mechanism generating new data, which may arise from shifts in observed, unobserved or target variables. Our proposal is a new objective, DIRM, that is provably robust to certain shifts in distribution, and is informed by new statistical invariances in the presence of unobserved confounders. Our experiments show that we may expect better generalization performance and also better reproducibility of influential features in problems of variable selection. A limitation of DIRM is that its robustness guarantees crucially depend on the (unobserved) properties of the available data: DIRM generally does not guarantee protection against unsuspected events. For example, in Theorem 1, the supremum contains distributions that lie in the affine combination of training environments, as opposed to arbitrary distributions.
\section*{Acknowledgements}
This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1, the ONR and the NSF grants number 1462245 and number 1533983.
\section{Introduction}
\input{sections/abstract}
\input{sections/introduction}
\input{sections/prerequisites}
\input{sections/stress-tests}
\input{sections/pg-ig}
\input{sections/experiments}
\input{sections/results}
\input{sections/related-work}
\input{sections/conclusion}
\input{sections/acknowlegdements}
\section*{Acknowledgements}
We would like to thank Christoph Alt, David Harbecke, Moritz Augustin and Arne Binder for their helpful feedback. Furthermore, we would like to thank Jonas Mikkelsen for helping with the code review. This work has been supported by the German Federal Ministry of Education and Research as part of the project BBDC2 (01IS18025E).
\section{Hyperparameters}
Below, we cite the formulas of the methods that involve hyperparameters and report the hyperparameter values we use in our experiments.
\subsection{(Pattern-Guided) Integrated Gradients}
Integrated Gradients \cite{sundararajan2017axiomatic} is given by
\begin{equation*}
\phi_{f,i}(x) = \frac{x_{i} - \bar{x}_{i}}{m}\sum_{k=1}^{m} \frac{\partial f(\bar{x} + \frac{k}{m}(x - \bar{x}))}{\partial x_{i}}
\end{equation*}
In our experiments, $\bar{x}=0, m=25$. For the proposed pattern-guided version we use the same hyperparameters.
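For concreteness, a minimal NumPy sketch of this Riemann-sum estimator (not our released implementation; \texttt{f\_grad} is a hypothetical callable returning $\partial f/\partial x$ at a given point):
\begin{verbatim}
import numpy as np

def integrated_gradients(f_grad, x, baseline=None, m=25):
    """Riemann-sum approximation of Integrated Gradients.

    f_grad(z) must return df/dz at the point z (same shape as z).
    """
    if baseline is None:
        baseline = np.zeros_like(x)      # our choice: baseline = 0
    total = np.zeros_like(x)
    for k in range(1, m + 1):
        total += f_grad(baseline + (k / m) * (x - baseline))
    return (x - baseline) * total / m    # elementwise product
\end{verbatim}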
\subsection{SmoothGrad$^{2}$}
SmoothGrad$^{2}$ \cite{hooker2019benchmark} is given by
\begin{equation*}
\phi_{f,i}(x) = \frac{1}{n}\sum_{1}^{n}\left(\frac{\partial f(x')}{\partial x'_{i}}\right)^{2}
\end{equation*}
where $x' = x + \mathcal{N}(\mu, \sigma^{2})$ is sampled anew for each of the $n$ draws. In our experiments, $n=25, \mu=0, \sigma^{2}=0.15$.
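Under the same assumptions on \texttt{f\_grad} as above, a sketch of this noise-averaged estimator:
\begin{verbatim}
import numpy as np

def smoothgrad_sq(f_grad, x, n=25, mu=0.0, sigma2=0.15, seed=0):
    """Mean of squared input gradients under Gaussian input noise."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x)
    for _ in range(n):
        x_noisy = x + rng.normal(mu, np.sqrt(sigma2), size=x.shape)
        acc += f_grad(x_noisy) ** 2
    return acc / n
\end{verbatim}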
\subsection{SmoothGrad-IG}
SmoothGrad-IG \cite{smilkov2017smoothgrad} is given by
\begin{equation*}
\phi_{f,i}(x) = \frac{x_{i} - \bar{x}_{i}}{m}
\sum_{k=1}^{m}
\frac{1}{n}\sum_{1}^{n} \frac{\partial f(\bar{x} + \frac{k}{m}(x' - \bar{x}))}{\partial x'_{i}}
\end{equation*}
where $x'$ is defined above. In our experiments, $\bar{x}=0, m=25, n=25, \mu=0, \sigma^{2}=0.15$.
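Under the same assumptions on \texttt{f\_grad}, a sketch of this doubly-averaged estimator:
\begin{verbatim}
import numpy as np

def smoothgrad_ig(f_grad, x, baseline=None, m=25, n=25,
                  mu=0.0, sigma2=0.15, seed=0):
    """Integrated Gradients averaged over Gaussian input perturbations."""
    if baseline is None:
        baseline = np.zeros_like(x)
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x)
    for k in range(1, m + 1):
        for _ in range(n):
            x_noisy = x + rng.normal(mu, np.sqrt(sigma2), size=x.shape)
            total += f_grad(baseline + (k / m) * (x_noisy - baseline))
    return (x - baseline) * total / (m * n)
\end{verbatim}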
\subsection{VarGrad}
VarGrad \cite{hooker2019benchmark} is given by
\begin{equation*}
\phi_{f,i}(x) = var_{n}(\frac{\partial f(x')}{\partial x'_{i}})
\end{equation*}
where $x'$ is defined above. In our experiments, $n=25, \mu=0, \sigma^{2}=0.15$.
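A corresponding sketch of the variance estimator:
\begin{verbatim}
import numpy as np

def vargrad(f_grad, x, n=25, mu=0.0, sigma2=0.15, seed=0):
    """Elementwise variance of input gradients under Gaussian noise."""
    rng = np.random.default_rng(seed)
    grads = [f_grad(x + rng.normal(mu, np.sqrt(sigma2), size=x.shape))
             for _ in range(n)]
    return np.var(np.stack(grads), axis=0)
\end{verbatim}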
\subsection{Expected Gradients}
Expected Gradients \cite{erion2019learning} is given by
\begin{equation*}
\phi_{f,i}(x) = \underset{\bar{x}\sim \mathbf{x}, \alpha \sim U(0,1)}{\mathbb{E}} \left[(x_{i} - \bar{x}_{i}) \frac{\partial f(\bar{x} + \alpha(x - \bar{x}))}{\partial x_{i}}\right]
\end{equation*}
In our experiments, we sampled $\bar{x}$ 49 times from the data.
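A sketch of this estimator, with baselines drawn from an array \texttt{data} of training examples:
\begin{verbatim}
import numpy as np

def expected_gradients(f_grad, x, data, n_samples=49, seed=0):
    """Monte Carlo estimate with baselines drawn from the data."""
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(x)
    for _ in range(n_samples):
        xbar = data[rng.integers(len(data))]  # baseline sample
        alpha = rng.uniform()                 # alpha ~ U(0, 1)
        acc += (x - xbar) * f_grad(xbar + alpha * (x - xbar))
    return acc / n_samples
\end{verbatim}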
\section{Conclusion \& Future Work}
We present Pattern-Guided Integrated Gradients, which combines Integrated Gradients with PatternAttribution. Due to favorable properties, the new method passes stress tests that both parent methods fail. Furthermore, in a large-scale image degradation experiment, PGIG outperforms nine alternative methods, including its parent methods.
The image degradation metric that we use to empirically validate the new method is itself subject to debate, however \cite{hooker2019benchmark}. In the future, the new method should thus also be tested against other metrics. Furthermore, IG was shown to have a problematic degree of invariance to model randomization \cite{adebayo2018sanity}. It should be explored to what degree PGIG still exhibits this behaviour.
\section{Experiments}
\label{sec:pg-ig-experiments}
\citet{sundararajan2017axiomatic} motivate IG axiomatically but do not study their method empirically. \citet{kindermans2017learning} derive their method axiomatically as well, but they also conduct image degradation experiments, an established metric to estimate the quality of saliency methods.
For the image degradation benchmark, a growing number of patches in the input images are replaced with their mean channel values and the output activation\footnote{We explain networks after the final softmax activation.} (confidence) of the model is monitored. The order in which patches are perturbed is dictated by the accumulated saliencies of their pixels. The premise of this experiment is that the steeper the drop in confidence, the more accurately the attribution method has identified the most important features.
We, too, benchmark PGIG with the image degradation metric. Like \citet{kindermans2017learning}, we use a pre-trained VGG-16 model \cite{simonyan2014very} to generate saliency maps for the 50k images in the validation split of the ImageNet data set \cite{deng2009imagenet} that are then used to successively degrade the top 100 patches in descending order and aggregate confidence values. Images were cropped to 224x224 and normalized within $[-1,1]$. Our code base builds on the PyTorch Visual Attribution framework \cite{visualattr2018} from where we also received the patterns for VGG-16. Data and code are open source: \url{https://github.com/DFKI-NLP/pgig}.
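Schematically, the degradation loop might look as follows (an illustrative sketch, not our released code; \texttt{model} is assumed to return the confidence of the predicted class, and the routine mutates \texttt{image} in place):
\begin{verbatim}
import numpy as np

def degradation_curve(model, image, saliency, patch=8, top_k=100):
    """Replace the most salient patches by their mean channel value
    and record the model confidence after each replacement."""
    h, w = saliency.shape[-2:]
    scores = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            s = saliency[..., i:i + patch, j:j + patch].sum()
            scores.append((s, i, j))
    scores.sort(reverse=True)             # most salient patches first
    confidences = [model(image)]
    for _, i, j in scores[:top_k]:
        region = image[..., i:i + patch, j:j + patch]
        image[..., i:i + patch, j:j + patch] = region.mean(
            axis=(-2, -1), keepdims=True)
        confidences.append(model(image))  # confidence after degradation
    return confidences
\end{verbatim}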
We compare the patch ranking by PGIG against a random ordering of the patches as well as against rankings determined by nine other gradient-based explainability methods, which are organized into two classes.
The first class contains gradient aggregation methods, of which Integrated Gradients is a member. VarGrad \cite{adebayo2018sanity} and SmoothGrad \cite{smilkov2017smoothgrad} calculate the variance and the mean, respectively, of the input gradients with respect to randomly perturbed inputs. For our experiments, we use the squared version of SmoothGrad \cite{hooker2019benchmark} as it outperformed its rooted counterpart. \citet{smilkov2017smoothgrad} also suggest an extension of Integrated Gradients, which merges Integrated Gradients with SmoothGrad, denoted by SmoothGrad-IG. Expected Gradients \cite{erion2019learning} is another derivative of Integrated Gradients that uses baselines drawn from the data distribution to aggregate values.
The second class consists of modifications to the Vanilla Gradient method \cite{simonyan2013deep}, of which PatternAttribution is a member. Gradient times Input \cite{shrikumar2016not} simply multiplies the gradient saliency map by the input, while Guided Backpropagation \cite{springenberg2014striving} aims to filter for positive class evidence by inhibiting negative gradient values as well as those corresponding to negative values in the layer inputs.
\section{Introduction}
Integrated Gradients \cite{sundararajan2017axiomatic} is a gradient-based explainability method with a sound theoretical motivation that has also performed well experimentally \cite{ancona2017towards}. The authors of the PatternAttribution method \cite{kindermans2017learning}, however, compellingly argue that IG (like several other gradient-based explainability methods) does not give enough importance to the class signals in the input data. They show that model weights function to cancel the distractor (noise) in the data and thus are more informative about the distractor than the signal. The weights even tend to direct the gradient (and hence the attributions of gradient-based explainability methods) away from the signal. To direct the weights (and thus attributions) towards the signal, PA modifies them with informative directions -- called patterns -- that it learns from data.
\begin{figure}[!htb]
\centering
\subfigure[]{
\includegraphics[scale=.094]{figures/elephant/elephant_input.png}
}
\subfigure[]{
\includegraphics[scale=.094]{figures/elephant/integrate_grad.png}
}
\subfigure[]{
\includegraphics[scale=.094]{figures/elephant/pattern_vanilla_grad.png}
}
\subfigure[]{
\includegraphics[scale=.094]{figures/elephant/pattern_integrate_grad.png}
}
\caption{Saliency maps (red: positive, blue: negative) for an input (a) according to IG (b), PA (c), and PGIG (d), explaining the correct (\texttt{African Elephant}) VGG-16 classification of (a).}
\label{fig:heatmaps}
\end{figure}
We show that PA, in turn, suffers from problems that IG has overcome. In particular, it suffers from the saturation problem which occurs at function plateaus (cf. vanishing gradient), resulting in zero-attributions for input features that contributed to non-zero output activations.
In response, we propose a hybrid approach, Pattern-Guided Integrated Gradients, that combines the strengths of both methods. We demonstrate that \textsc{PGIG} passes controlled stress tests that both parent methods fail. Furthermore, we find that it outperforms its parent methods, as well as seven other prominent explainability methods, in a large-scale image degradation experiment. The new method can be implemented quickly, in particular when Integrated Gradients and PatternAttribution are already part of the code base, as is the case in several explainability frameworks \cite{visualattr2018, alber2019innvestigate}.
\section{Pattern-Guided Integrated Gradients}
In response, we propose Pattern-Guided Integrated Gradients, given by
\begin{equation}
\label{eq:pgig}
\phi_{f,i}^{PGIG}(x) = \frac{x_{i} - \bar{x}_{i}}{m}\sum_{k=1}^{m} \frac{\partial^{(p)} f(\bar{x} + \frac{k}{m}(x-\bar{x}))}{\partial^{(p)} x_{i}}
\end{equation}
PGIG sums over the saliency maps returned by PA for inputs along the straight path between the baseline and the point of interest, $x$.
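For concreteness, a minimal sketch of Eq.~\ref{eq:pgig}, assuming a callable \texttt{pa\_grad} that performs the pattern-modified backward pass $\partial^{(p)}$ described in Section~\ref{secsub:patternattribution}:
\begin{verbatim}
import numpy as np

def pgig(pa_grad, x, baseline=None, m=25):
    """Integrated Gradients with the pattern-modified (PA) gradient."""
    if baseline is None:
        baseline = np.zeros_like(x)
    total = np.zeros_like(x)
    for k in range(1, m + 1):
        total += pa_grad(baseline + (k / m) * (x - baseline))
    return (x - baseline) * total / m
\end{verbatim}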
\subsection{Properties}
Like IG, PGIG is a path method that mitigates the saturation problem, and like PA, it considers informative directions and thus avoids the distractor. Its favorable attributions are visualized in the bottom row of Fig.~\ref{fig:stresstest} (\textbf{d}).
In the linear case, we can extract the pattern from Eq.~\ref{eq:pgig} and recover IG:
\begin{align*}
\phi^{PGIG}_{f,i}(x) &= \frac{x_{i} - \bar{x}_{i}}{m}\sum_{k=1}^{m} \frac{\partial^{(p)} f(\dots)}{\partial^{(p)} x_{i}}\\ &= \frac{x_{i} - \bar{x}_{i}}{m}\sum_{k=1}^{m}p_{i}\frac{\partial f(\dots)}{\partial x_{i}}\\ &= p_{i}\phi_{f,i}^{IG}(x).
\end{align*}
PGIG scales the attribution scores of Integrated Gradients according to the class informativeness of input $i$. PGIG thus is sensitive to changes in all input dimensions $1 \leq i \leq D$ except for when $p_{i} = 0$. One interpretation of this is that PGIG is not sensitive to changes in pure distractor dimensions, such as $x_{2}$ in Fig.~\ref{fig:stresstest} (\textbf{c}), (\textbf{d}). This reasoning directly translates to intermediate and ReLU-activated layers. Regarding implementation invariance, we leave it to future work to prove or disprove this property for PGIG.
\section{Prerequisites}
\label{sec:prerequisites}
In this section, we briefly discuss Integrated Gradients and PatternAttribution to introduce notation as well as important concepts and properties of the two attribution methods. We consider an attribution method to be a function $\phi_{f,i}(x): \mathbb{R}^{D} \rightarrow \mathbb{R}$ that maps each input feature $i\in\{1\dots D\}$ in $x \in \mathbb{R}^{D}$ to a real number that signifies the importance of input $i$ to the output of model $f: \mathbb{R}^{D} \rightarrow \mathbb{R}$.\footnote{For simplicity, and without loss of generality, we assume one-dimensional outputs throughout this paper.}
\subsection{Integrated Gradients}
\label{secsub:integrated-gradients}
The attributions provided by Integrated Gradients are a summation of the gradient attribution maps at values from the straight-line path between a baseline $\bar{x}$ (a user-defined reference point) and the input, $x$. The formula is given by
\begin{equation}
\label{eq:ig}
\phi_{f,i}^{IG}(x) = \frac{x_{i} - \bar{x}_{i}}{m}\sum_{k=1}^{m} \frac{\partial f(\bar{x} + \frac{k}{m}(x - \bar{x}))}{\partial x_{i}}
\end{equation}
where $m$ is a hyperparameter, the number of equidistant steps along the path. As a path method, IG mitigates the aforementioned saturation problem, which we demonstrate below. Furthermore, the authors of Integrated Gradients cite two desirable properties for attribution methods: The first is referred to as \textit{Sensitivity}. An attribution method is \textit{sensitive} ``if for every input and baseline that differ in one feature but have different predictions then the differing feature should be given a non-zero attribution'' \cite{sundararajan2017axiomatic}. The second property is called \textit{Implementation Invariance} and demands that two networks that are functionally equivalent -- regardless of implementation -- should always yield identical attribution maps. IG is both sensitive and implementation invariant in the limit $m \rightarrow \infty$.
\subsection{PatternAttribution}
\label{secsub:patternattribution}
The authors of PatternAttribution \cite{kindermans2017learning} criticize Integrated Gradients, among other gradient methods, for not discriminating signal and distractor in the input data. Their argument is based on the observation that a well-trained model cancels the distractor in the input -- that is, everything that it did not find to co-vary with the target.
For example, assume that we want to model a simple linear relation $x \rightarrow y$, where $x = s + d$ is composed of a signal $s$ that carries all the information needed to predict the target $y$ (it co-varies with the target) and an additive distractor $d$.
In the case of a well-trained linear model, the model must have learned weights $w$ s.t. $f(d)=w^{T}d =0$ and $f(x)=w^{T}x = w^{T}s =y$.
Thus, the weights function as a filter that must always change direction with the distractor in order to stay orthogonal to it. A change in the signal, on the other hand, is accounted for by a change in the magnitude of the weights. The authors conclude that gradient methods -- including IG -- that channel attributions along $w$ inherently direct them towards a direction that is determined by the distractor, not the signal.
For PatternAttribution, prior to the backward pass, the weights $w$ of a linear or ReLU activated layer are replaced by $w \odot p$ where
\begin{equation}
\label{eq:lincase}
p = \frac{\mathbb{E}_{+}[\mathbf{x}\mathbf{\hat{y}}]-\mathbb{E}_{+}[\mathbf{x}]\mathbb{E}[\mathbf{\hat{y}}]}{w^{T}\mathbb{E}_{+}[\mathbf{x}\mathbf{\hat{y}}]-w^{T}\mathbb{E}_{+}[\mathbf{x}]\mathbb{E}[\mathbf{\hat{y}}]}
\end{equation}
is a pattern computed over a batch of layer inputs and outputs $\mathbf{x}, \mathbf{\hat{y}}$. $\mathbb{E}_{+}[\cdot]$ denotes the expectation tensor over the positive regime of a ReLU activated layer, $\{x | w^{T}x > 0\}$. We can interpret Eq.~\ref{eq:lincase} as follows: Weights that primarily cancel the distractor are scaled down whereas weights that amplify or conserve the signal are preserved. This way, PatternAttribution directs a modified gradient
towards the signal. Subsequently, we denote a gradient backward call with the patterns in place as $\partial^{(p)}$. According to this notation, PatternAttribution becomes
\begin{equation}
\phi^{\textsc{PA}}_{f,i}(x) = \frac{\partial^{(p)} f (x)}{ \partial^{(p)} x_{i}}
\end{equation}
\section{Related Work}
Much of the related work that inspired the new method has already been mentioned in the previous sections. PGIG is of course based on its parent methods, IG \cite{sundararajan2017axiomatic} and PA \cite{kindermans2017learning}. PGIG is not the first method to extend IG, however. Expected Gradients \cite{erion2019learning} and a layered version of Integrated Gradients \cite{mudrakarta2018did} are other examples. Integrated-Gradient Optimized Saliency \cite{qi2019visualizing} uses IG with mask optimization to generate attributions. Unlike PGIG, however, none of these methods apply informative directions.
PGIG is both a modification to the Vanilla Gradient method \cite{simonyan2013deep}, such as Guided Backpropagation \cite{springenberg2014striving}, and a gradient aggregate method, such as SmoothGrad \cite{smilkov2017smoothgrad}, VarGrad \cite{adebayo2018sanity}, or the very recent SmoothTaylor method \cite{goh2020understanding}. For SmoothTaylor, \citet{goh2020understanding} bridge IG and SmoothGrad -- loosely related to what \citet{smilkov2017smoothgrad} propose for SmoothGrad-IG -- but within a Taylor's theorem framework.
\section{Results \& Discussion}
Saliency maps produced by IG, PA and PGIG
are shown in Fig.~\ref{fig:heatmaps}. For comparability, we choose the same input image that \citet{kindermans2017learning} discuss in their paper. We observe that the heat map generated by PGIG appears plausible, as do the heatmaps of its parent methods: the most salient input features are located in the proximity of the African elephant in the input image.
Regarding faithfulness, confidence curves are plotted in Fig.~\ref{fig:exp-pert-ig}. We observe that, according to the image degradation metric, the random patch ordering performs worst with VGG-16 on ImageNet, as expected. The random ordering is followed by the simple gradient method. It should be mentioned, however, that the simple gradient is more a sensitivity detector than an attribution method.
Gradient times Input and Integrated Gradients both multiply gradients with inputs and perform similarly in our experiment. This is in line with the finding that, in the linear case, Gradient times Input and Integrated Gradients are even equivalent methods \cite{adebayo2018sanity}.
Both methods are surpassed by VarGrad, which itself is exceeded by SmoothGrad$^{2}$, SmoothGrad-IG, Expected Gradients, Guided Backpropagation and PatternAttribution; all of which perform similarly. Interestingly, these methods are of different classes: VarGrad, SmoothGrad, SmoothGrad-IG and Expected Gradients are gradient aggregating methods, whereas Guided Backpropagation and PatternAttribution are gradient modifying methods. Thus, we do not see any class membership preference.
Pattern-Guided Integrated Gradients is both a gradient aggregating method and a gradient modifying method. According to the image degradation metric, it outperforms all other methods tested. However, this result is definitive only for methods without hyperparameters, such as Guided Backpropagation or PatternAttribution. We report the hyperparameters we use in the appendix.
\section{Stress Tests}
The authors of Integrated Gradients and the authors of PatternAttribution each present a different stress test to demonstrate the benefits of their method over alternative approaches. The former demonstrate that IG mitigates the saturation problem, the latter prove that PA is able to avoid noise.
We combine the two stress tests by defining a target function that involves a plateau and training a network to model the function with noisy input data. We demonstrate that each method fails the other's test: PA starves at the plateau and IG attributes importance to the noise in the input. We then combine the two methods into Pattern-Guided Integrated Gradients and demonstrate that the hybrid approach passes all tests. We illustrate this argument in Fig.~\ref{fig:stresstest}.
\subsection{Target Function} With IG, PA and PGIG we will later explain a neural network that models $y = 1-ReLU(1-z)$, for $z \in [z^{(1)} = -2, z^{(2)} = -1.99, \dots, z^{(N)} = 2]$, the target function, depicted in Fig.~\ref{fig:stresstest} (\textbf{a}). Please note that $y$ plateaus for $z>1$. This \textit{non-zero} plateau is the first stress test for the gradient-based attribution methods because the gradient becomes zero at the plateau but the attribution scores should not be zero.
\subsection{Input Data} Let us now generate two-dimensional input training data, $x\in \mathbb{R}^{2}$ where $x = s + d$. As mentioned above, we want the signal $s$ to co-vary with the target. To generate such a signal, we scale $(1,0)^{T}$ with $z$, s.t. $\mathbf{s} = [(1,0)^{T}z^{(1)},(1,0)^{T}z^{(2)}, ...]$. The signal is visualized in Fig.~\ref{fig:stresstest} (\textbf{b}), top row.
The distractor (Fig.~\ref{fig:stresstest} (\textbf{b}), middle row) is $d = (1,1)^{T}\epsilon$ where $\epsilon \sim \mathcal{N}(\mu, \sigma^{2})$, which we sample independently for each $s\in\mathbf{s}$. Note that while $s$ carries information about $z$ and $y$ in the first dimension, $d$ contains only noise, i.e. it does not contain information about the target. Because only $d$ is present in the second dimension, let us subsequently refer to the second dimension as the \textit{distractor dimension} and the first dimension as the \textit{signal dimension}.
\subsection{Model} To produce the target given the input, the network must effectively cancel the distractor (below, we will construct such a network). This is the second stress test: If the model cancels the distractor, inputs pertaining only to the distractor should receive only zero attributions. In our case, this means that $x_{2}$ should receive only zero attributions, as it contains nothing but noise from the distractor. This is challenging since the respective inputs and weights might not be zero.
Let us first consider a proxy model which learns $w$ such that $w^{T}x = z$. We can solve for $w$ analytically: Since $w$ needs to be orthogonal to the distractor base vector which is $(1,1)^{T}$, $w = (1,-1)^{T}$. If, for example, $s^{T} = (.5, 0)$ and $d^{T} = (.1, .1)$, then indeed $w^{T}(s + d) = .5 = z = w^{T}s$ and $w^{T}d = 0$. Thus, $w$ successfully cancels the distractor.
Now, assume that $f^{(1)}$ and $f^{(2)}$ are two dense layers, with unit biases and parameters $w^{(1)}, w^{(2)}$, accepting two- and one-dimensional inputs, respectively. If we set $w^{(1)} = -w$ and $w^{(2)} = (-1)$ then $y = f(x) = f^{(2)}(\rho(f^{(1)}(x)))$, where $\rho$ is shorthand for ReLU. This model is outlined in Fig.~\ref{fig:stresstest} (\textbf{c}).
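This construction can be sketched numerically as follows (the noise scale is illustrative, since no specific $\mu, \sigma^{2}$ are fixed above); the final assertion confirms that the distractor is cancelled exactly:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(-2.0, 2.0, 401)              # target parameter
y = 1.0 - np.maximum(0.0, 1.0 - z)           # target, plateau for z > 1

s = np.stack([z, np.zeros_like(z)], axis=1)  # signal, co-varies with z
eps = rng.normal(0.0, 0.5, size=z.shape)     # illustrative noise scale
d = np.stack([eps, eps], axis=1)             # distractor: pure noise
x = s + d

w1 = np.array([-1.0, 1.0])                   # w^(1) = -w, cancels distractor
hidden = np.maximum(0.0, x @ w1 + 1.0)       # first dense layer + ReLU
y_hat = -hidden + 1.0                        # second layer, w^(2) = -1
assert np.allclose(y_hat, y)                 # the model reproduces the target
\end{verbatim}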
\subsection{Attributions} At this point, we have combined the two stress tests from \citet{sundararajan2017axiomatic} and \citet{kindermans2017learning} and apply IG and PA to receive attributions for $\hat{y}$, depicted in Fig~\ref{fig:stresstest} (\textbf{d}).
For PA, we compute the patterns for $w^{(1)}$ and $w^{(2)}$ with Eq.~\ref{eq:lincase}, which yields $p^{(1)} \approx (-1, 0)^{T}$ and $p^{(2)} = (-1)$. The bias contributions are considered zero, as the bias does not co-vary with the target \cite{kindermans2017learning}. For PA, the backpropagation is started with $\hat{y}$, whereas for IG, the backpropagation is invoked starting with $1.0$.\footnote{For demonstration purposes, we violate the constraint for IG that output values are in the range $[0,1]$. In our case, this only scales the attributions and does not corrupt the method.}
IG (Fig.~\ref{fig:stresstest} (\textbf{d}), top row) follows the function in the signal dimension ($x_{1}$) including the plateau after $z>1$, which we consider desirable. It does, however, attribute a significant portion of importance to inputs from the distractor dimension, which only carries noise and is cancelled by the model. PA (Fig.~\ref{fig:stresstest} (\textbf{d}), middle row) successfully avoids the noise, i.e. its attribution scores in the distractor dimension are low, but it suffers from the saturation problem at the plateau in the signal dimension due to a zero-gradient. As a consequence, PA violates sensitivity. | proofpile-arXiv_065-224 | {
"file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz"
} |
\section{Introduction}
\setcounter{equation}{0}
The entropy of black holes plays a crucial role in black hole thermodynamics. In the approach of Wald \cite{wald}, entropy is introduced in the framework of (Riemannian) diffeomorphism invariant theories as the Noether charge on horizon. In these theories, the gravitational dynamics is described by a metric of spacetime only, as is the case in general relativity (GR), and matter fields are tensor fields on the spacetime manifold. After some time, Jacobson and Mohd \cite{JM} extended these considerations to theories whose spacetime geometry is described by an orthonormal coframe and the related Lorentz (or spin) connection. Staying close to the spirit of GR, they restricted their analysis to a torsionless Lorentz connection, which is completely determined in terms of the coframe field. Thus, in spite of the change of basic dynamical variables, the geometry of spacetime remained Riemannian.
A quite natural extension of the treatment of entropy was proposed recently in Ref. \cite{bc1}, where the Lorentz connection was liberated from its Riemannian constraints by going over to Poincar\'e gauge theory (PG), a modern gauge-field-theoretic approach to gravity \cite{bh,pg}. In analogy to gauge theories of internal symmetries, PG is constructed by localizing the Poincar\'e group (translations and Lorentz rotations) of spacetime symmetries. In PG, the basic gravitational variables are again the coframe and the Lorentz connection, but here, in contrast to GR, the spacetime geometry is characterized by two types of field strengths, the torsion and the curvature.
The Hamiltonian approach to entropy proposed in Ref. \cite{bc1} describes the asymptotic charges (energy and angular momentum) and entropy as the canonical charges at infinity and horizon, respectively. It was successfully applied to spherically symmetric and asymptotically flat Kerr solutions in PG, and to the Kerr-Anti-de Sitter (Kerr-AdS) black holes in GR \cite{bc1,bc2,bc3}. Once the asymptotic charges and entropy are calculated, they are also shown to satisfy the first law of black hole thermodynamics, which is an independent test of the formalism. The objective of the present paper is to extend our Hamiltonian approach to the physically more interesting but technically rather involved case of the Kerr-AdS black hole with torsion \cite{kads1,kads2}, see also \cite{kads3,kads4}.
The paper is organized as follows. In section \ref{sec2}, we describe basic aspects of our Hamiltonian approach to entropy in PG, and section \ref{sec3} offers a review of the geometry of Kerr-AdS spacetimes with torsion. Then, in section \ref{sec4}, we apply the Hamiltonian approach to calculate energy and angular momentum of the Kerr-AdS black hole in PG, with respect to the AdS background configuration. In this analysis, particular attention is paid to a proper treatment of the Boyer-Lindquist coordinate system in the asymptotic region. Section \ref{sec5} is the central part of the present paper as it contains a detailed derivation of the Kerr-AdS black hole entropy. In section \ref{sec6}, we give a short verification of the validity of the first law of black hole thermodynamics, and section \ref{sec7} is devoted to concluding remarks. Finally, three appendices contain some technical details of our analysis of entropy.
Our conventions are the same as in Refs. \cite{bc1,bc2,bc3}. The Latin indices $(i,j,\dots)$ are the local Lorentz indices, the Greek indices $(\m,\n,\dots)$ are the coordinate indices, and both run over $0,1,2,3$. The orthonormal coframe (tetrad) is $b^i=b^i{}_\m dx^\m$, the dual basis (frame) is $h_i=h_i{}^\m\pd_\m$, $\om^{ij}=\om^{ij}{}_\m dx^\m$ is the Lorentz connection, the metric components in the local Lorentz and coordinate basis are $\eta_{ij}=(1,-1,-1,-1)$ and $g_{\m\n}=g_{ij} b^i{}_\m b^j{}_\n$, respectively, and $\ve_{ijmn}$ is the totally antisymmetric symbol with $\ve_{0123}=1$. The Hodge dual of a form $\alpha$ is denoted by $\hd\alpha$, and the wedge product of forms is implicitly understood.
\section{Entropy as the canonical charge}\label{sec2}
\setcounter{equation}{0}
To prepare our analysis of entropy for Kerr-AdS black holes with torsion, we start with a short account of the (geometric and) dynamical structure of PG \cite{bh,pg} and the basic aspects of the Hamiltonian understanding of black hole entropy \cite{bc1,bc2,bc3}.
The geometric structure of spacetime in PG is characterised by the existence of two gauge potentials, the coframe (tetrad) $b^i$ and the Lorentz connection $\om^{ij}=-\om^{ji}$ (1-forms), the related field strengths are the torsion $T^i:=d b^i+\om^i{}_k b^k$ and the curvature $R^{ij}:=d\om^{ij}+\om^i{}_k\om^{kj}$ (2-forms), and the associated spacetime structure is described by a Riemann-Cartan (RC) geometry.
The PG dynamics is determined by a Lagrangian $L=L_G+L_M$ (4-form), where $L_G$ is the pure gravitational part and $L_M$ describes matter fields and their gravitational interactions. The gravitational Lagrangian is assumed to be parity invariant and at most quadratic in the field strengths:
\be
L_G=-\hd(a_0R+2\L)+T^i\sum_{n=1}^3\hd(a_n\ir{n}T_i)
+\frac{1}{2}R^{ij}\sum_{n=1}^6\hd(b_n\ir{n}R_{ij})\,, \lab{2.1}
\ee
where $(a_0,\L,a_n,b_n)$ are the coupling constants, and $\ir{n}T_i,\ir{n}R_{ij}$ are irreducible parts of the field strengths, see, for instance, Ref. \cite{bc1}.
The variation of $L_G$ with respect to $b^i$ and $\om^{ij}$ yields the gravitational field equations in vacuum. After introducing the covariant gravitational momenta $H_i:=\pd L_G/\pd T^i$ and $H_{ij}:=\pd L_G/\pd R^{ij}$ (2-forms), and the associated energy-momentum and spin currents,
$E_i:=\pd L_G/\pd b^i$ and $E_{ij}:=\pd L_G/\pd\om^{ij}$ (3-forms), the gravitational field equations take a compact form
\bsubeq\lab{2.2}
\bea
\d b^i:&&\quad \nab H_i+E_i=0\, , \lab{2.2a}\\
\d\om^{ij}:&&\quad \nab H_{ij}+E_{ij}=0\,. \lab{2.2b}
\eea
\esubeq
In the presence of matter, the right-hand sides of \eq{2.2a} and \eq{2.2b} contain the corresponding matter currents.
The explicit expressions for the covariant momenta
\bsubeq\lab{2.3}
\bea
&&H_i=2\sum_{n=1}^3\hd(a_n\ir{n}T_i)\,, \\
&&H_{ij}=-2a_0\hd(b_ib_j)+2\sum_{n=1}^6\hd(b_n\ir{n}R_{ij})\,,
\eea
\esubeq
play an important role in the analysis of black hole entropy.
The asymptotic conserved charges (energy and angular momentum) in PG are closely related to the regularity (functional differentiability) of the canonical gauge generator of local Poincar\'e symmetries. Following the ideas of Regge and Teitelboim \cite{rt1974}, the canonical form of these charges can be expressed in terms of certain surface integrals at spatial infinity, see Refs. \cite{kads3,bv1983,nester}. On the other hand, the concept of black hole entropy in GR is best understood as the \emph{Noether charge} on horizon \cite{wald}. As shown in Ref. \cite{bc1}, this idea can be quite naturally extended to PG by introducing entropy as the \emph{canonical charge} on horizon. By construction, this extension can be applied not only to black holes with torsion, but also to Riemannian black holes.
For a stationary black hole spacetime, its spatial section $\S$ is assumed to have two components, one at infinity and the other at horizon, $\pd\S=S_\infty\cup S_H$. The corresponding boundary integral $\G$ has two parts, $\G=\G_\infty-\G_H$, which are determined by the following variational equations:
\bsubeq\lab{2.4}
\bea
&&\d\G_\infty=\oint_{S_\infty}\d B(\xi)\,,\qquad
\d\G_H=\oint_{S_H} \d B(\xi)\,, \\
&&\d B(\xi):=(\xi\inn b^{i})\d H_i+\d b^i(\xi\inn H_i)
+\frac{1}{2}(\xi\inn\om^{ij})\d H_{ij}
+\frac{1}{2}\d\om^{ij}(\xi\inn H_{ij})\, .
\eea
\esubeq
Here, $\xi$ is the Killing vector which takes the values $\pd_t$ and/or $\pd_\vphi$ on $S_\infty$, and becomes a linear combination thereof on $S_H$, such that $\xi^2=0$. The variation $\d B$ is determined in accordance with the \emph{boundary conditions}, which must be chosen so as to ensure the solutions for $\G_\infty$ and $\d\G_H$ to exist and be finite. In particular, $\d$ is required to satisfy the following rules:
\bitem
\item[(r1)] On $S_\infty$, the variation $\d$ acts on the parameters of a black hole solution, but not on the para\-me\-ters of the background configuration.\vsm
\item[(r2)] On $S_H$, the variation $\d$ must keep surface gravity constant.
\eitem
When the variational equations \eq{2.4} are \emph{$\d$-integrable} and the solutions for $\G_\infty$ and $\G_H$ are \emph{finite}, they are interpreted as the asymptotic charges and black hole entropy, respectively.
Although $\G_\infty$ and $\G_H$ are defined as a priori independent quantities,
the analysis of their construction \cite{bc1} reveals that the regularity of the canonical gauge generator is ensured by the relation
\be
\d\G\equiv\d\G_\infty-\d\G_H=0\,, \lab{2.5}
\ee
which is equivalent to the first law of black hole thermodynamics.
\section{Kerr-AdS black hole with torsion}\label{sec3}
\setcounter{equation}{0}
In this section, we present the Kerr-AdS solution of Baekler et al. \cite{kads1,kads2} in the framework of a wider class of parity even PG Lagrangians \cite{kads3}; for an extension to the general parity violating Lagrangian, see Obukhov \cite{kads4}.
\subsection{Metric and tetrad}
The metric of Kerr-AdS spacetime in Boyer-Lindquist coordinates takes the form \cite{carter,hentei,gibbons}
\bsubeq\lab{3.1}
\be
ds^2=\frac{\D}{\r^2}\Big(dt+\frac{a}{\a}\sin^2\th d\vphi\Big)^2
-\frac{\r^2}{\D}dr^2-\frac{\r^2}{f}d\th^2
-\frac{f}{\r^2}\sin^2\th\Big[a dt+\frac{(r^2+a^2)}{\a}d\vphi\Big]^2\,,\lab{3.1a}
\ee
where
\bea
&&\D(r):=(r^2+a^2)(1+\l r^2)-2mr\, ,\qquad \a:=1-\l a^2\,, \nn\\
&&\r^2(r,\th):=r^2+a^2\cos^2\th\,,\qquad
f(\th):=1-\l a^2\cos^2\th\,.
\eea
\esubeq
Here, $m$ and $a$ are the parameters of the solution, $\l=-\L/(3a_0)$, $\a$ normalizes the range of the angular variable $\vphi$ to $2\pi$, and $0\le\th<\pi$. For $m=0$, the metric reduces to the AdS form, albeit in somewhat ``twisted'' coordinates \cite{carter,hentei}. The metric possesses two Killing vectors, $\pd_t$ and $\pd_\vphi$, and the larger root of $\D(r)=0$ defines the outer horizon,
\be
(r_+^2+a^2)(1+\l r_+^2)-2mr_+=0\,.
\ee
The angular velocity is given by
\be
\om(r):=\frac{g_{t\vphi}}{g_{\vphi\vphi}}
=\frac{a\a\big[f(r^2+a^2)-\D\big]}{f(r^2+a^2)^2-a^2\D\sin^2\th}\, ,\qquad
\om(r_+)=\frac{a\a}{r_+^2+a^2}\, ,\chmr \lab{3.3}
\ee
Note that $\om(r)$ does not vanish for large $r$, $\om\sim -\l a+O_2$.
Surface gravity has the form
\be
\k=\frac{[\pd\D]_{r_+}}{2(r_+^2+a^2)}
=\frac{r_+(1+\l a^2+3\l r_+^2-a^2/r_+^2)}{2(r_+^2+a^2)}\,.
\ee
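As a simple numerical illustration (the parameter values below are purely illustrative and not tied to any specific solution discussed here), the outer horizon can be found as the largest positive root of $\D(r)=0$, whereupon $\k$ and $\om(r_+)$ follow directly:
\begin{verbatim}
import numpy as np

# Illustrative parameter values
m, a, lam = 1.0, 0.3, 0.05

# Delta(r) = lam r^4 + (1 + lam a^2) r^2 - 2 m r + a^2
coeffs = [lam, 0.0, 1.0 + lam * a**2, -2.0 * m, a**2]
roots = np.roots(coeffs)
r_p = max(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)

kappa = r_p * (1 + lam * a**2 + 3 * lam * r_p**2
               - a**2 / r_p**2) / (2 * (r_p**2 + a**2))
omega_p = a * (1 - lam * a**2) / (r_p**2 + a**2)  # horizon angular velocity
print(r_p, kappa, omega_p)
\end{verbatim}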
The orthonormal tetrad associated to the metric \eq{3.1} is chosen in the form
\bea
&&b^0=N\Big(dt+\frac{a}{\a}\sin^2\th\,d\vphi\Big)\,,\qquad
b^1=\frac{dr}{N}\,, \nn\\
&&b^2=Pd\th\, ,\qquad
b^3=\frac{\sin\th}{P}\Big[a\,dt+\frac{(r^2+a^2)}{\a}d\vphi\Big]\,, \lab{3.5}
\eea
where
\be
N(r,\th)=\sqrt{\D/\r^2}\, ,\qquad P(r,\th)=\sqrt{\r^2/f}\, . \nn
\ee
A simple calculation of the horizon area yields
\be
A=\int_{r_+}b^2b^3=\frac{4\pi(r_+^2+a^2)}{\a}\,.
\ee
The Riemannian connection $\tom^{ij}$ is defined in the usual way as
\be
\tom^{ij}:=\frac{1}{2}\Big[h^i\inn d b^j-h^j\inn d b^i
-\Big(h^i\inn (h^j\inn d b^m)\Big)b_m\Big]\,,\lab{3.7}
\ee
see also appendix \ref{appA}.
\subsection{Torsion, connection and curvature}
The ansatz for torsion is given by \cite{kads2,kads3}
\bea
&&T^0=T^1=\frac{1}{N}\Big[-V_1b^0b^1-2V_4b^2b^3\Big]
+\frac{1}{N^2}\Big[V_2b^-b^2+V_3b^-b^3\Big]\,, \nn\\
&&T^2:=\frac{1}{N}\Big[ V_5b^-b^2+V_4b^-b^3\Big]\,, \nn\\
&&T^3:=\frac{1}{N}\Big[-V_4b^-b^2+V_5b^-b^3\Big]\,, \lab{3.8}
\eea
where $b^-:=b^0-b^1$ and the torsion functions $V_n$ have the form
\bea
&&V_1=\frac{m}{\r^4}(r^2-a^2\cos^2\th)\, ,\qquad
V_2=-\frac{m}{\r^4 P}ra^2\sin\th\cos\th\, , \nn\\
&&V_3=\frac{m}{\r^4 P}r^2a\sin\th\, ,\qquad
V_4=\frac{m}{\r^4}ra\cos\th\,,\qquad V_5=\frac{m}{\r^4}r^2\,.
\eea
Thus, the torsion tends to zero at spatial infinity.
The irreducible components of $T^i$ are displayed in Appendix \ref{appA}; in particular, $\ir{3}T^i=0$. After introducing the contorsion 1-form,
\bsubeq
\be
K^{ij}:=\frac{1}{2}\Big[h^i\inn T^j-h^j\inn T^i
-\Big(h^i\inn\big( h^j\inn T^k\big)\Big) b_k\Big]\,,
\ee
or more explicitly
\bea
&&K^{01}=\frac{1}{N}V_1b^-\, , \nn\\
&&K^{02}=K^{12}=-\frac{1}{N^2}V_2b^-
+\frac{1}{N}\big(V_5 b^2-V_4 b^3\big)\,, \nn\\
&&K^{03}=K^{13}=-\frac{1}{N^2}V_3b^-
+\frac{1}{N}\big(V_4 b^2+V_5 b^3\big)\, , \nn\\
&&K^{23}=-\frac{2}{N}V_4b^-\, ,
\eea
\esubeq
the RC connection is given by
\be
\om^{ij}=\tom^{ij}+K^{ij}\, .
\ee
\bitem
\item The tetrad field $b^i$ and the Lorentz connection $\om^{ij}$ are basic elements of the RC geometry of spacetime.
\eitem
The RC curvature $R^{ij}=d\om^{ij}+\om^i{}_k\om^{kj}$ has only two nonvanishing irreducible parts, $\ir{4}R^{ij}$ and $\ir{6}R^{ij}$; with $A=(0,1)$ and $c=(2,3)$, they are given by
\be
\ir{6}R^{ij}=\l b^ib^j\, ,\qquad
\ir{4}R^{Ac}=\frac{\l m r}{\D}b^-b^c\,.
\ee
The quadratic invariants
\be
R^{ij}\hd R_{ij}=12\l^2\heps\, ,\qquad T^i\hd T_i=0\,,
\ee
where $\heps:=b^0b^1b^2b^3$ is the volume 4-form, are regular. Note that the curvature invariant differs from its Riemannian analogue \cite{bc3}.
The effective form of the Lagrangian is determined by the nonvanishing irreducible parts of the field strengths,
\be
L_G=-\hd(a_0R+2\L)+T^i\hd(a_1\ir{1}T_i+a_2\ir{2}T_i)
+\frac{1}{2}R^{ij}\hd(b_4\ir{4}R_{ij}+b_6\ir{6}R_{ij})\,.\lab{3.13}
\ee
The Kerr-AdS geometry is a solution of the PG field equations \eq{2.2} provided the Lagrangian parameters satisfy the following restrictions:
\be
2a_1+a_2=0\,,\qquad a_0-a_1-\l(b_4+b_6)=0\,,\qquad 3\l a_0+\L=0\, .
\ee
With the above form of $L_G$, the covariant momenta \eq{2.3} are determined by
\bea
&&H_i=2a_1\,\hd(\ir{1}T_i-2\,\ir{2}T_i)\, , \nn\\
&&H_{ij}=-2(a_0-\l b_6)\,\hd(b_ib_j)+2b_4\hd\ir{4}R_{ij}\, ,
\eea
see also appendix \ref{appA}.
\section{Asymptotic charges}\label{sec4}
\setcounter{equation}{0}
As shown by Carter \cite{carter} and Henneaux and Teitelboim \cite{hentei}, Boyer-Lindquist coordinates are not adequate for analyzing the asymptotic charges of Kerr-AdS spacetime since the corresponding asymptotic behavior of the metric components is twisted with respect to the standard AdS background configuration. However, as we discussed in \cite{bc3}, one can use Boyer-Lindquist coordinates as a technically simple first step in the calculations, whereupon the transition to the new, ``untwisted'' coordinates
\bea
T=t\, ,\qquad \phi=\vphi-\l at \lab{4.1}
\eea
yields the correct final result. In fact, Henneaux and Teitelboim's analysis, based on the properties of asymptotic states, yields formulas for the new coordinates which also include an additional part transforming $(r,\th)$ into $(R,\Th)$. However, that part is not needed in our approach, which is based on the Hamiltonian variational equations \eq{2.4}.
Under the coordinate transformation \eq{4.1}, the components of the Killing vector $\xi$ and the metric tensor $g_{\m\n}$ transform according to
\bea
&&\xi_T=\xi_t+\l a\xi_\vphi\, ,\qquad \xi_\phi=\xi_\vphi\,, \nn\\
&&g_{T\phi}=g_{t\vphi}+\l a g_{\vphi\vphi}\, ,\qquad
g_{\phi\phi}=g_{\vphi\vphi}\, , \nn\\
&&g_{TT}=g_{tt}+2\l a g_{t\vphi}+(\l a)^2g_{\vphi\vphi}\,. \lab{4.2}
\eea
Before we begin with calculations, let us note that the background configuration, which is defined by $m=0$, also depends on the parameter $a$. Hence, in order to avoid the variation of those $a$'s that ``belong'' to the background, we introduce an improved interpretation of the rule (r1) formulated in section \ref{sec2}:
\bitem
\item[\rp] In the variational equation \eq{2.4} for $\d\G_\infty(\xi)$, first apply $\d$ to all the parameters $(m, a)$ appearing in $B(\xi)$, then subtract those $\d a$ terms that survive the limit $m = 0$, as they originate from the variation of the AdS background.
\eitem
In the calculations that follow, we use the notation
\bea
&&A_0:=a_0-\l(b_4+b_6)\equiv a_1\,, \nn\\
&&d\Om:=\sin\th d\th d\vphi\to 4\pi\, ,\qquad
d\Om':=\sin^3\th d\th d\vphi\to\frac{2}{3}4\pi\, ,
\eea
Various components of $\om^{ij}$ and $H_i,H_{ij}$ can be found with the help of Appendix A.
\subsection{Angular momentum}
We start the analysis of angular momentum by calculating the expression $\d E_\vphi:=\d\G_\infty(\pd_\vphi)$. For simplicity, we write $\d E_\vphi$ in the form $\d E_\vphi=\d E_{\vphi 1}+\d E_{\vphi 2}$, where
\bea
&&\d E_{\vphi 1}:=\frac{1}{2}\om^{ij}{}_\vphi\d H_{ij}
+\frac{1}{2}\d\om^{ij}H_{ij\vphi}\, , \nn\\
&&\d E_{\vphi 2}:=b^i{}_\vphi\d H_{i}+\d b^iH_{i\vphi}\,,
\eea
and the integration over $S_\infty$ is implicitly understood. The calculation is performed by ignoring $\d a$ terms that are independent of $m$, even when they are divergent, and by omitting asymptotically vanishing $O(r^{-n})$ terms. The nonvanishing contributions are given by
\bsubeq
\bea
\d E_{\vphi 1}&=&\om^{13}{}_\vphi\d H_{13}+\d\om^{13}H_{13\vphi}
=\big(\om^{13}{}_\vphi\d H_{13\th\vphi}
+\d\om^{13}{}_\vphi H_{13\th\vphi}\big)d\th d\vphi \nn\\
&=&\d\big(\om^{13}{}_\vphi H_{13\th\vphi}\big)d\th d\vphi
=2A_0\d\Big(\frac{ma}{\a^2}\Big)d\Om'\,, \\
\d E_{\vphi 2}&=&b^0{}_\vphi\d H_0+\d b^0 H_{0\vphi}=
\big(b^0{}_\vphi\d H_{0\th\vphi}
+\d b^0{}_\vphi H_{0\th\vphi}\big)d\th d\vphi \nn\\
&=&\d\big(b^0{}_\vphi H_{0\th\vphi}\big)d\th d\vphi
=4a_1\d\Big(\frac{ma}{\a^2}\Big)d\Om'\,.
\eea
\esubeq
Summing up the two terms and using $A_0=a_1$, one obtains
\be
\d E_\vphi=16\pi A_0\,\d\Big(\frac{ma}{\a^2}\Big)=\d E_\phi\,. \lab{4.6}
\ee
The last equality follows from the trivial coordinate transformation $\xi_\phi=\xi_\vphi$, see \eq{4.2}.
\subsection{Energy}
Going over to the energy, we represent the expression
$\d E_t:=\d\G_\infty(\pd_t)$ by the sum of
\bea
&&\d E_{t1}=\frac{1}{2}\om^{ij}{}_t\d H_{ij}
+\frac{1}{2}\d\om^{ij}H_{ijt}\, , \nn\\
&&\d E_{t2}=b^i{}_t\d H_{i}+\d b^iH_{it}\,.
\eea
The nonvanishing contributions to $\d E_{t1}$ are
\bsubeq
\bea
&&\d\om^{12} H_{12t}=(\d\om^{12}{}_\th H_{12t\vphi})d\th d\vphi
=-A_0m\frac{\d f}{\a f}\sin\th d\th d\vphi\, , \nn\\
&&\d\om^{13}H_{13t}=(-\d\om^{13}{}_\vphi H_{13t\th})d\th d\vphi
=-A_0m\frac{2f\d\a-\a\d f}{\a^2 f}\sin\th d\th d\vphi\,,\nn\\
\Ra&&\d E_{t1}=-2A_0m\frac{\d\a}{\a^2}
=2A_0 m\d\Big(\frac{1}{\a}\Big)\times 4\pi\,.
\eea
In a similar manner,
\bea
&&b^0{}_t\d H_0=(b^0{_t}\d H_{0\th\vphi})d\th d\vphi
=4a_1\frac{\a\d m-m\d\a}{\a^2}\sin\th d\th d\vphi\, , \nn\\
\Ra&&\d E_{t2}=4a_1\d\Big(\frac{m}{\a}\Big)\times 4\pi\, .
\eea
\esubeq
Thus, the complete result takes the form
\be
\d E_t=16\pi A_0\left[\frac{m}{2}\d\Big(\frac{1}{\a}\Big)
+\d\Big(\frac{m}{\a}\Big)\right]\,, \lab{4.9}
\ee
which shows why Boyer-Lindquist coordinates are inadequate. Namely, if \eq{4.9} were the final result, the variational equation for energy would not be integrable, and consequently, energy would not even be defined. As we noted earlier, the correct result can be obtained only by going over to the untwisted $(T,\phi)$ coordinates. Indeed, using the transformation law \eq{4.2}$_1$ for the components of $\xi$, the expression for $\d E_t=\d\G_\infty(\pd_t)$ is transformed into the final result for $\d E_T:=\d\G_\infty(\pd_T)$, given by
\be
\d E_T=\d E_t+\l a\d E_\vphi=16\pi A_0\d\Big(\frac{m}{\a^2}\Big)\,. \lab{4.10}
\ee
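As a consistency check, using $\d\a=-2\l a\,\d a$ and $\a+\l a^2=1$, one can verify the $\d$-integrability of \eq{4.10} directly:
\bea
&&\frac{m}{2}\,\d\Big(\frac{1}{\a}\Big)+\d\Big(\frac{m}{\a}\Big)
+\l a\,\d\Big(\frac{ma}{\a^2}\Big)
=\frac{\d m}{\a}+\frac{\l a^2\,\d m}{\a^2}
+\frac{4\l am\,\d a}{\a^2}+\frac{4\l^2a^3m\,\d a}{\a^3} \nn\\
&&\hspace{20pt}
=\frac{\a+\l a^2}{\a^2}\,\d m+\frac{4\l am(\a+\l a^2)}{\a^3}\,\d a
=\frac{\d m}{\a^2}+\frac{4\l am\,\d a}{\a^3}
=\d\Big(\frac{m}{\a^2}\Big)\,. \nn
\eea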
The results \eq{4.10} and \eq{4.6} for the asymptotic charges $E_T$ and $E_\phi$, respectively, coincide with those obtained by Hecht and Nester \cite{kads3}; in the GR limit, they reduce to the form found earlier by Henneaux and Teitelboim \cite{hentei}, see also Ref. \cite{bc3}.
\section{Entropy}\label{sec5}
\setcounter{equation}{0}
Entropy is defined by the variational equation for $\G_H(\xi)$, with
\bea
&&\xi:=\pd_T-\Om_+\pd_\phi=\pd_t-\om_+\pd_\vphi\,, \nn\\
&&\om_+=\frac{a\a}{r_+^2+a^2}\,,\qquad
\Om_+=\om_++\l a=\frac{a(1+\l r_+^2)}{r_+^2+a^2}\,.
\eea
In the analysis of $\d\G_H(\xi)$, the following relations are very useful:
\bea
&&N\pd_r N\big|_{r_+}=\frac{\kappa(r_+^2+a^2)}{\r_+^2}\, ,\qquad
N\d N \big|_{r_+}=0\, , \nn\\
&&\xi\inn b^0\big|_{r_+}=N\frac{\r_+^2}{r_+^2+a^2}\,,
\qquad\xi\inn b^a\big|_{r_+}=0\,. \nn
\eea
They allow us to easily obtain the interior products $\xi\inn\a\equiv \a_\xi$ for any form $\a$ expressed in the orthonormal basis. Thus, for instance, using the expressions for the Riemannian connection $\tom^{ij}$ displayed in Appendix \ref{appA}, one finds
\bea
&&\xi\inn\tom^{01}=-N'(\xi\inn b^0)=-\k\, ,\qquad
\xi\inn \tom^{02}=\frac{Na^2\sin\th\cos\th}{P(r_+^2+a^2)}\,, \nn\\
&&\xi\inn\tom^{13}=-\frac{Nar_+}{P(r_+^2+a^2)}\sin\th\,,\qquad
\xi\inn\tom^{03}=\xi\inn\tom^{12}=0\,,\quad \xi\inn\tom^{23}\sim N^2\,.\nn
\eea
In a similar manner, one can calculate the interior products $\xi\inn\om^{ij}$, $\xi\inn H_{ij}$, and $\xi\inn H_i$, appearing in the variational equation \eq{2.4}.
In order to make our analysis of entropy as transparent as possible, we organize the calculations in several simpler steps.
\subsection{The basic result}
We begin with the calculation of the expression $\d\G_H(\xi)$, given in Eq. \eq{2.4}, by dividing it into two parts, denoted symbolically by $\d\G_1$ and
$\d\G_2$.
\subsubsection*{\mb{\d\G_1=\frac{1}{2}\om^{ij}{}_\xi\d H_{ij}
+\frac{1}{2}\d\om^{ij}H_{ij\xi}}}
The only nonvanishing contributions stemming from the first element of $\d\G_1$ are
\bsubeq\lab{5.2}
\bea
&&\om^{01}{}_\xi\d H_{01} ~[=]~ \om^{01}{}_\xi\d H_{01\th\vphi}
=2\bA_0\Big(\k-V_1\frac{\r_+^2}{r_+^2+a^2}\Big)
\d\Big(\frac{r_+^2+a^2}{\a}\Big)\sin\th\,, \lab{5.2a}\\
&&\om^{03}{}_\xi\d H_{03}+\om^{13}{}_\xi\d H_{13}
~[=]~ K^{03}{}_\xi\,\d(H_{03\th\vphi}+H_{13\th\vphi})
+\tom^{13}{}_\xi\d H_{13\th\vphi} \nn\\
&&\hspace{20pt}
=2\bA_0\Big(\frac{1}{N}V_3\frac{\r_+^2}{r_+^2+a^2}\Big)\cdot
\d\Big(PN\frac{a}{\a}\Big)\sin^2\th
+2\l b_4\frac{ar_+N}{P(r_+^2+a^2)}
\d\Big(\frac{mr_+}{N\r_+^2}\frac{Pa}{\a}\Big)\sin^3\th\,. \nn\\
\lab{5.2b}
\eea
\esubeq
Here, the symbol $[=]$ stands for an equality up to the factor $d\th d\vphi$, and $\bA_0=a_0-\l b_6$. In $\d H_{13\th\vphi}$, the term proportional to $\bA_0$ is omitted as it vanishes on the horizon, $N\d N|_{r_+}=0$.
In the second element of $\d\G_1$ there are $2+2$ nonvanishing contributions,
\bsubeq\lab{5.3}
\bea
&&\d\om^{02}H_{02\xi}+\d\om^{12}H_{12\xi} ~[=]~
\d\tom^{12}{}_\th H_{12\xi\vphi}
+\d K^{02}{}_\th(H_{02\xi\vphi}+H_{12\xi\vphi}) \nn\\
&&\hspace{20pt}
=-2\bA_0\d\Big(\frac{mPr_+^2}{N\r_+^4}\Big)\frac{N\r_+^2}{P\a}\sin\th
-2\l b_4\d\Big(\frac{NPr_+}{\r_+^2}\Big)\frac{mr_+}{NP\a}\sin\th\,,
\lab{5.3a}
\eea
and
\bea
&&\d\om^{03}H_{03\xi}+\d\om^{13}H_{13\xi}~[=]~
-\d K^{03}{}_\vphi (H_{03\xi\th}+H_{13\xi\th})
-\d\tom^{13}{}_\vphi H_{13\xi\th} \nn\\
&&\hspace{20pt}
=-2\bA_0\d\Big(\frac{mr_+^2}{NP\r_+^2\a}\Big)
\frac{NP\r_+^2}{r_+^2+a^2}\sin\th
-2\l b_4\d\Big(\frac{Nr_+}{\a P}\Big)
\frac{mr_+}{N}\frac{P}{r_+^2+a^2}\sin\th\,. \lab{5.3b}
\eea
\esubeq
In $H_{13\xi\th}$, the term proportional to $\bA_0$ is omitted.
\subsubsection*{\mb{\d\G_2=b^i{}_\xi\d H_i+\d b^iH_{i\xi}}}
The only nonvanishing contributions from $\d\G_2$ are
\bsubeq\lab{5.4}
\bea
&&b^0{}_\xi\d H_0~[=]~ b^0{}_\xi \d H_{0\th\vphi}=N\frac{\r_+^2}{r_+^2+a^2}
\d\Big[\frac{2a_1mr_+^2}{N\a\r_+^4}(r_+^2+a^2+\r_+^2)\Big]
\sin\th\,, \lab{5.4a}\\
&&\d b^0 H_{0\xi}~[=]~-\d b^0{}_\vphi H_{0\xi\th}
=-2a_1\d\Big(\frac{Na}{\a}\Big)
\frac{V_3P}{N}\frac{\r_+^2}{r_+^2+a^2}\sin^2\th\,, \lab{5.4b}\\
&&\d b^2 H_{2\xi}~[=]~\d b^2{}_\th H_{2\xi\vphi}-\d b^2{}_\vphi H_{2\xi\th}
=2a_1(\d P)(V_1-V_5)\frac{\sin\th}{P\a}\r_+^2\,, \lab{5.4c}\\
&&\d b^3 H_{3\xi}~[=]~-\d b^3{}_\vphi H_{3\xi\th}
=2a_1\d\Big(\frac{r_+^2+a^2}{P\a}\Big)
(V_1-V_5)P\frac{\r_+^2}{r_+^2+a^2}\sin\th\,. \lab{5.4d}
\eea
\esubeq
\subsection{Simplifications}\label{sub52}
The expressions \eq{5.2}--\eq{5.4} entering the entropy look rather complex, and it is clear that they should be simplified before any direct calculation. The proof of the following two simplifications is given in Appendix \ref{appB}:
\bitem
\item[\mb{T1.}] The sum of the terms proportional to $\d N/N$ in \eq{5.2}--\eq{5.4} vanishes.\vsm
\item[\mb{T2.}] The sum of the terms proportional to $\d P/P$ in \eq{5.2}--\eq{5.4} vanishes.
\eitem
As a consequence, the original expressions become notably simpler:
\bsubeq
\bea
\text{\eq{5.2a}}:&&
2\bA_0\left[\k-V_1\frac{\r_+^2}{r_+^2+a^2}\right]\cdot
\d\Big(\frac{r_+^2+a^2}{\a}\Big)\sin\th\,, \nn\\
\text{\eq{5.2b}}:&&
2\bA_0\Big(V_3P\frac{\r_+^2}{r_+^2+a^2}\Big)\cdot
\d\Big(\frac{a}{\a}\Big)\sin^2\th
+2\l b_4\frac{ar_+}{(r_+^2+a^2)}
\d\Big(\frac{mr_+}{\r_+^2}\frac{a}{\a}\Big)\sin^3\th\,.\qquad
\lab{5.5a}
\eea
\bea
\text{\eq{5.3a}}:&&
-2\bA_0\d\Big(\frac{mr_+^2}{\r_+^4}\Big)\frac{\r_+^2}{\a}\sin\th
-2\l b_4\d\Big(\frac{r_+}{\r_+^2}\Big)\frac{mr_+}{\a} \sin\th\,, \nn\\
\text{\eq{5.3b}}:&&
-2\bA_0\d\Big(\frac{mr_+^2}{\r_+^2\a}\Big)
\frac{\r_+^2}{r_+^2+a^2}\sin\th
-2\l b_4\d\Big(\frac{r_+}{\a}\Big)
\frac{mr_+}{r_+^2+a^2}\sin\th\,. \lab{5.5b}
\eea
\bea
\text{\eq{5.4a}}:&&
2a_1\frac{\r_+^2}{r_+^2+a^2}
\d\Big[\frac{mr_+^2}{\a\r_+^4}(r_+^2+a^2+\r_+^2)\Big]\sin\th\,, \nn\\
\text{\eq{5.4b}}:&&
-2a_1\d\Big(\frac{a}{\a}\Big)
V_3P\frac{\r_+^2}{r_+^2+a^2}\sin^2\th\,, \nn\\
\text{\eq{5.4c}}:&& 0\,, \nn\\
\text{\eq{5.4d}}:&&
2a_1\d\Big(\frac{r_+^2+a^2}{\a}\Big)
(V_1-V_5)\frac{\r_+^2}{r_+^2+a^2}\sin\th\,. \lab{5.5c}
\eea
\esubeq
In further analysis, we shall use the relation $\bA_0=A_0+\l b_4$ to express these results in terms of only \emph{two independent coupling constants}, $A_0$ and $\l b_4$. In this process, one should use the identity $a_1\equiv A_0$.
\subsection{The terms proportional to \mb{\l b_4}}
Since the contributions in \eq{5.5c} are proportional to $a_1\equiv A_0$, the $\l b_4$ contributions are determined by the substitution $\bA_0\to\l b_4$ in \eq{5.5a} and \eq{5.5b}. Then, dividing each term by $2\l b_4$ (for simplicity), one obtains
\bea
\text{\eq{5.2a}}:&&
\left[\k-\frac{m(r_+^2-a^2\cos^2\th)}{\r_+^2(r_+^2+a^2)}\right]
\d\Big(\frac{r_+^2+a^2}{\a}\Big)\sin\th\,, \nn\\
\text{\eq{5.2b}}:&&
\frac{amr_+^2\sin^3\th}{\r_+^2(r_+^2+a^2)}
\d\Big(\frac{a}{\a}\Big)
+\frac{ar_+}{r_+^2+a^2}
\d\Big(\frac{mr_+}{\r_+^2}\frac{a}{\a}\Big)\sin^3\th\,, \nn\\
\text{\eq{5.3a}}:&&
-\Big[\frac{\r_+^2}{\a}\d\left(\frac{mr_+^2}{\r_+^4}\right)
+\frac{mr_+}{\a}\d\Big(\frac{r_+}{\r_+^2}\Big)\Big]\sin\th\,, \nn\\
\text{\eq{5.3b}}:&&
-\Big[\frac{\r_+^2}{r_+^2+a^2}\d\Big(\frac{mr_+^2}{\a\r_+^2}\Big)
+\frac{mr_+}{r_+^2+a^2}\d\Big(\frac{r_+}{\a}\Big)\Big]\sin\th\,. \lab{5.6}
\eea
These contributions can be further simplified, as shown in Appendix \ref{appB}.
\bitem
\item[\mb{T3.}] When the sum of the terms in \eq{5.6} is integrated over $d\th d\vphi$, it vanishes.
\eitem
This result allows us to go over to the final stage of the analysis of entropy.
\subsection{The terms proportional to $A_0$}
The remaining contributions proportional to $A_0$ are obtained by the substitution $\bA_0\to A_0$ in \eq{5.5a} and \eq{5.5b}. By a suitable rearrangement, the result can be expressed as
\bea
&&\pha{\hspace{-2cm}}
\text{\eq{5.2a}}+\text{\eq{5.2b}}_1+\text{\eq{5.3a}}_1+\text{\eq{5.3b}}_1:\nn\\
&&\pha{\hspace{-1cm}}
2A_0\sin\th\left[\left(\k-\frac{V_1\r_+^2}{r_+^2+a^2}\right)
\d\left(\frac{r_+^2+a^2}\a\right)
+\frac{amr_+^2\sin^2\th}{\r_+^2(r_+^2+a^2)}\d\left(\frac a\a\right)\right.\nn\\
&&\pha{\hspace{4cm}}
\left.-\frac{\r_+^2}\a\d\left(\frac{mr_+^2}{\r_+^4}\right)
-\frac{\r_+^2}{r_+^2+a^2}\d\left(\frac{mr_+^2}{\a\r_+^2}\right)\right]\,,\nn\\
&&\pha{\hspace{-2cm}}
\text{\eq{5.4a}}+\text{\eq{5.4b}}+\text{\eq{5.4c}}+\text{\eq{5.4d}}: \nn\\
&&\pha{\hspace{-1cm}}
2a_1\frac{\r_+^2}{r_+^2+a^2}\sin\th\left[V_1\d\left(\frac{r_+^2+a^2}\a\right)
-\frac{amr_+^2\sin^2\th}{\r_+^4}\d\left(\frac a\a\right)\right. \nn\\
&&\pha{\hspace{3.5cm}}
\left.+\frac{r_+^2+a^2}\a\d\left(\frac{mr_+^2}{\r_+^4}\right)
+\d\left(\frac{mr_+^2}{\a\r_+^2}\right)\right]\,. \nn
\eea
After using $A_0=a_1$, all these contributions sum up to a simple expression
\be
\text{\eq{5.2}}+\text{\eq{5.3}}+\text{\eq{5.4}}=
2A_0\k \sin\th\d\left(\frac{r_+^2+a^2}\a\right)\,. \lab{5.7}
\ee
Then, the integration over $d\th d\vphi$ yields the final result
\be
\d\G_H=8\pi A_0 \k\d\Big(\frac{r_+^2+a^2}{\a}\Big)=T\d S\, ,
\qquad S:=16\pi A_0 \frac{\pi(r_+^2+a^2)}{\a}\,,
\ee
where $T=\k/2\pi$ is the black hole temperature and $S$ the Kerr-AdS entropy in PG.
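As a simple consistency check, assuming the usual normalization in which $16\pi A_0$ goes over to $1/G$ in the GR limit, the above entropy reduces to the Bekenstein--Hawking form
\be
S\to\frac{\pi(r_+^2+a^2)}{\a G}=\frac{A_H}{4G}\,,\qquad
A_H=\frac{4\pi(r_+^2+a^2)}{\a}\,,
\ee
where $A_H$ is the area of the Kerr-AdS horizon.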
\section{The first law}\label{sec6}
\setcounter{equation}{0}
In the Hamiltonian approach described in section \ref{sec2}, the asymptotic charges and entropy are defined by the variational equations \eq{2.4} as a priori independent quantities. The results that we found for $\d E_T,\d E_\vphi$ and $\d\G_H$, combined with the identity derived in Appendix \ref{appC}, imply the validity of the first law of black hole thermodynamics for the Kerr-AdS black hole,
\be
T\d S=\d E_T-\Om_+\d E_\vphi\, , \lab{6.1}
\ee
in accordance with Eq. \eq{2.5}.
\section{Concluding remarks}\label{sec7}
\setcounter{equation}{0}
In the present paper, we performed a classical Hamiltonian analysis of the thermodynamic variables, energy, angular momentum and entropy, for the Kerr-AdS spacetimes in PG.
Our analysis relies on the Kerr-AdS solution with torsion, constructed some thirty years ago by Baekler et al. \cite{kads1,kads2}. The results for energy and angular momentum coincide with those obtained by Hecht and Nester \cite{kads3}. In both their and our analyses, it was essential to understand the limitations of the Boyer-Lindquist coordinates at large distances, in accordance with the ideas of Henneaux and Teitelboim \cite{hentei}, which can be traced back to the work of Carter \cite{carter}.
As far as we know, our result for the Kerr-AdS entropy, summarized by the first law \eq{6.1}, is completely new in the literature, although our earlier results for the spherically symmetric and asymptotically flat Kerr solutions \cite{bc1,bc2,bc3} led to certain ideas on what the answer might be in the Kerr-AdS case. The calculations producing the final result for the Kerr-AdS entropy are rather complex, but at the end, they confirm that black hole entropy in PG can be interpreted as the canonical charge on the horizon.
In spite of a very different geometric/dynamical content of PG and GR, our analysis shows that the related Kerr-AdS thermodynamic variables differ solely by a constant multiplicative factor. This somewhat puzzling situation may indicate the need for a deeper understanding of the role of boundary conditions.
\section*{Acknowledgments}
This work was partially supported by the Serbian Science Foundation under Grant No. 171031.
\section{Motivation}
Upon bringing a quantum impurity/dot close to a bulk superconductor the quasiparticle bound states can develop inside the pairing gap $\omega \in \left( -\Delta, \Delta\right)$ \cite{balatsky.vekhter.06}. These in-gap states originate either (1) from the proximity effect, when the Cooper pairs penetrate such a nanoscopic object, converting it into a superconducting grain, or (2) from pairing of the quantum dot electron with an opposite-spin electron of the bulk superconductor. Depending on the specific mechanism, they are dubbed the Andreev \cite{Rodero-11} or Yu-Shiba-Rusinov bound states \cite{Paaske-2010}, respectively. The subgap quasiparticles have been observed in numerous experimental studies, using magnetic impurities deposited on superconducting substrates \cite{STM-1,STM-2,STM-3,Franke-2018} and quantum dots embedded into the Josephson \cite{Josephson-1,Josephson-2,Josephson-3}, Andreev \cite{Andreev-1,Andreev-2,Andreev-3} or multi-terminal heterojunctions \cite{multiterminal-1,multiterminal-2,Baumgartner-17}.
With the advent of time-resolved techniques, the dynamical properties of such bound states can nowadays be studied directly. Some aspects concerning this issue have so far been investigated theoretically by several groups, e.g.\ addressing the response time to a step-like pulse \cite{Xing-07}, the time-dependent multiple Andreev (particle-to-hole) reflections \cite{Stefanucci-10}, sequential tunneling \cite{Konig-12}, influence of time-dependent bias \cite{Pototzky-14}, waiting time distributions manifested in the nonequilibrium transport \cite{Governale-13,Michalek-17}, short-time counting statistics \cite{Konig-16}, realization of the metastable bound states in the phase-biased Josephson junction \cite{LevyYeyati-2016,LevyYeyati-2017}, transient effects caused by forming the Andreev \cite{Taranko-2018} and Josephson \cite{Taranko-2019} junctions, bound states of the periodically driven systems \cite{Komnik-13,Melin-2017,Arachea-2018,Baran-2019}, cross-correlations between currents of a Cooper pair splitter \cite{Wrzesniewski-2020,Flindt-2020,Michalek-2020} and studying more exotic heterostructures, hosting the Majorana modes \cite{Souto-2020,Jonckheere-2020,Manousakis-2020}.
A time-dependent change of the model parameters is usually followed by thermalization processes \cite{Polkovnikov-2011,Freericks-2014}. In superconducting heterostructures the rate of the relaxation processes depends on the coupling to a continuum \cite{LevyYeyati-2017}. In particular, when the quantum system is {\it quenched} from its ground state, i.e.\ when some parameter of the Hamiltonian is suddenly changed, the resulting time evolution might lead to nontrivial behaviour upon reaching its new asymptotic state, sometimes undergoing the dynamical quantum phase transitions \cite{Heyl-2018}. Dynamics triggered by such a {\it quantum quench}, when the initially prepared state $\left| \Psi(t_{0})\right>$ described by the Hamiltonian $\hat{H}_{0}$ undergoes evolution to $\left| \Psi(t)\right> = e^{-i\hat{H}t/\hbar} \left| \Psi(t_{0})\right>$, where at later times $t>t_{0}$ the Hamiltonian $\hat{H}\neq \hat{H}_{0}$, has recently been the topic of intensive studies. Such phenomena can be conveniently explored in nanoscopic heterostructures, because the available experimental methods enable controllable change of the system's parameters $\hat{H}_{0} \rightarrow \hat{H}$.
\begin{figure}
\includegraphics[width=0.7\columnwidth]{sketch.pdf}
\caption{Heterostructure, consisting of a correlated quantum dot (QD)
coupled to the normal (N) and superconducting (S) leads whose energy level $\varepsilon_d$ can be changed by the gate potential $V_{G}(t)$.}
\label{scheme}
\end{figure}
In this work we study the dynamical features of a correlated quantum dot (QD) placed between the normal (N) and superconducting (S) electrodes (Fig.~\ref{scheme}), focusing on two types of quenches caused by: (i) an abrupt change of the coupling to the superconducting lead and (ii) a sudden alteration of the gate potential lifting the energy level of the QD. This allows us to explore the dynamical properties of the subgap quasiparticles existing in a fully correlated quantum dot junction, and to examine their behavior in the vicinity of the singlet-doublet quantum phase transition.
We achieve this goal by employing the time-dependent numerical renormalization group (tNRG) method \cite{Wilson1975,Anders2005,Anders2006}
to quantitatively study the quench dynamics of an unbiased junction. On the other hand in the case of a biased heterostructure, the dynamics is examined by determining the equation of motion of relevant operators within the mean-field approximation, upon establishing the validity of such approach by comparison with tNRG in the relevant transport regime.
This enables us to draw conclusions about the dynamical behavior
of superconductor-proximized, correlated quantum dot junction subject to arbitrary bias voltage.
The paper is organized as follows. In Sec.~\ref{sec:formulation} we formulate the microscopic model, describe the specific quench protocols, and outline two computational methods for determination of the time-dependent physical observables. Next, in Sec.~\ref{unbiased_junction}, we analyze evolution of the quantum dot occupancy, the complex order parameter and the charge current induced by both quantum quenches in the unbiased heterojunction. Sec.~\ref{Sec:biased} presents the charge transport properties for the biased system, which could be suitable for experimental verification. Finally, in Sec.~\ref{Sec:conclusions}, we summarize the main results.
\section{Formulation of the problem
\label{sec:formulation}}
In this section we present the microscopic model and specify
two types of quantum quenches that could be practically realized.
Next, we outline the computational methods suitable to account for
the time-dependent phenomena, proximity effect and electron correlations.
\subsection{Microscopic model
\label{sec:micro-model}}
For the description of our N-QD-S heterostructure we use the single
impurity Anderson Hamiltonian
\begin{eqnarray}
\hat{H} = \underbrace{\sum_{\sigma} \varepsilon_{d}(t) \hat{d}^{\dagger}_{\sigma}
\hat{d}_{\sigma} + U \; \hat{n}_{\uparrow} \hat{n}_{\downarrow}}_{\hat{H}_{QD}} +
\sum_{\beta} \left( \hat{H}_{\beta} + \hat{V}_{\beta - QD} \right)
\label{model}
\end{eqnarray}
where $\hat{d}_{\sigma}$ ($\hat{d}^{\dagger}_{\sigma}$) is the annihilation (creation)
operator of the quantum dot electron with spin $\sigma$ whose (time-dependent)
energy is $\varepsilon_{d}(t)$ and $U$ denotes electrostatic repulsion between the
opposite spin electrons. We treat the external metallic lead as free fermion gas
$\hat{H}_{N} \!=\! \sum_{{\bf k},\sigma} \xi_{\bf k} \hat{c}_{{\bf k} \sigma}^{\dagger}
\hat{c}_{{\bf k} \sigma}$, where $\xi_{\bf k}=\varepsilon_{\bf k}-\mu_{N}$ is the
energy $\varepsilon_{\bf k}$ of itinerant electrons measured from the chemical potential
$\mu_{N}$. The superconducting lead is described by the BCS model $\hat{H}_{S} \!=\!
\sum_{{\bf q},\sigma} \xi_{{\bf q}} \hat{c}_{{\bf q}\sigma}^{\dagger} \hat{c}_{{\bf q}
\sigma} \!-\! \sum_{\bf q} \Delta \left( \hat{c}_{{\bf q} \uparrow} ^{\dagger}
\hat{c}_{-{\bf q} \downarrow}^{\dagger} + \hat{c} _{-{\bf q} \downarrow} \hat{c}_{{\bf q}
\uparrow }\right)$ with $\xi_{\bf q}=\varepsilon_{\bf q}-\mu_{S}$ and the isotropic
pairing gap $\Delta$.
Coupling of the QD electrons to the metallic lead is given by the hybridization term
$\hat{V}_{N-QD} = \sum_{{\bf k},\sigma} \left( V_{\bf k} \; \hat{d}_{\sigma}^{\dagger}
\hat{c}_{{\bf k} \sigma} + \mbox{\rm h.c.} \right)$ and $\hat{V}_{S - QD}$ can be
expressed by interchanging the indices ${\bf k} \leftrightarrow {\bf q}$. In the present
study we focus on the subgap quasiparticle states, therefore for simplicity we impose
the constant auxiliary couplings $\Gamma_{N (S)}=\pi \sum_{{\bf k}({\bf q})}
|V_{{\bf k}({\bf q})}|^2 \;\delta(\omega \!-\! \varepsilon_{{\bf k}({\bf q})})$.
For the energy regime $|\omega|\ll\Delta$ the coupling $\Gamma_{S}$ can be regarded
as the proximity induced pairing potential, whereas $\Gamma_{N}$ controls the inverse
life-time of the in-gap quasiparticles. As we shall see, these couplings
manifest themselves in the dynamical quantities in qualitatively different ways.
\subsection{Quench protocols}
\label{sec:quench}
Any type of the quantum quench can be generally cast
into the following time-dependent Hamiltonian
\begin{equation}
\hat{H}(t) = \theta(-t)\hat{H}_{0} + \theta(t)\hat{H},
\label{Eq:Hamiltonian_TD}
\end{equation}
where $\theta(t)$ is the step function. The initial Hamiltonian $\hat{H}_0$ is suddenly replaced (at time $t=0$) by the new Hamiltonian $\hat{H}$. In particular, an abrupt change can be realized within the same structure of the model (\ref{model}) by appropriately modifying its parameters.
The time-dependent expectation value of a physical observable $\hat{\cal{O}}$ then evolves according to (for the time-independent final Hamiltonian)
\begin{eqnarray}\label{Eq:O}
O(t) \equiv \langle {\hat{\cal{O}}}(t) \rangle = \mathrm{Tr}\left\{e^{-i\hat{H}t} \hat{\rho}_0 e^{i\hat{H}t} {\hat{\cal{O}}} \right\} \nonumber \\
=\mathrm{Tr}\left\{\hat{\rho}_0 \hat{\cal{O}}_{H}(t)\right\}\equiv\langle \hat{\cal{O}}_{H}(t) \rangle,
\end{eqnarray}
where $\hat{\rho}_0$ denotes the initial equilibrium density matrix
of the system described by $\hat{H}_0$ and $\hat{\cal{O}}_{H}(t)$ is the Heisenberg representation of $\hat{\cal{O}}$.
In this work we shall examine the dynamical behavior of various quantities,
considering two different types of the quantum quenches. In the first case, we
impose an abrupt change of coupling to the superconducting lead
\begin{eqnarray}
V_{{\bf q}}(t) =
\left\{ \begin{array}{ll}
0 & \hspace{0.5cm} \mbox{\rm for } t \leq 0 \\
V_{{\bf q}} & \hspace{0.5cm} \mbox{\rm for } t > 0
\end{array} \right.
\label{abrupt_coupling}
\end{eqnarray}
which is formally equivalent to the assumption $\Gamma_{S}(t)=\Gamma_{S} \; \theta(t)$.
Another type of the quantum quench will refer to the time-dependent QD energy level
\begin{eqnarray}
\varepsilon_{d}(t) =
\left\{ \begin{array}{ll}
\varepsilon_{d} & \hspace{0.5cm} \mbox{\rm for } t \leq 0 \\
\varepsilon_{d} + V_{G} & \hspace{0.5cm} \mbox{\rm for } t > 0
\end{array} \right.
\label{abrupt_gate}
\end{eqnarray}
which could be practically achieved by applying the gate potential
$V_{G}(t)=V_{G} \; \theta(t)$.
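For illustration, the expectation value (\ref{Eq:O}) can be evaluated exactly in the superconducting atomic limit ($\Delta\rightarrow\infty$) with the normal lead neglected ($\Gamma_{N}=0$), where the proximized dot reduces to a four-dimensional Fock space. The minimal sketch below (in Python; all parameter values are chosen merely for illustration) implements the gate quench (\ref{abrupt_gate}) in this closed-system limit, so the resulting oscillations of $n(t)$ and $\chi(t)$ remain undamped:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Jordan-Wigner construction of d_up, d_dn on the 4-dim dot Fock space
I2 = np.eye(2)
sm = np.array([[0., 1.], [0., 0.]])   # two-level lowering operator
sz = np.diag([1., -1.])
d_up = np.kron(sm, I2)                # annihilates a spin-up electron
d_dn = np.kron(sz, sm)                # annihilates a spin-down electron

def H(eps, U, Gs):
    n_up, n_dn = d_up.T @ d_up, d_dn.T @ d_dn
    pair = d_up.T @ d_dn.T            # d_up^dag d_dn^dag
    return eps*(n_up + n_dn) + U*(n_up @ n_dn) + Gs*(pair + pair.T)

U, Gs = 0.1, 0.2                      # illustrative values
E0, P0 = np.linalg.eigh(H(-U/2, U, Gs))
psi0 = P0[:, 0]                       # initial (singlet) ground state
Hf = H(-U, U, Gs)                     # quenched level, eps_d -> -U

for t in np.linspace(0., 100., 11):
    psi = expm(-1j*Hf*t) @ psi0       # |psi(t)> = exp(-i H t) |psi(0)>
    n = np.real(psi.conj() @ (d_up.T @ d_up + d_dn.T @ d_dn) @ psi)
    chi = psi.conj() @ (d_dn @ d_up) @ psi   # <d_dn d_up>
    print(f"t={t:6.1f}  n={n:.4f}  Re(chi)={chi.real:+.4f}")
\end{verbatim}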
For computing the time-dependent expectation values of our interest, such as the charge occupancy
$n_{\sigma}(t)\equiv \langle \hat{d}^{\dagger}_{\sigma}(t)\hat{d}_{\sigma}(t) \rangle$,
the complex order parameter $\chi(t)\equiv \langle \hat{d}_{\downarrow}(t)\hat{d}_{\uparrow}(t) \rangle$
and the charge currents $j_{S,N}(t)$ we use two techniques. Below we briefly outline both methods.
\subsection{Mean field approach}
In the absence of correlations ($U\!=\!0$) one could exactly determine all required
observables, solving the set of coupled equations of motion for appropriately chosen operators. But even for $U\!=\!0$, the observables have far from trivial evolution. For the abrupt coupling of the uncorrelated QD to both external electrodes we have recently inspected the characteristic time-scales manifested in the buildup
of the subgap bound states \cite{Taranko-2018,Taranko-2019}. Technically,
we have solved the Heisenberg equation of motion
for the localized $\hat{d}_{\sigma}^{(\dagger)}$
and itinerant $\hat{c}_{{\bf k}/{\bf q}\sigma}^{(\dagger)}$ electron operators,
respectively. For this purpose we have expressed the Heisenberg equations of motion
introducing the Laplace transforms $\hat{O}(s) = \int_{0}^{\infty} e^{-st}\hat{O}(t)dt $,
which are suitable for considering the specific initial conditions
$\hat{O}(0)$. Next, performing the inverse Laplace transforms we have determined the time-dependent operators $\hat{O}(t)$ and used
them for computing analytical exact formulas for the expectation values, such as $n_{\sigma}(t)
\equiv \langle \hat{d}_{\sigma}^{\dagger}(t)\hat{d}_{\sigma}(t)\rangle$.
Typical evolution of the uncorrelated quasiparticle spectrum driven by a sudden change of coupling $\Gamma_{S}(t)$ is schematically illustrated in Fig.~\ref{idea}. Initially, the electron state exists at the QD level $\varepsilon_{d}$ and its line-broadening (inverse life-time) depends on the coupling $\Gamma_{N}$ to the metallic bath. Upon coupling the QD to the bulk superconductor this quasiparticle state evolves into a pair of the Andreev peaks centered at $\pm E_{A}$, which for $U=0$ and $\Delta\rightarrow\infty$ are given by $E_{A}=\sqrt{\varepsilon_{d}^{2}+\Gamma_{S}^{2}}$. This new quasiparticle spectrum is gradually developed through a sequence of quantum oscillations with the characteristic frequency $\omega=E_{A}$, reminiscent of the Rabi-type oscillations of two-level systems \cite{Taranko-2018}. The relaxation processes originating from the QD coupling to the normal lead are responsible for damping of these quantum oscillations. The evolution of the time-dependent observables is thus controlled by two characteristic time scales: (i) the period of the quantum oscillations, $T=2\pi/E_{A}$, and (ii) the relaxation time $\tau=\hbar/\Gamma_{N}$ governing their exponential decay $\exp{\left(-t/\tau\right)}$.
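As a rough numerical illustration (setting $U=0$ and $\hbar=1$ for the estimate), for $\varepsilon_{d}=0$, $\Gamma_{S}=0.1$ and $\Gamma_{N}=0.01$ one finds $E_{A}=\sqrt{\varepsilon_{d}^{2}+\Gamma_{S}^{2}}=0.1$, hence the oscillation period $T=2\pi/E_{A}\approx 63$ and the damping time $\tau=1/\Gamma_{N}=100$; only a few oscillation periods thus survive before the new quasiparticle spectrum settles.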
\begin{figure}
\includegraphics[width=0.5\columnwidth]{changeover.pdf}
\caption{Illustration of the post-quench evolution driven by a sudden change of the coupling to superconductor
$\Gamma_{S}(t)=\Gamma_{S}\theta(t)$, presenting the quasiparticle peak existing
till $t=0$ at $\varepsilon_{d}$, which changes into a pair of bound states at $\pm E_{A}$. Such changeover is accompanied with damped quantum oscillations of frequency $\omega=E_{A}$.}
\label{idea}
\end{figure}
A similar approach leading to the analytical expressions for physical quantities does not hold for the system with the Coulomb correlations included, as the corresponding set of equations of motion for $\hat{d}_{\sigma}(t)$ and $\hat{c}_{k/q \sigma}(t)$ cannot be closed. Even for a weakly correlated system, when the Coulomb repulsion term can be linearized within the Hartree-Fock-Bogoliubov (mean field) decoupling scheme
\begin{eqnarray}
\hat{d}^{\dagger}_{\uparrow}\hat{d}_{\uparrow}
\hat{d}^{\dagger}_{\downarrow}\hat{d}_{\downarrow} & \simeq &
n_{\uparrow}(t) \hat{d}^{\dagger}_{\downarrow}\hat{d}_{\downarrow}
+n_{\downarrow}(t) \hat{d}^{\dagger}_{\uparrow}\hat{d}_{\uparrow}
-n_{\uparrow}(t)n_{\downarrow}(t)
\nonumber \\ & + &
\chi(t) \hat{d}^{\dagger}_{\uparrow} \hat{d}^{\dagger}_{\downarrow}
+\chi^{*}(t) \hat{d}_{\downarrow} \hat{d}_{\uparrow}
-\left| \chi(t) \right|^{2} ,
\label{HFB}
\end{eqnarray}
the analytical approach (described above) fails. With the approximation (\ref{HFB}),
we can incorporate the Hartree-Fock term into the renormalized QD energy level
$\tilde{\varepsilon}_{d}(t) \equiv \varepsilon_{d}
(t)+U n_{-\sigma}(t)$, whereas the anomalous contribution rescales the effective
pairing potential $\tilde{\Gamma}_{S}(t) \equiv \Gamma_{S}(t) - U \chi(t)$.
In comparison to the case with $U=0$, now the effective QD level and effective pairing
potential are time-dependent functions. The corresponding equations of motion for $\hat{d}_{\sigma}(t)$ and $\hat{c}_{{\bf k}/{\bf q}\sigma}(t)$ cannot be transformed in a tractable way through the Laplace transformation into an algebraic system of equations for $\hat{d}_{\sigma}(s)$ and $\hat{c}_{{\bf k}/{\bf q}\sigma}(s)$ and next, through the inverse Laplace transformation into final required expressions for $\hat{d}_{\sigma}(t)$ and $\hat{c}_{{\bf k}/{\bf q}\sigma}(t)$. However, in such a case we can find the observables of interest, $n_{\sigma}(t)$ and $\langle \hat{d}_{\downarrow}(t)\hat{d}_{\uparrow}(t)\rangle$, solving numerically the set of coupled equations of motion for
$n_{\sigma}(t)$, $\langle \hat{d}_{\downarrow}(t)\hat{d}_{\uparrow}(t)\rangle$, $\langle \hat{d}^{\dagger}_{\sigma}(t)\hat{c}_{{\bf k}\sigma}(0)\rangle$ and $\langle \hat{d}_{\sigma}(t)\hat{c}_{{\bf k}-\sigma}(t)\rangle$, respectively (see Ref.~\cite{Taranko-2018}). In this paper we are going to consider different types of quantum quenches applied in the system under consideration using this method of calculations.
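To convey the structure of such a self-consistent calculation, the following minimal sketch evolves the $2\times 2$ Nambu density matrix, with entries $n_{\sigma}(t)$ and $\chi(t)$, under the effective Bogoliubov-de Gennes Hamiltonian built from $\tilde{\varepsilon}_{d}(t)$ and $\tilde{\Gamma}_{S}(t)$. We stress that the normal lead is mimicked here only by a crude relaxation-time term $-\Gamma_{N}(\rho-\rho_{\rm eq})$, introduced merely to damp the oscillations; it is a caricature of, not a substitute for, the full equations of motion of Ref.~\cite{Taranko-2018}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# rho = [[n_sigma, chi], [chi*, 1 - n_sigma]] in Nambu space;
# quench Gamma_S(t) = Gamma_S * theta(t); parameters only illustrative
eps_d, U, Gs, Gn = 0.0, 0.1, 0.1, 0.01
rho_eq = 0.5*np.eye(2, dtype=complex)      # crude eV = 0 reference state

def rhs(t, y):
    rho = y.reshape(2, 2)
    n, chi = rho[0, 0].real, rho[0, 1]
    eps_eff = eps_d + U*n                  # Hartree-Fock shifted level
    gap_eff = Gs - U*chi                   # effective pairing potential
    Hbdg = np.array([[eps_eff, gap_eff],
                     [np.conj(gap_eff), -eps_eff]])
    drho = -1j*(Hbdg @ rho - rho @ Hbdg) - Gn*(rho - rho_eq)
    return drho.ravel()

rho0 = 0.5*np.eye(2, dtype=complex)        # decoupled dot at eps_d = 0
sol = solve_ivp(rhs, (0., 400.), rho0.ravel(),
                t_eval=np.linspace(0., 400., 9))
for t, y in zip(sol.t, sol.y.T):
    rho = y.reshape(2, 2)
    print(f"t={t:5.0f}  n={2*rho[0,0].real:.3f}  "
          f"Re(chi)={rho[0,1].real:+.3f}")
\end{verbatim}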
Obviously, one may ask about the validity of the static mean field approximation
(\ref{HFB}). This decoupling could be expected to give credible results, whenever
the Coulomb potential $U$ is much smaller than the pairing strength $\Gamma_{S}$
(recall, that on-dot pairing is driven here by the superconducting proximity effect).
Nonetheless, it has been shown \cite{Zonda-2015} that (under the stationary conditions) the lowest order treatment (\ref{HFB}) of the Coulomb interaction
qualitatively reproduces the even-odd parity change of the ground state realized
at $U\sim\Gamma_{S}$. Such results well agree with the numerical renormalization
group data and with the quantum Monte Carlo simulations \cite{Novotny-2019}.
Motivated by this fact, in the following we confront this approximation with the sophisticated
(and computationally more demanding) time-dependent numerical renormalization group
method. While the latter method allows for accurate studies of dynamics in the strongly-correlated regime, the Hartree-Fock scheme enables the determination of the differential
tunneling conductance in the biased heterojunction (Sec.~\ref{Sec:biased}).
\subsection{Time-dependent numerical \\renormalization group}
The time-dependent numerical renormalization group (tNRG) is an extension of the Wilson's numerical renormalization group (NRG) method,
which allows one to conveniently study the dynamics of quantum impurity systems \cite{Wilson1975,Bulla2008,Anders2005, Anders2006,Costi2014,Costi2014generalization}.
An invaluable advantage of this approach is the very accurate treatment of many-body correlations in a fully non-perturbative manner.
In order to study the quench dynamics of the system described by the time-dependent Hamiltonian specified in Eq.~(\ref{Eq:Hamiltonian_TD}),
we use the NRG method to solve both the initial and final Hamiltonians, $\hat{H}_0$ and $\hat{H}$, independently \cite{NRG_code}. In the NRG procedure both Hamiltonians are diagonalized in an iterative manner,
keeping at each iteration at least $N_K$ energetically lowest-lying eigenstates labeled with superscript $K$. The high-energy discarded states, labeled with superscript $D$, are collected from all the iterations and used to construct the full many-body
initial and final eigenbases \cite{Anders2005}
\begin{equation} \label{Eq:completeness}
\sum_{nse}|nse\rangle^{\!D}_{0} \,{}^{D}_{\,0}\!\langle nse| \!=\! \mathbbm{\hat{1}} \;\;\;\,
{\rm and}
\,\;\;\; \sum_{nse}|nse\rangle^{\!D} \,{}^D \!\langle nse| \!=\! \mathbbm{\hat{1}},
\end{equation}
corresponding to $\hat{H}_0$ and $\hat{H}$, respectively. The index $s$ labels the eigenstates at iteration $n$, while $e$ indicates the environmental subspace representing the rest of the Wilson chain. Here, we note that all eigenstates of the last iteration are
considered as discarded.
In the next step, an initial full density matrix $\hat{\rho}_0$ is constructed for the system
described by $\hat{H}_0$ at thermal equilibrium \cite{Andreas_broadening2007}
\begin{equation}
\hat{\rho}_0=\sum_{nse}\frac{e^{-\beta E_{0ns}^D}}{Z} |nse\rangle^{\!D}_{0} \,{}^{D}_{\,0}\!\langle nse|,
\end{equation}
where $\beta \equiv 1/T$ is the inverse temperature and
\begin{equation}
Z\equiv\sum_{nse} e^{-\beta E_{0ns}^D}
\end{equation}
is the partition function.
The actual time-dependent calculations are performed in frequency space.
The expectation value of the frequency-dependent local operator
$O(\omega)\equiv\langle \mathcal{\hat{O}}(\omega) \rangle$
expressed with the use of the corresponding eigenstates is given by
\begin{eqnarray}\label{Eq:Ow}
O(\omega) &=&\!\!
\sum_{n}^{ XX'\neq KK}\sum_{n'} \sum_{ss'e} {}^{X}\! \langle nse|w_{n'} \hat{\rho}_{0n'}| ns'e\rangle^{\! X'} \nonumber\\
&&\times {}^{ X'}\! \langle ns'e|\mathcal{\hat{O}}|nse\rangle^{\! X} \; \delta(\omega + E_{ns}^{X} - E_{ns'}^{X'}),
\end{eqnarray}
where $\hat{\rho}_{0n'}$ is the part of the initial density matrix
given at iteration $n'$ and $w_{n'}$ is the weight of the contribution evaluated by tracing out the environmental states \cite{Andreas_broadening2007}.
The calculation of the expectation value is performed
in an iterative fashion by adding all the contributions,
as described in Ref.~\cite{WrzesniewskiWeymann-2019}.
Subsequently, the discrete data is weakly smoothed with a log-Gaussian function and broadening parameter $b \leq 0.1$, and then Fourier-transformed into the time domain to eventually obtain a time-dependent expectation value of the local operator \cite{Andreas2012}
\begin{equation}
O(t)=\int_{-\infty }^{\infty}O(\omega)e^{-i \omega t} d \omega.
\end{equation}
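Schematically, and omitting the log-Gaussian broadening step, the last transformation amounts to summing the discrete spectral weights $w_i$ collected at frequencies $\omega_i$; a minimal sketch reads:
\begin{verbatim}
import numpy as np

# discrete tNRG output: weights w_i attached to frequencies omega_i
def expectation_vs_time(weights, omegas, times):
    # O(t) = sum_i w_i * exp(-i * omega_i * t)
    return np.array([np.sum(weights*np.exp(-1j*omegas*t))
                     for t in times])
\end{verbatim}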
For results calculated with the tNRG procedure
we used the discretization parameter $1.5 \leqslant \Lambda \leqslant 2$, set the length of the Wilson chain to $N=100$
and kept at least $N_K=2000$ eigenstates at each iteration.
More detailed description of the tNRG implementation
in the matrix product state framework has been presented in Ref.~\cite{WrzesniewskiWeymann-2019}.
\section{Dynamics of unbiased setup}
\label{unbiased_junction}
\begin{figure}
\includegraphics[width=0.95\columnwidth]{Fig3.pdf}
\caption{Comparison of the time-dependent observables obtained by the tNRG technique (solid lines) and the HFB approximation (dashed lines) for a sudden change of the QD level from $\varepsilon_{d}(t\leq 0)=-U/2$ to $\varepsilon_{d}(t>0)=-U$. The couplings of the QD to external leads are assumed to be $\Gamma_{S}=0.2$ and $\Gamma_{N}=0.01$, with $U=0.1$. In the tNRG calculations, energies are given in units of the band halfwidth.}
\label{quench_eps}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{Fig2.pdf}
\caption{Comparison of the tNRG results (solid lines) with the mean field values (dashed lines) obtained for $\varepsilon_{d}=0$, $U=0.1$ and $\Gamma_{N}=0.01$ imposing the quench of $\Gamma_{S}$, $\Gamma_{S}(t<0)=0\rightarrow \Gamma_{S}(t>0)=0.1$.}
\label{quench_gammas}
\end{figure}
We have checked that in the weak correlation limit, $U\ll\Gamma_{S}$, both computational procedures yield practically identical results. In what follows, we shall inspect the time-dependent quantities obtained under arbitrary conditions as a test of the credibility of the approximate treatment, which will be used in Sec.~\ref{Sec:biased} to compute the transport properties of the biased heterostructure. For this purpose, we restrict our considerations to the superconducting atomic limit $\Delta \rightarrow \infty$ and assume a small coupling $\Gamma_{N}=0.01$ in order to guarantee the long life-times of in-gap quasiparticles. The latter assumption is also useful for the analysis of the relaxation processes, whose characteristic time-scale is $\tau \sim \Gamma_{N}^{-1}$ \cite{Taranko-2018}.
Figure~\ref{quench_eps} shows the time-dependent occupancy $n(t)$, charge current $j_{S}(t)$ (expressed in units of $\frac{4e}{\hbar}\Gamma_{S}$) and the real part of the order parameter $\chi(t)$ obtained for a sudden change of the QD level $\varepsilon_{d}(t)$. Since the current to the normal contact, $j_{N}(t)$, obeys the charge conservation, $j_{S}(t)+j_{N}(t)=e\frac{d n(t)}{dt}$, we skip its presentation here. Figure~\ref{quench_gammas} displays the same quantities obtained for a sudden switching of the coupling $\Gamma_{S}(t)=U\;\theta(t)$. In both cases we clearly recognize that the initial observables gradually evolve to their new steady-state-limit values over the characteristic time interval $\tau \sim 1/\Gamma_{N}$. Meanwhile, they undergo the quantum oscillations, whose frequency depends on the energies of in-gap quasiparticles. Such behavior has been previously obtained by us analytically \cite{Taranko-2018} for the noninteracting case (see Fig.~\ref{idea}). In what follows we shall analyze the role of electron correlations.
\subsection{Quench in coupling $\Gamma_S$}
For understanding the dynamics of the correlated quantum dot driven by any type of the quench, it is useful to recall the stationary solution in the limit of $\Gamma_{N}=0$ and $\Delta \rightarrow \infty$. Depending on the model parameters, i.e. $\varepsilon_{d}$, $U$ and $\Gamma_{S}$, the quantum dot can be either in the singly occupied $\left| \sigma \right>$ or the BCS-type $u \left| 0\right> - v \left| \uparrow \downarrow \right>$ ground state \cite{Bauer-2007}. For
\begin{eqnarray}
\left( \varepsilon_{d} + \frac{U}{2} \right)^{\!2} + \Gamma_{S}^{2} = \left( \frac{U}{2}\right)^{\!2}
\end{eqnarray}
there occurs a {\it quantum phase transition} from the (spinful) doublet to the (spinless) singlet configuration. It has crucial importance for an interplay between the on-dot pairing and the correlation effects. For finite $\Gamma_{N}\neq 0$, such transition is replaced by a crossover. Nonetheless, all the essential features of these qualitatively different (singlet/doublet) phases are still clearly observable.
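This criterion follows from comparing the doublet energy $\varepsilon_{d}$ with the lowest eigenvalue of the singlet sector spanned by $\left|0\right>$ and $\left|\uparrow\downarrow\right>$, and can be verified with a few lines of code (a sketch valid in the limit $\Gamma_{N}=0$, $\Delta\rightarrow\infty$; the numerical values are only illustrative):
\begin{verbatim}
import numpy as np

def ground_state(eps_d, U, Gs):
    # singlet sector: basis {|0>, |up,down>}, coupled by Gamma_S
    E_singlet = np.linalg.eigvalsh([[0., Gs],
                                    [Gs, 2.*eps_d + U]])[0]
    return "singlet" if E_singlet < eps_d else "doublet"

# at half filling (eps_d = -U/2) the boundary sits at Gamma_S = U/2
for Gs in (0.04, 0.05, 0.06):
    print(Gs, ground_state(-0.05, 0.1, Gs))
\end{verbatim}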
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_tnrg3.pdf}
\caption{The time-dependent occupation number $n(t)$, current $j_S(t)$ [in units $\frac{4e}{\hbar}\Gamma_{S}$] and the real part of $\chi(t) = \langle d_\downarrow(t) d_\uparrow(t) \rangle$ after switching the coupling strength $\Gamma_S(t)$
from zero to its final value $\Gamma_S$. Results are obtained by tNRG for parameters as in Fig.~\ref{quench_gammas}.}
\label{tnrg_gs_quench1}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_HFB_1.pdf}
\caption{The same as in Fig.~\ref{tnrg_gs_quench1} obtained by the mean-field approximation.}
\label{hfb_gs_quench1}
\end{figure}
In particular, for the half-filled dot ($\varepsilon_{d}=-U/2$), such a quantum phase transition (crossover) would occur at $\Gamma_{S}=U/2$. Figures~\ref{tnrg_gs_quench1} and \ref{hfb_gs_quench1} present the evolution of the physical quantities with respect to time (horizontal axis) and the final coupling strength $\Gamma_{S}$ (vertical axis) obtained for $\varepsilon_{d}=0$ by tNRG and the mean-field approximation, respectively. In this case the quantum dot evolves to the BCS-type configuration for all values of $\Gamma_{S}$. Figures~\ref{tnrg_gs_quench3} and \ref{hfb_gs_quench3} correspond
to the nearly half-filled quantum dot. As expected, in the doublet region ($\Gamma_{S} < U/2$) we notice that the order parameter is negligibly small (bottom panel) and we hardly observe any significant charge flow $j_{S}(t)$ (middle panel) due to the dominant Coulomb repulsion. For stronger couplings $\Gamma_{S} > U/2$, the system again evolves to the BCS-type ground state, and this is achieved through a sequence of damped quantum oscillations. With increasing $\Gamma_{S}$, the quasiparticle energies move further and further away, therefore the oscillation frequency grows.
Let us notice that both methods yield practically identical results.
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_tnrg8.pdf}
\caption{The time-dependent occupation number $n(t)$, current $j_S(t)$ and real part of the order parameter
$\chi(t) = \langle d_\downarrow(t) d_\uparrow(t) \rangle$ after switching the coupling strength $\Gamma_S(t)$
from zero to its final value $\Gamma_S$ (indicated on vertical axis). Results are obtained by tNRG for parameters as in Fig.~\ref{quench_gammas} and $\varepsilon_{d}=-U/2 - \delta$, where $\delta=U/20$.}
\label{tnrg_gs_quench3}
\end{figure}
\subsection{Quench in orbital level position}
Let us now inspect the second type of quantum quench, related to an abrupt change of the QD energy level (\ref{abrupt_gate}). Figures~\ref{tnrg_eps_quench1} and \ref{hfb_eps_quench1} present the time-dependent observables obtained for the same parameters as in Fig.~\ref{quench_eps} by tNRG and the mean-field approximation, respectively. Here, we set the coupling to the superconductor equal to $\Gamma_S=2U$. Initially the orbital level is tuned to the particle-hole symmetry point, $\varepsilon_d(t\leq 0)=-U/2$, marked by the horizontal dashed lines in the figures. The final value of $\varepsilon_{d}/U$ after the quench is indicated on the vertical axis.
One can see that the evolution of physical observables to their new stationary values is realized through a sequence of quantum oscillations, analogously to the behavior displayed in Fig. \ref{quench_eps}. These oscillations show up for a wide range of final values of the energy level $\varepsilon_{d}$. In this regard, the absolute difference $|\varepsilon_d(t\leq0) - \varepsilon_d(t>0)|$ has a strong influence on the amplitude of such oscillations. This is especially evident when examining the time dependence of all observables near the particle-hole symmetry point. However, exactly for $\varepsilon_{d}=-U/2$, the quantum oscillations are completely absent. We have previously provided physical reasoning for this phenomenon, inspecting the transient effects of the uncorrelated system \cite{Taranko-2018}. The oscillations originate from the leakage of Cooper pairs onto the quantum dot and such processes are hardly possible when the initial configuration is exactly half-occupied. Away from the half-filling, however, the Cooper pairs can flow back and forth, which is manifested by the quantum oscillations in all observables. Their frequency depends on the energies $E_{A}$ of the bound states (see Fig.~\ref{idea}) reminiscent of the Rabi oscillations in two-level systems. The relaxation mechanism is provided here by the coupling $\Gamma_{N}$ to a continuum of the metallic lead.
An abrupt change of the QD energy level has a considerable impact on the long-time limit of the occupation number. For instance, $n(t\rightarrow \infty)\approx 0.57$ for the quench to $\varepsilon_{d}/U=0.5$ and $n(t\rightarrow \infty)\approx 1.23$ for $\varepsilon_{d}/U=-1$, respectively. The occupancy oscillations are mostly pronounced right after the quench in the early time-interval $t \Gamma_N \lesssim 1$. As time elapses, they are exponentially suppressed with the relaxation time $\tau \sim 1/\Gamma_N$. Interestingly, a peculiar effect can be observed in the time-dependent supercurrent $j_S(t)$, whose evolution is characterized by oscillations shifted by $\pi$ upon crossing the half-filling point $\varepsilon_{d}=-U/2$. The maxima perfectly coincide with minima around $\varepsilon_{d}=-U/2$, marked by the dashed lines. This effect resembles the $0-\pi$ {\it phase transition}, whose nature has been widely discussed in the literature under stationary conditions \cite{Novotny-2019,Meden-2019}. As already mentioned, the other current $j_{N}(t)$ is related to the QD occupancy $n(t)$ and $j_S(t)$ through the charge conservation law $j_{S}(t)+j_{N}(t)=e\frac{d n(t)}{dt}$.
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_tnrg1.pdf}
\caption{The time dependent occupation number $n(t)$, current $j_S(t)$ and the real part of $\langle d_\downarrow(t) d_\uparrow(t) \rangle$ after QD level quench from $\varepsilon_{d}(t\leq 0)=-U/2$ to $\varepsilon_{d}(t>0)=\varepsilon_d$ as a function of time.
The coupling to superconductor is set to $\Gamma_S/U=2$
and the other parameters are the same as in Fig. \ref{quench_eps}.}
\label{tnrg_eps_quench1}
\end{figure}
The oscillatory behavior induced by the quench of QD energy level is least evident in the real part of the time-dependent order parameter $\chi(t)$. This quantity could be regarded as a qualitative measure of the on-dot pairing and indirectly affects the charge current $j_{N}$ (Sec.~\ref{Sec:biased}). Its magnitude is meaningful predominantly in the BCS-type ground state, as has been pointed out by the previous NRG studies \cite{Bauer-2007} under the stationary conditions.
The most significant variations of ${\rm Re}\chi(t)$ are realized in the short-time limit, when the occupation number $n(t)$ has its minima for quenches to $\varepsilon_{d}>0$. We once again recall that the quantum dot is strongly coupled to the superconductor ($\Gamma_S/U=2$), which firmly establishes the large value of ${\rm Re}\chi(t)$ in both the initial and final states. For this particular regime, the quench does not affect the long-time limit in a considerable way.
Further significant modifications of the oscillatory time-dependent quantities can be observed when changing the coupling to the superconductor $\Gamma_{S}$. Let us now examine typical results obtained for the system, using the parameters initially tuned to the quantum phase transition ($\Gamma_S/U=0.5$). Figure~\ref{tnrg_eps_quench2} displays the evolution obtained after the quench in the quantum dot energy level, identical to that discussed above.
All the presented time dependencies following the quantum quench maintain their oscillatory character. However, due to the reduction of the coupling strength to the superconductor $\Gamma_S$, the oscillations have a generally lower frequency as compared with the previous case. Additionally, the magnitude of the quench affects the frequency, which shifts toward higher values as the difference $|\varepsilon_d(t\leq0) - \varepsilon_d(t>0)|$ increases. This behavior gives an interesting prospect for a device generating transient supercurrents with frequency controlled by appropriate switching of the gate potential $V_G$ in a step-like manner. It is important to note here that the oscillations of the imaginary part of the order parameter retain an unchanged amplitude.
Smaller values of the pairing potential relax the constraints on the long-time limit for the occupation number. Here, $n(t\rightarrow \infty)\approx 0.2$ for the quench to $\varepsilon_{d}/U=0.5$ and $n(t\rightarrow \infty)\approx 1.55$ for $\varepsilon_{d}/U=-1$, values spanning a wider range of $n(t\rightarrow \infty)$ than in the case of the system strongly coupled to the superconductor with $\Gamma_S/U=2$, cf. Fig.~\ref{tnrg_eps_quench1}. On the other hand, as expected upon lowering the pairing amplitude, the real part of the order parameter $\chi(t)$ reveals reduced values within a smaller range, both during the time evolution and after achieving the long-time limit.
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_tnrg2.pdf}
\caption{The time dependent occupation number $n(t)$, current $j_S(t)$ and the real part of $\langle d_\downarrow(t) d_\uparrow(t) \rangle$ after QD level quench from $\varepsilon_{d}(t\leq 0)=-U/2$ to $\varepsilon_{d}(t>0)=\varepsilon_d$ as a function of time.
The coupling to superconductor is set to $\Gamma_S/U=0.5$, while
the other parameters are the same as in Fig. \ref{quench_eps}.}
\label{tnrg_eps_quench2}
\end{figure}
\subsection{Dynamical susceptibility}
The aforementioned non-trivial dynamical behavior of the studied system can be further revealed when inspecting the interplay of the superconducting correlations with the local magnetism. To get an insight into such competition, let us first examine the magnetic susceptibility defined as $\chi_B \equiv \frac{d}{dB} \langle S_z \rangle_{B=0}$ for the system in equilibrium. Figure~\ref{tnrg_suscept_static} presents the behavior of $\chi_B$ as a function of temperature $T$ for different values of coupling to the superconducting lead $\Gamma_S$.
For the quantum dot completely decoupled from the superconductor, $\Gamma_S=0$, the maximum of magnetic susceptibility is found for temperature $T\approx\Gamma_N$. It acquires a reduced value of $\chi_B T \approx 0.19$ as compared with the free-spin case, where $\chi_B T = 1/4$. When the temperature decreases, the Kondo effect becomes enhanced, which results in a full screening of the quantum dot spin for $T/\Gamma_N<10^{-3}$, where $\chi_B T \rightarrow 0$. However, when the system is coupled to the superconducting lead, the temperature-dependent susceptibility is substantially modified. As the coupling $\Gamma_S$ is enhanced, the maximum of susceptibility is reduced and shifted toward higher temperatures. Moreover, the full screening of the orbital level holds at significantly higher temperatures due to the strong superconducting correlations \cite{Domanski-2016}. Finally, for high temperatures, exceeding the values of coupling and Coulomb correlations ($T>\Gamma_S,\Gamma_N, U$), all lines converge near $\chi_B T \approx 0.125$.
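The latter value can be understood from a simple counting argument: for $T\gg U,\Gamma_{S},\Gamma_{N}$ all four dot configurations $\left|0\right>$, $\left|\uparrow\right>$, $\left|\downarrow\right>$ and $\left|\uparrow\downarrow\right>$ become equally probable, so the Curie law yields
\begin{equation}
\chi_B T=\langle S_z^2\rangle=\tfrac{1}{4}\left(0+\tfrac{1}{4}+\tfrac{1}{4}+0\right)=\tfrac{1}{8}.
\end{equation}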
\begin{figure}
\includegraphics[width=1\columnwidth]{Fig_tnrg4.pdf}
\caption{The magnetic susceptibility as a function of temperature $T$ for several values of coupling to superconductor $\Gamma_S$, as indicated.
The other parameters are the same as in Fig. \ref{quench_gammas}.
Susceptibility is multiplied by temperature $T$.}
\label{tnrg_suscept_static}
\end{figure}
Upon varying the coupling strength $\Gamma_S$, the most pronounced change of magnetic susceptibility occurs at temperature $T\approx\Gamma_N$. To get a better understanding of the dynamical aspects of this dependence, in Fig.~\ref{tnrg_sd_suscept} we show the time-dependent susceptibility and the squared magnetization following the quench in the coupling strength $\Gamma_S$. It is important to note that the magnetic susceptibility (being a measure of the response to an external magnetic field) is a property of the system well specified at equilibrium. Here, we estimate its time evolution by calculating the magnetization in a very small but finite external magnetic field $B_z$, which allows us to approximate the time dependence of the susceptibility as $\chi_B(t) \approx S_z(t)/B_z$.
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{Fig_tnrg7.pdf}
\caption{The time dependent susceptibility $\chi_B(t)$
and square of the magnetization $S_z^2(t)$
after the quench in $\Gamma_S$ from initial value indicated at the top of each column.
Results shown in (a)-(d) are calculated for temperature $T/\Gamma_N=10^0$,
while (e) and (f) are determined for $T/\Gamma_N \sim 10^{-7}$.
Cyan dashed lines indicate the coupling strength $\Gamma_S(t)=U/2$
associated with the quantum phase transition. The other parameters are the same as in Fig. \ref{quench_gammas}.}
\label{tnrg_sd_suscept}
\end{figure}
We consider two initial values $\Gamma_S(t<0)/U=0.25$ (left column) and $\Gamma_S(t<0)/U=0.75$ (right column), associated with the earlier discussed spinful and spinless phases, respectively. We also note that, for the orbital level tuned to the particle-hole symmetry point $\varepsilon_d=-U/2$, the charge and supercurrent dynamics are fully suppressed.
Let us first focus on the case when the time evolution is determined after the quench from the spinful phase with the initial value $\Gamma_S(t<0)/U=0.25$, see the left column in Fig.~\ref{tnrg_sd_suscept}. When the final value of the coupling strength to the superconductor is chosen in such a way that the system remains in the same phase, i.e. $\Gamma_S(t>0)/U<0.5$, both $\chi_B(t)$ and $S_z^2(t)$ [shown in panels (a) and (c), respectively] evolve monotonically, in a rather moderate manner, to a new, slightly modified long-time limit in agreement with the final thermal expectation values. This regime is contained below the cyan dashed lines indicating the crossover between the phases. However, when $\Gamma_S(t>0)/U>0.5$ (a range of $\Gamma_S$ values above the cyan line), the system undergoes a transition to the spinless phase and the time dependencies reveal a rapid drop of the magnetic properties at time $t\Gamma_S \sim 10^1$. Qualitatively, for the considered system both quantities $\chi_B(t)$ and $S_z^2(t)$ have very similar time dependencies and only small differences are exposed, mainly due to distinct thermal expectation values for the initial and final states. Additionally, the squared magnetization evolves in a smoother manner, while the magnetic susceptibility may undergo weak oscillations at times around $t\Gamma_S \sim 10^2$ before fully relaxing to the new final state. As a reference, in Fig.~\ref{tnrg_sd_suscept}(e) we also show $S_z^2(t)$ evaluated for $T/\Gamma_N \sim 10^{-7}$, which is in good agreement with the dependencies at higher temperatures. However, $\chi_B(t)$ at $T/\Gamma_N \sim 10^{-7}$ no longer exhibits the discussed behavior due to the full suppression of the magnetic susceptibility at low temperatures, as shown in Fig.~\ref{tnrg_suscept_static}.
On the other hand, when the system is initially in the spinless phase and the coupling quench is performed from $\Gamma_S(t<0)/U=0.75$ (see the right column of Fig.~\ref{tnrg_sd_suscept}), the response is significantly altered as compared with the above-discussed case. The striking difference is that here the dynamics no longer strongly depends on the coupling $\Gamma_S$. To clearly show this effect, we plot all the time-dependent expectation values as functions of $t \Gamma_N$. For a relatively small change in the coupling strength, $\Gamma_S(t>0)/U>0.5$, i.e. when following the quench the system remains in the spinless phase, the quantities sustain a mild and monotonic evolution toward the new thermal limit. However, when the system undergoes a phase transition to the spinful state and $\Gamma_S(t>0)/U<0.5$, see the regime below the cyan dashed line, the rise of the magnetic susceptibility and the square of magnetization is considerable. The buildup of $\chi_B(t)$ is noticeable at times $t \Gamma_N \sim 10^0$, subsequently revealing oscillations similar to those found for the transition in the opposite direction. Finally, the new long-time limit is achieved for times in the range $10^1 \! < \! t \Gamma_N \! < \! 10^2$, depending on the size of the quench. The dynamics of $S_z^2(t)$ is again similar to the evaluated time-dependent magnetic susceptibility, but it exhibits suppressed quantum oscillations and its buildup is considerably ahead of $\chi_B(t)$. At times $t \Gamma_N \approx 10^0$, it achieves a maximum, which is quickly followed by thermalization to a new thermal value already for times $t \Gamma_N \lesssim 10^1$. Finally, the low-temperature behavior of $S_z^2(t)$, see Fig.~\ref{tnrg_sd_suscept}(f), allows one to predict the dynamical magnetic behavior of the system at higher temperatures, similarly to the previously discussed quench.
\section{Biased heterojunction}
\label{Sec:biased}
Finally, we discuss the time-dependent quantities under nonequilibrium conditions. We thus analyze the case when the chemical potential of the normal lead $\mu_{N}$ is detuned from $\mu_{S}$ by an applied bias $eV=\mu_{N}-\mu_{S}$. For convenience we assume the superconductor to be grounded, $\mu_{S}\!=\!0$. The bias directly affects
the observables, as illustrated in Fig.~\ref{hfb_nonequilibrium}.
\begin{figure}[t]
\includegraphics[width=1\columnwidth]{Fig_HFB_3.pdf}
\caption{The time-dependent current $j_{S}(t)$ obtained for the same set of parameters as in Fig.~\ref{tnrg_eps_quench2} in the presence of finite bias voltage $V$. The panels (a),(b) and (c) correspond to $eV/U=0$, $-0.5$ and $-1.0$, respectively.}
\label{hfb_nonequilibrium}
\end{figure}
In what follows, we focus on the differential conductance $G_{N}(V,t)=\frac{d}{dV} j_{N}(t)$ of the charge current induced between the quantum dot and the normal lead.
The other current $j_{S}(t)$, flowing between the superconducting lead and QD, can be inferred from the charge conservation law $j_{S}(t)=\frac{dn(t)}{dt}-j_{N}(t)$.
The flow of electrons from the metallic lead to QD can be formally expressed by the following expectation value $j_{N}(t)= e \left<\frac{d}{dt} \sum_{{\bf k},\sigma}\hat{c}_{\bf k \sigma}^{\dagger}(t) \hat{c}_{\bf k \sigma}(t) \right>$. Determining the time derivative from the Heisenberg equation we can recast this current into the familiar formula
\begin{eqnarray}
j_{N}(t) = 2 e \sum_{{\bf k},\sigma} \; \mbox{\rm Im} \left\{ V_{\bf k}\left<
\hat{d}^{\dagger}_{\sigma}(t) \hat{c}_{{\bf k}\sigma}(t)\right> \right\} .
\label{current_N}
\end{eqnarray}
The second quantization operators of the metallic bath electrons are governed by $\hat{c}_{{\bf k}\sigma}(t) \!=\! \hat{c}_{{\bf k}\sigma}(0) e^{-i \xi_{\bf k} t} \!- \! i \int_{0}^{t}\!\! dt' V_{{\bf k}} e^{-i \xi_{\bf k} (t-t')} \hat{d}_{\sigma}\!(t')$ \cite{Taranko-2018}. Our main computational difficulty here concerns the time-dependent operators $\hat{d}_{\sigma}^{(\dagger)}\!(t)$. For any specific type of the quantum quench these operators can be determined by applying the equation-of-motion procedure proposed earlier by us for investigating the dynamics of the uncorrelated QD (for details see Appendix A.1 in Ref.\ \cite{Taranko-2018}).
For investigating both types of the quantum quenches we shall treat the correlations within the Hartree-Fock-Bogoliubov approximation (\ref{HFB}) because, as we have demonstrated in Sec.~\ref{unbiased_junction} by comparison with tNRG, such a procedure yields reliable results.
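Once the time-dependent current is available on a grid of bias voltages, the conductance map $G_{N}(V,t)$ can be generated by numerical differentiation. A minimal sketch is given below; {\tt current\_N} is a placeholder for any routine returning $j_{N}(V,t)$ (e.g.\ from the mean-field evolution described above), not a function defined in this work:
\begin{verbatim}
import numpy as np

def conductance_map(current_N, V_grid, t_grid):
    # G_N(V, t) = d j_N / dV via central finite differences
    jN = np.array([[current_N(V, t) for t in t_grid]
                   for V in V_grid])
    return np.gradient(jN, V_grid, axis=0)
\end{verbatim}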
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{Fig_HFB_Gn_1.pdf}
\caption{Variation of the differential conductance $G_{N}$ (in units of $2e^2/h$) with respect to voltage $V$ and time $t$ obtained in the mean-field approximation, imposing a sudden coupling of the QD to both external leads at $t=0^{+}$. In (a) and (b), $U=0.025$ and $U=0.1$, respectively. The other model parameters are $\Gamma_{N}=0.01$, $\Gamma_{S}=0.1$, $\varepsilon_{d}=-U/2$.}
\label{hfb_conductance_1}
\end{figure}
The steady-state value $j_{N}(\infty)$ of Eq.~(\ref{current_N}) can be independently evaluated, for instance within the Landauer formalism. Such Andreev-type spectroscopy has been widely discussed in the literature \cite{Rodero-11,Paaske-2010}. Our major interest here is the evolution of the tunneling current $j_{N}(t)$ towards its steady-state limit, which encompasses the relaxation processes (imposed by the coupling $\Gamma_{N}$) and the quantum oscillations with frequencies sensitive to the ratio $\Gamma_{S}/U$ and dependent on the QD level $\varepsilon_{d}$.
Let us first inspect the case when the quantum dot is coupled at $t=0^{+}$ simultaneously to both external leads. Under such circumstances we can observe signatures of the bound-state formation manifested in the time-dependent differential conductance $G_{N}(V,t)$. Figure~\ref{hfb_conductance_1} presents these transient effects for the selected model parameters $\varepsilon_{d}$, $\Gamma_{S}$, $U$ (as indicated). These plots clearly reveal the emerging bound states around $\pm E_{A}$ of either symmetric (for $\varepsilon_{d}=-U/2$) or asymmetric spectral weights (when the quantum dot is away from its half-filling). The asymptotic features are developed at times $t \sim 1/\Gamma_{N}$ and in the meantime there occur quantum oscillations with the period $2\pi/E_{A}$.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{Fig_Gn_2.pdf}
\caption{The differential conductance $G_{N}$ as a function of voltage (vertical axis) and time (horizontal axis) obtained for the quench of hybridization $\Gamma_{S}=0\rightarrow 0.1$ at $t=0$. Calculations were done for $\Gamma_{N}=0.01$, $U=0.1$ and $\varepsilon_{d}=-U/2$.}
\label{conductance_2}
\end{figure}
Let us now turn our attention to the quantum quenches. Figure~\ref{conductance_2} displays the differential conductance obtained for the half-filled QD ($\varepsilon_{d}=-U/2$) abruptly coupled to the superconducting lead. We set the Coulomb potential $U=0.1$ and impose the quench $\Gamma_{S}(t)=U\theta(t)$. Initially the normal quantum dot is characterized by the quasiparticle peaks at energies $\varepsilon_{d}$ and $\varepsilon_{d}+U$, which for the half-filled QD occur at $\pm U/2$. The superconducting proximity effect drives the quantum dot to the new quasiparticle states at energies $\pm E_{A}$ (their values in the limit of $\Gamma_{N}=0$ are $E_{A}\sim \sqrt{(\varepsilon_{d}+U/2)^{2}+\Gamma_{S}^{2}}$). We notice that the emergence of such new quasiparticles resembles the transient phenomena presented in Fig.~\ref{hfb_conductance_1}. This behavior is not surprising, considering that the coupling $\Gamma_{N}$ is much weaker than $\Gamma_{S}$ and $U$.
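As a quick numerical orientation, the relevant energy and time scales can be evaluated directly; the following minimal Python sketch (using the approximate expression for $E_{A}$ quoted above and the parameter values of this section) gives the quasiparticle energy, the corresponding oscillation period, and the relaxation time:
\begin{verbatim}
import numpy as np

# Parameter values quoted in the text (energies in units of the half-bandwidth)
U, Gamma_S, Gamma_N = 0.1, 0.1, 0.01
eps_d = -U / 2                                  # half-filled quantum dot

# Andreev quasiparticle energy in the Gamma_N -> 0 limit (see text)
E_A = np.sqrt((eps_d + U / 2)**2 + Gamma_S**2)

print("E_A                =", E_A)              # 0.1
print("oscillation period =", 2 * np.pi / E_A)  # ~ 63
print("relaxation time    ~", 1 / Gamma_N)      # ~ 100
\end{verbatim}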
\begin{figure}
\includegraphics[width=0.95\columnwidth]{Fig_HFB_Gn_4.pdf}
\caption{The differential conductance $G_{N}$ obtained for $\Gamma_{N}=0.01$ and $\Gamma_{S}/U=1$, imposing a sudden change of the QD energy level $\varepsilon_{d}=U/2\rightarrow -U/2$ at $t=0$.}
\label{conductance_4}
\end{figure}
Figure~\ref{conductance_4} shows the differential conductance obtained for the quench of the energy level, from its initial value $\varepsilon_{d}(t\leq 0)=-U/2$ to $\varepsilon_{d}(t > 0)=U/2$. We assume $\Gamma_{S}=U$, therefore at both the initial and final stages the quantum dot is safely in the BCS-type configuration. The sudden change of the energy level modifies the energies $\pm E_{A}$ of the subgap quasiparticles and leads to a gradual development of their asymmetric spectral weights.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{Fig_HFB_Gn_3.pdf}
\caption{The differential conductance $G_{N}$ (in units of $2e^2/h$) obtained across the doublet-singlet transition due to the quench of $\Gamma_{S}$: from $\Gamma_{S}=0.03$ up to $\Gamma_{S}=0.07$ (upper panel) and from $\Gamma_{S}=0.07$ down to $\Gamma_{S}=0.03$ (bottom panel). The quench was imposed at $t=5/\Gamma_{N}$ using the model parameters $\Gamma_{N}=0.01$, $U=0.1$ and $\varepsilon_{d}=-U/2$.}
\label{hfb_conductance_3}
\end{figure}
Finally, we consider the evolution of the quasiparticle spectra across transitions between the singlet and doublet configurations. Such a situation can be realized in two steps, as displayed in Fig.~\ref{hfb_conductance_3}. Initially, at $t=0^{+}$, the half-filled quantum dot is coupled to both electrodes, either strongly ($\Gamma_{S}>U/2$, upper panel) or weakly ($\Gamma_{S}<U/2$, bottom panel). In the time interval $t \in \left( 0, 5/\Gamma_{N} \right]$ we analyze the transient effects. Next, at $t=5/\Gamma_{N}$, we abruptly swap these couplings $\Gamma_{S}$. This quench triggers a doublet-to-singlet transition (upper panel) and a singlet-to-doublet transition (bottom panel), respectively. We notice that the postquench behaviour is not completely identical in the two cases, but the quasiparticle features in the upper/bottom panel right before the quench closely resemble the asymptotic ones in the bottom/upper panel.
\section{Summary}
\label{Sec:conclusions}
We have studied the dynamical properties of the correlated quantum dot sandwiched between the metallic and superconducting leads, considering the quantum quenches driven by (a) a sudden change of the energy level and (b) an abrupt variation of the coupling of the quantum dot to the superconductor. We have treated the correlations within the non-perturbative time-dependent numerical renormalization group scheme and compared these results to the Hartree-Fock-Bogoliubov mean-field approach.
For both types of quenches, we observe that the time-dependent observables (such as quantum dot charge, complex order parameter, and local currents) gradually evolve to their stationary limit values through a series of damped quantum oscillations. Frequencies of these oscillations coincide with the energies of the in-gap quasiparticles, whereas the rate of relaxation processes depends on the dot coupling $\Gamma_{N}$ to a continuous spectrum of the metallic reservoir.
We have inspected in more detail specific realizations of quenches that enable a changeover of the quantum dot ground state between the singlet/doublet (spinless/spinful) configurations. Traversing from the BCS-type to the doublet configuration (and {\it vice versa}) we have noticed a $\pi$-shift of the charge current $j_{S}(t)$ flowing from the superconductor to the quantum dot, observable at arbitrary time $t$. It can be regarded as the time-dependent signature of the so-called $0$--$\pi$ transition reported previously under stationary conditions for the correlated quantum dot embedded in Josephson-type junctions \cite{Rodero-11,Zonda-2015,Meden-2019}.
We have also found qualitative changes in the magnetic properties upon approaching the quantum phase transition (induced either by the quench of the energy level $\varepsilon_{d}$ or of the coupling $\Gamma_{S}$). The time-dependent magnetic susceptibility and the squared quantum dot spin clearly reveal a competition between the on-dot pairing and the Coulomb repulsion. Dynamical signatures of such competition are manifested also in the time-dependent order parameter.
Since practical verification of the aforementioned dynamical properties could be obtained from measurements of the tunneling currents, we have investigated the time-dependent differential conductance. In particular, we have focused on the charge flow induced between the metallic lead and the dot in the presence of a bias potential. We have found that its voltage characteristics clearly reveal all the necessary details of the time-dependent subgap quasiparticles. Quantum quenches could thus be used for inspecting the energies and lifetimes of such in-gap quasiparticles from a dynamical perspective.
\begin{acknowledgments}
This work is supported by the National Science Centre (Poland) under the grants 2017/27/B/ST3/00621 (KW, IW),
2017/27/B/ST3/01911 (BB, RT), and 2018/29/B/ST3/00937 (TD).
\end{acknowledgments}
\section{Introduction}
The SIR model~\cite{KermackMcK1927} is a simple compartmental model that is widely used to model infectious diseases~\cite{Hethcote2000}. Letting $S(t)$, $I(t)$, and $R(t)$ denote the number of susceptible, infectious and removed (or recovered) individuals at time $t$, and letting $\dot S(t)$, $\dot I(t)$, and $\dot R(t)$ denote their time derivatives, the SIR model consists of the following three-dimensional continuous-time autonomous dynamical system
\begin{subequations} \label{eq:SIR}
\begin{align}
\dot S(t) &= - \frac{\beta}N S(t) I(t) \label{eq:S}
\\ \dot I(t) &= \frac{\beta}N S(t) I(t) - \gamma I(t) \label{eq:I}
\\ \dot R(t) &= \gamma I(t) \label{eq:R},
\end{align}
\end{subequations}
where $N = S(t) + I(t) + R(t)$ is the constant total population and $\beta$ and $\gamma$ are parameters.
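As an aside, the system~\eqref{eq:SIR} is straightforward to integrate numerically; the following minimal Python sketch uses explicit Euler integration with a one-day time step (as done for the SH model below), with purely illustrative parameter values:
\begin{verbatim}
import numpy as np

def simulate_SIR(S0, I0, R0, beta, gamma, days):
    """Explicit Euler integration of the SIR model, 1-day time step."""
    N = S0 + I0 + R0                      # constant total population
    S, I, R = [S0], [I0], [R0]
    for _ in range(days):
        new_infections = beta / N * S[-1] * I[-1]
        recoveries = gamma * I[-1]
        S.append(S[-1] - new_infections)
        I.append(I[-1] + new_infections - recoveries)
        R.append(R[-1] + recoveries)
    return np.array(S), np.array(I), np.array(R)

# Illustrative values, not fitted to any dataset
S, I, R = simulate_SIR(1e6, 100, 0, beta=0.3, gamma=0.1, days=200)
\end{verbatim}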
The SIR model, and several (sometimes deep) variations thereof, have been applied in several works to model the COVID-19 dynamics (see, e.g.,~\cite{LiuGayWilRoc2020,Atkeson2020,2007.01411,Nesterov2020,2003.14391,FanelliPia2020,CarlettiFanPia2020}) with known limitations (see~\cite{RodaVarHanLi2020,2006.06373,WangFle2020}). Sometimes, an SIR-like model is used to make long-term predictions (see~\cite{BhanotDeL2020}).
However, at the time of writing this paper, studies still appear to be rare (see, e.g.,~\cite{Singh2020.07.08.20148619}) in which the SIR model parameters and initial conditions are learned on a ``train'' part of the available data in order to predict a ``test'' part of the data, making it possible to assess the prediction accuracy of the model.
In this paper, we adapt the SIR model to the situation where (i) $S$, $I$ and $R$ are hidden variables but $I(t)$ is observed through a ``proxy'' $H(t) = \alpha I(t)$, where $\alpha$ is unknown but constant, and (ii) not only $\beta$ and $\gamma$ but also the total population $N$ are unknown and have thus to be estimated.
In the context of the COVID-19 application, $H$ will stand for the total number of lab-confirmed hospitalized patients.
The proposed adapted SIR model, which we term \emph{SH model}, is given in~\eqref{eq:SH}. It has two state variables ($\bar{S}$---a scaled version of $S$---and $H$) and two parameters ($\bar\beta$---which lumps together the parameters $\beta$, $N$, and $\alpha$---and $\gamma$).
We leverage the proposed SH model as follows in order to make hospitalization predictions. Given observed values $(H_o(t))_{t=t_i,\dots,t_c}$, we estimate the parameters $\bar\beta$, $\gamma$, and the initial conditions $\bar{S}(t_i)$ and $H(t_i)$ of the SH model. Then we simulate the SH model in order to predict $(H(t))_{t=t_c+1,\dots,t_f}$ for a specified final prediction time $t_f$. This approach thus combines the areas of parameter estimation (for obvious reasons), data assimilation (for the generation of the initial conditions) and machine learning (for the train-test approach).
\section{Data}
\label{sec:data}
In Section~\ref{sec:results}, we will use a COVID-19 dataset for Belgium\footnote{\url{https://epistat.sciensano.be/Data/COVID19BE_HOSP.csv} obtained from \url{https://epistat.wiv-isp.be/covid/}} that provides us with the following data for $t=t_s,\dots,t_e$, where $t_s$ is 2020-03-15 and $t_e$ is 2020-07-15:
\begin{itemize}
\item $H_o(t)$: number of COVID-19 hospitalized patients on day $t$ (TOTAL\_IN column);
\item $E_o(t)$: number of COVID-19 patients entering the hospital (number of lab-confirmed hospital intakes) on day $t$ (NEW\_IN column);
\item $L_o(t)$: number of COVID-19 patients discharged from the hospital on day $t$ (NEW\_OUT column).
\end{itemize}
The subscript $_o$ stands for ``observed''.
We will also mention results obtained with a dataset for France\footnote{donnees-hospitalieres-covid19-2020-07-17-19h00.csv obtained from \url{https://www.data.gouv.fr/en/datasets/donnees-hospitalieres-relatives-a-lepidemie-de-covid-19/}} where $t_s$ is 2020-03-18 and $t_e$ is 2020-07-17.
\subsection{Discussion}
In the data for Belgium, there is a mismatch between $H_o(t)$ and $H_o(t-1) + E_o(t) - L_o(t)$ for most $t$,
and $H_o(t_s) + \sum_{t=t_s+1}^{t_e} E_o(t)-L_o(t)$ is significantly larger than $H_o(t_e)$. This can be due to the patients who get infected at the hospital (they would be counted in $H_o$ without appearing in $E_o$) and to the patients who die at the hospital (they would be removed from $H_o$ without appearing in $L_o$).
In order to remedy this mismatch, we redefine $L_o(t)$ by $L_o(t) := -H_o(t) + H_o(t-1) + E_o(t)$.
For the French data, we sum the ``rad'' (daily number of new home returns) and ``dc'' (daily number of newly deceased persons) columns to get $L_o(t)$. Since there is no column for $E_o$, we define $E_o(t) = H_o(t)-H_o(t-1) + L_o(t)$.
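In code, this preprocessing step can be sketched as follows for the Belgian dataset (the column names are those listed above; the \texttt{DATE} column and the summation over provinces to obtain national daily totals are assumptions about the layout of the raw file):
\begin{verbatim}
import pandas as pd

df = pd.read_csv("COVID19BE_HOSP.csv")
daily = df.groupby("DATE")[["TOTAL_IN", "NEW_IN", "NEW_OUT"]].sum()

H_o = daily["TOTAL_IN"].to_numpy(dtype=float)
E_o = daily["NEW_IN"].to_numpy(dtype=float)

# Redefinition L_o(t) := -H_o(t) + H_o(t-1) + E_o(t), see above
L_o = -H_o[1:] + H_o[:-1] + E_o[1:]
H_o, E_o = H_o[1:], E_o[1:]           # align the three arrays
\end{verbatim}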
Several other COVID-19-related data are available. In particular, the daily number of infected individuals, $I_o(t)$, is also reported by health authorities. However, a visual inspection reveals that the graph of $I_o$ is less smooth than the graph of $H_o$.
A possible reason is that $I_o$ is affected by two technical sources of variation: the fraction of tested persons and the accuracy of the tests. In contrast, the reported number of COVID-19 hospitalized individuals, $H_o$, is expected to be much more accurate.
Moreover, for the authorities, predicting $H$ is more crucial than predicting $I$.
Therefore, as in~\cite{2007.01411}, we focus on $H$.
\section{Models}
\label{sec:models}
\subsection{Case hospitalization ratio}
We assume that, for all $t$,
\begin{equation} \label{eq:alpha}
H(t) = \alpha I(t)
\end{equation}
where $\alpha$ is unknown but constant over time. In other words,~\eqref{eq:alpha} posits that a constant fraction of the infected people is hospitalized.
Equation~\eqref{eq:alpha} is reminiscent of~\cite[(3)]{CarlettiFanPia2020}, where the number of dead individuals plays the role of $H$ and $\alpha$ is time dependent.
\subsection{Observation models}
We assume the following observation models with additive noise:
\begin{subequations} \label{eq:observed}
\begin{align}
H_o(t) &= H(t) + \epsilon_H(t) \label{eq:Ho}
\\ E_o(t) &= E(t) + \epsilon_E(t)
\\ L_o(t) &= L(t) + \epsilon_L(t).
\end{align}
\end{subequations}
Assuming that the $\epsilon$ noises are independent Gaussian centered random variables confers a maximum likelihood interpretation to some subsequent estimators, but this assumption is very simplistic.
\subsection{Proposed SH model}
Multiplying~\eqref{eq:S} and~\eqref{eq:I} by $\alpha$, and multiplying the numerator and denominator of their right-hand sides by $\alpha$, we obtain
\begin{align}
\alpha \dot{S}(t) &= -\frac{\beta}{N\alpha} \, \alpha S(t) \, \alpha I(t)
\\ \alpha \dot{I}(t) &= \frac{\beta}{N\alpha} \, \alpha S(t) \, \alpha I(t) - \gamma \alpha I(t).
\end{align}
Letting
\begin{align}
\bar{S} &:= \alpha S
\\ \bar\beta &:= \frac{\beta}{N\alpha} \label{eq:barbeta}
\end{align}
and using~\eqref{eq:alpha}, we obtain the simplified SIR model
\begin{subequations} \label{eq:SH}
\begin{align}
\dot{\bar{S}}(t) &= -\bar\beta \bar{S}(t) H(t) \label{eq:SH-S}
\\ \dot{H}(t) &= \bar\beta \bar{S}(t) H(t) - \gamma H(t) \label{eq:SH-H}
\end{align}
\end{subequations}
which we term the \emph{SH model}. (The ``S'' in this SH model can be interpreted as the number of individuals susceptible to hospitalization.) The SH model has only two parameters ($\bar\beta$ and $\gamma$), one hidden state variable ($\bar{S}$) and one observed state variable ($H$) with observation model~\eqref{eq:Ho}.
Note that, in the SH model~\eqref{eq:SH}, the number of patients entering the hospital by unit of time is
\begin{equation} \label{eq:E}
E(t) := \bar\beta \bar{S}(t) H(t)
\end{equation}
and the number of patients leaving the hospital by unit of time is
\begin{equation} \label{eq:L}
L(t) := \gamma H(t).
\end{equation}
\section{Estimation and prediction method}
\label{sec:estimation}
The goal is now to leverage the SH model~\eqref{eq:SH} in order to predict future values of $H$ based on its past and current observations $(H_o(t))_{t=t_s,\dots,t_c}$. To this end, we have to estimate (or ``learn'') four variables, which we term \emph{estimands}: the two parameters $\bar\beta$ and $\gamma$ and the two initial values $\bar{S}(t_i)$ and $H(t_i)$, where $t_i$ is the chosen initial time for the SH model~\eqref{eq:SH}. One possible approach is to minimize some error measure between the simulated values $(H(t))_{t=t_i,\dots,t_c}$ and the observed values $(H_o(t))_{t=t_i,\dots,t_c}$ as a function of the four estimands. However, the error measure is not available as a closed-form expression of the four estimands, and this makes this four-variable optimization problem challenging. We now show that it is possible to estimate $H(t_i)$ and $\gamma$ separately. This leaves us with an optimization problem in the two remaining estimands $\bar\beta$ and $\bar{S}(t_i)$, making it possible to visualize the objective function by means of a contour plot.
\subsection{Train and test sets}
To recap, we have $t_s \leq t_i < t_c < t_e$. The provided dataset goes from $t_s$ to $t_e$. The \emph{test set} is $(H_o(t),E_o(t),L_o(t))_{t\in[t_c+1,t_e]}$, and this data cannot be used to estimate the variables and simulate the SH model. The SH model is initialized at $t_i$, and we refer to the data $(H_o(t),E_o(t),L_o(t))_{t\in[t_i,t_c]}$ as the \emph{train set}, though it is legitimate to widen it to $t\in[t_s,t_c]$.
\subsection{Estimation of $H(t_i)$}
\label{sec:H0}
It is reasonable to believe that $\epsilon_H$ in~\eqref{eq:Ho} is small in practice. Hence we simply take
\[
H(t_i) := H_o(t_i).
\]
\subsection{Estimation of $\gamma$}
\label{sec:gamma}
We have $L(t) = \gamma H(t)$, see~\eqref{eq:L}.
In view of the observation model~\eqref{eq:observed}, we can estimate $\gamma$ by a ratio of means:
\[
\hat\gamma^{\text{RM}} = \frac{\sum_{t=t_i}^{t_c} L_o(t)}{\sum_{t=t_i}^{t_c} H_o(t)}.
\]
Several other estimators are possible, such as the least square estimator,
or the total least squares estimator
which is the maximum likelihood estimator of $\gamma$ for the iid Gaussian noise model~\eqref{eq:observed}.
Note that $t_i$ in the expression of $\hat\gamma$ can legitimately be replaced by any time between $t_s$ and $t_c$. Only data in the test set, i.e., occurring after $t_c$, are unavailable in the variable estimation phase.
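In code, the ratio-of-means estimator is a one-liner (sketch; \texttt{ti} and \texttt{tc} are the integer indices of $t_i$ and $t_c$ into the aligned daily arrays):
\begin{verbatim}
import numpy as np

def gamma_ratio_of_means(H_o, L_o, ti, tc):
    """Ratio-of-means estimator of gamma on the train window [ti, tc]."""
    return np.sum(L_o[ti:tc + 1]) / np.sum(H_o[ti:tc + 1])
\end{verbatim}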
\subsection{Joint estimation of $\bar\beta$ and $\bar{S}(t_i)$}
\label{sec:b-S0}
We now have to estimate the two remaining estimands, namely $\bar\beta$ and $\bar{S}(t_i)$. We choose the following sum-of-squared-errors objective function
\begin{equation} \label{eq:phi}
\phi(\bar\beta,\bar{S}(t_i)) = c_H \sum_{t=t_i}^{t_c} (H(t) - H_o(t))^2 + c_E \sum_{t=t_i}^{t_c} (E(t) - E_o(t))^2 + c_L \sum_{t=t_i}^{t_c} (L(t) - L_o(t))^2,
\end{equation}
where the $c$ coefficients are parameters, all set to $1$ in our experiments unless otherwise stated.
In~\eqref{eq:phi}, $H(t)$, together with $E(t)$ as in~\eqref{eq:E} and $L(t)$ as in~\eqref{eq:L}, is given by the (approximate) solution of the SH model~\eqref{eq:SH} in which (i) $H(t_i)$ and $\gamma$ take the values estimated as above, and (ii) $\bar\beta$ and $\bar{S}(t_i)$ take the values specified in the argument of $\phi$. In order to compute the required (approximate) solution of the SH model~\eqref{eq:SH}, we use explicit Euler integration with a time step of one day, yielding, for $t=t_i,\dots,t_c-1$,
\begin{subequations} \label{eq:SH-DT}
\begin{align}
\bar{S}(t+1) &= \bar{S}(t) - \bar\beta \bar{S}(t) H(t) \label{eq:SH-DT-S}
\\ H(t+1) &= H(t) + \bar\beta \bar{S}(t) H(t) - \gamma H(t). \label{eq:SH-DT-H}
\end{align}
\end{subequations}
Now that the objective function $\phi$ (also termed ``cost function'' or ``loss function'') is defined, we let the estimated $(\bar\beta,\bar{S}(t_i))$ be the (approximate) minimizer of $\phi$ returned by some optimization solver.
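The following minimal sketch puts these pieces together: the Euler recursion~\eqref{eq:SH-DT}, the objective~\eqref{eq:phi} as a function of $(\bar\beta,\bar{S}(t_i))$, and a call to \texttt{scipy.optimize.fmin} (the solver used in Section~\ref{sec:results}); the initial guess shown is of the kind chosen by visual inspection of the contour plot:
\begin{verbatim}
import numpy as np
from scipy.optimize import fmin

def simulate_SH(bbeta, S0, H0, gamma, n_days):
    """Explicit Euler integration of the SH model, eq. (SH-DT)."""
    S, H = np.empty(n_days + 1), np.empty(n_days + 1)
    S[0], H[0] = S0, H0
    for t in range(n_days):
        E_t = bbeta * S[t] * H[t]
        S[t + 1] = S[t] - E_t
        H[t + 1] = H[t] + E_t - gamma * H[t]
    return S, H, bbeta * S * H, gamma * H   # S, H, E (eq. E), L (eq. L)

def phi(x, H_o, E_o, L_o, gamma, ti, tc, c=(1.0, 1.0, 1.0)):
    """Sum-of-squared-errors objective, eq. (phi)."""
    bbeta, S0 = x
    _, H, E, L = simulate_SH(bbeta, S0, H_o[ti], gamma, tc - ti)
    sl = slice(ti, tc + 1)
    return (c[0] * np.sum((H - H_o[sl])**2)
            + c[1] * np.sum((E - E_o[sl])**2)
            + c[2] * np.sum((L - L_o[sl])**2))

# Example call, with an initial guess of the kind discussed in Section 6:
# bbeta_hat, S0_hat = fmin(phi, x0=(1e-5, 1e4),
#                          args=(H_o, E_o, L_o, gamma_hat, ti, tc))
\end{verbatim}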
\subsection{Prediction of $H$}
Recall that the time range between $t_i$ and $t_c$ is the train period and the time range between $t_c+1$ and $t_e$ is termed the test period.
In order to predict the values of $H$ over the test period, we apply the above procedure to estimate the four estimand variables $\bar\beta$, $\gamma$, $\bar{S}(t_i)$, and $H(t_i)$, and we compute the solution $H(t)$ of~\eqref{eq:SH-DT} for $t$ from $t_i$ to $t_e$. The prediction is then $(H(t))_{t=t_c+1,\dots,t_e}$.
The discrepancy between $(H(t))_{t=t_c+1,\dots,t_e}$ and $(H_o(t))_{t=t_c+1,\dots,t_e}$ reveals the accuracy of the prediction.
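We quantify this discrepancy by the mean absolute percentage error (MAPE) used in Section~\ref{sec:results}; with the usual definition it reads, in code:
\begin{verbatim}
import numpy as np

def mape(predicted, observed):
    """Mean absolute percentage error, in percent."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - observed) / np.abs(observed))

# e.g., score = mape(H_pred_test, H_obs_test) on the test window [tc+1, te]
\end{verbatim}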
\section{Alternative estimation and prediction methods}
\label{sec:alternatives}
\subsection{Successive estimation of $\bar\beta$ and $\bar{S}(t_i)$}
\label{sec:b-then-S0}
As an alternative to Section~\ref{sec:b-S0}, we now present a method to estimate $\bar\beta$ independently. We do not recommend this alternative, but it sheds light on the various forecast accuracies observed in Section~\ref{sec:results}.
From~\eqref{eq:SH-S} and~\eqref{eq:E}, we obtain
\[
\frac{\mathrm{d}}{\mathrm{d}t} \frac{E(t)}{H(t)} = - \bar\beta E(t).
\]
Since $\frac{\mathrm{d}}{\mathrm{d}t} \frac{E}{H} = \frac{H\dot{E}-E\dot{H}}{H^2}$,
this yields
\[
\bar\beta = \frac{\dot{H}(t)}{(H(t))^2} - \frac{\dot{E}(t)}{E(t)H(t)}.
\]
Hence a possible estimator for $\bar\beta$ is
\begin{equation} \label{eq:hat-bar-beta}
\widehat{\bar\beta} = \frac{H_o(t+1)-H_o(t)}{(H_o(t))^2} - \frac{E_o(t+1)-E_o(t)}{E_o(t)H_o(t)}
\end{equation}
and, from~\eqref{eq:E}, a possible simple estimator for the remaining estimand is $\widehat{\bar{S}}(t_i) = \frac{E(t_i)}{\widehat{\bar\beta} H(t_i)}$.
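In code, these two estimators read as follows (a sketch; the observed values serve as proxies for $E(t_i)$ and $H(t_i)$):
\begin{verbatim}
import numpy as np

def bbeta_pointwise(H_o, E_o, t):
    """Pointwise estimator of bar-beta, eq. (hat-bar-beta), at day t."""
    dH = H_o[t + 1] - H_o[t]
    dE = E_o[t + 1] - E_o[t]
    return dH / H_o[t]**2 - dE / (E_o[t] * H_o[t])

def S_bar_initial(H_o, E_o, ti, bbeta):
    """Simple estimator of bar-S(t_i), from eq. (E)."""
    return E_o[ti] / (bbeta * H_o[ti])
\end{verbatim}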
We can now investigate how the $\epsilon$ error terms in the observation model~\eqref{eq:observed} impact $\widehat{\bar\beta}$. We assume throughout that the errors in $H_o(t+1)-H_o(t)$ and $E_o(t+1)-E_o(t)$ are comparable. Except at the very beginning of the outbreak, $E_o(t)H_o(t)$ is considerably smaller than $(H_o(t))^2$, and thus the second term of~\eqref{eq:hat-bar-beta} drives the error.
Consequently, the estimation of $\bar\beta$ should be the most accurate when $E_o(t)H_o(t)$ is the largest. This occurs slightly before the peak of $H_o(t)$. This means that the estimation of $\bar\beta$ should be the most accurate for a train period slightly before the peak. However, this does not mean that this position of the train period gives the most accurate forecasts, as we will see below.
Let us consider the situation where the train period is located \emph{before} the peak. Then the estimation of $\bar\beta$ is less accurate, and this impacts $\widehat{\bar{S}}(t_i)$. At the initial time $t_i$, this does not impact the right-hand term of~\eqref{eq:SH-H} in view of the definition of $\widehat{\bar{S}}(t_i)$. However, an overestimation of $\bar\beta$ will induce an underestimation of $\bar{S}(t_i)$ and, in view of~\eqref{eq:SH-S}, a subsequent even stronger underestimation of $\bar{S}(t)$. Hence the first term of~\eqref{eq:SH-H} will be underestimated. As a consequence, the peak in $H$ will appear sooner and lower. The case of an underestimation of $\bar\beta$ leads to the opposite conclusion, namely a peak in $H$ that appears later and higher. In summary, the further before the peak the train period is located, the more inaccurate the position and height of the peak is expected to be.
Finally, let us consider the situation where the train period is located \emph{after} the peak. Then we can make the same observations as in the previous paragraph, except that predicting the peak is now irrelevant. Moreover, we are in the decrease phase, where the first term of~\eqref{eq:SH-H} (which involves $\bar\beta$ and $\bar{S}(t)$) is smaller than the second term (which does not involve these quantities). Consequently, the possibly large estimation errors on $\bar\beta$ and $\bar{S}(t)$ will only slightly affect the forecast of $H(t)$.
\subsection{Alternative: joint estimation of the four estimands}
\label{sec:all-estimands}
An alternative to Sections~\ref{sec:H0}--\ref{sec:b-S0} is to reconsider~\eqref{eq:phi} as a function of all four estimands:
\begin{equation} \label{eq:phitilde}
\tilde\phi(\bar\beta,\bar{S}(t_i),\gamma,H(t_i)) = c_H \sum_{t=t_i}^{t_c} (H(t) - H_o(t))^2 + c_E \sum_{t=t_i}^{t_c} (E(t) - E_o(t))^2 + c_L \sum_{t=t_i}^{t_c} (L(t) - L_o(t))^2.
\end{equation}
In~\eqref{eq:phitilde}, $H(t)$, together with $E(t)$ as in~\eqref{eq:E} and $L(t)$ as in~\eqref{eq:L}, is given by the solution of the discrete-time SH model~\eqref{eq:SH-DT} where the parameters $\bar\beta$ and $\gamma$ and the initial conditions $\bar{S}(t_i)$ and $H(t_i)$ take the values specified in the argument of $\tilde\phi$. Minimizing $\tilde\phi$ is a more challenging problem than minimizing $\phi$~\eqref{eq:phi} in view of the larger number of optimization variables. It may be essential to provide a good initial guess to the optimization solver, and a natural candidate for this is the values obtained by the procedure described in Sections~\ref{sec:H0}--\ref{sec:b-S0}.
In our preliminary experiments, we have found that this alternative does not present a clear advantage in terms of the prediction mean absolute percentage error (MAPE). The results reported in Section~\ref{sec:results} are obtained with the sequential prediction approach of Section~\ref{sec:estimation}, unless otherwise specified.
\section{Results} \label{sec:results}
We now apply the method of Section~\ref{sec:estimation} (by default) or a method of Section~\ref{sec:alternatives} (when specified) to the data of Section~\ref{sec:data} available for Belgium (by default) and France (when specified).
The methods are implemented in Python 3 and run with Anaconda 2019.10. The code to reproduce the results is available from \url{https://sites.uclouvain.be/absil/2020.05}.
\subsection{Fitting experiment}
We first check how well the SH model~\eqref{eq:SH} can fit the available data for Belgium. For this experiment, we use the method of Section~\ref{sec:all-estimands} with $c_E=c_L=0$ in order to get the best possible fit (in the least squares sense) to the $H_o$ curve. The result is shown in Figure~\ref{fig:SHR_12PA_BEL_traintstart1_traintstop117_c100_4Dopt}.
\begin{figure}
\centerline{\includegraphics[width=.7\textwidth]{Code_Python/Figures/SHR_16PA_py_BEL_1sttraintstart1_1sttraintstop123_c100_4D.pdf}}
\caption{Belgium, fitting the SH model to the $H_o$ (total hospitalized) curve. In this experiment, the train set is the whole dataset, hence there is no test (prediction) curve. Reproduce with SHR\_16PA\_py\_BEL\_1sttraintstart1\_1sttraintstop123\_c100.zip.}
\label{fig:SHR_12PA_BEL_traintstart1_traintstop117_c100_4Dopt}
\end{figure}
The fitting error is remarkably small (MAPE below 15\%). For the French data, the fit is even better in terms of MAPE (about 3\%).
Note that the parameters of the SH model are constant with respect to time in our experiments. This contrasts with~\cite{2007.01411} where there are two phases, and with~\cite{Nesterov2020} where the infection rate is piecewise constant with several pieces.
We stress that Figure~\ref{fig:SHR_12PA_BEL_traintstart1_traintstop117_c100_4Dopt} tells us little about the prediction capability of the model. If the fit over some period is bad, then predictions (i.e., forecasts) over that period can only be bad. But if the fit is good (as it is the case here), the predictions can still be bad due to their sensitivity with respect to the data preceding the to-be-predicted period. For example, a better fit (in the RMSE sense) than in Figure~\ref{fig:SHR_12PA_BEL_traintstart1_traintstop117_c100_4Dopt} can be obtained with a polynomial of degree 8; however, its prediction capability is abysmal.
In order to assess the prediction capability of the model, we have to learn the estimand variables over a \emph{train period} that we make available to the algorithm, use the learned estimand variables in order to predict $H$ over a subsequent \emph{test period} (whose data is not available to the algorithm), and finally compare the prediction with the data on the test period. This is what we proceed to do in the rest of this Section~\ref{sec:results}.
\subsection{Predictions from a train period around the peak}
\label{sec:results-BEL-peak}
We start with a prediction experiment where the train period is around the peak. According to Section~\ref{sec:b-then-S0}, this is a promising location.
A contour plot of the objective function $\phi$~\eqref{eq:phi} is given in Figure~\ref{fig:SHR_12PA_BEL_traintstart8_traintstop38_c111}. In order to make the minimizer easier to visualize, the plot shows equispaced level curves of $\log(\phi - 0.99\,\phi_*)$, where $\phi_*$ is an approximation of the minimal value of $\phi$. Based on a visual inspection, we choose (1e-5,1e4) as the initial guess of the optimization solver.
The optimization solver is scipy.optimize.fmin with its default parameters.
\begin{figure}
\centerline{\includegraphics[width=.4\textwidth]{Code_Python/Figures/SHR_16PA_py_BEL_1sttraintstart16_1sttraintstop32_c111_contour.pdf}
\includegraphics[width=.6\textwidth]{Code_Python/Figures/SHR_16PA_py_BEL_1sttraintstart16_1sttraintstop32_c111_2D.pdf}}
\caption{Belgium, train period around the peak. Left: contour plot of $\phi$~\eqref{eq:phi}. Right: fitting and predictions with the SH model. Reproduce with SHR\_16PA\_py\_BEL\_1sttraintstart16\_1sttraintstop32\_c111.zip.}
\label{fig:SHR_12PA_BEL_traintstart8_traintstop38_c111}
\end{figure}
The middle plot of Figure~\ref{fig:SHR_12PA_BEL_traintstart8_traintstop38_c111} shows $(H_o(t))_{t=t_s,\dots,t_e}$ (observed hospitalizations, gray solid line), $(H(t))_{t=t_i,\dots,t_c}$ (hospitalizations given by the model over the train period, blue dashed line), and $(H(t))_{t=t_c+1,\dots,t_e}$ (hospitalizations predicted over the test period, red dash-dotted line). In order to give a sense of the sensitivity of the results, we superpose the curves obtained for three slightly different train periods. The test MAPE values for the three curves are 27\%, 7\%, and 8\%.
The right-hand plot of Figure~\ref{fig:SHR_12PA_BEL_traintstart8_traintstop38_c111} shows the evolution of $\bar{S}(t)$.
\subsection{Predictions from various train periods}
\label{sec:results-BEL-various}
In Figure~\ref{fig:SHR_15PA_py_BEL_1sttraintstart1_1sttraintstop15_c111_2D}, we superpose the results obtained with various train periods of 14 days. The figure corroborates the comments of Section~\ref{sec:b-then-S0}.
In particular, if the train period is fully located before the peak, then the predictions are rather inaccurate. Placing the train period around the peak gives excellent prediction results. When the train period is fully located in the decrease phase, the estimation of $\bar\beta$ and $\bar{S}(t_i)$ is seen to be very sensitive, but this hardly affects the quality of the prediction of $H(t)$.
\begin{figure}
\centerline{\includegraphics[width=.8\textwidth]{Code_Python/Figures/SHR_16PA_py_BEL_1sttraintstart1_1sttraintstop15_c111_2D.pdf}}
\caption{Belgium, various train periods. Reproduce with SHR\_16PA\_py\_BEL\_1sttraintstart1\_1sttraintstop15\_c111.zip.}
\label{fig:SHR_15PA_py_BEL_1sttraintstart1_1sttraintstop15_c111_2D}
\end{figure}
\subsection{Results for France}
Figure~\ref{fig:SHR_15PA_py_FRA_1sttraintstart1_1sttraintstop15_c111_2D} is the counterpart of Figure~\ref{fig:SHR_15PA_py_BEL_1sttraintstart1_1sttraintstop15_c111_2D} for France. These experiments are also compatible with the comments of Section~\ref{sec:b-then-S0}. We also considered some departments separately, with similar results.
A disconcerting aspect is the evolution of the estimated $\gamma$ as a function of the location of the train period. In Figure~\ref{fig:SHR_15PA_py_BEL_1sttraintstart1_1sttraintstop15_c111_2D} (Belgium), the estimation of $\gamma$ is grouped around 0.08 for several train periods. However, in Figure~\ref{fig:SHR_15PA_py_FRA_1sttraintstart1_1sttraintstop15_c111_2D}, the estimation of $\gamma$ keeps decreasing, indicating that the daily number of patients leaving the hospital is an increasingly small fraction of the number of patients at the hospital.
\begin{figure}
\centerline{\includegraphics[width=.8\textwidth]{Code_Python/Figures/SHR_16PA_py_FRA_1sttraintstart1_1sttraintstop15_c111_2D.pdf}}
\caption{France, various train periods. Reproduce with SHR\_16PA\_py\_FRA\_1sttraintstart1\_1sttraintstop15\_c111.zip.}
\label{fig:SHR_15PA_py_FRA_1sttraintstart1_1sttraintstop15_c111_2D}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
The experiments in Section~\ref{sec:results} have shown that the proposed method has a remarkably good fitting capability over the whole available data, and also a remarkably good predictive value over certain time ranges for the Belgian data. However, there are also time ranges where the prediction is very inaccurate, and the accuracy is also found to be lower for the French data. The predictions returned by the model should thus be taken with much caution. In keeping with this warning, we refrained from displaying predictions beyond the end time of the datasets. The Python code is freely available to make such predictions, but there is no warranty on their accuracy.
Another source of caution is that we cannot rule out the situation where the considered objective function would be multimodal. The optimization solver might thus get stuck in a local nonglobal minimum, yielding a suboptimal fit of the train data and possibly a poorer prediction than what an omniscient solver would achieve. Moreover, even if the objective function is unimodal, the stopping criterion of the solver may trigger before an accurate approximation of the minimum is reached.
If the proposed model is used to guide prevention policies, then further caveats are in order. We have seen that the estimation of $\bar\beta$ is very sensitive. Hence the proposed model can hardly help assess the impact of prevention measures on $\bar\beta$. Without knowing sufficiently accurately the impact of prevention measures on $\bar\beta$, we may not aptly use the model to predict their impact on the evolution of the hospitalizations.
Yet another caveat is that it may be tempting to deduce from the excellent fit with a constant-parameter model (Figure~\ref{fig:SHR_12PA_BEL_traintstart1_traintstop117_c100_4Dopt}) that the evolution of the prevention measures over the dataset period has had no impact on $\bar\beta$. But the deduction is flawed. Indeed, in view of the comments made in Section~\ref{sec:b-then-S0}, the available data could also be very well explained with fairly large jumps in $\bar\beta$ during the decrease phase.
In spite of all these caveats, the hospitalization forecasts returned by the method, and also the evolution of $\bar{S}(t)$, might be of practical use in the context of various disease outbreaks, e.g., for resource planning. To this end, it will be important to understand which specific features of the COVID-19 outbreak in Belgium made it possible to forecast so accurately the hospitalization decrease several months ahead.
\bibliographystyle{alphaurl}
\section{Introduction}
The fingerprints of quantum mechanics on Brownian motion are an intriguing theme \cite{Caldeira1983,CALDEIRA1983374,HakimAmbegaokar1985}.
This theme concerns also the motion of a particle or an exciton on a lattice
\cite{MadhukarPost1977,Weiss1985,Kumar1985,aslangul1986quantum,Weiss1991,Dibyendu2008,Amir2009,
lloyd2011quantum,Moix_2013,CaoSilbeyWu2013,Kaplan2017ExitSite,Kaplan2017B,
dekorsy2000coupled,dubin2006macroscopic,nelson2018coherent},
or the closely related studies of motion in a washboard potential \cite{Schmid1983,Fisher1985QuantumBrownianPeriodic,Fisher1985QuantumBrownianPeriodic,AslangulPeriodicPotential1987}.
The traditional paradigm has been that the effects of quantum mechanics show up only at low temperatures, where non-classical effects are related to the failure of the Markovian approximation. This view has been challenged by publications regarding excitation transport in photosynthetic light-harvesting complexes, most notably by the experiment in \cite{engel2007evidence}, and by many theoretical publications
\cite{amerongen2000photosynthetic,ritz2002quantum,FlemingCheng2009,plenio2008dephasing,Rebentrost_2009,Alan2009,Sarovar_2013,higgins2014superabsorption,celardo2012superradiance,park2016enhanced}.
But by now it has been argued \cite{tiwari2013electronic,Tempelaar2014,Duan2017,Maiuri2018,Thyrhaug2018,QuanBioRevCao2020} that the transport there, by itself, is ``classical'' in nature.
Nevertheless, contrary to the traditional paradigm,
we suggest below that quantum manifestations in stochastic motion can be detected via the high-temperature dependence of the transport coefficients. This opens a new avenue for challenging the traditional (classical) paradigm of Brownian motion.
\subsection{Brownian motion}
High temperature ($T$) classical Brownian motion is described by
the Langevin equation
\begin{eqnarray} \label{eq:langevin-p}
\dot{p} \ = \ -\eta \dot{x} + f,
\eeq
where $f$ is white noise of intensity $\nu$,
related to the friction coefficient
via the fluctuation dissipation relation
\begin{eqnarray} \label{eFDR}
\nu = 2 \eta T, \ \ \ \ \
\text{[can be used as definition of $T$]}
\eeq
For the standard dispersion relation ${\dot{x}=(1/\mass)p}$,
where $\mass$ is the mass of the particle,
one obtains the following simple results
for the transport coefficients:
\begin{eqnarray} \label{eMu}
\mu &=& \frac{1}{\eta}, \ \ \ \ \ \text{[mobility]}
\\ \label{eDcoef}
D &=& \frac{T}{\eta}, \ \ \ \ \ \text{[diffusion coefficient]}
\eeq
The mobility $\mu$ is used to determine the drift velocity
due to an applied bias; while $D$ is the coefficient
that enters Fick's law.
The Einstein relation ${D/\mu =T}$ is satisfied.
It is important to realize that \Eq{eFDR}
characterizes the thermal environment,
while the Einstein relation characterizes the
dissipative dynamics of the particle.
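These relations are easily checked numerically. The following minimal Euler--Maruyama sketch of \Eq{eq:langevin-p}, with the standard dispersion ${\dot{x}=p/\mass}$ and illustrative parameter values, reproduces ${D = T/\eta}$ within sampling error:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mass, eta, T = 1.0, 0.5, 2.0
nu = 2 * eta * T                         # fluctuation-dissipation relation
dt, steps, ntraj = 1e-2, 20_000, 2_000

p = np.zeros(ntraj)
x = np.zeros(ntraj)
for _ in range(steps):
    # white noise of intensity nu: each step carries variance nu*dt
    p += -eta * (p / mass) * dt + rng.normal(0, np.sqrt(nu * dt), ntraj)
    x += (p / mass) * dt

print(np.var(x) / (2 * steps * dt))      # ~ D = T/eta = 4
\end{verbatim}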
\subsection{Quantum signature}
One wonders whether the dependence of the transport
coefficients ($\mu$ and $D$)
on the dissipation parameters ($\eta$ and $\nu$) is universal.
This is the main question that motivates the present study.
\sect{Common wisdom}
The high temperature noise arises from a fluctuating potential,
namely,
\begin{eqnarray}
f \ \ = \ \ -\partial_x \mathcal{U}(x,t)
\eeq
This potential features in general a spatial correlation scale~$\ell$.
Semiclassically, the transport coefficients do not depend on $\ell$, and the common practice,
as in the Caldeira-Leggett model \cite{Caldeira1983,CALDEIRA1983374},
is to assume that $f$ is independent of~$x$, meaning that $\ell{=}\infty$.
But in the quantum treatment $\ell$ does show up in the analysis,
because it determines the lineshape of the stochastic kernel $\mathcal{W}(k|k')$
for scattering from momentum~$k'$ to momentum~$k$.
Namely, the width of the kernel (${\sim} 2\pi/\ell$)
has implication on the transient decoherence process \cite{Cohen97Brownian,Cohen1997,EspositoGaspard2005}.
Yet, one does not expect that this lineshape will have any effect
on the long time spreading. The argument is simple:
on the basis of the central limit theorem successive convolutions
should lead to a result that does not depend on the $\ell$-dependent
lineshape of the stochastic kernel, but only on its second moment,
which is characterized by~$\nu$.
Consequently robust quantum-to-classical correspondence (QCC) is expected at high temperatures.
Such QCC can be regarded as an implication of the Thomas-Reiche-Kuhn-Bethe-Wang sum rule \cite{WangSumRule99},
or as an extension of the {\em restricted} QCC principle \cite{Cohen99DissipationChaotic,StotlandCohen2006}.
\sect{Main Statement}
In the present work we show that~${\ell}$~independence of the transport
coefficients ($\mu$ and $D$) is a fallacy. Given $\eta$ we shall see that~$D$
acquires a non-universal dependence on the temperature,
which constitutes a quantum-mechanical signature.
\subsection{Tight binding model}
\label{sec:tb}
Here we consider a particle or a single exciton that can hop along
a one-dimensional chain whose sites are labeled by an integer index~$x$.
The dynamics of the isolated system is determined by the Hamiltonian
\begin{eqnarray}
\label{eq:H-tb-1}
\bm{H}^{(c)} \ \ = \ \ -c \cos(a\bm{p}) - f_0 \bm{x}
\eeq
where $a$ is the lattice constant, and $c$ is the hopping frequency,
and $f_0$ is an applied bias.
The operators $e^{\mp i a\bm{p}}$ generate one-site displacements.
This Hamiltonian is of quantum mechanical origin,
but may be treated semiclassically, by deriving
the equations of motion ${\dot{\bm{x}} = ca\sin(a \bm{p})}$ and ${\dot{\bm{p}}=f_0}$.
Adopting the standard jargon of Condensed Matter textbooks,
we shall call this the {\em semiclassical} treatment of the dynamics.
The exact {\em quantum} dynamics of \Eq{eq:H-tb-1} is obtained
from the Schr\"odinger equation \cite{Hartmann_korsch_2004},
or equivalently from the Liouville--von Neumann equation for the
probability matrix~$\rho$.
The dynamics on the lattice features the dispersion relation ${\dot{\bm{x}} = v(\bm{p})}$,
where ${v(p) = ca\sin(a p)}$. The continuum limit (small~$ap$)
leads to the standard dispersion relation ${v=(1/\mass)p}$
with ${ \mass = 1/(c a^2) }$. Therefore we can regard the latter case
as a special regime of the former.
Irrespective of the dispersion relation, if the particle is coupled
to a thermal environment, the semiclassical treatment
leads to ${\dot{\bm{p}}=F(t)}$, where the force contains
a stochastic {\em noise} term and a {\em friction} term,
namely, ${ F(t) = f_0 + f(t) -\eta \dot{x} }$.
In the absence of external bias ($f_0{=}0$)
this leads to the Langevin equation \Eq{eq:langevin-p}.
In the corresponding high-temperature Markovian {\em quantum} treatment
the dynamics is given by a master equation for the probability matrix \cite{Breuer2002}.
This master equation incorporates an extra term, the so-called dissipator,
that represents the noise and the friction:
\begin{eqnarray} \label{e1}
\frac{d\rho}{dt} \ = \ \mathcal{L} \rho \ = \
-i[\bm{H}^{(c)},\rho] + \mathcal{L}^{(\text{bath})} \rho
\eeq
The dissipator $\mathcal{L}^{(\text{bath})}$ is determined by the coupling
between the isolated chain and the environment,
and depends on the temperature of the bath.
\subsection{Regime diagram}
Disregarding the optional applied bias~$f_0$,
the isolated tight binding model has no free parameters
(formally we can set the units of time and length such that ${c=a=1}$).
With a bath, the continuum version of Quantum Brownian Motion (QBM)
features a single dimensionless parameter,
the scaled inverse temperature $\beta$, which is the ratio between
the thermal time $1/T$ and damping time ${\mass/\eta}$.
In the lattice problem one can define two dimensionless parameters
\begin{eqnarray}
\alpha = \frac{\eta a^2}{2\pi},
\hspace{2cm}
\theta=\frac{T}{c}
\eeq
Accordingly $\beta = \alpha/\theta$.
In our model we set the units such that ${a=1}$,
hence, disregarding a $2\pi$ factor,
our scaled friction parameter~$\eta$ is the same as~$\alpha$.
The regime diagram of the problem is displayed in \Fig{fg2},
and further discussed below. It contains both the Classical-like Brownian Motion (CBM) regime,
where memory effects are either not expressed or appear as a transient,
and the QBM regimes, where the dynamics is drastically different.
\begin{figure}
\begin{overpic}
{regime-diagram}
\put(03,105){(a)}
\end{overpic}
\begin{overpic}
{mu-div-eta-vs-theta}
\put(20,105){(b)}
\end{overpic}
\caption{\label{fg2}
{\bf The Brownian Motion regime diagram.}
(a) The various regions in the $(\eta,\theta)$ diagram are indicated.
We distinguish between the Classical-like Brownian Motion (CBM) region;
the low-temperature QBM region where memory effects dominates;
and the high-temperature QBM region that is discussed in this work.
Note that below the dashed diagonal line (${\beta>1}$) memory effects should be
taken into account.
(b) The scaled mobility $\mu/\mu_0$, where ${\mu_0=1/\eta}$, versus~$\theta$,
based on the analytical results that have been obtained for diffusion in the X/S coupling schemes.
The result is independent of $\eta$. We also add the result for the B coupling scheme
that approaches the finite asymptotic value ${\mu_{\infty}=2\eta}$ (horizontal line).
In the latter case ${\eta=0.3}$ has been assumed.
Note that the S/B results are applicable only in the ${\theta>1}$ regime.
}
\end{figure}
\subsection{Relation to past studies}
The standard analysis of QBM \cite{HakimAmbegaokar1985} reveals
that quantum-implied memory effects
are expressed in the regime ${\beta > 1 }$,
where a transient $\log(t)$ spreading is observed
in the absence of bias, followed by diffusion.
The later quantum dissipation literature,
regarding the two-site spin-boson model \cite{LeggettEtAlDynamicsTwoLevel1987}
and regarding multi-site chains \cite{aslangul1986quantum,AslangulPeriodicPotential1987},
focuses on this low-temperature regime
where a transition from CBM-like behavior
to over-damped or localized behavior is observed,
notably for large~$\alpha$ of order unity.
Our interest is focused on the ${\alpha, \beta \ll 1 }$ regime.
This regime is roughly divided into two regions by the line ${\theta \sim 1}$, see \Fig{fg2}.
Along this line the thermal de-Broglie wavelength
of the particle is of order of the lattice constant,
hence it bears formal analogy to the analysis
of QBM in a cosine potential \cite{Fisher1985QuantumBrownianPeriodic},
where it marks the border to the regime in which the activation mechanism comes into action.
In our tight binding model we have a single band,
hence transport via thermal activation is not possible.
Rather, in the ${\theta > 1}$ regime the momentum distribution within the band is roughly flat.
To avoid misunderstanding, what we call in the present study the ``high temperature'' regime
assumes a single-band approximation by construction.
\subsection{Outline}
Overview of the main results is presented in \Sec{sec:overview}.
The Ohmic master equation is explained in \Sec{sec:ohmic-dis}.
The semiclassical analysis is detailed in \Sec{sec:semi-X} to \Sec{sec:semi-B}.
The quantum analysis is detailed in \Sec{sec:quantum-anal}.
The effective stochastic description is presented in \Sec{sec:stochastic},
where we discuss detailed-balance consideration as well.
Concise summary is provided in \Sec{sec:discussion}.
\section{Overview of main results}
\label{sec:overview}
In order to demonstrate that the temperature dependence of the
transport coefficients is $\ell$ dependent,
we consider in detail two extreme cases:
{\bf (a)} The Caldeira-Leggett X{-}dissipator $\mathcal{L}^{(\text{X})}$
where a single bath is coupled to $\bm{x}$.
This corresponds to non-disordered ($\ell{=}\infty$) bath.
{\bf (b)} The S{-}dissipator $\mathcal{L}^{(\text{S})}$
where each site is coupled to an independent bath.
For this coupling $\ell{=}a$.
In \Sec{sec:stochastic} we also present results for intermediate
values of~$\ell$.
For completeness we also consider another case:
{\bf (c)} The B{-}dissipator $\mathcal{L}^{(\text{B})}$
where each bond is coupled to an independent bath.
For all cases the dynamics is governed by the Ohmic master equation \Eq{e1}
and the dissipator $\mathcal{L}^{(\text{bath})}$
takes different forms according to the couplings.
For the 3 cases above the bath parameters are $\nu_i$ and $\eta_i$ with ${i=X,S,B}$.
\subsection{X-dissipation}
As a reference case we calculate the transport coefficients
for a particle in a tight binding model,
that is coupled to an Ohmic Caldeira-Leggett bath via the $x$ variable.
We term this standard case "X-dissipation".
We set the length units such that ${a=1}$.
The bath parameters $\nu_X$ and $\eta_X$ are chosen such that
in the semiclassical \Eq{eq:langevin-p},
we have $\nu = \nu_X$ and $\eta = \eta_X$.
The result that we get for the diffusion coefficient is
\begin{align}\label{eq:D-cl}
D^{\text{(X)}} &= \left[1 - \frac{1}{[\mathrm{I}_0 (c/T)]^{2}} \right] \dfrac{T}{\eta}
\end{align}
where $\textrm{I}_n$ is the modified Bessel function.
This result is exact to the extent that the (Markovian) Ohmic master equation can be trusted.
For the mobility we get ${\mu = D/T}$, as expected from the Einstein relation.
A plot of the mobility versus temperature is provided in \Fig{fg2}.
For low temperatures (in the sense ${T \ll c}$) one recovers the standard
results \Eq{eMu} and \Eq{eDcoef} that apply for non-relativistic (linear) dispersion.
For high temperatures the result takes the form ${D^{\text{(X)}} = D_{\parallel}}$ with
\begin{eqnarray} \label{eDXS}
D_{\parallel} \ \ \approx \ C_{\parallel} \left[ 1 + A_{\parallel} \left( \dfrac{c}{T} \right)^2 \right] \dfrac{c^{2}}{\nu}
\eeq
where $C_{\parallel}{=}1$ and $A_{\parallel}{=}-5/16$.
The reason for using the subscript notation is clarified below.
The same expression appears for the S/B dissipators,
with $\nu$ replaced by $\nu_S$ and $\nu_B$ respectively.
The dependence of $D$ on the temperature is plotted in \Fig{fg1}.
For the sake of comparison we also plot the naive expectation ${D \propto \braket{v^2}}$,
with ${ v = c \sin(p) }$, where the average is over the canonical distribution.
This naive expectation would be valid if the correlation time
were independent of temperature (which is not the case).
The high-temperature dependence is
\begin{eqnarray} \label{eq:vsqr}
\braket{v^2} \approx \left[ 1 + A \left( \dfrac{c}{T} \right)^2 \right]\frac{c^2}{2},
\ \ \ \mbox{with $A=-1/8$} \ \
\eeq
In \Sec{sec:quantum-anal} we obtain the {\em same} result also within the framework
of an {\em exact} quantum treatment.
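Both high-temperature expansions are easy to verify symbolically. Using the standard Bessel identity ${\braket{\sin^2 p} = \mathrm{I}_1(b)/[b\,\mathrm{I}_0(b)]}$ for the canonical average, with ${b=c/T}$, a short sympy check recovers ${A_{\parallel}=-5/16}$ in \Eq{eDXS} and ${A=-1/8}$ in \Eq{eq:vsqr}:
\begin{verbatim}
import sympy as sp

b = sp.symbols('b', positive=True)       # b = c/T

# D^(X) = [1 - I_0(b)^{-2}] T/eta = (c^2/nu)(2/b^2)[1 - I_0(b)^{-2}];
# the bracket below should expand as 1 + A_par*b^2 with A_par = -5/16
print(sp.series((2 / b**2) * (1 - 1 / sp.besseli(0, b)**2), b, 0, 4))

# <v^2> = c^2 I_1(b)/(b I_0(b)) = (c^2/2)(1 + A*b^2) with A = -1/8
print(sp.series(2 * sp.besseli(1, b) / (b * sp.besseli(0, b)), b, 0, 4))
\end{verbatim}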
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{DvsBetacAll}
\caption{\label{fg1}
{\bf Dependence of the diffusion coefficient on the temperature.}
Given~$\nu$ the coefficient $D_{\parallel}$ is plotted versus $(c/T)$
for the Ohmic master equation and different coupling schemes:
Caldeira-Leggett~(X); Sites~(S); and Bonds~(B).
The symbols are obtained numerically through an effective rate equation (see text).
We also show results for the canonical (``Boltzmann") versions of the S/B master equation,
and for the semi-classical result in the case of B-coupling
(The semiclassical result for S-coupling is the same as X-coupling).
The naive expectation~${D \propto \braket{v^2}}$ is displayed for the sake of comparison.
}
\end{figure}
\subsection{S-dissipation}
We shall explain that if the fluctuations of the bath are characterized
by a finite correlation scale $\ell$, the semiclassical results for the transport
coefficients are the same as for ${\ell=\infty}$.
This rather trivial observation holds for any dispersion relation.
Specifically for the tight binding model \Eq{eq:D-cl} is $\ell$ independent.
But this is not so in the quantum analysis.
Here we analyze the other extreme limit of ${\ell \sim a}$,
meaning that the bath fluctuations at different sites are uncorrelated.
We obtain \Eq{eDXS} with $A_{\parallel}{=}1/16$ and $\nu \mapsto \nu_S$,
which does not even have the same sign as the X-dissipation result.
\sect{Technical note:}
For a tight binding model the parameter $\eta$ is defined conventionally as for a two site system (aka the Spin-Boson model). The definition via $-\eta \dot{x}$ is not practical because $x$ is a discrete variable. Still, in a semiclassical context, disregarding an ambiguity in the numerical prefactor, the dissipation parameter $\eta$ has the meaning of a friction coefficient, as for X-coupling. With our standard conventions we get $C_{\parallel}{=}1/2$ (for S-dissipation) instead of $C_{\parallel}{=}1$ (for X-dissipation), which reflects a convention and not a profound difference. In contrast, the $A$ coefficients are independent of convention and reflect a quantum signature.
\subsection{B-dissipation}
For completeness we also consider the case where the dissipation is
due to uncorrelated noisy bonds (rather than sites).
Here we have an additional term in the expression
for the diffusion coefficient, namely ${D = D_{\parallel} + D_{\perp} }$,
where
\begin{eqnarray} \label{eDXB}
D_{\perp} \ \approx \ C_{\perp} \left[1 + A_{\perp} \left( \dfrac{c}{T} \right)^2 \right] \nu_B
\eeq
with ${ A_{\perp} = -1/4 }$. This additional term reflects extra spreading in space
due to stochastic hopping, as discussed in a previous publication \cite{qss_sr}.
The quantum fingerprints are not related to this term,
but to the $D_{\parallel}$ term that arises due to the noise-induced
spreading in~$p$. For that term we find $A_{\parallel}{=}0$.
\sect{Technical note:}
The dissipation parameter $\eta_B$ is defined as for the Spin-Boson model. It is the ``same'' Ohmic bath as assumed for S-dissipation, but the coupling term is different. We get $C_{\parallel}{=}1/6$ and ${C_{\perp}{=}1}$,
as opposed to X/S-coupling for which ${C_{\perp}{=}0}$.
\subsection{Non-universality}
In general, for high temperatures, the diffusion is composed of a term that originates
from non-coherent hopping that is induced by the bath, namely $D_{\perp}$ of \Eq{eDXB},
and from the interplay of noise with coherent hopping between sites, namely $D_{\parallel}$ of \Eq{eDXS}.
The $\ell$ dependence of the latter is a quantum signature, and consequently
our result for $D_{\parallel}$ reflect details of the dissipation mechanism.
The $A$ coefficients are {\em not} a matter of convention.
Rather they reflect the thermalization and the spreading mechanism,
and hence indicate {\em quantum} manifestation.
We summarize our results:
\begin{eqnarray} \label{eA}
&& A_{\parallel} \ = \ \left\{
\amatrix{
-5/16 & \ \text{for X-coupling} \cr
+1/16 & \ \text{for S-coupling} \cr
0 & \ \text{for B-coupling}
} \right.
\eeq
Contrary to the X-coupling case, for local (S/B) dissipators
the canonical $\rho$ is not the exact steady-state,
and satisfies $\mathcal{L} \rho \sim O(\beta^3)$ rather than zero.
We shall explain that if we correct the transition rates ad hoc
to obtain agreement with the Boltzmann distribution,
the results for the $A$-s are modified as follows:
\begin{eqnarray} \label{eAsc}
&& A_{\parallel} \ = \ \left\{
\amatrix{
-1/32 & \ \text{for S-coupling} \cr
-1/16 & \ \text{for B-coupling}
} \right.
\eeq
We emphasize again that the value of $A$ is a sensitive probe
that is affected by the lineshape of the spreading kernel.
Therefore its precise value is non-universal but depends on
the weights of the quantum transitions.
For completeness we introduce in \Sec{sec:stochastic}
results for intermediate values of~$\ell$,
demonstrating the crossover from S-coupling ($\ell{\sim}a$) to X-coupling ($\ell{\sim}\infty$).
\section{The Ohmic dissipator}
\label{sec:ohmic-dis}
The isolated chain is defined by the~$\bm{H}^{(c)}$ Hamiltonian.
The X-dissipation scheme involves a single bath,
with interaction term ${-\bm{W} F}$,
where $\bm{W}$ is the position operator $\bm{x}$,
and $F$ is a bath operator that induces Ohmic fluctuations
with intensity $\nu$.
More generally we assume a disordered thermal environment
that is composed of numerous uncorrelated baths
such that the interaction term is $\sum_{\alpha} \bm{W}_{\alpha} F_{\alpha}$,
where $\alpha$ labels different locations.
For S-dissipation ${\bm{W}_{\alpha} = \kb{x_{\alpha}}{x_{\alpha}}}$,
leading to a fluctuating potential that dephases the different sites.
For B-dissipation ${\bm{W}_{\alpha} = \kb{x_{\alpha}{+}1}{x_{\alpha}} + \text{h.c.}}$,
which induces incoherent hopping between neighbouring sites.
The Ohmic dissipator $\mathcal{L}^{(\text{X/S/B})} \rho$ takes the form
\cite{qss_sr,cohen2012lecture}:
\begin{eqnarray} \label{e2}
-\sum_{\alpha} \left(
\dfrac{\nu}{2} [\bm{W}_{\alpha}, [\bm{W}_{\alpha}, \rho]]
+ \dfrac{\eta}{2}\, i [\bm{W}_{\alpha}, \{\bm{V}_{\alpha}, \rho\}]
\right) \ \
\eeq
where ${\eta=\nu/(2T)}$ is the friction coefficient, and
\begin{eqnarray} \label{eFR}
\bm{V}_{\alpha} \ \equiv \ i[\bm{H}^{(c)}, \bm{W}_{\alpha}]
\eeq
The friction terms represent the response of the bath
to the rate of change of the $\bm{W}_{\alpha}$.
For X-dissipation ${\bm{V} = c\sin(\bm{p}) }$ is the velocity operator.
If we treat the friction term of \Eq{e2} in a semi-classical way,
the expression for the dissipator in the Wigner phase-space representation $\rho_w(R,P)$
takes the familiar Fokker-Plank (FP) form with ${v=c\sin(P)}$, namely,
\begin{eqnarray} \label{eFP}
\mathcal{L}^{\text{FP}}\rho_w \ \ = \ \ \frac{\nu}{2}\partial_P^2 [\rho_w] - \partial_P [(f_0-\eta v) \rho_w ]
\eeq
which is a sum of momentum-diffusion and momentum-drift terms.
For the sake of later reference we have added to the friction force ($-\eta v$) a constant field~$f_0$.
The X-dissipator leads to a canonical steady-state for any friction and for any temperature. This is not the case for S/B-dissipation, for which the agreement of the steady-state with the canonical result is guaranteed only to second order in~$\eta$. The reason for that is related to the proper identification of the ``small parameter'' that controls the deviation from canonical thermalization.
The X-dissipator induces transitions between neighboring momenta,
and therefore the small parameter is $\Delta/T$, where the level spacing $\Delta$ goes to zero in the $L\rightarrow\infty$ limit, where $L$ is the length of the chain. But for local baths, the coupling is to local scatterers
that create transitions to all the levels within the band.
Therefore the small parameter is~$c/T$, and canonical thermalization is expected only for~${c/T < 1}$.
\section{Semiclassical analysis for X-dissipation}
\label{sec:semi-X}
We shall argue later that for X-dissipation the semiclassical dynamics
that is generated by $\mathcal{L}^{\text{FP}}$ is {\em exact} for the purpose
of the~$A$ coefficient evaluation.
Here we present the semiclassical solution.
In \Sec{sec:CL-classical} below we find the steady-state momentum distribution
in the presence of a constant field~$f_0$.
In \Sec{sec:transport-coef} we obtain for weak field ${\braket{v} = \mu f_0}$,
where $\mu$ is the mobility.
Then the diffusion coefficient is deduced from the Einstein relation,
namely, ${D = \mu T}$, leading to \Eq{eq:D-cl}.
Optionally we can calculate directly the velocity-velocity correlation
function $\braket{v(t) v(0)}$ in the absence of an external field.
This requires a rather complicated recursive procedure, see \App{sec:sine-corr}.
The diffusion coefficient is obtained via
\begin{eqnarray} \label{eq:D-vvcorr}
D \ \ = \ \ \int_{0}^{\infty} dt \, \braket{v(t) v(0)}
\eeq
The same result is obtained, namely \Eq{eq:D-cl}.
Later we shall calculate the diffusion in a proper
quantum calculation, which again yields the same result; see \Sec{sec:quantum-anal}.
\subsection{The steady-state}
\label{sec:CL-classical}
We consider a Brownian particle that is described by \Eq{eq:H-tb-1},
under the influence of a thermal, non-disordered fluctuating field (X-coupling).
Below we set ${a=1}$ for the lattice constant.
A fully-quantum treatment of this model was introduced in \cite{aslangul1986quantum,AslangulPeriodicPotential1987},
with focus on low temperature QBM regime,
while here we focus on the high temperature regime.
The semiclassical equations of motion are formally obtained by the
substitution ${f_0 \mapsto F(t)}$,
where the total force ${ F(t) = f_0 + f(t) -\eta \dot{x} }$
includes a stochastic term that
has zero average with correlation function $\avg{f(t)f(t')} = \nu \delta(t-t')$,
and an associated friction term with coefficient $\eta$,
in addition to the bias term $f_0$.
Thus we get the Langevin equation
\begin{eqnarray}
\dot{x} &=& \dfrac{\partial H}{\partial p} \ = \ c \sin{(p)} \label{eq:x-dot}\\
\dot{p} &=& -\dfrac{\partial H}{\partial x} \ = \ f_0 - \eta \dot{x} + f(t) \label{eq:p-dot}
\eeq
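As a concrete illustration, the Langevin dynamics of \Eq{eq:x-dot} and \Eq{eq:p-dot} can be integrated with a simple Euler--Maruyama scheme. The following minimal Python sketch is our own illustration, with arbitrary placeholder parameter values; it estimates the drift velocity for a weak bias:
\begin{verbatim}
import numpy as np

def simulate_langevin(c=1.0, f0=0.0, nu=0.5, T=5.0,
                      dt=1e-3, steps=200000, seed=0):
    """Euler-Maruyama integration of dx/dt = c*sin(p) and
    dp/dt = f0 - eta*c*sin(p) + f(t), with eta = nu/(2T)."""
    rng = np.random.default_rng(seed)
    eta = nu / (2.0 * T)
    x, p = 0.0, 0.0
    for _ in range(steps):
        v = c * np.sin(p)
        x += v * dt
        p += (f0 - eta * v) * dt + np.sqrt(nu * dt) * rng.standard_normal()
    return x

dt, steps = 1e-3, 200000
x_final = simulate_langevin(f0=0.05, dt=dt, steps=steps)
print("<v> ~", x_final / (steps * dt))   # compare with mu*f0
\end{verbatim}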
The steady-state for $p$ is found by inserting \Eq{eq:x-dot} into \Eq{eq:p-dot}.
Changing notation $p\mapsto\varphi$,
and ${u(\varphi)=f_0-\eta c \sin(\varphi)}$,
and $D_{\varphi}=(1/2)\nu$,
one gets the equation ${\dot{\varphi} = u(\varphi) + f(t)}$,
with the associated Fokker-Planck equation
\begin{align}\label{eq:fp-phi}
\dfrac{\partial}{\partial t} \rho(\varphi,t) = - \dfrac{\partial}{\partial \varphi} I,
\end{align}
with
\begin{eqnarray} \nonumber
I &=& u(\varphi) \rho - D_{\varphi} \dfrac{\partial \rho}{\partial\varphi}
\equiv - D_{\varphi} \left[ V'(\varphi) \rho + \dfrac{\partial \rho }{\partial \varphi} \right]
\\ \label{eq:p-current}
&=& - D_{\varphi} e^{-V(\varphi)} \dfrac{\partial}{\partial \varphi} \left[ e^{V(\varphi)} \rho \right]
\eeq
This equation describes motion in a tilted potential
\begin{eqnarray} \nonumber
V(\varphi) \ &=& \ -\frac{\eta c}{D_{\varphi}} \cos(\varphi) - \frac{f_0}{D_{\varphi}} \varphi
\\ \label{eq:v-phi}
\ \ &\equiv& \ \ W(\varphi) - \mathcal{E} \varphi
\eeq
The non-equilibrium steady-state (NESS) solution is
\begin{align} \label{eq:rho-steady-state-w-I}
\rho(\varphi) \ \ = \ \ \left[ C - I \int_0^{\varphi} \frac{ e^{V(\varphi')} }{D_{\varphi}} d\varphi' \right] e^{-V(\varphi)}
\end{align}
where the integration constant $C$ is determined
by the periodic boundary conditions ${\rho(0) = \rho(2\pi)}$,
namely,
\begin{eqnarray}
C \ \ = \ \ \frac{I}{1-e^{-2\pi\mathcal{E}}} \int_0^{2\pi} \frac{ e^{V(\varphi')} }{D_{\varphi}} d\varphi'
\eeq
Simplifying, the final expression for the NESS is
\begin{eqnarray} \label{eq:rho-ss}
\rho(\varphi) = \frac{I}{1{-}e^{-2\pi\mathcal{E}}}
\left[\int_0^{2\pi} \frac{dr}{D_{\varphi}} e^{W(\varphi+r) - \mathcal{E} r} \right]
e^{-W(\varphi)} \ \ \ \
\eeq
where the $\varphi$-current $I$ is determined by normalization.
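For completeness, the NESS of \Eq{eq:rho-ss} can be evaluated numerically on a grid, fixing the current $I$ by normalization. A minimal sketch of ours, assuming a simple Riemann-sum discretization and ${f_0 \neq 0}$:
\begin{verbatim}
import numpy as np

def ness(c=1.0, T=2.0, nu=0.5, f0=0.1, M=1000):
    """Evaluate Eq. (rho-ss) on an M-point grid in [0, 2*pi)."""
    eta = nu / (2.0 * T)
    D_phi = 0.5 * nu
    E = f0 / D_phi                         # the tilt parameter curly-E
    phi = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
    dphi = 2.0 * np.pi / M
    W = -(eta * c / D_phi) * np.cos(phi)   # W(phi)
    r = phi                                # reuse the grid for the r-integral
    inner = np.array([np.sum(np.exp(-(eta * c / D_phi) * np.cos(p + r)
                                    - E * r)) * dphi / D_phi for p in phi])
    rho = inner * np.exp(-W) / (1.0 - np.exp(-2.0 * np.pi * E))  # rho/I
    I = 1.0 / (np.sum(rho) * dphi)         # fix the current I by normalization
    return phi, I * rho, I
\end{verbatim}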
\subsection{The transport coefficients}
\label{sec:transport-coef}
Reverting to the original notations the first order result in $f_0$ is
${I = [2 \pi \mathrm{I}_0^2 (c/T)]^{-1} f_0}$,
where $\mathrm{I}_n(x)$ is the modified Bessel function.
For zero field the canonical distribution is recovered:
\begin{eqnarray}
\rho(p) \ \propto \ \exp[-W(p)] \ = \ \exp[(c/T) \cos{(p)}]
\eeq
Averaging over \Eq{eq:p-dot}, and using $\avg{\dot{p}} = 2 \pi I$, one obtains
\begin{eqnarray} \nonumber
\avg{\dot{x}} = \left[1 - 2 \pi I \right] \frac{f_0}{\eta}
= \left[ 1 - \mathrm{I}_{0}^{-2} \Big(\dfrac{c}{T} \Big) \right] \frac{f_0}{\eta}
\ \equiv \ \mu f_0 \ \ \ \ \
\eeq
where $\mu$ is the so-called linear mobility.
This result for $\mu$ is consistent with direct calculation of $D$
in accordance with the Einstein relation, namely ${\mu = D/T}$.
The \textit{direct} calculation of $D$ is more involved.
It is obtained by calculating the variance of~$x$, after time~$t$,
for a particle initially located at $x{=}0$:
\begin{eqnarray} \nonumber
\avg{x^2} = c^2 \int_{0}^{t} \int_0^t dt'dt'' \avg{\sin(\varphi_{t'})\sin{(\varphi_{t''})}}
\ \equiv \ 2 D t \ \ \ \
\eeq
Defining $S_1$ as the area of the sine correlation function
we write $D = c^2 S_1$. The calculation of $S_1$ is outlined in \App{sec:sine-corr}.
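The closed-form mobility above is easy to cross-check numerically; a short sketch using SciPy's modified Bessel function (with illustrative parameter values of our choosing):
\begin{verbatim}
import numpy as np
from scipy.special import iv        # modified Bessel function I_n(x)

def transport(c, T, nu):
    """Linear mobility mu = [1 - I_0(c/T)^(-2)]/eta and D = mu*T."""
    eta = nu / (2.0 * T)
    mu = (1.0 - iv(0, c / T) ** (-2)) / eta
    return mu, mu * T               # Einstein relation D = mu*T

mu, D = transport(c=1.0, T=2.0, nu=0.5)
print("mu =", mu, " D =", D)
\end{verbatim}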
\section{Semiclassical analysis for S-dissipation}
\label{sec:semi-S}
In the semiclassical treatment $x$ is regarded as a continuous coordinate,
and therefore we write
\begin{eqnarray}
\bm{W}_{\alpha} \ = \ u_{\alpha}(\bm{x}) \ = \ u(\bm{x}{-}x_{\alpha})
\eeq
that involves a short-range interaction potential ${u(r)}$.
The fluctuating potential is
\begin{eqnarray}
\mathcal{U}(x,t) \ = \ \sum_{\alpha} F_{\alpha}(t) u(x{-}x_{\alpha})
\eeq
In the semiclassical analysis we define $\nu$ as the variance of ${f= -\mathcal{U}'(x,t)}$.
These fluctuations have the same intensity at any~$x$ because we assume
that the $x_{\alpha}$ are homogeneously distributed.
It follows automatically that $\eta=\nu/2T$ is the friction coefficient,
as in the case of X-dissipation. See \cite{Cohen1997} for details.
So in the semiclassical description we get the same Langevin equation,
irrespective of the correlation distance $\ell$ that is determined by the width of $u(r)$.
In the tight-binding {\em quantum} model,
we define $\nu_S$ as the variance of the on-site fluctuation of the potential.
With that we associate a fluctuating force intensity
\begin{eqnarray}
\nu \ \ = \ \ \frac{1}{\ell^2} \nu_{S}
\eeq
where $\ell$ is the correlation scale.
We set ${\ell \sim a}$ where $a{=}1$ is the lattice constant.
Consequently $\nu$, up to a numerical factor, is the same as $\nu_S$.
The price for having a vague definition for $\nu$
is the prefactor $C$ that we get in the formula for $D$.
This prefactor reflects that the semiclassical limit has an
inherent numerical ambiguity due to the residual freedom
in the choice of~$u(r)$.
\section{Semiclassical analysis for B-dissipation}
\label{sec:semi-B}
Using the same prescription as for the S-dissipation case,
and ignoring commutation issues, we write
${\sum_{\alpha} \left( \kb{x_{\alpha}{+}1}{x_{\alpha}} + \text{h.c.} \right)}$
as ${ [2 \cos{(\bm p)}] \kb{\bm x}{\bm x}}$,
and get for the B-coupling term
\begin{eqnarray}
\bm{W}_{\alpha} \ =
\ [2 \cos{(\bm{p})}] \, u_{\alpha}(\bm{x})
\eeq
This means that motion with momentum ${ |p| \sim \pi/2}$ is not affected by the baths.
This is an artifact of the semiclassical treatment,
and does not hold for the quantum dynamics.
Still, the semiclassical perspective provides some insight
that helps to clarify how \Eq{eDXB} comes out.
The equations of motion that are derived from the full Hamiltonian
are of Langevin-type:
\begin{eqnarray} \label{eLEQx}
\dot{x} &=& \left[ c + 2 \sum_{\alpha} u_{\alpha}(x) F_{\alpha}(t) \right] \sin{(p)}
\\ \label{eLEQp}
\dot{p} &=& \left[ 2\sum_{\alpha} u'_{\alpha}(x) F_{\alpha}(t) \right] \cos{(p)}
\eeq
For infinite temperature the $F_{\alpha}$ are uncorrelated white noise terms,
with some intensity proportional to~$\nu_B$.
Therefore we get from \Eq{eLEQp} diffusion in~$p$
with coefficient ${ \nu_p = (1/\ell)^2 [2\cos(ap)]^2 \nu_B }$,
and from \Eq{eLEQx} extra diffusion in~$x$
with coefficient ${ \nu_x = (a)^2 [2\cos(ap)]^2 \nu_B }$,
where ${\ell \approx a}$ and $a{=}1$.
The latter term, after momentum averaging, is responsible for getting
the $D_{\perp}$ term in \Eq{eDXB}.
For a particle that moves with constant momentum ${p}$, ignoring the variation in $p$, the velocity-velocity correlation decays as ${\exp(-\nu_x t)}$
due to this $x$-diffusion. This leads to an extra Drude term ${D_{\parallel} = v^2 / \nu_x}$ that diverges at $p{=}\pi/2$. However, taking the variation of the momentum into account, this divergence has zero measure, and the final result is finite, leading to the first term in \Eq{eDXS} with ${C_{\parallel} = 0.49 }$.
For finite temperature the fluctuations gain a non-zero average
${\avg{F_{\alpha}} = 2 \eta_B \left([u_{\alpha}(x) \sin{(p)}] \dot{p} - [u'_{\alpha}(x)\cos{(p)}] \dot{x} \right)}$, where ${\eta_B = \nu_B / T}$,
leading to canonical-like thermalization, and over-estimated ${A_{\parallel} = -0.2}$.
The results for $A_{\parallel}$ and $C_{\parallel}$ were obtained using a procedure
that is described in \Sec{sec:stochastic},
where we treat the {\em quantum} and the {\em semiclassical} on equal footing:
the latter can be regarded as a special case of the former.
\section{The quantum analysis}
\label{sec:quantum-anal}
The quantum evolution is generated by $\mathcal{L}$ of \Eq{e1} with the dissipators of \Eq{e2},
and it can be written as sum of Hamiltonian, noise and friction terms,
namely ${\mathcal{L} = c\mathcal{L}^{(c)} + \nu \mathcal{L}^{(\nu)} + \eta c \mathcal{L}^{(\eta)}}$.
Various representations can be used, notably the Wigner and the Bloch representations; see \App{sec:wigner}.
For the purpose of finding the spectrum (and from that the transport coefficients)
it is most convenient to use the latter (Bloch), as explained below.
The elements of the super-vector $\rho$ are given in the standard representation
by ${\rho(R,r) \equiv \BraKet{R+r/2}{\rho}{R-r/2}}$,
and in Dirac notation we write $\rho = \sum_{R,r} \rho(R,r) \ket{R,r}$.
The super-matrix $\mathcal{L}$ is invariant under $R$-translations,
and therefore it is convenient to switch to a Bloch representation $\rho(q;r)$
where $\mathcal{L}$ decomposes into $q$~blocks. In the $q$ subspace we have
the following expressions (see \App{sec:bloch}):
\begin{eqnarray}
\label{eq:L-H-bloch-nongauged} \nonumber
\mathcal{L}^{(c)} &=&
+\sin(q/2) \Big(\mathcal{D}_{\perp} - \mathcal{D}_{\perp}^{\dag} \Big)
\\
\label{eq:L-X-bloch} \nonumber
\mathcal{L}^{(\nu_X)} &=& - (1/2)\hat{r}^2 \\ \nonumber
\mathcal{L}^{(\eta_X)} &=& \cos{(q/2)} \dfrac{\hat{r}}{2} \left( \mathcal{D}_{\perp} - \mathcal{D}_{\perp}^{\dag} \right)
\\
\label{eq:L-S-bloch} \nonumber
\mathcal{L}^{(\nu_S)} &=& -1 + 1 \kb{0}{0} \\ \nonumber
\mathcal{L}^{(\eta_S)} &=&
\dfrac{\cos{(q/2)}}{2} \Big(
\mathcal{D}_{\perp} + \mathcal{D}_{\perp}^{\dag} + \kb{\pm 1}{0} - \kb{0}{\pm 1} \Big)
\\
\nonumber
\mathcal{L}^{(\nu_B)} &=& -2 \ + \ 2\cos(q) \kb{0}{0}
+ \Big(\kb{1}{{-1}} + \kb{{-1}}{1} \Big) \\
\nonumber
\mathcal{L}^{(\eta_B)} &=& \frac{1}{2}\cos{(q/2)} \Big(\mathcal{D}_{\perp}+\mathcal{D}_{\perp}^{\dag} \Big) \\ \nonumber
&+& \dfrac{1}{2} \cos(3q/2) \Big( \kb{\pm 1}{0} - \kb{0}{\pm 1} \Big) \\
&+& \dfrac{1}{2}\cos(q/2) \Big( \kb{{\mp 2}}{\pm 1} - \kb{\pm 1}{{\mp 2}} \Big)
\label{eq:L-B-bloch} \label{eLterms}
\eeq
The subscripts X/S/B distinguish the different coupling schemes, and $\mathcal{D}_{\perp} = \kb{r{+}1}{r}$ is the displacement operator in~$r$ space.
\subsection{Extracting the diffusion coefficient}
To obtain the diffusion coefficient, we consider the spectrum of $\mathcal{L}$ for a finite system of $L$ sites. In the Bloch representation the equation ${\mathcal{L} \rho = - \lambda \rho}$ decomposes into $q$-blocks. For a given~$q$ we have a tight binding equation in the $\ket{r}$ basis. For example $\mathcal{L}^{(c)}$ induces near-neighbor hopping in~$r$.
The eigenvalues for a given $q$ are labeled $\lambda_{q,s}$, where ${s}$ is a band index. The long-time dynamics is determined by the slow ($s{=}0$) modes. Specifically, the diffusion coefficient is determined by the small~$q$ expansion
\begin{eqnarray} \label{elambda}
\lambda_{q,0} \ \ = \ \ D q^2 + \mathcal{O}(q^4)
\eeq
The NESS eigenvector belongs to the $q{=}0$ block, and for $\eta{=}0$ it is given by $\ket{r{=}0}$.
Non-zero $q$ and $\eta$ can be treated as a perturbation. The key observation is that in order to get an {\em exact} result for $D$ it is enough to use second-order perturbation theory in~$q$. The outcome of this procedure is the analytical expression for $D$ with the associated results for the $A$ coefficients.
Extra technical details are provided in the next subsection.
\subsection{Perturbation theory}
\label{sec:perturbation}
We use perturbation theory to find the eigenvalue $\lambda_{q,0}$ of $\mathcal{L}^{(q)}$, from which we can obtain~$D$. We regard the Bloch quasimomentum~$q$ and the friction~$\eta$ as the perturbation. For ${q=\eta=0}$ the state $\ket{r={0}}$ is an exact eigenstate that is associated with the eigenvalue ${\lambda=0}$. Due to the perturbation it is mixed with neighboring $\ket{r}$ states. We outline below how we get analytical expressions for $\lambda_{q,0}$ to any order in $q$ and $\eta$. In practice we go up to second order.
In the following we demonstrate how we perform perturbation theory for the X-coupling scheme. The same method is
used for the S/B coupling schemes either with the Ohmic dissipators or with the Boltzmann dissipators.
We would like to diagonalize the $q$~block
\begin{eqnarray} \nonumber
\mathcal{L}^{(q)} & = & c \mathcal{L}^{(c)} + \nu \mathcal{L}^{(\nu_X)} + (c \eta)\mathcal{L}^{(\eta_X)}
\\ \nonumber
&=& c \sin(q/2) \Big(\mathcal{D}_{\perp} - \mathcal{D}_{\perp}^{\dag} \Big) - (\nu/2)\hat{r}^2
\\ \label{eq:L-X-SM}
&& + (c \eta) \cos{(q/2)} \dfrac{\hat{r}}{2} \left( \mathcal{D}_{\perp} - \mathcal{D}_{\perp}^{\dag} \right)
\eeq
Each such block produces eigenvalues $\mathcal{L}^{(q)} \ket{s} = -\lambda_{q,s} \ket{s}$,
that are distinguished by the index $s$. We are interested in the slowest mode $\lambda_{q,0}$.
The NESS is the eigenvector that corresponds to the zero eigenvalue.
It belongs to the $q{=}0$ block; this follows from probability conservation.
In the Bloch representation, probability conservation means that ${\bra{0}\mathcal{L}^{(0)} =0}$.
To obtain the eigenvalues to order $q^2$ it is enough to Taylor expand the operator to that order.
Accordingly,
\begin{eqnarray} \nonumber
\mathcal{L}^{(q)} \ \ &=& \ \
- (\nu/2)\hat{r}^2
\ + \ c (q/2) \Big(\mathcal{D}_{\perp} - \mathcal{D}_{\perp}^{\dag} \Big)
\\ \label{eq:L-X-taylor}
&& + \ (c \eta) \left[ 1 - (q/2)^{2} \right] \dfrac{\hat{r}}{2} \left( \mathcal{D}_{\perp} - \mathcal{D}_{\perp}^{\dag} \right)
\eeq
The first term is the zero order term. Here (for X-coupling) it is diagonal in~$r$.
For the other coupling schemes it is not necessarily diagonal in~$r$,
but for any of them $\ket{r={0}}$ is an eigenstate of the zero-order term.
To find the eigenvalue $\lambda_{q,0}$ via perturbation theory one has to sum over different paths that begin and end in $r{=}0$. In the case of \Eq{eq:L-X-taylor} these paths are composed of hops between near neighbor sites. Second order contributions involve terms with
$\bra{0} \mathcal{L}^{(q)} \kb{r}{r} \mathcal{L}^{(q)} \ket{0}$, with $r{\ne}0$.
Each transition involves a factor $cq$ or $(c \eta)$, or $(c \eta q^2)$.
Hence only the sites ${|r| \le 2}$ contribute to the perturbed eigenvalue up to order~$\eta^2q^2$.
Furthermore, the $(c \eta q^2)$ transitions are always multiplied by other $\mathcal{O}(q)$ transitions,
and therefore can be ignored in any second order expansion.
From the above it should be clear that for X-coupling the matrix that should be diagonalized is
\begin{eqnarray} \nonumber
\mathcal{L}^{(q)} \mapsto
\dfrac{1}{2}
\left(
\begin{array}{ccccc}
-4 \nu & 2 c \eta {-} cq & 0 & 0 & 0 \\
-c \eta {+} c q & -\nu & c \eta {-} c q & 0 & 0 \\
0 & c q & 0 & -c q & 0 \\
0 & 0 & c \eta {+} c q & - \nu & -c \eta {-} c q \\
0 & 0 & 0 & 2 c \eta {+} c q & -4 \nu \\
\end{array}
\right)
\eeq
A convenient way to obtain an analytical result is to write the characteristic equation ${\det[\lambda + \mathcal{L}^{(q)}] = 0}$ with the above (truncated) matrix, and to substitute an expansion ${\lambda_{q,0} = \sum_{n} a_n q^n}$. Then we solve for the coefficients $a_n$ iteratively. The outcome is expanded in $\eta$ to order $\eta^2$. Note that going beyond second order in $\eta$ does not make sense, because the Ohmic master equation and the associated NESS are valid only up to this order.
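A computer-algebra sketch of this procedure (our own illustration with SymPy; the truncated matrix is the one displayed above, and the order-by-order solve mirrors the iterative scheme just described):
\begin{verbatim}
import sympy as sp

q, lam, c, nu, eta = sp.symbols('q lambda c nu eta', positive=True)
a1, a2 = sp.symbols('a1 a2')

# truncated q-block for X-coupling, |r| <= 2 (the matrix displayed above)
L = sp.Rational(1, 2) * sp.Matrix([
    [-4*nu,        2*c*eta - c*q, 0,            0,             0],
    [-c*eta + c*q, -nu,           c*eta - c*q,  0,             0],
    [0,            c*q,           0,           -c*q,           0],
    [0,            0,             c*eta + c*q, -nu,           -c*eta - c*q],
    [0,            0,             0,            2*c*eta + c*q, -4*nu]])

P = sp.expand(sp.det(lam * sp.eye(5) + L).subs(lam, a1*q + a2*q**2))
Pq = sp.Poly(P, q)
# solve order by order in q: the q^1 equation fixes a1, the q^2 one fixes a2
sol = sp.solve([Pq.coeff_monomial(q), Pq.coeff_monomial(q**2)],
               [a1, a2], dict=True)[0]
D = sol[a2]                                 # lambda_{q,0} = D q^2 + O(q^4)
print(sp.series(sp.simplify(D), eta, 0, 3)) # expand to second order in eta
\end{verbatim}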
\section{Effective stochastic description}
\label{sec:stochastic}
The propagation of the Wigner distribution function $\rho_w(R,P)$ is generated by a kernel ${\mathcal{L}(R,P|R_0,P_0)}$ that is obtained from \Eq{eLterms} in a straightforward manner via Fourier transform \App{sec:wigner}. For simulations of the long-time spreading it is enough to approximate $\mathcal{L}$ in a way that is consistent with second-order perturbation theory in~$q$. As explained above, such an approximation provides an {\em exact} result as far as the calculation of $D$ is concerned.
Replacing $\sin(q/2)$ by $(q/2)$, the $\mathcal{L}^{(c)}$ term by itself generates classical motion in the $X$ direction with velocity ${v=c\sin(P)}$. In the quantum calculation this motion is decorated by a Bessel function, but~$D$ is not affected.
The $\cos(q)$ in $\mathcal{L}^{(\nu_B)}$, after expansion to second order and Fourier transform, leads to an $x$-diffusion term that is responsible for the ${C_{\perp}}$ contribution in \Eq{eDXB}. As far as this term is concerned, there is no difference between the quantum and the semiclassical picture, and therefore we ignore it in the subsequent analysis.
The cosine factors in the other dissipators can be replaced by unity. The reason is as follows: by themselves those cosine terms do not lead to any diffusion; only when combined with the $\mathcal{L}^{(c)}$ term do they lead to the Drude-type ${C_{\parallel}}$ contribution in \Eq{eDXS}; the $\mathcal{L}^{(c)}$ term is already first order in~$q$; hence there is no need to expand the cosines beyond zero order.
\subsection{The effective rate equation}
With the approximations that were discussed in the previous paragraph (excluding, for presentation purposes, the trivial $R$ diffusion in the case of B-dissipation), we find that the evolution of the Wigner function is generated by a stochastic-like kernel ${\mathcal{L}(R,P|R_0,P_0) = \mathcal{W}(P|P_0) \delta(R-R_0)}$. The explicit expressions for infinite temperature ($\eta{=}0$) are:
\begin{eqnarray}
\label{eW15}
\mathcal{W}^{(\nu_X)}(P|P_0) &=& \left(\frac{L}{2\pi}\right)^2 \dfrac{\nu}{2} \, \delta_{P, P_0 \pm (2\pi/L)} \\
\label{eW16}
\mathcal{W}^{(\nu_S)}(P|P_0) &=& \left(\frac{\nu_{S}}{L}\right) \\
\label{eW17}
\mathcal{W}^{(\nu_B)}(P|P_0) &=& \left(\frac{\nu_{B}}{L}\right) 4\cos^2{ \left( \dfrac{P+P_0}{2} \right)}
\eeq
These are the transition rates (${P\ne P_0}$), while the diagonal elements of $\mathcal{W}$ are implied by conservation of probability. For X-dissipation \Eq{eW15} describes local spreading of momentum which is in complete correspondence with the semiclassical analysis.
The noise intensity is reflected in the second moment:
\begin{eqnarray} \label{eWrr}
\nu \ \ = \ \ \sum_{p} W(p) \, p^2
\eeq
where ${p = (P - P_0)}$.
This implies consistency with the Langevin equation \Eq{eq:langevin-p}.
Optionally \Eq{eW15} can be regarded as the discrete version
of the Fokker-Planck equation \Eq{eFP}.
For S-dissipation \Eq{eW16} describes quantum diffractive spreading. In the latter case, if the dynamics were treated semiclassically, one would obtain the same result as for X-dissipation, namely \Eq{eW15}, with a prefactor of order unity that can be re-scaled to unity by adopting the appropriate convention for the definition of~$\nu$. In other words: the coupling strength to the bath should be re-defined such that $\nu$ is the second moment of $\mathcal{W}(P|P_0)$ irrespective of the lineshape. Similarly, if the dynamics were treated semiclassically for the B-coupling, one would obtain \Eq{eW15} multiplied by ${4\cos^2(P)}$, as implied by the semiclassical analysis.
The result for $\mathcal{W}$ at finite temperature,
in leading order in $\eta$ (which serves here as a dimensionless version of the inverse temperature), can be written as
\begin{eqnarray}\label{eq:Wkk-Boltzmann}
\mathcal{W}(P|P_0) = \mathcal{W}^{(\nu)}(P|P_0)
\exp\left[- \dfrac{E(P) {-} E(P_0)}{2 T}\right]
\ \ \ \
\eeq
where ${E(P)=-c \cos(P)}$. More precisely, if we incorporate the $\mathcal{L}^{(\eta)}$ term of the Ohmic master equation, we get \Eq{eq:Wkk-Boltzmann} with $e^x \mapsto (1+x)$. This reflects the well known observation that the Ohmic approximation satisfies detailed balance to second order in $\eta$.
Accordingly the Ohmic steady-state agrees to {\em second order} with the canonical steady-state ${\rho_{\text{SS}}(P) \propto \exp[-E(P)/T] }$.
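As an illustration, the finite-temperature kernel can be assembled as follows for S-dissipation. This is a schematic Python sketch under the conventions above; \texttt{ohmic=True} applies the prescription ${e^x \mapsto (1+x)}$:
\begin{verbatim}
import numpy as np

def kernel_S(L=64, nu_S=1.0, c=1.0, T=2.0, ohmic=True):
    """Rate matrix W(P|P0) for S-dissipation, Eq. (eW16), dressed with
    the detailed-balance factor of Eq. (Wkk-Boltzmann)."""
    P = 2.0 * np.pi * np.arange(L) / L
    E = -c * np.cos(P)
    W = np.full((L, L), nu_S / L)           # infinite-temperature rates
    x = -(E[:, None] - E[None, :]) / (2.0 * T)
    # (1+x) may turn negative for c/T > 1, mirroring the limited
    # accuracy of the Ohmic master equation
    W *= (1.0 + x) if ohmic else np.exp(x)
    np.fill_diagonal(W, 0.0)
    np.fill_diagonal(W, -W.sum(axis=0))     # probability conservation
    return P, W
\end{verbatim}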
\subsection{Analytical and numerical estimates}
\label{sec:simulation}
The stochastic description allows a convenient way to obtain {\em exact} results for~$D$
either analytically or numerically.
Analytically we use the same procedure as in the quantum case,
namely, given the dissipator $\mathcal{L}^{(q)}$,
we extract $D$ from \Eq{elambda}. The relation between $\mathcal{L}^{(q)}$ and the
stochastic kernel is
\begin{eqnarray} \nonumber
&& \BraKet{r}{\mathcal{L}^{(q)}}{r_0} \ = \ \BraKet{r,q}{\mathcal{L}}{r_0,q}
\\ \label{eLqW}
&& \ \ \ \ \ = \ \dfrac{1}{L} \sum_{P, P_0} \mathcal{W}(P|P_0) e^{i P r-i P_0 r_0}
\eeq
The X-coupling and S-coupling schemes provide two extremes, with ${\ell = L}$ and ${\ell = 1}$ respectively.
This is mirrored in the infinite-temperature kernel $\mathcal{W}$ of \Eq{eW15} and \Eq{eW16}.
On equal footing we can interpolate between the two extremes by introducing a kernel
of width $2 \pi/\ell$. Then we use \Eq{eq:Wkk-Boltzmann} to get the finite-temperature kernel.
The calculation of $\mathcal{L}^{(q)}$ using \Eq{eLqW} is provided in \App{sec:label}.
The result for $A_{\parallel}$ is displayed in \Fig{fig:a-vs-ellL}.
Note that the convention regarding the prefactor in $\mathcal{W}^{(\nu)}$
plays no role in the determination of $A_{\parallel}$.
At this point we have to emphasize again that for the ``Ohmic" results we
use the prescription ${e^x \mapsto (1+x)}$ as explained after \Eq{eq:Wkk-Boltzmann}.
If we perform the calculation literally using \Eq{eq:Wkk-Boltzmann} we get \Eq{eAsc} instead of \Eq{eA}.
Note that the same results are obtained with ${e^x \mapsto (1+x+(1/2)x^2)}$,
because higher orders do not affect the expansion in \Eq{elambda}.
The difference between \Eq{eAsc} and \Eq{eA} reflects the limited accuracy
of the Ohmic master equation with respect to the small parameter~$c/T$.
The analytical results for the $A_{\parallel}$ coefficients that are plotted in \Fig{fig:a-vs-ellL}
are derived and displayed in \App{sec:label}. Here we write expressions that
approximate the exact results very well:
\begin{eqnarray}\label{eq:A-w-quad}
A_{\parallel} &\approx& -\dfrac{5}{16} \left( 1 - \dfrac{6}{5} \left( \dfrac{a}{\ell} \right)^2 \right)
\ \ \ \ \ \mbox{[Ohmic]} \\
A_{\parallel} &\approx& -\dfrac{5}{16} \left( 1 - \dfrac{9}{10} \left( \dfrac{a}{\ell} \right)^2\right)
\ \ \ \ \ \mbox{[Boltzmann]}
\eeq
Note that this practical approximation provides the exact results
for both X-coupling ($\ell{=}\infty$) and S-coupling ($\ell{=}a{=}1$).
\begin{figure}
\centering
\includegraphics[width=\hsize]{A-vs-w.pdf}
\caption{\label{fig:a-vs-ellL}
{\bf Quantum non-universality}.
The dependence of the coefficient $A_{\parallel}$ on $\ell$.
The insets caricature the fluctuating potential
for large (left) and for small (right) values of~$\ell$.
In the semiclassical analysis the result (dashed orange line) is universal, independent of~$\ell$.
In the quantum analysis we obtain an interpolation between
the X-coupling and the S-coupling case \Eq{eA}.
We plot results (see text) for the Ohmic (blue triangles)
and for the Boltzmann-corrected (green crosses)
versions of the master equation.
}
\end{figure}
In \Fig{fg1} we test the analytical approximation \Eq{eDXS} against exact numerical calculation that is based on the effective rate equation. In the numerical procedure the diffusion coefficient~$D$ is calculated using \Eq{eq:D-vvcorr}. The momentum spreading kernel is ${K(t) \equiv \exp(\mathcal{W} t)}$, and the velocity is ${v_P = c \sin(P)}$. Accordingly
\begin{eqnarray} \label{eq:vvcorr}
\avg{v(t)v(0)} = \sum_{P,P_0} v_{P} [K_{P,P_0}(t)] v_{P_0} \rho_{\text{SS}}(P_0)
\eeq
If we perform the calculation literally using \Eq{eq:Wkk-Boltzmann} we get results that agree with \Eq{eAsc}.
If on the other hand we use for $\mathcal{W}$ the Ohmic expression (as specified after \Eq{eq:Wkk-Boltzmann})
we get results that agree with \Eq{eA}.
Note that for $\rho_{SS}$ we can use the canonical steady state,
because it agrees with the Ohmic steady state to second order.
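A compact numerical sketch of this procedure (using the \texttt{kernel\_S} helper sketched above and SciPy's matrix exponential; the parameter values are illustrative):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def diffusion_coefficient(P, W, c=1.0, T=2.0, dt=0.05, steps=4000):
    """D = int_0^inf <v(t)v(0)> dt, with K(t) = exp(W t), v_P = c sin(P)."""
    v = c * np.sin(P)
    rho_ss = np.exp(c * np.cos(P) / T)
    rho_ss /= rho_ss.sum()                 # canonical steady state
    K = expm(W * dt)                       # one-step propagator
    state = v * rho_ss                     # v(0)-weighted steady state
    D = 0.0
    for _ in range(steps):
        D += (v @ state) * dt              # Riemann sum of the correlator
        state = K @ state
    return D

P, W = kernel_S()                          # the helper sketched above
print("D =", diffusion_coefficient(P, W))
\end{verbatim}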
\newpage
\section{Discussion}
\label{sec:discussion}
The prototype Caldeira-Leggett model corresponds to the standard Langevin equation where the dispersion relation is ${v=(1/\mass)p}$. In the tight-binding framework we have the identification ${ \mass \mapsto 1/(c a^2) }$, where $a$ is the lattice constant. There is a crossover to standard QBM as ${\theta \equiv T/c}$ is lowered.
It is illuminating to summarize this crossover in terms of mobility.
Using the Einstein relation, \Eq{eDXS} and \Eq{eDXB} imply
\begin{eqnarray} \label{eMOB}
\mu \ \ = \ \ \frac{D}{T} \ \ = \ \ \frac{B(\theta)}{\eta} \ + \ 2 Q(\theta) \eta
\eeq
where the $B(\theta)$ term is related to the coherent hopping,
and the $Q(\theta)$ term is due to bath-induced incoherent hopping.
We believe that this functional form is rather robust, and applies to any type of dissipation mechanism.
The traditional result is the first term with ${B(\theta)=1}$,
while \Eq{eDXS} implies that for large~${\theta}$ the result is
\begin{eqnarray}
B(\theta) \ \propto \ (1/\theta)^2+A_{\parallel}(1/\theta)^{4}
\eeq
We have shown how $A_{\parallel}$ depends on $\ell$,
with emphasis on the extreme limits of X-coupling and S-coupling. We conclude that the $A$ coefficients provide a way to probe the underlying mechanism of dissipation, and to identify the {\em high-temperature fingerprints} of quantum mechanics.
It would be instructive to demonstrate experimentally that $\mu(T;\ell)$ indeed depends on~$\ell$. Ref.~\cite{muMeas} provides an experimental demonstration of measuring mobility versus temperature for a semiconductor device, while Ref.~\cite{KARL2003649} reviews experimental methods used to extract the mobility in organic semiconductors.
Consider the possibility of fabricating a metallic {\em gate} that produces thermal electrostatic fluctuations. Metals that differ in their {\em granularity} are characterized by different form factors, with different correlation scales $\ell$.
Thus it would be possible to demonstrate that $\ell$ has significance. Hopefully it would be possible to further extract, experimentally, the non-universal dependence of $A_{\parallel}$ on the correlation distance $\ell$, and to test the prediction of \Fig{fig:a-vs-ellL}.
\ \\
\sect{Acknowledgment}
This research was supported by the Israel Science Foundation (Grant No.283/18). We thank Muntaser Naamneh for his advice on the experimental aspect.
\clearpage
\onecolumngrid
\pagestyle{empty}
\section{Introduction}
3D correspondence grouping (a.k.a. 3D correspondence selection or 3D mismatch removal) is essential to a number of point-to-point correspondences-based tasks, such as 3D point cloud registration~\cite{rusu2009fast}, 3D object recognition~\cite{tombari2010unique}, and 3D reconstruction~\cite{mian2005automatic}. The aim is to classify initial feature correspondences between two 3D point clouds obtained by matching local geometric descriptors into inliers and outliers. Due to a number of factors, e.g., repetitive patterns, keypoint localization errors, and data nuisances including noise, limited overlap, clutter and occlusion, heavy outliers are generated in the initial correspondence set~\cite{Yang2020corr_group_eval}. Thus, it is very challenging to mine the consistency of scarce inliers and find those inliers.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{illus.pdf}\\
\caption{Illustration of the proposed CF-based method for 3D correspondence grouping. {\bf(a)} 3D point-to-point feature correspondences between two point clouds. {\bf(b)} The geometrical compatibility scores of each correspondence with others are computed. {\bf(c)} Typical CF features of inliers and outliers, which indicate the discriminative power of CF. {\bf(d)} CF features are fed to an MLP network for binary classification.}
\label{fig:illus}
\end{figure}
Existing 3D correspondence grouping methods can be divided into two categories: group-based and individual-based. Group-based methods~\cite{fischler1981random,leordeanu2005spectral,chen20073d,tombari2010object,rodola2013scale} assume that inliers constitute a cluster in a particular domain and strive to recover such a cluster. By contrast, individual-based ones~\cite{mian2006three,yang2016fast,lowe2004distinctive,buch2014search,yang2019ranking,sahloul2020accurate} usually first assign confidence scores to correspondences based on feature or geometric constraints, and then select top-scored correspondences independently. However, as revealed by a recent evaluation~\cite{Yang2020corr_group_eval}, existing methods in both categories {\textbf{1)}} generalize poorly across datasets with different application scenarios and data modalities, and {\textbf{2)}} deliver limited precision performance, which is critical to successful 3D registration with sparse correspondences.
To overcome the above limitations, we present a new feature representation to describe 3D correspondences dubbed compatibility feature (CF), along with a CF-based 3D correspondence grouping method, as illustrated in Fig.~\ref{fig:illus}. CF consists of the top-ranked compatibility scores of a candidate with other correspondences. CF is supposed to hold strong discriminative power because {\textit{inliers are geometrically compatible with each other whereas outliers are unlikely to be compatible with either outliers or inliers}} due to their unordered spatial distributions. This results in clear distinctions between the CF features of inliers and outliers. Since the correspondence grouping problem can be viewed as a {\textit{binary classification problem}}, we train a simple multilayer perceptron (MLP) network as a robust classifier to distinguish inliers from outliers. Although there have been some ``end-to-end'' learning-based 2D correspondence selection methods~\cite{moo2018learning,zhang2019learning,sarlin2019superglue}, our method follows a ``geometry + learning'' fashion due to the following reasons. {\textbf{First,}} even for 2D images with pixel coordinate values lying in a small range, training ``end-to-end'' networks still requires a huge number of labeled image pairs~\cite{moo2018learning}. By contrast, the coordinates of 3D points can be arbitrary in a 3D space, greatly increasing the challenges of training data preparation and training. We will show that dozens of point cloud pairs suffice to train an MLP to classify CF features. {\textbf{Second,}} pixel/point coordinates are sensitive to rotations~\cite{deng2018ppf}. Although augmenting training data can sometimes alleviate this problem, the network is still not fully rotation-invariant in nature. By contrast, CF features are extracted with rotation-invariant geometric constraints and are robust to arbitrary 3D rotations. {\textbf{Third,}} most of the existing ``end-to-end'' methods are not practical on real-world data, as demonstrated in~\cite{choy2020deep}. {\textbf{Fourth,}} with CF features, the learning network (i.e., MLP) in our method is very lightweight and can be trained with a small number of point cloud pairs. In a nutshell, this paper has the following contributions.
\begin{itemize}
\item A compatibility feature (CF) representation is proposed to describe 3D feature correspondences. CF captures the key differences between inliers and outliers regarding pairwise geometrical compatibility, which is distinctive, robust, and rotation-invariant.
\item A 3D correspondence grouping method based on CF is proposed. In 3D correspondence grouping domain, our method is the first learning-based one (to the best of our knowledge), while it holds the ``geometry + learning'' property and works with a simple MLP network. Comprehensive experiments and comparisons with all methods evaluated in~\cite{Yang2020corr_group_eval} on datasets with different application contexts and data modalities verify that our method has good generalization abilities and achieves outstanding precision performance.
\end{itemize}
\section{Related Work}
This section briefly reviews group-based and individual-based methods for 3D correspondence grouping. Methods in both categories are geometric-only ones. Because our method includes a learning-based classifier, we also discuss some learning-based techniques for correspondence problems in the 2D domain.
\subsection{3D Correspondence Grouping}
\noindent\textbf{Group-based methods}
Random sampling consensus~\cite{fischler1981random} is arguably the most commonly used method for 3D correspondence grouping and transformation estimation. It iteratively estimates a model from correspondences and verifies its rationality; correspondences coherent with the best estimated model are taken as inliers. The variants of RANSAC~\cite{guo2013rotational,quan2020_cgsac} generally follow the same pipeline. Some methods try to find the main cluster within initial correspondences by analyzing the affinity matrix computed for correspondences. For instance, game theory matching (GTM)~\cite{rodola2013scale} and spectral technique~\cite{leordeanu2005spectral} perform dynamic evolution and spectral analysis on the affinity matrix to determine the inlier cluster, respectively. Geometric consistency (GC)~\cite{johnson1998surface,chen20073d} performs inlier cluster selection more straightforwardly. In particular, GC forms a cluster for each correspondence by ensuring correspondences in the cluster are compatible with the query correspondence; the cluster with the maximum element count is taken as the inlier cluster. Different from the above iterative methods, 3D Hough voting (3DHV)~\cite{tombari2010object} is a one-shot method, which first transforms correspondences to 3D points in a 3D Hough space and then finds the cluster in Hough space.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{pipeline.pdf}\\
\caption{Pipeline of the proposed method. {\textit{Compatibility check}}: computing the compatibility scores of a correspondence with others; {\textit{CF feature extraction}}: parameterizing each correspondence by a distinctive CF feature; {\textit{CF classification}}: classifying CF features as inliers and outliers with an MLP network.}
\label{fig:pipeline}
\end{figure*}
As demonstrated in a recent evaluation~\cite{Yang2020corr_group_eval}, {\textit{group-based methods often miss isolated inliers and are sensitive to low inlier ratios.}}
\\\\\noindent\textbf{Individual-based methods} In early studies, some individual-based methods group correspondences based on feature distances only~\cite{mian2006three,guo2013rotational}, which are straightforward but rely heavily on the performance of descriptors. To achieve more robust grouping, several voting-based methods have been proposed, such as search of inliers (SI)~\cite{buch2014search} and consistency voting (CV)~\cite{yang2019ranking}. The common trait of these methods is that one or more voting sets are first defined and then all voters cast a vote for each correspondence based on some pre-defined rules.
Compared with group-based methods, individual-based ones assign scores to correspondences independently and thus can more reliably recall isolated inliers. However, existing individual-based methods still exhibit limited precision performance. {\textit{We note that the proposed method is individual-based as well, but is highly selective with outstanding precision performance.}}
\subsection{Learning for Correspondence Grouping}
Existing 3D correspondence grouping methods are still geometric-based ones~\cite{Yang2020corr_group_eval}. In 2D domains, there exist a few mismatch removal methods based on deep learning~\cite{moo2018learning,ma2019lmr,zhao2019nm,sun2020acne}. Yi et al.~\cite{moo2018learning} presented the first attempt to find inliers with an ``end-to-end'' network. To mine local information, Ma et al.~\cite{ma2019lmr} and Zhao et al.~\cite{zhao2019nm} associated spatial and compatibility-specific neighbors to each correspondence for classifier training, respectively.
Nonetheless, most of existing learning-based image correspondence grouping methods suffer from the following limitations: {\textbf{1)}} the requirement of a large amount of training matching pairs; {\textbf{2)}} the sensitivity to rotations due to the input of coordinate information; {\textbf{3)}} redundant network architectures. By contrast, {\textit{our method properly interprets the roles of geometric and learning techniques, and can effectively overcome these limitations.}}
\section{Methodology}
The pipeline of our method is presented in Fig.~\ref{fig:pipeline}. It consists of three main steps, including compatibility check, CF feature extraction, and CF classification. They play the following roles in the whole pipeline:
\begin{itemize}
\item {\bf Compatibility check:} one critical difference between inliers and outliers is that inliers are compatible with each other while outliers are usually incompatible with either inliers or outliers. Checking the compatibility between correspondences is the basis of the following steps.
\item {\bf CF feature extraction:} CF features are extracted based on the compatibility cue to parametrize 3D feature correspondences and distinguish inliers and outliers.
\item {\bf CF classification:} we train a classifier to classify CF features extracted for correspondences and accomplish the 3D correspondence grouping goal.
\end{itemize}
To improve readability, we introduce the following notations. Let ${\bf P}^s\in {\mathbb{R}^3}$ and ${\bf P}^t\in {\mathbb{R}^3}$ be the source point cloud and the target point cloud, respectively. A feature correspondence set ${\bf C}\in {{\mathbb{R}^6}}$ can be generated by matching local geometric descriptors for ${\bf P}^s$ and ${\bf P}^t$. The aim of our method is to assign a binary label (inlier or outlier) to each element ${\bf c}=({\bf p}^s,{\bf p}^t)$ in ${\bf C}$, where ${\bf p}^s \in {\bf P}^s$ and ${\bf p}^t \in {\bf P}^t$.
\subsection{Compatibility Check}
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{compt_illus.pdf}\\
\caption{Illustration of the statement that (a) inliers are compatible with each other, while ouliers are usually incompatible with either (b) inliers or (c) outliers. Green and red dashed lines denote inliers and outliers, respectively.}
\label{fig:compt_illus}
\end{figure}
In order to distinguish inliers and outliers, we should fully mine the consistency information within inliers. As depicted in Fig.~\ref{fig:compt_illus}, an important observation is that inliers are geometrically compatible with each other, while outliers are unlikely to be compatible with either outliers or inliers, because the spatial distribution of outliers are unordered. Following this cue, we are motivated to define a metric to check the compatibility between two correspondences.
In the context of 3D point cloud matching, we consider distance and angle constraints~\cite{buch2014search,yang2019ranking} that are invariant to rotations for the compatibility metric definition. Let $\bf n$ denote the normal of $\bf p$. The distance and angle constraints for two correspondences $({\bf c}_i,{\bf c}_j)$ are respectively defined as:
\begin{equation}
{s}_{dist}({\bf c}_i,{\bf c}_j)=\left|||{\bf p}^s_i-{\bf p}^s_j||-||{\bf p}^t_i-{\bf p}^t_j||\right|,
\end{equation}
and
\begin{equation}
{s}_{ang}({\bf c}_i,{\bf c}_j)=\left|{\rm acos}({\bf n}^s_i\cdot{\bf n}^s_j)-{\rm acos}({\bf n}^t_i\cdot{\bf n}^t_j) \right|.
\end{equation}
We note that ${s}_{dist}$ and ${s}_{ang}$ are calculated based on linear operation on relative distances and angles, thus being rotation-invariant. Both constraints are complementary to each other (Sect.~\ref{subsec:anay}). By integrating the two constraints, we define the compatibility metric as:
\begin{equation}\label{eq:compt}
S({\bf c}_i,{\bf c}_j)={\rm exp}(-\frac{{s_{dist}}({\bf c}_i,{\bf c}_j)^2}{2\alpha_{dist}^2}-\frac{{s_{ang}}(c_i,c_j)^2}{2\alpha_{ang}^2}),
\end{equation}
where $\alpha_{dist}$ and $\alpha_{ang}$ represent a distance parameter and an angle parameter, respectively. One can see that $S({\bf c}_i,{\bf c}_j) \in [0,1]$ and $S({\bf c}_i,{\bf c}_j)$ equals 1 only if both constraints are fully satisfied.
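A direct implementation of this metric is straightforward; the following Python sketch is our own illustration, in which the tuple layout of a correspondence and the default $\alpha$ values are assumed conventions:
\begin{verbatim}
import numpy as np

def compatibility(ci, cj, alpha_dist=1.0, alpha_ang=0.5):
    """Compatibility score S(ci, cj); each correspondence is a tuple
    (ps, ns, pt, nt) of 3D points and unit normals (assumed layout)."""
    ps_i, ns_i, pt_i, nt_i = ci
    ps_j, ns_j, pt_j, nt_j = cj
    s_dist = abs(np.linalg.norm(ps_i - ps_j) - np.linalg.norm(pt_i - pt_j))
    s_ang = abs(np.arccos(np.clip(ns_i @ ns_j, -1.0, 1.0))
                - np.arccos(np.clip(nt_i @ nt_j, -1.0, 1.0)))
    return np.exp(-s_dist**2 / (2 * alpha_dist**2)
                  - s_ang**2 / (2 * alpha_ang**2))
\end{verbatim}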
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{compt_features.pdf}\\
\caption{Sample CF features of (a) inliers and (b) outliers. We find that with the metric defined in Eq.~\ref{eq:compt} and a proper dimensionality $N$ (50 in the figure), the generated CF features are quite distinctive and intuitively classifiable.}
\label{fig:compt_features}
\end{figure}
\subsection{CF Feature Extraction}
With a compatibility metric, a naive way for correspondence grouping is to first assess the greatest compatibility score of each correspondence with the others and then set a threshold to filter those with low scores. This is not robust, and the distinctiveness of a single compatibility score is limited, as demonstrated in~\cite{chen20073d}. Instead, we consider the top-$k$ compatibility scores and render them as a feature vector. Remarkably, most prior works focus on assigning scores to correspondences, and the main difference among them is the scoring functions. Our method differs from those ones as we extract feature vectors for correspondences.
Specifically, the calculation of CF features consists of three steps: {\textbf{1)}} compute the compatibility scores of ${\bf c}$ to the other correspondences in $\bf C$ based on Eq.~\ref{eq:compt}, obtaining a score set ${F}=\{S({\bf c},{\bf c}_1),\cdots,S({\bf c},{\bf c}_{D-1})\}$ ($D$ being the cardinality of $\bf C$); {\textbf{2)}} sort the elements of $F$ in descending order, resulting in ${\bf F}=\left[ {\begin{array}{*{10}{c}}{S({\bf c},{\bf c}'_1)}&\cdots&{S({\bf c},{\bf c}'_{D-1})} \end{array}} \right]$; {\textbf{3)}} compute the $N$-dimensional CF feature ${\bf f}({\bf c})$ of $\bf c$ as the concatenation of the first $N$ elements of $\bf F$, i.e., ${\bf f}({\bf c})=\left[{\begin{array}{*{10}{c}}{{\bf F}(1)}&\cdots&{{\bf F}(N)} \end{array}} \right]$.
Assume that: {\textbf{1)}} an ideal compatibility scoring metric is defined, which assigns `1' to correspondence pairs composed of inliers and `0' to those with at least one outlier, and {\textbf{2)}} a proper $N$ is defined; then we can obtain CF features with all elements being `1' and `0' for inliers and outliers, respectively. Hence, {\textit{from the theoretical perspective, our proposed CF can be ultra distinctive.}} At present, robust compatibility metric definition for 3D correspondences is still an open issue~\cite{yang2019ranking}, and estimating a proper $N$ appears to be a chicken-and-egg problem, resulting in {\textit{noise}} in CF features. However, with the metric defined in Eq.~\ref{eq:compt} and an empirically determined $N$ (based on experiments in Sect.~\ref{subsec:anay}), {\textit{our CF features, in real cases, still hold strong distinctiveness,}} as shown in Fig.~\ref{fig:compt_features}.
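The extraction then amounts to sorting and truncating the compatibility scores. A minimal sketch, using the \texttt{compatibility} helper above (default values are placeholders):
\begin{verbatim}
import numpy as np

def cf_feature(idx, corrs, N=50, **alphas):
    """N-dimensional CF feature of corrs[idx]: its N largest
    compatibility scores with all other correspondences, descending."""
    scores = np.array([compatibility(corrs[idx], corrs[j], **alphas)
                       for j in range(len(corrs)) if j != idx])
    return np.sort(scores)[::-1][:N]   # assumes len(corrs) > N
\end{verbatim}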
\subsection{CF Classification}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{sample_tsne.pdf}\\
\caption{Classifying CF features in cases with low inlier ratios appears to be a non-linear classification problem. Left: feature correspondences between two 3D point clouds, where green lines and red lines represent inliers and outliers, respectively. Right: the CF features of all correspondences are projected in a 2D space with t-SNE~\cite{maaten2008visualizing}.}
\label{fig:sample_tsne}
\end{figure}
Finally, the 3D correspondence grouping problem boils down to a binary feature classification problem. In recent years, deep learning has achieved remarkable success in classification tasks~\cite{deng2009imagenet,krizhevsky2012imagenet}. In addition, we find that classifying CF features in cases with low inlier ratios sometimes appears to be a non-linear classification problem. As shown in Fig.~\ref{fig:sample_tsne}, the CF features of inliers and outliers cannot be linearly separated. Thus, we are motivated to employ a deep-learning classifier.
In particular, the MLP network suffices for our task because CF feature vectors are the inputs to the network. {\textit{This makes the network ultra lightweight}} compared with other networks for the image correspondence problem~\cite{moo2018learning,zhao2019nm,sun2020acne}, while remaining quite effective, as will be verified in the experiments. The employed MLP network has 6 layers with 50, 128, 128, 64, 32, and 2 neurons, respectively. Regarding the loss function, we have considered both the cross-entropy loss and the focal loss~\cite{lin2017focal} (Sect.~\ref{subsec:anay}). We note that the training samples of inliers and outliers are imbalanced for the 3D correspondence grouping problem, and eventually we use the focal loss to train our network.
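A PyTorch sketch of the classifier and loss follows; the layer sizes are as stated above, while the particular focal-loss form and $\gamma{=}2$ follow the common convention of \cite{lin2017focal} and are our assumptions here:
\begin{verbatim}
import torch
import torch.nn as nn

class CFClassifier(nn.Module):
    """MLP with the layer sizes stated in the text: 50-128-128-64-32-2."""
    def __init__(self, dims=(50, 128, 128, 64, 32, 2)):
        super().__init__()
        layers = []
        for i in range(len(dims) - 2):
            layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        layers.append(nn.Linear(dims[-2], dims[-1]))
        self.net = nn.Sequential(*layers)

    def forward(self, x):            # x: (batch, 50) CF features
        return self.net(x)           # logits for inlier/outlier

def focal_loss(logits, labels, gamma=2.0):
    """Focal loss (Lin et al., 2017); gamma=2 is the common default."""
    logp = torch.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, labels.unsqueeze(1)).squeeze(1)
    return ((1.0 - logp_t.exp()) ** gamma * -logp_t).mean()
\end{verbatim}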
\begin{table*}[t]\small
\centering
\scalebox{1}{
\begin{tabular}{cccccc}
\hline
\bf{Dataset} & \bf{Scenario}& \bf{Nuisances} & \bf{Modality}& \bf{\# Matching Pairs}& \bf Avg. inlier ratio\\
\hline
U3M~\cite{mian2006novel} & Registration & Limited overlap, self-occlusion & LiDAR & 496&0.1480\\
BMR~\cite{salti2014shot} & Registration & Limited overlap, self-occlusion, real noise & Kinect & 485&0.0563\\
U3OR~\cite{mian2006three,mian2010repeatability} & Object recognition & Clutter, occlusion & LiDAR& 188&0.0809\\
BoD5~\cite{salti2014shot}& Object recognition & Clutter, occlusion, real noise, holes & Kinect & 43&0.1575\\
\hline
\end{tabular}}
\caption{Experimental datasets and their properties.}
\label{tab:dataset}
\end{table*}
\section{Experiments}
This section presents the experimental setup, analysis and comparative results, along with necessary explanations.
\subsection{Experimental Setup}
\subsubsection{Datasets}
Four datasets are considered in our experiments, including UWA 3D modeling (U3M)~\cite{mian2006novel}, Bologna Mesh Registration (BMR)~\cite{salti2014shot}, UWA 3D object recognition (U3OR)~\cite{mian2006three,mian2010repeatability}, and Bologna Dataset5 (BoD5)~\cite{salti2014shot}. The main properties of the experimental datasets are summarized in Table~\ref{tab:dataset}. These datasets have {\bf 1)} different application scenarios, {\bf 2)} a variety of nuisances, and {\bf 3)} different data modalities, which ensures a comprehensive evaluation. For each dataset, we use correspondence data generated by 75\% of the matching pairs for training and the remaining pairs for testing. Note that we will also test the generalization performance of our method without training a model for each dataset.
\subsubsection{Metrics}
Precision (P), Recall (R), and F-score (F) are popular metrics for evaluating the performance of correspondence grouping~\cite{zhao2019nm,yang2017performance,Yang2020corr_group_eval}. A correspondence ${\bf c}=({\bf p}^s,{\bf p}^t)$ is judged as correct if:
\begin{equation}
||{\bf p}^s{\bf R}_{gt}+{\bf t}_{gt}-{\bf p}^t||< d_{inlier},
\end{equation}
where $d_{inlier}$ is a distance threshold; ${\bf R}_{gt}$ and ${\bf t}_{gt}$ denote the ground-truth rotation matrix and translation vector, respectively. We set $d_{inlier}$ to 5 pr as in~\cite{yang2017performance,Yang2020corr_group_eval}. The unit `pr' denotes the point cloud resolution, i.e., the average shortest distance among neighboring points in the point cloud. Thus, precision is defined as:
\begin{equation}
{\rm P}=\frac{|{\bf C}_{inlier}|}{|{\bf C}_{group}|},
\end{equation}
and recall is defined as:
\begin{equation}
{\rm R}=\frac{|{\bf C}_{inlier}|}{|{\bf C}_{inlier}^{gt}|},
\end{equation}
where ${\bf C}_{group}$, ${\bf C}_{inlier}$, and ${\bf C}_{inlier}^{gt}$ represent the grouped inlier set returned by a grouping method, the true inlier subset of the grouped inlier set, and the true inlier subset of the raw correspondence set, respectively. F-score is given by ${\rm F}=\frac{2{\rm P}{\rm R}}{{\rm P}+{\rm R}}$.
We note that 3D correspondence grouping methods are typically applied to rigid registration tasks, e.g., point cloud registration and 3D object recognition, which require sparse and accurate correspondences~\cite{guo20143d}. {\textit{Thus, the precision performance is more critical to these practical applications.}}
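For reference, the metrics can be computed as in the following sketch (our own; it assumes the row-vector convention ${\bf p}^s{\bf R}_{gt}+{\bf t}_{gt}$ used above):
\begin{verbatim}
import numpy as np

def evaluate(grouped, R_gt, t_gt, n_gt_inliers, d_inlier):
    """Precision/recall/F-score of a grouped set of (ps, pt) pairs."""
    n_correct = sum(np.linalg.norm(ps @ R_gt + t_gt - pt) < d_inlier
                    for ps, pt in grouped)
    P = n_correct / max(len(grouped), 1)
    R = n_correct / max(n_gt_inliers, 1)
    F = 2 * P * R / max(P + R, 1e-12)
    return P, R, F
\end{verbatim}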
\subsubsection{Implementation Details}
For our method, the compatibility check and CF feature exaction modules are implemented in the point cloud library (PCL)~\cite{rusu20113d}, and the MLP classifier is trained in PyTorch with a GTX1050 GPU. The network is optimized via stochastic gradient descent (SGD) with a learning rate of 0.02. All evaluated methods in~\cite{Yang2020corr_group_eval} are compared in our experiments, including similarity score (SS), nearest neighbor similarity ratio (NNSR)~\cite{lowe2004distinctive}, spectral technique (ST)~\cite{leordeanu2005spectral}, random sampling consensus (RANSAC)~\cite{fischler1981random}, geometric consistency (GC)~\cite{chen20073d}, 3D Hough voting (3DHV)~\cite{tombari2010object}, game theory matching (GTM)~\cite{rodola2013scale}, search of inliers (SI)~\cite{buch2014search}, and consistency voting (CV)~\cite{yang2019ranking}.
To generate 3D feature correspondences between point clouds, we employ the Harris 3D (H3D) detector~\cite{sipiran2011harris} for keypoints detection and the signatures of histograms of orientations (SHOT)~\cite{tombari2010unique} descriptor for local geometric feature extraction. By matching SHOT descriptors via $L_2$ distance, we can obtain initial correspondences. It has been verified in~\cite{Yang2020corr_group_eval} that H3D+SHOT can generate correspondences with {\textit{different spatial distributions, different scales, and different inlier ratios}}, enabling a thorough evaluation.
\subsection{Method Analysis}\label{subsec:anay}
The following experiments were conducted on the U3M dataset (the largest scale one) to analyze the rationality, peculiarities, and parameters of our method.
\\\\\noindent\textbf{Dimensionality of CF features} The dimensionality $N$ of CF features is a key parameter of the proposed method. We test the performance of our method with $N$ being 10, 20, 50, 100, and 200, respectively. The results are shown in Table~\ref{tab:dim}.
\begin{table}[t]\small
\renewcommand{\arraystretch}{1}
\centering
\begin{tabular}{c|ccccc}
\hline
&\bf 10&\bf 20&\bf 50&\bf 100&\bf 200\\
\hline
P &0.8031&0.7625&0.7483&0.7386&0.7468\\
R &0.4754&0.5364&0.5308&0.5114&0.4870\\
F &0.5973&0.6298&0.6211&0.6044&0.5896\\
\# Epochs &77&44&7&15&9\\
\hline
\end{tabular}
\caption{Performance of our method when varying the dimensionality of CF features.}
\label{tab:dim}
\end{table}
\begin{table*}[t]\small
\renewcommand{\arraystretch}{1}
\centering
\begin{tabular}{c|cccccccccc}
\hline
&\bf CE(1:1)& \bf CE(1:4)&\bf CE(1:8)&\bf CE(1:10)&\bf CE(raw)&\bf FL(1:1)&\bf FL(1:4)&\bf FL(1:8)&\bf FL(1:10)&\bf FL(raw)\\
\hline
P &0.2893&0.4149&0.5688&0.6120&NC&0.2431&0.4362&0.5510&0.6180&0.7483\\
R &0.8615&0.7828&0.6736&0.6439&NC&0.8827&0.7692&0.6877&0.6394&0.5308\\
F &0.4332&0.5424&0.6168&0.6275&NC&0.3812&0.5567&0.6118&0.6285&0.6210\\
\hline
\end{tabular}
\caption{Comparison of cross entropy loss (CE) and focal loss (FL) when varying the ratio of positive sample count to negative sample count (NC: not converge; raw: the ratio is about 1:25 in raw training data).}
\label{tab:loss}
\end{table*}
The results indicate that $N=20$ and $N=50$ achieve the best and the second best performance, respectively. Thus, a proper $N$ is needed to maximize the distinctiveness between the CF features of inliers and outliers. In addition, we find that the network converges much faster with $N=50$ than other settings, and we set $N$ to 50 by default.
\\\\\noindent\textbf{Focal loss vs. cross entropy}
To prepare training data, we have two alternatives: using {\textit{equal}} or {\textit{imbalanced}} numbers of positive samples and negative samples. The latter one is closer to the real matching case. Here, we compare the cross entropy loss and focal loss when varying the ratio of positive sample count to negative sample count. The results are reported in Table~\ref{tab:loss}.
One can see that the performance of both losses improves when the ratio of positive samples to negative samples decreases from 1:1 to 1:10, and their gap is marginal. When more negative samples are included (i.e., all samples in the raw training data), the focal loss achieves better precision performance, while the network with the cross-entropy loss fails to converge. As expected, the focal loss is more suitable for the 3D correspondence grouping problem, where a large portion of the training data are outliers.
\\\\\noindent\textbf{Varying compatibility metrics} A critical factor for the proposed CF features is the definition of the compatibility metric. In our compatibility metric (Eq.~\ref{eq:compt}), both distance and angle constraints are considered. Here, we test the effect of using looser constraints, i.e., solely using either the distance constraint or the angle constraint, as shown in Table~\ref{tab:compt}.
\begin{table}[t]\small
\renewcommand{\arraystretch}{1}
\centering
\begin{tabular}{c|ccc}
\hline
&\bf Distance&\bf Angle&\bf Both\\
\hline
P &0.6443&NC&0.7483\\
R &0.6885&NC&0.5308\\
F &0.6657&NC&0.6211\\
\hline
\end{tabular}
\caption{The effect of using compatibility metrics with different geometric constraints (NC: not converge).}
\label{tab:compt}
\end{table}
It is interesting to see that using a slightly looser constraint (distance only) can achieve better F-score performance than using both constraints. However, when the constraint is too loose (angle only), the network cannot converge because the generated CF features are ambiguous. Because using both constraints achieves the best precision performance, which is preferred in most application scenarios, we consider both constraints when defining the compatibility metric.
\\\\\noindent\textbf{PointNet vs. MLP} Similar to some 2D correspondence methods~\cite{moo2018learning,sun2020acne}, directly feeding the coordinates of correspondences to a network is another option for grouping. We test the performance of coordinate-based learning with PointNet~\cite{qi2017pointnet} on testing data with and without arbitrary rotations. The results are reported in Table~\ref{tab:rot}.
\begin{table}[t]\small
\renewcommand{\arraystretch}{1}
\centering
\begin{tabular}{c|cccc}
\hline
&\bf PointNet&\bf PointNet ($SO(3)$)&\bf Ours&\bf Ours ($SO(3)$)\\
\hline
P &0.3888&0.1290&0.7483&0.7483\\
R &0.0355&0.0018&0.5308&0.5308\\
F &0.0651&0.0035&0.6211&0.6211\\
\hline
\end{tabular}
\caption{Comparison of PointNet~\cite{qi2017pointnet} with coordinates being input and our method with CF features being input on testing data with and without arbitrary $SO(3)$ rotations.}
\label{tab:rot}
\end{table}
\begin{table}[t]\small
\renewcommand{\arraystretch}{1}
\centering
\begin{tabular}{c|cccc}
\hline
&\bf $\bf \frac{1}{8} \times$ 490k&\bf $\bf \frac{1}{4} \times$ 490k&\bf $\bf \frac{1}{2} \times$ 490k&\bf 490k\\
\hline
P &0.7653&0.7533&0.7558&0.7483\\
R &0.5130&0.5219&0.5199&0.5308\\
F &0.6142&0.6166&0.6160&0.6211\\
\# Epochs &156&96&48&15\\
\hline
\end{tabular}
\caption{The effect of varying the amount of training data on our method.}
\label{tab:num_data}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{coor_feat_tsne.pdf}\\
\caption{Sample results of (a) 3D feature correspondences, and 2D projections (by t-SNE~\cite{maaten2008visualizing}) of (b) correspondence coordinates, (c) CF features, and (d) the features of the second last layer of MLP.}
\label{fig:coor_feat_tsne}
\end{figure*}
\begin{table*}[t]\small
\renewcommand{\arraystretch}{1}
\centering
\begin{tabular}{c|cccccccccc}
\hline
&\bf SS&\bf NNSR~\cite{lowe2004distinctive} &\bf ST~\cite{leordeanu2005spectral}&\bf RANSAC~\cite{fischler1981random}&\bf GC~\cite{chen20073d}&\bf 3DHV~\cite{tombari2010object}&\bf GTM~\cite{rodola2013scale}&\bf SI~\cite{buch2014search}&\bf CV~\cite{yang2019ranking}&\bf CF (Ours)\\
\hline
\multicolumn{11}{c}{(a) \textit{U3M dataset} } \\
\hline
\rowcolor{green!30}P&0.0374&0.1289&{0.3984}&\underline{0.5442}&0.2920&0.1960&0.5285&0.0380&0.1092&\bf 0.7483\\
R&0.3819&0.4084&0.5833&0.8493&0.7499&0.6999&0.5987&\bf 0.9996&0.9839&0.5308\\
F&0.0681&0.1960&0.4734&\bf 0.6634&0.4203&0.3062&0.5614&0.0733&0.1966&\underline{0.6211} \\
\hline
\multicolumn{11}{c}{(b) \textit{BMR dataset}} \\
\hline
\rowcolor{green!30}P&0.0243&0.0606&0.2993&0.3737&0.1458&0.1492&\underline{0.3946}&0.0350&0.0700&\bf 0.8575\\
R&0.3405&0.0967&0.3734&0.8178&\underline{0.5740}&0.5049&0.3626&0.5522&\bf 0.9438&0.1529\\
F&0.0454&0.0745&0.3323&\bf 0.5129&0.2325&0.2304&\underline{0.3779}&0.0658&0.1303&0.2596\\
\hline
\multicolumn{11}{c}{(c) \textit{BoD5 dataset}} \\
\hline
\rowcolor{green!30}P&0.0474&0.1635&0.5660&\underline{0.5961}&0.5207&0.3927&\bf 0.7022&0.0748&0.3593&0.5699\\
R&0.2024&0.1136&0.4086&0.8747&0.7559&\underline{0.8890}&0.4556&0.7337&\bf 0.9869&0.4151\\
F&0.0768&0.1341&0.4746&\bf 0.7090&\underline{0.6166}&0.5448&0.5527&0.1359&0.5268&0.4804\\
\hline
\multicolumn{11}{c}{(d) \textit{U3OR dataset}} \\
\hline
\rowcolor{green!30} P&0.0171&0.0724&0.1119&\underline{0.5812}&0.1918&0.1190&0.4907&0.0143&0.0523&\bf 0.8641\\
R&0.4111&0.5296&0.1670&0.2442&0.6302&0.3537&0.5224&\bf 1.0000&\underline{0.9461}&0.3196\\
F&0.0328&0.1274&0.1340&0.3438&0.2941&0.1781&\bf 0.5061&0.0282&0.0991&\underline{0.4666} \\
\hline
\end{tabular}
\caption{Comparison of the proposed method with nine state-of-the-art methods in terms of precision, recall, and F-score performance on four experimental datasets (bold: the best; underlined: the second best).}
\label{tab:compare}
\end{table*}
Two observations can be made from the table. {\textbf{1)}} PointNet with coordinates as input achieves significantly worse performance than our MLP architecture with CF features as input. This is because the range of real-world 3D coordinates is too large, which makes it very difficult for the network to mine patterns within the dataset. {\textbf{2)}} Coordinates are sensitive to rotations, making the performance of PointNet even worse when the testing data undergo rotations. By contrast, our CF features consist of compatibility scores computed from rotation-invariant constraints, making CF and the CF-based learning network rotation-invariant as well.
To further support our statement, we visualize some exemplar results of feature correspondences, along with 2D projections of correspondence coordinates, CF features, and the features of the second last layer of the MLP in Fig.~\ref{fig:coor_feat_tsne}. One can hardly mine consistencies within inliers from the coordinate information. By contrast, CF features exhibit strong distinctiveness. In addition, the CF features learned by the MLP further enhance this distinctiveness (the clusters of inliers and outliers in Fig.~\ref{fig:coor_feat_tsne}(d) are tighter than those in Fig.~\ref{fig:coor_feat_tsne}(c)).
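Such projections can be produced with off-the-shelf t-SNE, as in the sketch below; the random features and labels are placeholders, not our experimental data.
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder inputs: replace with CF features (or the activations of
# the second last MLP layer) and the corresponding inlier labels.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (200, 20)),
                   rng.normal(3, 1, (50, 20))])
labels = np.array([0] * 200 + [1] * 50)   # 0 = outlier, 1 = inlier

emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="coolwarm", s=5)
plt.show()
\end{verbatim}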
\\\\\noindent\textbf{Varying the amount of training data}
The initial number of correspondences used for training in the U3M dataset is around 490k. We test the cases with less training data and report the results in Table~\ref{tab:num_data}.
The table suggests that our method performs well even when $\frac{7}{8}$ of the training data is removed, although it then requires many more training epochs. We note that dozens of point cloud pairs can generate correspondences at the $\frac{1}{8}\times$490k level. Compared with methods relying on over ten thousand matching pairs~\cite{moo2018learning,zhao2019nm}, {\textit{our method can be trained with significantly fewer matching pairs.}}
\subsection{Comparative Results \& Visualization}
\noindent\textbf{State-of-the-art comparison} All methods evaluated in a recent benchmark~\cite{Yang2020corr_group_eval} are compared with the proposed method on four experimental datasets. All methods are tested on the same testing data. The results are shown in Table~\ref{tab:compare}.
\begin{table}[t]\small
\renewcommand{\arraystretch}{1}
\centering
\begin{tabular}{c|cccc}
\hline
&\bf BMR&\bf U3M+noise&\bf U3M+simplification&\bf ISS+FPFH\\
\hline
P &0.6928&0.7407&0.7088&0.7409\\
R &0.3241&0.4111&0.3247&0.4342\\
F &0.4416&0.5287&0.4454&0.5475\\
\hline
\end{tabular}
\caption{Generalization performance of the proposed method (the model is trained on the original U3M dataset).}
\label{tab:gene}
\end{table}
The following observations can be made from the table. {\textbf{1)}} Our method achieves the best precision performance on the U3M, BMR, and U3OR datasets. Moreover, the gap between our method and the second best one is significant on the BMR and U3OR datasets. On the BoD5 dataset, our method is surpassed by GTM and RANSAC. However, this dataset is less challenging than the other three (Table~\ref{tab:dataset}). This indicates that our method can achieve superior precision performance especially on data with low inlier ratios. We also note that only 33 pairs of data are leveraged to train our network on the BoD5 dataset. {\textbf{2)}} In terms of recall performance, SI and CV, as two typical individual-based methods, achieve top-ranked performance. Unfortunately, their precision performance is quite limited. This could result in inaccurate and time-consuming rigid registration due to heavy outliers in the grouped inlier set. {\textit{We note that a looser geometric constraint can be used if a balance is needed between precision and recall (as verified in Table~\ref{tab:compt}), indicating that our method is flexible.}} {\textbf{3)}} Although the proposed method is an individual-based one, it is quite selective, with superior precision performance. Notably, GTM appears to be the most selective method as evaluated by~\cite{Yang2020corr_group_eval}, while our method generally outperforms it by a large margin in terms of precision.
\\\\\noindent\textbf{Generalization performance}
We use the model trained on the initial U3M dataset to predict inliers on the following datasets: the BMR dataset, and variants of the U3M dataset with 0.3 pr Gaussian noise, $\frac{1}{8}$ random data decimation, and ``ISS detector~\cite{zhong2009intrinsic} + FPFH descriptor~\cite{rusu2009fast}'', respectively. The results are shown in Table~\ref{tab:gene}. {\textit{One can see that the model trained on the U3M dataset also achieves decent performance when changing the testing dataset, injecting additional nuisances, and changing ``detector-descriptor'' combinations.}} This is potentially because the eventual effect of all these test conditions is a variation in inlier ratios, while our CF features can effectively mine the hidden consistencies of inliers and inconsistencies of outliers under different inlier ratios.
\\\\\noindent\textbf{Visualization}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{reg_visual.pdf}\\
\caption{Sample visualization results. From left to right: initial correspondences with colors obtained by projecting CF features to 3D RGB space, grouped correspondences by our method, and the registration result with the grouped correspondences using PCL~\cite{rusu20113d}.}
\label{fig:vis}
\end{figure}
Finally, we give some visualization results of our method in Fig.~\ref{fig:vis}. Two observations can be made. First, the colors of correspondences obtained by projecting CF features to 3D RGB space can reflect the consistency of inliers. Second, the correspondences grouped by our method are quite consistent and enable accurate 3D registration results.
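The exact projection used for coloring is not essential; one plausible realization (an assumption for illustration) is a PCA projection of CF features to three dimensions followed by min-max scaling to RGB:
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def cf_to_rgb(feats):
    # Project (N, k) CF features to 3D and min-max scale each channel
    # to [0, 1] so similar features receive similar colors.
    xyz = PCA(n_components=3).fit_transform(feats)
    lo, hi = xyz.min(axis=0), xyz.max(axis=0)
    return (xyz - lo) / (hi - lo + 1e-12)
\end{verbatim}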
\section{Conclusion}
We presented a novel representation for describing 3D feature correspondences named the compatibility feature (CF), along with a CF-based method for 3D correspondence grouping. CF captures the main distinctiveness between inliers and outliers regarding pairwise geometric compatibility and is rotation-invariant as well. With CF features, a lightweight MLP network is able to classify them and achieve outstanding performance. Experiments on four standard datasets with a rich variety of application scenarios and nuisances, paired with comparisons with nine state-of-the-art methods, demonstrate the overall superiority of our method. We also find that the pipeline of our CF-based 3D correspondence grouping method can be generalized to matching problems for many other data representations, such as 2D images and non-rigid point clouds/meshes, which remains an interesting future research direction.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction and Motivation}
High-quality software documentation is crucial for software development, comprehension, and maintenance, but the ways that documentation can suffer poor quality are numerous.
For example, Aghajani et al.~\cite{aghajani2019software} have designed a taxonomy of 162 documentation issue types, covering information content, presentation, process-related matters, and tool-related matters.
Additionally, because software documentation is written in informal natural language which is inherently ambiguous, imprecise, unstructured, and complex in syntax and semantics, its quality can often only be evaluated manually~\cite{khamis2013applying}.
The assessment depends on context for some quality attributes (e.g., task orientation, usefulness), it mixes content and medium for others (e.g., visual effectiveness, retrievability), and sometimes it requires looking beyond the documentation itself (e.g., accuracy, completeness).
While most related work has focused on documentation accuracy and completeness (e.g., for API documentation~\cite{zhong2013detecting, dagenais2014using}), the fact that many developers are reluctant to carefully read documentation~\cite{zhong2009inferring} suggests that documentation suffers from issues beyond accuracy.
As such, we lack a comprehensive framework and instruments for assessing software documentation quality beyond these characteristics.
In this work we adapt quality frameworks from the data and information quality community to the software engineering domain.
We design a survey instrument to assess software documentation quality from different sources.
A pilot study with four technical editors and 41 documents related to the R programming language provides initial evidence for the strengths and weaknesses of different genres of documentation (blog articles, reference documentation, README files, Stack Overflow threads, tutorials) based on the ten dimensions of our software documentation quality framework.
The contributions of this work are:
\begin{itemize}
\item A ten-dimensional framework for asking questions about software documentation quality,
\item A partially validated survey instrument to evaluate document quality over multiple documentation genres, and
\item A vision for the expansion of a unified quality framework through further experimentation.
\end{itemize}
\section{Background and Related Work}
\label{sec:background}
The most related piece of work to this paper is the seminal 1995 article ``Beyond Accuracy: What Data Quality Means to Data Consumers'' by Wang and Strong~\cite{wang1996beyond}. We follow the same \textit{beyond-accuracy} approach for the domain of software documentation.
\paragraph{Defective Software Documentation}
Defect detection tools have been widely investigated at the code level, but very few studies focus on defects at the document level~\cite{zhou2019drone}. The existing approaches in the documentation space investigate inconsistencies between code and documentation. In one of the first such attempts, Tan et al.~\cite{tan2012tcomment} presented @tcomment for testing Javadoc comments related to null values and exceptions. \textsc{DocRef} by Zhong and Su~\cite{zhong2013detecting} detects API documentation errors by seeking out mismatches between code names in natural-language documentation and code. AdDoc by Dagenais and Robillard~\cite{dagenais2014using} automatically discovers documentation patterns which are defined as coherent sets of code elements that are documented together. Also aimed at inconsistencies between code and documentation, Ratol and Robillard~\cite{ratol2017detecting} presented Fraco, a tool to detect source code comments that are fragile with respect to identifier renaming.
Wen et al.~\cite{wen2019large} presented a large-scale empirical study of code-comment inconsistencies, revealing causes such as deprecation and refactoring. Zhou et al.~\cite{zhou2018automatic, zhou2019drone} contributed a line of work on detecting defects of API documents with techniques from program comprehension and natural language processing.
They presented DRONE to automatically detect directive defects and recommend solutions to fix them.
And in recent work, Panthaplackel et al.~\cite{panthaplackel2020learning} proposed an approach that learns to correlate changes across two distinct language representations, to generate a sequence of edits that are applied to existing comments to reflect code modifications.
To the best of our knowledge, little of the existing work targets the assessment or improvement of the quality of software documentation beyond accuracy and completeness; one example is Pl\"{o}sch and colleagues' survey on what developers value in documentation~\cite{plosch2014value}.
We turn to related work from the data and information quality community to further fill this gap.
\paragraph{Information Quality}
In their seminal 1995 work ``Beyond Accuracy: What Data Quality Means to Data Consumers'', Wang and Strong~\cite{wang1996beyond} conducted a survey to generate a list of data quality attributes that capture data consumers' perspectives. These perspectives were grouped into accuracy, relevancy, representation, and accessibility, and they contained a total of 15 items such as believability, reputation, and ease of understanding. A few years later, Eppler~\cite{eppler2001generic} published a similar list of dimensions which contained 16 dimensions such as clarity, conciseness, and consistency.
These two sets of dimensions built the starting point into our investigation of dimensions from the data and information quality community which might be applicable to software documentation. Owing to the fact that software documentation tends to be disseminated over the Internet, we further included the work of Knight and Burn~\cite{knight2005developing}, who discussed the development of a framework for assessing information quality on the World Wide Web and compiled a list of 20 common dimensions for information and data quality from previous work, including accuracy, consistency, and timeliness.
\section{Software Documentation Quality}
\begin{table*}
\caption{Dimensions of software documentation quality}
\label{tab:framework}
\begin{tabular}{@{\hspace{0em}}l@{\hspace{.6em}}l@{\hspace{.6em}}l@{\hspace{0em}}}
\toprule
Dimension & Question & Source \\
\midrule
\textsc{Quality} & How well-written is this document (e.g., spelling, grammar)? & question from~\cite{pitler2008revisiting} \\
\textsc{Appeal} & How interesting is it? & question from~\cite{pitler2008revisiting} \\
\textsc{Readability} & How easy was it to read? & `accessibility'~\cite{eppler2001generic, knight2005developing} \\
\textsc{Understandability} & How easy was it to understand? & `ease of understanding'~\cite{wang1996beyond}; question from~\cite{pitler2008revisiting} \\
\textsc{Structure} & How well-structured is the document? & `navigation' from~\cite{knight2005developing} \\
\textsc{Cohesion} & How well does the text fit together? & question from~\cite{pitler2008revisiting} \\
\textsc{Conciseness} & How succinct is the information provided? & `concise', `amount of data'~\cite{knight2005developing, wang1996beyond}; `conciseness'~\cite{eppler2001generic} \\
\textsc{Effectiveness} & Does the document make effective use of technical vocabulary? & `vocabulary'~\cite{pitler2008revisiting} \\
\textsc{Consistency} & How consistent is the use of terminology? & `consistency'~\cite{eppler2001generic, knight2005developing} \\
\textsc{Clarity} & Does the document contain ambiguity? & `clarity'~\cite{eppler2001generic}; `understandability'~\cite{knight2005developing} \\
\bottomrule
\end{tabular}
\end{table*}
Inspired by the dimensions of information and data quality identified in related work, we designed the software documentation quality framework summarised in Table~\ref{tab:framework}. The first and last author of this paper collaboratively went through the dimensions from related work cited in Section~\ref{sec:background} to select those that (i) could apply to software documentation, (ii) do not depend on the logistics of accessing the documentation (e.g., security), and (iii) can be checked independently of other artefacts (i.e., not accuracy or completeness). As a result, we omitted dimensions such as relevancy and timeliness from Wang and Strong's work~\cite{wang1996beyond} since they require context beyond the documentation itself (e.g., relevancy cannot be assessed without a specific task in mind); and we omitted dimensions such as interactivity and speed from Eppler's work~\cite{eppler2001generic} which are characteristics of the medium rather than the content. Table~\ref{tab:framework} indicates the sources from related work for each dimension.
In addition, we formulated questions that an assessor can answer about a piece of software documentation to indicate its quality in each dimension. Our pilot study in Section~\ref{study} provides initial evidence that technical editors are able to answer these questions and shows interesting trends across genres of documentation. The questions were inspired by work on readability by Pitler and Nenkova~\cite{pitler2008revisiting} who collected readability ratings using four questions: How well-written is this article? How well does the text fit together? How easy was it to understand? How interesting is this article? Our framework contains the same questions, and complements them with one question each for the additional six dimensions.
\section{Pilot Study} \label{study}
To gather preliminary evidence on whether experts would be able to use the proposed framework for assessing software documentation, we conducted a pilot study with technical editors and different genres in software documentation.
\paragraph{Recruitment}
To recruit technical editors, we posted an advertisement on Upwork,\footnote{\url{https://www.upwork.com/}} a website for freelance recruitment and employment.
Our posting explained our goal of evaluating software documentation on ten dimensions and identifying weaknesses in their design.
Therefore, we required applicants to be qualified as technical editors with programming experience, and accordingly, we would compensate them hourly according to their established rates.
From this, we recruited the first four qualified applicants with experience in technical writing and development.
We refer to our participants henceforth as E1 through E4.
\paragraph{Data Collection} \label{datacollect}
Because developers write software instructions in a variety of contexts, we use a broad definition of documentation, accepting both official, formal documentation and implicit documentation that emerges from online discussion.
We focus on the genres of software documentation that have been the subject of investigation in previous work on software documentation: reference documentation~\cite{fucci2019using}, README files~\cite{prana2019categorizing}, tutorials~\cite{petrosyan2015discovering}, blogs~\cite{parnin2013blogging}, and Stack Overflow threads~\cite{barua2014developers}.
To better control for variation between language, we focused exclusively on resources documenting the R programming language or projects built with it.
All documentation was then randomly sampled from the following sources:
\begin{itemize}
\item \textbf{Reference Documentation (RD)}: from the 79 subsections of the R language manual.\footnote{\url{https://cran.r-project.org/doc/manuals/r-release/R-intro.html}}
\item \textbf{README files (R)}: from the first 159 items in a list\footnote{\url{https://github.com/qinwf/awesome-R}, the remaining items were not software projects} of open-source R projects curated by GitHub users.
\item \textbf{Tutorials (T)}: from the 46 R tutorials on tutorialspoint.com.\footnote{\url{https://www.tutorialspoint.com/r/index.htm}}
\item \textbf{Articles (A)}: from 1,208 articles posted on R-bloggers.com,\footnote{\url{https://www.r-bloggers.com/}} a blog aggregation website, within the last six months.
\item \textbf{Stack Overflow threads (SO)}: from questions tagged ``R''.
\end{itemize}
We asked the editors to read the documents, suggest edits, and answer the ten questions listed in Table~\ref{tab:framework} using a scale from 1 (low) to 10 (high).
From a methodological standpoint, we asked for edits in addition to the ten assessments in order to encourage and evidence reflection on the specific ways the documentation failed, beyond the general impressions of effectiveness.
These edits were collected through Microsoft Word's Review interface.
We assigned the quantity and selection of documentation according to the amount of time editors had available while trying to balance distribution across genres.
Furthermore, we tried to give each individual document to at least two editors.
We show the distribution in Table~\ref{tab:participant_assignments}.
\begin{table}[t]
\centering
\caption{Distribution of documentation genres to editors}
\begin{tabular}{lrrrrrr} \toprule
Editor & RD & R & T & A & SO & Sum \\ \midrule
E1 & 7 & 7 & 7 & 8 & 7 & 36 \\
E2 & 6 & 6 & 5 & 6 & 6 & 29 \\
E3 & 1 & 1 & 1 & 1 & 1 & 5 \\
E4 & 1 & 1 & 1 & 1 & 1 & 5 \\
\midrule
Sum & 15 & 15 & 14 & 16 & 15 & 75 \\
Union & 8 & 8 & 8 & 9 & 8 & 41 \\ \bottomrule
\end{tabular}
\label{tab:participant_assignments}
\end{table}
Overall, the editors worked on 41 documents.
Seven documents were assessed by only one editor, but the rest were by two; therefore, we had 75 records of document evaluations.
\paragraph{Results}
We display the average dimension rating per genre of documentation in Table~\ref{tab:results}; for each row, the highest value is in bold, and the lowest is italicised.
For example, this table shows that the quality of the reference documentation, or how well-written it is according to the definition in Table~\ref{tab:framework}, is rated 8.1 out of 10, on average higher than all the other genres.
Meanwhile, the average blog article was rated 6.1 out of 10, the lowest.
Within dimensions, the highest ratings are distributed primarily among reference documentation and README files; tutorials received the highest ratings in cohesion and conciseness, but only by a margin of at most 0.3 rating points over the two aforementioned genres.
Nevertheless, blog articles received the globally lowest rating across every dimension, especially so in readability and structure.
\begin{table}[t]
\caption{Results of document rating by technical editors}
\begin{tabular}{lrrrrr} \toprule
Dimension & RD & R & T & A & SO \\ \midrule
\textsc{Quality} & \textbf{8.1} & 7.6 & 7.5 & \textit{6.1} & 6.7 \\
\textsc{Appeal} & 5.7 & \textbf{6.5} & 6.4 & \textit{5.3} & 5.8 \\
\textsc{Readability} & \textbf{6.6} & 6.4 & 6.5 & \textit{4.6} & 6.1 \\
\textsc{Understandability} & 6.9 & \textbf{7.0} & 6.9 & \textit{5.3} & 6.1 \\
\textsc{Structure} & 6.3 & \textbf{7.6} & 7.1 & \textit{3.8} & 5.8 \\
\textsc{Cohesion} & 6.7 & 6.7 & \textbf{7.0} & \textit{4.9} & 6.1 \\
\textsc{Conciseness} & 6.7 & 6.7 & \textbf{6.8} & \textit{5.3} & 6.3 \\
\textsc{Effectiveness} & \textbf{8.3} & 7.9 & 8.1 & \textit{7.1} & 7.5 \\
\textsc{Consistency} & \textbf{9.3} & 8.8 & 9.0 & \textit{8.5} & 8.6 \\
\textsc{Clarity} & 8.6 & \textbf{8.7} & 7.6 & \textit{6.3} & 7.7 \\ \bottomrule
\end{tabular}
\label{tab:results}
\end{table}
Different dimensions do not seem to have equal distributions.
For example, reference documentation has a 9.3 in consistency, but the lowest score is an 8.5 for blog articles.
These are nevertheless high scores on average.
Appeal also has a small spread, from 6.5 for README files to 5.3 for blog articles, but a lower overall average.
Structure, on the other hand, varies from 7.6 (README files) to 3.8 (blog articles), demonstrating a larger interaction with genres.
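The aggregation behind Table~\ref{tab:results} can be sketched in a few lines of pandas; the records shown are placeholders rather than the study's raw ratings.
\begin{verbatim}
import pandas as pd

# One row per (document, dimension) rating on the 1-10 scale.
records = pd.DataFrame([
    {"genre": "RD", "dimension": "Structure", "rating": 6},
    {"genre": "R",  "dimension": "Structure", "rating": 8},
    {"genre": "A",  "dimension": "Structure", "rating": 4},
])
table = records.pivot_table(index="dimension", columns="genre",
                            values="rating", aggfunc="mean").round(1)
print(table)
\end{verbatim}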
\paragraph{Reflection}
Our findings suggest a trend in rating samples from certain genres highly while disapproving others.
The relatively high rating of reference documentation and README files may be explained by their typical origins within the software product and team,
whereas blog articles and Stack Overflow posts can be published more freely~\cite{parnin2012crowd}.
Nevertheless, the result that blog articles received the lowest in every category did surprise us, especially as we designed the framework to reduce the impact of context.
There may be many factors involved in these results.
For one, the corpora from which we randomly sampled may have different underlying distributions on our dimensions.
Our reference documentation was sampled from larger related products, whereas manual inspection of the randomly sampled blog articles did evidence a broad variety of R-related topics, such as discussions of R package updates or write-ups and reflections on personal projects.
Furthermore, as with any participant study, the soundness of our results depends on the accuracy with which we communicated our ideas and the participants understood and enacted them.
Editors may approach different genres with different preconceptions; an 8 rating for a README file may not be the same as an 8 rating for a Stack Overflow thread.
\paragraph{Technical Challenges} As noted previously, we asked editors to make edits on document copies in Microsoft Word as well, hoping to gain insight into technical editing strategies for software documentation.
We obtained over 4,000 edit events across all 75 documents, ranging from reformattings and single character insertions to deep paragraph revisions.
However, we encountered several obstacles when attempting to analyse this data.
First, when copying online documents to Microsoft Word, interactions and images in hypertext documents were distractingly reformatted for a linear document.
As a result, several of the edits addressed changes to structure that did not exist when viewing the document in the browser instead.
Furthermore, many edits were not recorded as intended---for example, replacing words with synonyms that do not make sense in a programming context.
Although we can recreate changes to the document text through before-and-after comparison, small edits blend together and large edits confound the differencing algorithm.
Because the editor's original intent is lost, any coding scheme applied over it becomes less secure.
Due to these circumstances, we decided against drawing conclusions from this data.
\section{Impact and Future Work}
Accuracy of software documentation is a necessary but not sufficient condition for its success. To empower software developers to make effective use of software documentation, it must be carefully designed in appearance, content, and more. Our vision, then, is to provide a more precise definition to better explore and transform what it means for software documentation to be of high quality, along with a research agenda enabled by such a quality framework. To that end, we have drawn from seminal work in the information and data quality communities to design a framework consisting of ten dimensions for software documentation quality.
Furthermore, our research agenda proposes future work to evaluate the impact of each dimension with end-users and improve the framework.
For one, we can verify the assessments of technical editors by introducing end-users to different versions of the documents (e.g., edited by technical editors based on quality criteria and original version) and observing their use.
Another direction is to explore trade-offs between quality attributes, such as whether readability outweighs structure in terms of document usability, and to further disambiguate similar dimensions (e.g., structure and cohesion).
We will further revisit the edits from the technical editors to extract surface-level (e.g., word count) and deep-level (e.g., cohesion of adjacent sentences) lexical, syntactic, and discourse features to build classification models for predicting documentation quality.
Such classification models can be used to assess and rank the quality of software documentation.
We believe these efforts are important because of the volume of software documentation on the web.
A simple Google search related to software development will return documentation that matches the query without explicitly considering the quality of the material.
For example, a query on `reading CSV file in r' returns blog articles and tutorials as the top results, yet our preliminary results demonstrate that on average, blog articles are ranked worst across all ten quality dimensions.
This is not to say that blog articles are essentially faulty and should be abandoned moving forward; rather, we hope to spur reflection on how end-users interact with blog articles and how each of our dimensions manifests uniquely under the genre's constraints, while nevertheless using what we currently know to emphasise more useful results.
Therefore, applying the framework can influence guidelines and recommendation systems for documentation improvement as well as automatic assessing and ranking systems for navigating the large volumes of documentation and emphasising high-quality documentation.
\section*{Acknowledgements}
The authors thank Emerson Murphy-Hill and Kathryn T.~Stolee for their contributions to this work. This research was undertaken, in part, thanks to funding from a University of Adelaide -- NC State Starter Grant and the Australian Research Council's Discovery Early Career Researcher Award (DECRA) funding scheme (DE180100153). This work was inspired by the International Workshop series on Dynamic Software Documentation, held at McGill's Bellairs Research Institute.
\section{Introduction}
\label{sec:intro}
DCASE 2020 task 4 \cite{dcase2020task4web} is the follow-up to DCASE 2019 task 4 \cite{dcase2019task4web}. While DCASE 2019 task 4 targeted exploring the usage of weakly labeled data, unlabeled data and synthetic data in sound event detection (SED), DCASE 2020 task 4 encourages participants to combine sound separation with SED in addition to the same task as in DCASE 2019. There are three subtasks in DCASE 2020 task 4: SED without sound separation, SED with sound separation and sound separation (using the SED baseline system). We participated in the first two subtasks. However, for the second subtask, we just use the baseline system for sound separation provided by the challenge organizers and focus on the combination of sound separation and SED.
In this paper, we describe in detail our systems for the two subtasks of DCASE 2020 task 4 in which we participated. The systems are based on the first-place system of DCASE 2019 task 4 developed by the Institute of Computing Technology (ICT), Chinese Academy of Sciences \cite{DCASE2019ICT}, which adopts the multiple instance learning framework with embedding-level attention pooling \cite{SDS} and a semi-supervised learning approach called guided learning \cite{Guidedlearning}. The multi-branch learning approach (MBL) \cite{MBL} is then incorporated into the system to further improve the performance. Multiple branches with different pooling strategies (embedding-level or instance-level) and different pooling modules (attention pooling, global max pooling or global average pooling) are used and share the same feature encoder. To better exploit the synthetic data with strong labels, inspired by multi-task learning \cite{Multitask}, a sound event detection branch is also added. Therefore, multiple branches pursuing different purposes and focusing on different characteristics of the data can help the feature encoder model the feature space better and avoid over-fitting. To incorporate sound separation into SED, we train models using the output of the baseline sound separation system and fuse the event detection results of models with and without sound separation.
\section{The DCASE 2019 task 4 system by ICT}
\label{sec:format}
Our systems for DCASE 2020 task 4 follow the framework of the DCASE 2019 task 4 system by ICT \cite{DCASE2019ICT}, which won the 1st place and the reproducible system award in the DCASE 2019 task 4 challenge. The system utilizes a convolutional neural network (CNN) with an embedding-level attention pooling module for weakly-supervised SED and uses disentangled features to solve the problem of unbalanced data with co-occurrences of sound events \cite{SDS}. To better use the unlabeled data jointly with the weakly-labeled data, the system adopts a semi-supervised learning method named guided learning \cite{Guidedlearning}, which uses different models for the teacher and student to achieve the different purposes implied in weakly-supervised SED. For the synthetic data, the system regards them as weakly annotated training data, and the time stamps of sound events in the strong labels are not used. The system is trained on the DCASE 2019 training data, including weakly-labeled data, synthetic data and unlabeled data, without data augmentation. The system that won the 1st place in DCASE 2019 task 4 was an ensemble of 6 systems with the same model architecture, with the ensemble obtained by averaging all the probabilities output by the systems.
\section{Method}
\label{sec:pagelimit}
\subsection{Guided learning for semi-supervised SED}
\label{ssec:subhead}
We use the guided learning method as our basic model framework. The guided learning method is composed of two parts: a professional teacher model (PT-model) and a promising student model (PS-model).
The PT-model is designed to achieve reliable audio tagging. As a result, the instance-level feature generated by the PT-model has a large receptive field.
The PS-model is designed to detect the sound events, where both the audio tags and event boundaries need to be predicted. Since the PS-model focuses on frame-level prediction, the instance-level feature generated by the PS-model has a small receptive field.
During training, the PT-model and PS-model use the same input data. For the data with weak labels in a batch of input data, the PT-model and PS-model both use the labels as their training target. For event category $c$, the loss function is calculated as:
\begin{equation}
Loss_{\rm{labeled}} = \sum_c cross\_entropy(y_c, \hat{\mathbf{P}}({y_c|\mathbf{x}}))
\end{equation}
where $y_c$ is the ground truth.
For the unlabeled data, the PS-model uses the pseudo labels generated by the PT-model as the training target and the PT-model does not have any training target. The pseudo label generated by the
PT-model is obtained as:
\begin{equation}
\psi^{\rm{PT}}_c =
\left\{ \begin{matrix}
1, &\hat{\mathbf{P}}^{\rm{PT}}(y_c|\mathbf{x}) \geq 0.5 \\
0, &\rm{otherwise}
\end{matrix}\right.
\label{eql}
\end{equation}
where $\hat{\mathbf{P}}^{\rm{PT}}(y_c|\mathbf{x})$ denotes the probability of audio tagging output by the PT-model. Then the loss function of the unlabeled data is:
\begin{equation}
Loss^{\rm{PS}}_{\rm{unlabeled}} = \sum_c cross\_entropy(\psi^{\rm{PT}}_c, \hat{\mathbf{P}}^{\rm{PS}}({y_c|\mathbf{x}}))
\end{equation}
where $\hat{\mathbf{P}}^{\rm{PS}}({y_c|\mathbf{x}})$ denotes the probability of audio tagging output by the PS-model.
After $s$ training epochs, the PS-model is able to achieve reliable audio tagging. Then, the audio tagging pseudo labels of unlabeled data output by the PS-model are also used as the training target of the PT-model. The loss function is calculated as:
\begin{equation}
Loss^{\rm{PT}}_{\rm{unlabeled}} = \alpha \sum_c cross\_entropy(\psi^{\rm{PS}}_c, \hat{\mathbf{P}}^{\rm{PT}}({y_c|\mathbf{x}}))
\end{equation}
where $\alpha$ is the hyperparameter to adjust the loss weight. In our experiments, we set $\alpha = 1 - 0.997 ^ {epoch - s}$, $s = 15$.
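A minimal NumPy sketch of the loss computation in Eqs.~(1)--(4) is given below; the clip-level batching and the unweighted sum reduction are illustrative assumptions.
\begin{verbatim}
import numpy as np

def bce(y, p, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

def guided_learning_losses(y_weak, p_pt_lab, p_ps_lab,
                           p_pt_unlab, p_ps_unlab, epoch, s=15):
    # Weakly labeled clips: both models fit the ground-truth tags (Eq. 1).
    loss_pt = bce(y_weak, p_pt_lab)
    loss_ps = bce(y_weak, p_ps_lab)
    # Unlabeled clips: the PS-model learns from hard pseudo labels
    # produced by the PT-model (Eqs. 2-3).
    psi_pt = (p_pt_unlab >= 0.5).astype(float)
    loss_ps += bce(psi_pt, p_ps_unlab)
    # After s epochs, the PT-model also learns from the PS-model,
    # with the ramped-up loss weight alpha (Eq. 4).
    if epoch > s:
        alpha = 1 - 0.997 ** (epoch - s)
        psi_ps = (p_ps_unlab >= 0.5).astype(float)
        loss_pt += alpha * bce(psi_ps, p_pt_unlab)
    return loss_pt, loss_ps
\end{verbatim}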
\subsection{Multi-branch learning for semi-supervised SED}
\label{ssec:subhead}
To further improve the performance, the MBL \cite{MBL} is incorporated into the guided learning system (MBL-GL). Multiple branches with different pooling strategies such as embedding-level pooling and instance-level pooling and different pooling modules such as attention pooling (ATP), global max pooling (GMP) and global average pooling (GAP), are used and share the same feature encoder. As shown in Figure \ref{fig1}, one branch is set as the main branch which takes part in training and detection and another branch is set as the auxiliary branch which is only used for training.
In our system, we apply the MBL into the PS-model. We choose the embedding-level ATP as the main branch and instance-level GMP or instance-level GAP as the auxiliary branch. The loss function is calculated as:
\begin{equation}
Loss_{\rm{PS-total}} = a L_{\rm{PS-main}} + b Loss_{\rm{PS-auxiliary}}
\end{equation}
where $a$ and $b$ are the loss weights of the main branch and the auxiliary branch. The $a$ is set to $1.0$ and the $b$ is set to $0.5$ or $1.0$ based on the performance of the validation set.
The reason why we apply the MBL method in the PS-model is that the PS-model outputs the final results of SED, while the PT-model only outputs the audio tagging results, which are only used in the training process of the PS-model. In our early study, we found that the improvement of MBL for audio tagging was limited compared with the improvement for SED.
By using multiple branches, we can also fuse the results of both branches to obtain a better result at the inference stage. In this paper, if the auxiliary branch is instance-level GAP, we ensemble the detection results of the main branch and the auxiliary branch by averaging the instance-level probabilities.
\begin{equation}
\hat{\mathbf{P}}_{\rm{fusion}}({y_{ct}}|\mathbf{{x_\textit{t}}}) =
\alpha \hat{\mathbf{P}}_{\rm{GAP}}({y_{ct}}|\mathbf{{x_\textit{t}}}) + (1- \alpha)
\hat{\mathbf{P}}_{\rm{ATP}}({y_{ct}}|\mathbf{{x_\textit{t}}})
\end{equation}
We set $\alpha = 0.5$ in our experiments.
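The corresponding computation is a one-liner each for Eq.~(5) and Eq.~(6); the sketch below assumes the branch losses and frame-level probabilities have already been computed.
\begin{verbatim}
def mbl_loss(loss_main, loss_aux, a=1.0, b=0.5):
    # Eq. (5): weighted sum of the main-branch (embedding-level ATP)
    # loss and the auxiliary-branch (instance-level GMP/GAP) loss.
    return a * loss_main + b * loss_aux

def fuse_frames(p_atp, p_gap, alpha=0.5):
    # Eq. (6): average the frame-level probabilities of both branches.
    return alpha * p_gap + (1 - alpha) * p_atp
\end{verbatim}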
In \cite{MBL}, the MBL approach is proposed only for weakly-labeled data. However, in this work, we need to use the unlabeled data to train our model. In our early experiments, we found that if the ratio between the weakly-labeled data and unlabeled data in each mini-batch was set to the ratio in the whole training set, which was about 1:9, MBL-GL performed poorly. The reason for this phenomenon may be that for MBL, although more branches can make the common feature fit various learning purposes and thus reduce the risk of overfitting, more branches also increase the risk that the common feature cannot fit any learning purpose when the training data contain much noise. For the guided learning framework, the training targets of the unlabeled data for the PS-model are produced by the PT-model, and therefore contain noise. To reduce the risk mentioned above, we increase the ratio between the amount of labeled data and unlabeled data to reduce the influence of noise in the training data. Besides, different from \cite{MBL}, we only use one auxiliary branch, since we find that using two auxiliary branches can decrease the performance of MBL-GL for the same reason: in guided learning, the training data for the PS-model contain noise, which may increase the difficulty of training multiple branches.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{ps_system_overview_4.pdf}
\vskip -0.12in
\caption{An overview of the architecture of the PS-model}
\label{fig1}
\vskip -0.21in
\end{figure}
\subsection{The detection branch for synthetic data}
\label{ssec:subhead}
In previous studies, multi-task learning for SED, in which detecting sound event boundaries and deciding the existence of sound events are treated as two tasks, has proved to be a good method to improve the performance of SED. However, this method needs data with strong labels for training. In this work, only the synthetic data has strong labels. To better exploit the synthetic data with strong labels, inspired by multi-task learning, a sound event detection branch (SEDB) is also added.
As shown in Figure \ref{fig1}, only the synthetic data are used for training the SEDB and the output of the SEDB is the probability of each instance. Then the loss function is calculated as:
\begin{equation}
Loss_{\rm{SEDB}} = \sum_c\sum_t cross\_entropy(y_{ct}, \hat{\mathbf{P}}({y_{ct}}|\mathbf{x_\textit{t}}))
\end{equation}
where $c$ denotes event category, $t$ denotes frame number, $\hat{\mathbf{P}}({y_{ct}}|\mathbf{x_\textit{t}})$ is the instance-level probability output by the SEDB and $y_{ct}$ is the instance-level ground truth.
While using the strong labels of the synthetic data to train the SEDB, we also use the weak labels of synthetic data to train other branches of the MBL-GL model.
In our method, we only apply the SEDB in PS-model because the PS-model is mainly used to detect the sound events.
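A minimal sketch of Eq.~(7) is given below, assuming frame-level probabilities and targets stored as $(T, C)$ arrays.
\begin{verbatim}
import numpy as np

def sedb_loss(y_strong, p_frames, eps=1e-7):
    # Eq. (7): cross entropy summed over all frames t and classes c,
    # computed only on synthetic clips with strong (frame-level) labels.
    p = np.clip(p_frames, eps, 1 - eps)
    return -(y_strong * np.log(p)
             + (1 - y_strong) * np.log(1 - p)).sum()
\end{verbatim}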
\subsection{Combination of sound separation and SED}
\label{ssec:subhead}
To incorporate sound separation into SED, we train SED models on the separated data output by the baseline sound separation system. We use the MBL-GL model with the instance-level GAP auxiliary branch (no SEDB) as the SS-SED model. Then, we fuse the SED results of models trained by real data and separated data to obtain the final SS-SED-Ensemble system result (for the SS-SED models, the input data are first separated before being fed to the model at the inference stage).
\begin{equation}
\begin{split}
\hat{\mathbf{P}}^{\rm{SS-SED-Ensemble}}({y_{ct}}|\mathbf{{x_\textit{t}}}) =
\sum_i{w_i \hat{\mathbf{P}}_{i}^{\rm{SED}}({y_{ct}}|\mathbf{{x_\textit{t}}})}
+ \\
\sum_j{w_j \hat{\mathbf{P}}_{j}^{\rm{SS-SED}}({y_{ct}}|\mathbf{{x^{\rm{SS}}_\textit{t}}})}
\end{split}
\end{equation}
where $\sum_i {w_i} + \sum_j{w_j} = 1$ and $\mathbf{{x^{SS}_\textit{t}}}$ is the feature of separated data.
\section{Experimental Setup}
\label{sec:typestyle}
\subsection{Model architecture}
\label{ssec:Model_architecture}
As shown in Figure 2, for the PS-model, the feature encoder consists of 3 CNN blocks, each of which contains a convolutional layer, a batch normalization layer and a ReLU activation layer. For the PT-model, the feature encoder consists of 9 CNN blocks. The main branch of the PS-model and the PT-model uses embedding-level ATP. And the PS-model has an auxiliary branch which uses instance-level GMP or GAP. The SEDB is optional and is added to the PS-model in some systems.
\subsection{Data}
\label{ssec:exp_data}
The training set of our system contains
a weakly-labeled set (1578 clips), an unlabeled set (14412 clips), and a strongly labeled synthetic set (2584 clips). The validation set contains 1168 strongly-labeled clips. The public test set contains 692 strongly-labeled clips. Data augmentation is also applied in the training process. For all training data, we use time-shifting and frequency-shifting to generate augmented data. For time-shifting, all frames (500) are shifted by exactly 90 steps. For frequency-shifting, all frequencies (64) are shifted by exactly 8 steps. The ratio between original data and augmented data is 8:1.
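A minimal sketch of this augmentation is shown below; the text fixes only the shift sizes, so the circular (wrap-around) shifting via \texttt{np.roll} is an assumption for illustration.
\begin{verbatim}
import numpy as np

def shift_augment(spec, t_shift=90, f_shift=8):
    # spec: (500, 64) time-frequency feature of one clip.
    t_aug = np.roll(spec, t_shift, axis=0)  # shift all 500 frames by 90
    f_aug = np.roll(spec, f_shift, axis=1)  # shift all 64 bins by 8
    return t_aug, f_aug
\end{verbatim}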
\subsection{Model training}
\label{ssec:model_training}
We use the Adam optimizer with learning rate 0.0018 to train the model. The learning rate is reduced by 20\% for every 10 epochs. The mini-batch size is set to be 64. For a mini-batch of data, we set the ratio of the weakly-labeled data:synthetic data:unlabeled data to be 3:1:12 (It means that there are 12 weakly-labeled clips, 4 synthetic clips and 48 unlabeled clips in a mini-batch).
We report the event-based marco F1 score \cite{mesaros2016metrics}. All the experiments are repeated 20 times with random initiation and we report both the average result and the best result of each model.
\subsection{System ensemble}
\label{ssec:subhead}
For system ensembling, we choose different kinds of systems to construct the ensemble system. We take 3 systems with instance-level GAP as the auxiliary branch and 3 systems with instance-level GMP as the auxiliary branch to construct the SED-Ensemble system. To make the systems sufficiently different, 2 of the 3 systems with the instance-level GMP auxiliary branch are equipped with the SEDB. To construct the SS-SED-Ensemble system (SS denotes sound separation), besides the 6 systems in the SED-Ensemble system, we add 3 other systems which are trained by sound-separated data and have the instance-level GAP auxiliary branch.
We take the weighted sum of all the system outputs as the final results. The ensembling function is:
\begin{equation}
\hat{\mathbf{P}}^{\rm{ensemble}}({y_{ct}}|\mathbf{{x_\textit{t}}}) =
\sum_i{w_i \hat{\mathbf{P}}_{i}^{\rm{single-system}}({y_{ct}}|\mathbf{{x_\textit{t}}})}
\end{equation}
where $\sum_i{w_i}=1$. The default value of $w_i$ may be $1 / number\_of\_systems$ and in our work, the values are tuned based on the performance on the validation set. Multiple systems with the same model are just trained with different random initializations.
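A minimal sketch of Eq.~(9) is given below, assuming each system outputs a $(T, C)$ array of frame-level probabilities.
\begin{verbatim}
import numpy as np

def ensemble(probs, weights=None):
    # probs: list of (T, C) frame-level probability arrays, one per
    # system; weights must sum to 1 (uniform by default), Eq. (9).
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    assert np.isclose(np.sum(weights), 1.0)
    return sum(w * p for w, p in zip(weights, probs))
\end{verbatim}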
\subsection{The competition systems}
\label{ssec:competition_system}
We participated in 2 subtasks which are SED without SS and SED with SS.
We submitted 4 systems for each subtask and the best system for subtask 1 achieves an event-based F1 score of 44.6\% and the best system for subtask 2 achieves an event-based F1 score of 44.7\%. For our best system of subtask 1, we use 6 systems to make the ensemble system. For our best system of subtask 2, besides the 6 systems in subtask 1, we use 3 systems trained by sound separated data to make the ensemble system.
\section{Experiment Results}
\label{sec:typestyle}
\begin{figure}[t]
\vskip -0.1in
\label{fig2}
\begin{minipage}{0.5\linewidth}
\centering
\centerline{\includegraphics[width=0.8\linewidth]{PT.pdf}}
\centerline{(a) PT}\medskip
\end{minipage}
\hfill
\begin{minipage}{0.55\linewidth}
\begin{minipage}{\linewidth}
\centerline{\includegraphics[width=0.9\linewidth]{PS.pdf}}
\centerline{(b) PS}\medskip
\end{minipage}
\begin{minipage}{0.9\linewidth}
\centering
\centerline{\includegraphics[width=\linewidth]{CNN-block.pdf}}
\centerline{(c) CNN Block}\medskip
\end{minipage}
\end{minipage}
\vskip -0.1in
\caption{The model architectures}
\end{figure}
\begin{table}[t]
\vskip -0.25in
\caption{The event-based F1 of individual systems}
\vskip 0.1in
\label{table1}
\centering
\begin{tabular}{lcc}
\toprule
\textbf{Model}
&\textbf{validation}
&
\textbf{public test}\\
\midrule
\hline
E-ATP + I-GAP-1&$ 0.447 $ & $ 0.463 $\\
E-ATP + I-GAP-2&$ 0.448 $ & $ 0.466 $\\
E-ATP + I-GAP-3&$ 0.451 $ & $ 0.461 $\\
E-ATP + I-GMP-1&$ 0.450 $ & $ 0.466 $\\
E-ATP + I-GMP-2 (with SEDB)&$0.448 $ & $ 0.477 $\\
E-ATP + I-GMP-3 (with SEDB)&$0.454 $ & $ 0.474 $\\
SS - E-ATP + I-GAP-1 & $ 0.378 $ & $ 0.404 $ \\
SS - E-ATP + I-GAP-2 & $ 0.381 $ & $ 0.390 $ \\
SS - E-ATP + I-GAP-3 & $ 0.378 $ & $ 0.394 $ \\
E-ATP + I-GMP-4 (not submitted)&$0.451 $ & $ 0.473 $\\
E-ATP + I-GMP-5 (not submitted)&$0.449 $ & $ 0.474 $\\
E-ATP + I-GAP-4 (not submitted)&$ 0.414 $ & $ 0.441 $\\
E-ATP + I-GAP-5 (not submitted)&$ 0.417 $ & $ 0.439 $\\
E-ATP + I-GAP-6 (not submitted)&$ 0.429 $ & $ 0.439 $\\
\bottomrule
\end{tabular}
\vskip -0.15in
\end{table}
Experimental results are shown in Table \ref{table1}, Table \ref{table_valid} and Table \ref{table_test}. Table \ref{table1} shows the results of individual systems which are used for system ensembling, and Table \ref{table_valid} and Table \ref{table_test} show the average and the best results of each kind of system along with the results of the ensemble systems. For all the experiments, we do not change the PT model and only change the PS models. In the tables, E-* denotes the embedding-level approach and I-* denotes the instance-level approach. SS-* denotes that the system is trained on sound-separated data.
For the baseline system E-ATP, we use the MBL-GL model structure. The only difference between the baseline system and the other two kinds of systems (E-ATP + I-GAP, E-ATP + I-GMP) is that the baseline system does not have any auxiliary branch. As shown in Table \ref{table_valid} and Table \ref{table_test}, we find that adding an auxiliary branch such as I-GMP or I-GAP has a beneficial effect.
For the ensemble system, we use 3 E-ATP + I-GMP and 3 E-ATP + I-GAP systems to construct it. Besides, to make the difference between models larger, 2 of the 3 E-ATP + I-GMP models use the SEDB. The ensemble system achieves an F1 score of 0.497 on the public test set and 0.467 on the validation set. Compared to only using systems without the SEDB, using some systems with the SEDB has the potential to improve the performance of the ensemble system: we use the 3 E-ATP + I-GAP and 3 E-ATP + I-GMP systems without the SEDB (the 2 systems with the SEDB are replaced by E-ATP + I-GMP-4 and E-ATP + I-GMP-5) to construct an ensemble system named SED-Ensemble\_6\_systems. It achieves F1 scores of 0.495 on the public test set and 0.463 on the validation set, which are not as good as those of the ensemble system using the SEDB, i.e., SED-Ensemble (submitted).
For the SS-SED-Ensemble system, besides the 6 models used in SED-Ensemble, 3 E-ATP + I-GAP models trained by separated data are used, and the SS-SED ensemble system achieves F1 scores of 0.495 on the public test set and 0.472 on the validation set. We use 3 other E-ATP + I-GAP systems (E-ATP + I-GAP-4, E-ATP + I-GAP-5, E-ATP + I-GAP-6) trained by real data to replace the SS-SED systems in the SS-SED-Ensemble system. The resulting ensemble system, named SED-Ensemble\_9\_systems, achieves F1 scores of 0.485 on the public test set and 0.463 on the validation set, which are lower than those of the SS-SED-Ensemble system. Although the performances of the 3 E-ATP + I-GAP systems trained by real data are better than those of the SS-E-ATP + I-GAP systems, they cannot improve the ensemble performance, while the SS-E-ATP + I-GAP systems can. This suggests that adding SS-SED systems to construct the ensemble system can achieve a better performance, since the SS-SED systems may capture some characteristics that the SED systems do not. If the performance of the SS-SED system can be further improved, it is expected to further improve the performance of the ensemble system.
\label{ssec:exp_res_valid}
\begin{table}[t]
\vskip -0.08in
\caption{The event-based F1 scores on the validation set}
\vskip 0.1in
\label{table_valid}
\centering
\begin{tabular}{lcc}
\toprule
\textbf{Model}
&\textbf{Average F1}
&
\textbf{Best F1}\\
\midrule
\hline
E-ATP&$ 0.421\pm 0.0115$ & $ 0.444 $\\
E-ATP + I-GMP&$0.430\pm 0.0088$ & $ 0.445 $\\
E-ATP + I-GAP&$0.431\pm 0.0156$ & $ 0.451 $\\
SED-Ensemble (submitted) & - & $0.467$\\
SED-Ensemble\_6\_systems & - & $ 0.463 $ \\
SS-SED-Ensemble (submitted) & - & $0.472$\\
SED-Ensemble\_9\_systems & - & $ 0.463 $ \\
\bottomrule
\end{tabular}
\vskip -0.15in
\end{table}
\label{ssec:exp_res_test}
\begin{table}[!t]
\vskip -0.08in
\caption{The event-based F1 scores on the public test set}
\vskip 0.1in
\label{table_test}
\centering
\begin{tabular}{lcc}
\toprule
\textbf{Model}
&\textbf{Average F1}
&
\textbf{Best F1}\\
\midrule
\hline
E-ATP&$ 0.449\pm 0.0124$ & $ 0.47 $\\
E-ATP + I-GMP&$0.458\pm 0.0125$ & $ 0.478 $\\
E-ATP + I-GAP&$0.450\pm 0.0130$ & $ 0.470 $\\
SED-Ensemble (submitted) & - & $0.497$\\
SED-Ensemble\_6\_systems & - & $ 0.495 $ \\
SS-SED-Ensemble (submitted) & - & $0.495$\\
SED-Ensemble\_9\_systems & - & $ 0.485 $ \\
\bottomrule
\end{tabular}
\vskip -0.15in
\end{table}
\section{Conclusions}
\label{sec:con}
This paper presents the details of our systems for DCASE 2020 task 4. The systems are based on the first-place system of DCASE 2019 task 4, which adopts the multiple instance learning framework with embedding-level attention pooling and a semi-supervised learning approach called guided learning. The multi-branch learning approach is then incorporated into the system to further improve the performance. Multiple branches with different pooling strategies and different pooling modules are used and share the same feature encoder. To better exploit the synthetic data, inspired by multi-task learning, a sound event detection branch is also added. Therefore, multiple branches pursuing different purposes and focusing on different characteristics of the data can help the feature encoder model the feature space better and avoid over-fitting. The sound separation method is also used, and we find that incorporating sound-separation-based systems into the ensemble has great potential to improve the system performance.
\newpage
\bibliographystyle{IEEEtran}
\section{Introduction}
In recent years, deep learning has enabled an unprecedented leap in humanity's ability to discover knowledge and comprehend the world. Nevertheless, the adoption of deep learning is now faced with two barriers, namely data fragmentation and privacy preservation\cite{yang2019federated}. Federated learning has come up as a new machine learning paradigm to tackle these issues, learning models from decentralized datasets in a secure way.
To preserve data privacy, federated learning usually employs various mechanisms like differential privacy (DP), homomorphic encryption (HE), secure multiparty computation (SMC), etc. Whereas DP does not prevent data leakage completely, and the intricate protocols that SMC introduces to the system render it virtually impractical, HE achieves a balance between security and operability. Moreover, one HE scheme named the Paillier encryption scheme\cite{paillier1999public-key} has been adopted to protect data privacy in neural networks\cite{ma2017secure}, logistic regression\cite{hardy2017private}, Bayesian networks\cite{wright2004privacy-preserving}, and clustering\cite{bunn2007secure}, showcasing its great generality as a privacy-preserving mechanism in machine learning.
However, the complicated operations and large operands of HE still impose overhead on federated learning that cannot be ignored. Research community and industry have been haunted by the question of how to provide \emph{secure}, \emph{accurate}, and yet \emph{efficient} federated learning. Previous effort such as FATE\cite{FATE}, a cutting edge federated learning system, has provided convenient interface to implement learning algorithms secured by Paillier HE, but the learning throughput is limited due to encryption by software.
In this work, we seek a hardware solution to improve the training throughput of federated learning, designing a homomorphic encryption framework based on FPGA, since FPGA acceleration cards are commonly available in datacenters\cite{putnam2014large} and usually achieve lower power consumption than GPUs. The framework devises a customized FPGA implementation of Paillier homomorphic encryption, and provides support for federated learning models with secure information exchange.
We demonstrate in this work that homomorphic encryption is usually composed of iterative operations that are hard to parallelize. Therefore, it is more reasonable to consider parallelism across data items to be encrypted, and make each encryption core compact and resource efficient, so as to maximize the overall throughput to handle the massive data in federated learning. Existing works fail to do this, as they either try to exhaust the resources on a single FPGA chip to produce one encryption unit that minimizes processing latency, or they mainly utilize the common circuit units (usually termed CLBs or LUTs) without making use of the digital signal processing (DSP) units, which are the powerful units for high-performance arithmetic on modern FPGAs. Moreover, most of them rely on the traditional register-transfer level (RTL) approach, lacking the flexibility of fast development and reconfiguration.
In this work, we base our design and implementation on high-level synthesis (HLS), which describes the FPGA circuit with a high-level programming language for flexibility, allowing the algorithm and operations to be parametric and portable, and we derive an analytical model that determines the encryption performance and carry out optimization along multiple dimensions.
Since the bulk of computation of the Paillier cryptosystem boils down to modular multiplication (ModMult), we focus on designing a compact architecture for the ModMult operation. We adopt the Montgomery algorithm\cite{montgomery1985modular} to carry out the operation, which is FPGA-friendly as it eliminates integer division operations. We figure out the key factors that determine the total en/decryption throughput on an FPGA chip, and conduct overall optimization of the Paillier processors in terms of clock cycles, resource consumption, clock frequency and memory usage to attain the best throughput.
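As a reference for the algorithmic idea (independent of our word-level FPGA architecture), the following Python sketch shows Montgomery multiplication: reduction modulo $R=2^{r}$ becomes a bit mask and the division by $R$ becomes a shift, so no division by the modulus $n$ is ever needed. The operand values are illustrative; Python~3.8+ is assumed for the modular inverse via \texttt{pow}.
\begin{verbatim}
def montgomery_setup(n, r_bits):
    # Precompute R = 2^r_bits and n' = -n^{-1} mod R for odd modulus n.
    R = 1 << r_bits
    n_prime = pow(-n, -1, R)           # Python 3.8+ modular inverse
    return R, n_prime

def mont_mul(a, b, n, R, n_prime, r_bits):
    # Returns a * b * R^{-1} mod n with no division by n: "mod R" is
    # a bit mask and "div R" is a right shift (FPGA-friendly).
    t = a * b
    m = ((t & (R - 1)) * n_prime) & (R - 1)
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

# Plain modular product x*y mod n: keep one operand in Montgomery form.
n, r_bits = 0xF123456789ABCDEF, 64     # odd modulus, n < 2^64
R, n_prime = montgomery_setup(n, r_bits)
x, y = 12345678901234567, 9876543210987654
assert mont_mul(x, (y * R) % n, n, R, n_prime, r_bits) == (x * y) % n
\end{verbatim}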
The hardware modules are built as OpenCL kernels and incorporated into FATE as an encryption library.
Each kernel performs en/decryption for a batch of data to relieve the kernel invocation overhead, and kernels are queued in the OpenCL command queue to help overlap data transfer with computation and hide latency. The proposed encryption framework is general and does not require any change to the model, while preserving security and accuracy.
We perform an extensive evaluation of the proposed framework, demonstrating that it reduces the iteration time for training linear models by up to 26\%, and the encryption time in each iteration by 71\%. Our hardware framework delivers a speedup of $10.6\times$ for encryption and $2.8\times$ for decryption compared with software solutions. Our circuit for the ModMult operation achieves better DSP efficiency than existing FPGA solutions, with comparable execution latency but lower usage of DSP blocks.
We summarize our contributions as follows.
\begin{itemize}
\item Introducing a hardware-based encryption framework for federated learning that achieves high efficiency without sacrificing security or utility, and supports accelerated computation in cloud datacenters.
\item Presenting an architecture for the Paillier homomorphic cryptosystem that takes a scalable approach and makes efficient use of FPGA resources, especially DSP blocks.
\item Incorporating the encryption framework into a cutting-edge federated learning framework, and showing an improvement in training throughput for federated learning models.
\end{itemize}
The rest of the article is organized as follows.
Section \ref{sec:rel} provides background on federated learning and existing privacy-preserving machine learning systems, and introduces the Paillier cryptosystem. Section \ref{sec:des} presents the design and implementation of the framework in detail. Section \ref{sec:eval} describes the evaluation methodology and results. Finally, Section \ref{sec:con} concludes the article.
\section{Background}\label{sec:rel}
\subsection{Federated learning with HE}
Federated learning is a privacy-preserving, decentralized distributed machine learning paradigm. One effective method of preserving privacy and securing computation is homomorphic encryption (HE), i.e.\ encryption schemes that allow computation directly on encrypted values. For applications of HE in federated learning, we refer the reader to \cite{hardy2017private}, \cite{gilad2016cryptonets}, \cite{aono2017privacy}, \cite{liu2019privacy}, \cite{liu2018secure}, \cite{chai2019secure}, which broadly cover machine learning models including linear models, neural networks and deep learning, boosting trees, transfer learning, and matrix factorization. Typically, HE is employed to encrypt the intermediate data produced during computation, which are then transferred and aggregated via homomorphic operations. For the nonlinear operations composing a model, such as the activation functions in a neural network, these works usually rely on approximation to make the model compatible with HE computation.
\subsection{Privacy-preserving ML systems}
There have also been machine learning systems that take privacy preservation into account, such as SecureML\cite{mohassel2017secureml}, a system in which two non-colluding parties collectively train a model, and Sage\cite{Sage}, a differentially private machine learning platform. Among them, FATE\cite{FATE} is a federated learning framework that provides abstractions and utilities for implementing algorithms and models, along with an architecture for distributed, multiparty machine learning. It mainly uses Paillier homomorphic encryption to guarantee data security. However, it relies purely on a software implementation of encryption, which greatly harms the execution efficiency of federated training. Our goal in this work is to provide a hardware solution to this issue.
\subsection{Paillier Homomorphic Encryption}
\begin{figure*}[th]
\centering
\includegraphics[width=0.75\linewidth]{homo}
\caption{General workflow of homomorphic encryption-based federated learning}
\label{fig:homo}
\end{figure*}
Paillier HE is an additively homomorphic encryption scheme: it allows addition of two encrypted values, and multiplication of an encrypted value by a plaintext scalar, without decrypting them.
In federated learning, multiple parties are usually involved, each holding a private dataset and maintaining a local model learned from the aggregated data, possibly with a coordinator managing the computation and data exchange among the parties (Figure \ref{fig:homo}). The role of Paillier homomorphic encryption is to encrypt the intermediate data in transit: in each training iteration, the coordinator receives the encrypted local updates from the parties, aggregates them using the homomorphic property, and sends the result back to each party for decryption and local model updating. In this way, each party obtains a model that extracts information from the aggregated dataset, without leaking its private information.
The Paillier HE scheme associates each party with a public key $(n, g)$ and a private key $(\lambda, \mu)$, where $n, g, \lambda, \mu$ are large integers ($n$ is typically 1024-bit in FATE). Messages and ciphertexts are also represented as long integers. A message $m$ is encrypted into a ciphertext $c$ by $c=g^mr^n\mod n^2$ with a random number $r$, and decryption is performed by $m=((c^\lambda \mod n^2)-1)/n\cdot\mu\mod n$.
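For concreteness, the following Python sketch implements these formulas directly. It is a toy illustration, not our FPGA implementation: it uses the common simplification $g = n + 1$, tiny primes, and Python 3.8's built-in modular inverse; a real deployment uses securely generated 512-bit primes and constant-time arithmetic.
\begin{verbatim}
from math import gcd

def keygen(p, q):
    # Toy Paillier key generation from two primes p and q.
    n = p * q
    g = n + 1                                     # common choice of g
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lambda = lcm(p-1, q-1)
    # mu = (L(g^lambda mod n^2))^(-1) mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m, r):
    n, g = pk                                     # c = g^m * r^n mod n^2
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk                                  # m = L(c^lambda) * mu mod n
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = keygen(17, 19)                           # toy primes for illustration
c1, c2 = encrypt(pk, 7, 2), encrypt(pk, 35, 3)
assert decrypt(pk, sk, c1 * c2 % pk[0] ** 2) == 42  # additive homomorphism
\end{verbatim}
The final assertion shows the homomorphic property exploited by federated learning: multiplying two ciphertexts yields an encryption of the sum of the plaintexts.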
This formulation shows that the majority of the computation in Paillier en/decryption is modular exponentiation (ModExp), which can be decomposed into a series of ModMult operations. Hence, the execution of ModMult has a decisive effect on the overall performance. We choose the Montgomery ModMult algorithm\cite{montgomery1985modular} to perform this operation because it is FPGA-friendly, in that it disposes of costly integer division. The Montgomery algorithm, shown in Algorithm \ref{alg:mul_word}, computes $XY\cdot 2^{-l}\mod M$ for $l$-bit integers $X$, $Y$ and $M$, dividing the integers into $k$-bit words. The body of the algorithm is a two-level loop, where the $i$th outer iteration (lines 2--8) folds the $i$th word of $Y$ into the running result, computing $S_{i+1} \equiv (S_i + X\cdot Y^i)\cdot 2^{-k} \pmod M$; the computation is further decomposed over the words of $X$, forming the inner loop (lines 4--6).
\begin{algorithm}[ht]
\KwIn{$X=\sum_{j=0}^{l/k-1}X^j\cdot 2^{jk}$, $Y=\sum_{j=0}^{l/k-1}Y^j\cdot 2^{jk}$, $M=\sum_{j=0}^{l/k-1}M^j\cdot 2^{jk}$, $r=2^k$}
\KwOut{$S = X \cdot Y / 2^{l} \mbox{ mod } M$}
$S_0\leftarrow 0$\;
\For{$i=0\ldots l/k-1$}{
$q \leftarrow ((S_i + X*Y^i)\cdot (-M^{-1})) \mbox{ mod }r$\;
\For{$j=0\ldots l/k$} {
$\bar{S}_{i+1}^j \leftarrow S_{i}^j + X^j*Y^{i} + q * M^j$\;
}
$S_{i+1} \leftarrow \bar{S}_{i+1} / 2^k$
}
\If{$S_{l/k} \geq M$}{$S_{l/k} \leftarrow S_{l/k} - M$\;}
\KwRet{$S_{l/k}$}
\caption{Montgomery Algorithm for Modular Multiplication with Radix $2^k$}
\label{alg:mul_word}
\end{algorithm}
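As a software cross-check of Algorithm \ref{alg:mul_word}, the Python sketch below implements the same word-level recurrence with arbitrary-precision integers. Variable names mirror the algorithm; $M$ must be odd (which holds for Paillier moduli) so that $M^{-1} \bmod 2^k$ exists.
\begin{verbatim}
def mont_modmult(x, y, m, l=1024, k=32):
    # Compute x * y * 2^(-l) mod m, word by word as in Algorithm 1.
    r = 1 << k
    m_neg_inv = (-pow(m, -1, r)) % r         # (-M^{-1}) mod 2^k, m odd
    s = 0
    for i in range(l // k):
        y_i = (y >> (i * k)) & (r - 1)       # i-th k-bit word of Y
        q = ((s + x * y_i) * m_neg_inv) % r  # line 3: q makes the sum
        s = (s + x * y_i + q * m) >> k       #   divisible by 2^k (lines 4-7)
    return s - m if s >= m else s            # final conditional subtraction

# A plain x * y mod m is obtained by pre-scaling one operand into
# Montgomery form: mont_modmult(x * pow(2, l, m) % m, y, m) == x * y % m.
\end{verbatim}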
\section{Design and Implementation}\label{sec:des}
\subsection{System Overview}
\begin{figure}[th]
\centering
\includegraphics[width=0.8\linewidth]{overview}
\caption{Overview of Our Encryption Framework}
\label{fig:overview}
\end{figure}
The overall architecture of our encryption framework is shown in Figure \ref{fig:overview}. The framework is envisioned to be hosted on cloud servers belonging to geo-distributed parties of federated learning. It includes components residing on both the host CPU and the FPGA, where a PCI-e bus provides communication between them.
The host CPU is responsible for the normal training workload of the machine learning model; it batches encryption requests to send to the FPGA, and encodes the floating-point numbers used by machine learning into integers compatible with HE schemes.
Apart from these necessities, our main contribution is the design of high-performance Paillier processors on the FPGA and the encapsulation of the hardware module as OpenCL kernels for invocation, which we detail in Sections \ref{ssec:micro} and \ref{ssec:impl} respectively.
\subsection{Micro-architecture for Montgomery ModMult}\label{ssec:micro}
A Paillier processor encapsulates units for all operations involved, i.e.\ modular multiplication, random number generation, and integer division, along with its local storage. We replicate the Paillier processor in HLS to deploy multiple copies, and a top-level function dispatches input data and collects results. Since the Paillier processors are independent and work in parallel, the overall throughput of an FPGA chip is given by
$$
\mathrm{Throughput} = \frac{\mbox{Total amount of resource}}{\mbox{Latency} \times \mbox{Resource consumption per core}},
$$
where resource broadly refers to multipliers, adders, memory, etc., and latency decomposes into the number of execution clock cycles divided by the clock frequency. Therefore, our design guideline is to optimize the Montgomery ModMult operation lying at the heart of the Paillier cryptosystem with respect to clock cycles, resource allocation, and clock frequency, in addition to memory usage. We elaborate on the optimizations along these dimensions below.
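Before turning to the individual dimensions, a back-of-the-envelope calculation illustrates the model when DSP blocks are the binding resource. All figures here are hypothetical placeholders rather than measurements of our board.
\begin{verbatim}
def chip_throughput(total_dsp, dsp_per_core, cycles, freq_hz):
    # Cores fit until DSPs run out; each finishes one op per latency.
    cores = total_dsp // dsp_per_core
    latency_s = cycles / freq_hz       # latency = cycles / clock frequency
    return cores / latency_s           # operations per second

# Hypothetical example: 6840 DSPs, 9 DSPs per ModMult core,
# (l/k)(l/k+1) = 32*33 cycles for l=1024, k=32, at 500 MHz.
print(chip_throughput(6840, 9, 32 * 33, 500e6))   # ~3.6e8 ModMult/s
\end{verbatim}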
\subsubsection{Clock Cycle}
Generally, the number of clock cycles required by an algorithm is lower-bounded by the number of operations and by the critical path in the dependency graph. As shown in Algorithm \ref{alg:mul_word}, the body of the Montgomery algorithm is a two-level loop consisting of $2(l/k)(l/k+1)$ multiplications. Thus, the ideal cycle count is $2(l/k)(l/k+1)$ divided by the number of multipliers, even ignoring the remaining operations. On the other hand, as each inner iteration depends on the one before it, it is hard to execute the inner iterations in parallel. Our goal is to deploy two multipliers per inner iteration and obtain a cycle count as close to $(l/k)(l/k+1)$ as possible.
Another dependency issue that deserves attention is the computation of $q$ in each outer iteration. In the $i$th iteration, $q$ depends on the value of $S_{i-1}$, yet it is needed by every iteration of the inner loop. If $q$ were computed serially before the start of each inner loop, its latency would be multiplied by the number of outer iterations.
To enforce a tight schedule, we apply the following optimizations in HLS:
\begin{itemize}
\item Unrolling the inner loop. This is done through an UNROLL directive in HLS, or by manually repeating the loop body. Unrolling does not by itself lead to parallel execution of the iterations; however, it is the only way to disassemble the operations composing the loop, giving the scheduler the flexibility to overlap operations as much as possible. Also, without unrolling we would not be able to insert the computation of $q$ into the middle of an inner loop.
\item Interleaving the $q$ computation with the inner loop. As discussed above, the $q$ value used in each inner loop must be computed beforehand. Since the $q$ value for computing $S_i$ only relies on the first few words of $S_{i-1}$, it is possible to start generating $q$ during the previous inner loop, as soon as those words are ready. In this way we obtain $q$ in advance and hide its latency.
\item Pipelining the outer loop. We achieve this by inserting a PIPELINE directive in HLS, with the initiation interval set to the number of iterations in an inner loop. With the inner loop unrolled, its disassembled operations are naturally pipelined by the scheduler, and the outer loop starts a new iteration each time a whole inner loop has been initiated, so that both loops are effectively pipelined.
\end{itemize}
\begin{figure*}[ht]
\begin{tabular}{cc}
\begin{minipage}[t]{0.70\linewidth}
\includegraphics[width = 1\linewidth]{sched.png}
\caption{Pipeline Execution of the Montgomery ModMult Operation}
\label{fig:sched}
\end{minipage}
\begin{minipage}[t]{0.28\linewidth}
\includegraphics[width = 1\linewidth]{pe.png}
\caption{Processing Element Implementing the Inner Loop of Algorithm \ref{alg:mul_word}}
\label{fig:pe}
\end{minipage}
\end{tabular}
\label{fig:design}
\end{figure*}
The resulting schedule is shown in Figure \ref{fig:sched}. We illustrate with an example in which operands are 4 words long (i.e.\ $l/k=4$) and each inner iteration takes 4 clock cycles to complete. Initially, the schedule computes $q$ for the first inner loop (not shown in the figure), and then initiates the inner iterations sequentially. In the meantime, as soon as $S^0$ is ready, it can be used to compute $q$ for the next inner loop. Hence, when the last inner iteration ends, the first iteration of the next inner loop can start immediately with the precomputed $q$. We thereby obtain a tight schedule that initiates one inner iteration per clock cycle. The resulting execution time is $(l/k)(l/k+1)$ cycles, plus the number of pipeline stages and a few cycles for data read-in and write-out.
\subsubsection{Resource Allocation}
In this work, we use the embedded DSP blocks on the FPGA chip to construct pipelined multipliers. The remaining logic, including adders, multiplexers, integer comparisons, and the finite state machine, is left to lookup tables (LUTs). As DSP blocks are scarce and expensive, we use them only for the heavy multiplications; we will further show that relying purely on LUTs to implement the ModMult operation is not economical (Section \ref{sec:eval}). We therefore focus on the usage of LUTs and DSPs, and reduce area and DSP usage without sacrificing performance.
We encapsulate the operations comprising an inner iteration into a processing element (PE), as shown in Figure \ref{fig:pe}. Each PE contains two multipliers performing the two independent multiplications $x*y$ and $q*m$; it accepts $S_{i-1}^j$ from the previous outer iteration and a carry word (not shown in the figure), adds them to the multiplication results, and outputs $S_{i}^j$ and a carry word. We then limit the number of PEs to one with an ALLOCATION directive in HLS. This avoids the resource bloat caused by loop unrolling, so that only the resources for computing one inner iteration are actually allocated, reducing the overall area of the micro-architecture.
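Functionally, a PE evaluates one instance of line 5 of Algorithm \ref{alg:mul_word} together with carry propagation. As a behavioral model only (the hardware maps the two products onto pipelined DSP multipliers):
\begin{verbatim}
def pe_step(s_word, x_word, y_word, q, m_word, carry, k=32):
    # One inner iteration: two k-bit products, accumulate, split carry.
    t = s_word + x_word * y_word + q * m_word + carry
    return t & ((1 << k) - 1), t >> k    # (result word, carry word)
\end{verbatim}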
We also employ the Karatsuba algorithm to construct DSP-conservative multipliers. As shown in Algorithm \ref{alg:karatsuba}, the Karatsuba algorithm performs an integer multiplication by recursively breaking it into three multiplications of half size. Its efficiency comes from requiring one multiplication fewer than the schoolbook algorithm at each level of recursion, and we take advantage of it to allocate DSPs according to the actual number of operations. For instance, a DSP48E1 block can carry out an $18\times 25$-bit multiplication, and a $32\times 32$-bit multiplication can be divided into three $16\times 16$-bit ones, taking up 3 DSP blocks.
\begin{algorithm}[ht]
\KwIn{Operands $X$ and $Y$, the length of operand $k$}
\KwOut{$S=X*Y$}
Let $X=\overline{X_hX_l}$, $Y=\overline{Y_hY_l}$, where $X_h, X_l, Y_h, Y_l$ are $k/2$-bit integers\;
$HH\leftarrow Karatsuba(X_h, Y_h)$\;
$LL\leftarrow Karatsuba(X_l, Y_l)$\;
$HL\leftarrow Karatsuba(X_h + X_l, Y_h + Y_l)$\;
$S\leftarrow HH * 2^k + (HL - HH - LL) * 2 ^{k/2} + LL$\;
\KwRet{$S$}
\caption{Karatsuba algorithm}
\label{alg:karatsuba}
\end{algorithm}
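In software the recursion is only a few lines; the sketch below mirrors Algorithm \ref{alg:karatsuba} with an explicit base case at DSP-sized operands (the hardware version instead unrolls the recursion into a fixed multiplier tree; the 17-bit cutoff is illustrative).
\begin{verbatim}
def karatsuba(x, y, k, base=17):
    # Multiply two k-bit integers using 3 half-size products per level.
    if k <= base:                        # small enough for one DSP multiply
        return x * y
    h = k // 2
    xh, xl = x >> h, x & ((1 << h) - 1)
    yh, yl = y >> h, y & ((1 << h) - 1)
    hh = karatsuba(xh, yh, k - h, base)
    ll = karatsuba(xl, yl, h, base)
    hl = karatsuba(xh + xl, yh + yl, max(h, k - h) + 1, base)  # middle
    return (hh << (2 * h)) + ((hl - hh - ll) << h) + ll
\end{verbatim}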
\subsubsection{Clock Frequency}
The DSP units on Xilinx FPGAs run at a maximum frequency of 400--500 MHz. To approach this frequency limit, we take the following measures:
\begin{itemize}
\item Declare the multipliers as pipelined multipliers. A pipelined multiplier takes multiple cycles to complete a multiplication, distributing the work and easing the burden on each cycle. This does not harm the multiplication throughput, since we have already resolved the dependency between its input and output.
\item Restrict the bitwidth of operands. The clock frequency is constrained by the critical path of the circuit, i.e.\ the longest chain of gates a signal must pass through within one cycle. Integer arithmetic such as addition or comparison usually produces a long carry chain, so we avoid computing on very long integers directly. In this work, we use 32-bit operands, and the maximum bitwidth involved is 64 bits.
\item Simplify the control logic. For the finite state machine in charge of the compute units, we use a one-hot encoding of the states for fast lookup and matching. The number of states is related to the number of iterations of each loop, which is small enough for one-hot encoding to be acceptable.
\end{itemize}
\subsubsection{Memory Usage}
Our design allocates each Paillier processor its own block RAM (BRAM) as a local buffer to hold the input/output data and the intermediate large integers involved in the computation. We do not share storage among processors, preventing data access contention.
Large integers are normally stored as arrays of words in BRAM. However, we observe that the input data for encryption, which are encoded from the floating-point numbers used in machine learning, have few effective digits compared with the length of the large integers. Therefore, we can store the input data as sparse vectors, i.e.\ recording only the non-zero words and their indices, reducing the memory footprint.
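For example (a sketch of the idea only; FATE's actual fixed-point encoder differs in detail), a double-precision value scaled into a 1024-bit integer occupies at most a couple of 32-bit words, so (index, word) pairs capture it compactly:
\begin{verbatim}
def to_sparse_words(value, frac_bits=53, k=32, words=32):
    # Fixed-point encode a float, then keep only non-zero k-bit words.
    n = int(round(value * (1 << frac_bits)))
    mask = (1 << k) - 1
    return [(j, (n >> (j * k)) & mask)
            for j in range(words) if (n >> (j * k)) & mask]

print(to_sparse_words(0.15625))   # [(1, 327680)]: 1 word out of 32
\end{verbatim}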
\subsection{Implementation}\label{ssec:impl}
We develop our encryption framework on an AWS F1 instance with the Xilinx SDAccel development suite. The core logic of the encryption and decryption functions is implemented with Xilinx high-level synthesis (HLS), which transforms an algorithm described in C/C++ into a tailor-made FPGA implementation. Directives such as loop pipelining and instance allocation are inserted into the HLS code to fine-tune the performance of the resulting architecture.
On the host side, we use the OpenCL API to access the acceleration hardware. OpenCL provides an abstraction over computing devices such as CPUs, GPUs, and FPGAs; an invocation of a device function is called a kernel. We use OpenCL to manage data transfer between host and device, to queue and invoke kernels, and to monitor execution events. We use the PyOpenCL API to implement a module that offloads cryptographic operations to the FPGA device, and incorporate it into the FATE framework.
Requests from the host side are divided into fixed-size batches, and each batch invokes one kernel on the device. Multiple kernels are queued in the OpenCL command queue, which helps overlap data transfer with computation and hide latency. We also preallocate buffers on the device and arrange them as a ring buffer, reusing buffers across kernels and avoiding frequent memory allocation.
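A condensed host-side sketch of this scheme with PyOpenCL is shown below. The kernel name, argument layout, and sizes are illustrative rather than the exact interface of our library, and a production version would use events or an out-of-order queue for tighter overlap.
\begin{verbatim}
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, open("paillier.cl").read()).build()

BATCH, PT_W, CT_W = 4096, 32, 64   # 1024-bit plaintext, 2048-bit ciphertext
mf = cl.mem_flags
ring = [(cl.Buffer(ctx, mf.READ_ONLY, BATCH * PT_W * 4),
         cl.Buffer(ctx, mf.WRITE_ONLY, BATCH * CT_W * 4)) for _ in range(4)]

def encrypt_batches(batches):      # batches: iterable of uint32 arrays
    for i, host_in in enumerate(batches):
        d_in, d_out = ring[i % len(ring)]   # reuse preallocated buffers
        cl.enqueue_copy(queue, d_in, host_in)            # host -> device
        # kernel name is illustrative
        prg.paillier_encrypt(queue, (BATCH,), None, d_in, d_out)
        host_out = np.empty(BATCH * CT_W, dtype=np.uint32)
        cl.enqueue_copy(queue, host_out, d_out)          # device -> host
        yield host_out
\end{verbatim}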
\begin{figure}[th]
\centering
\includegraphics[width=0.8\linewidth]{cmdQ.png}
\caption{Queueing kernels for execution}
\label{fig:cmdQ}
\end{figure}
\section{Evaluation}\label{sec:eval}
We conduct experiments for an extensive evaluation of the proposed encryption framework. We first perform a microscopic examination, comparing our implementation of the Paillier algorithm and the ModMult operation with software solutions and existing FPGA designs. We then study the improvement in the overall training performance of federated learning. The training tasks are carried out on the open-source version of the FATE machine learning framework. We choose two linear models, adopt Kaggle datasets on breast cancer\footnote{https://www.kaggle.com/uciml/breast-cancer-wisconsin-data} and motor temperature\footnote{https://www.kaggle.com/wkirgsn/electric-motor-temperature}, and partition the datasets vertically.
We attempt to answer the following questions empirically with the evaluation experiments:
\begin{itemize}
\item How do the Paillier processors perform, especially for the ModMult operation, in terms of throughput and resource-efficiency?
\item How does the hardware framework compare with software solutions of Paillier cryptosystem in terms of en/decryption throughput?
\item How much does the framework affect the training throughput of federated learning with respect to different models or algorithms?
\end{itemize}
\begin{table*}[h]
\centering
\begin{tabular}{cccccc}
\toprule
Implementation & Area(slice) & DSP & Clock frequency(MHz) & Execution time(us) & Throughput per DSP(op/s) \\
\midrule
This work & 483 & 9 & 500 & 8.81 & {\bf 12626} \\
\hline
\cite{SanA14} & 567 & 13 & 490 & 8.64 & 8903 \\
\hline
\cite{SongKNI10} & 180 & 1 & 447 & 135.4 & 7385 \\
\hline
\cite{HuangGE11} & 9268 & NA & 129 & 18.70 & NA \\
\hline
\end{tabular}
\caption{Comparison of ModMult operation}\label{tab:mm}
\end{table*}
Given the broad adoption of the ModMult operation, many implementations have been proposed, and we compare ours with them in Table \ref{tab:mm}. Since we target datacenter acceleration chips and applications, DSP efficiency is a key factor in evaluating an implementation. Compared with the state-of-the-art solution \cite{SanA14}, our ModMult module delivers similar latency but uses fewer DSPs, thanks to our precise limits on resource usage. The authors of \cite{SongKNI10} propose an implementation using only one DSP and one block RAM; however, without the Karatsuba algorithm, their version turns out to be less efficient than ours. The design of \cite{HuangGE11} uses circuit elements entirely without DSPs, and shows that such a ModMult module consumes a large area and limits the clock frequency, and is hence not recommendable. Moreover, most existing solutions are written at the register-transfer level (RTL), which describes the circuit directly but lacks the flexibility of parameterizing and reusing the ModMult module the way our HLS version does.
To evaluate the effectiveness of the ModMult scheduling, we compare the number of execution clock cycles with the theoretical ideal, $T=(l/k)(l/k+1)$ (Section \ref{sec:des}). As shown in Figure \ref{fig:cc}, for different operand sizes our implementation stays within 10\% of the ideal. The gap is mainly due to pipeline stages and the time for initialization and data transfer.
\begin{figure}[ht]
\begin{tabular}{cc}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width = 1\linewidth]{cc.pdf}
\caption{Number of execution clock cycles of ModMult operation}
\label{fig:cc}
\end{minipage}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width = 1\linewidth]{mc.pdf}
\caption{Throughput of FPGA and multicore processor}
\label{fig:mc}
\end{minipage}
\end{tabular}
\label{fig:tmp}
\end{figure}
\begin{figure}[ht]
\begin{tabular}{cc}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width = 1\linewidth]{phe1.pdf}
\caption{Encryption Throughput Compared with Software}
\label{fig:phe1}
\end{minipage}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width = 1\linewidth]{phe2.pdf}
\caption{Decryption Throughput Compared with Software}
\label{fig:phe2}
\end{minipage}
\end{tabular}
\label{fig:phe}
\end{figure}
To compare the FPGA against software solutions, we benchmark the framework against PHE, a popular Paillier library, as shown in Figures \ref{fig:phe1} and \ref{fig:phe2}. For a 1024-bit public key, our framework delivers speedups of $10.62\times$ and $2.76\times$ for encryption and decryption, respectively. We also compare the FPGA with a multicore processor running the {\tt libpaillier} library, as shown in Figure \ref{fig:mc}. The FPGA effectively outperforms a multicore CPU and is a sensible choice for accelerating computation-intensive applications.
Additionally, we test the modified FATE with linear models on the breast cancer and motor temperature datasets. We train a logistic regression model and a linear regression model on the two datasets respectively for 10 iterations, and record the timing.
Figure \ref{fig:fate1} and Figure \ref{fig:fate2} show the training iteration time and the encryption time per iteration, respectively. For linear models, our framework reduces the training iteration time by up to 26\%, and the encryption time within one iteration by 71.2\%.
\begin{figure}[ht]
\begin{tabular}{cc}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width = 1\linewidth]{fate1.pdf}
\caption{Improvement on Iteration Time}
\label{fig:fate1}
\end{minipage}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[width = 1\linewidth]{fate2.pdf}
\caption{Improvement on Encryption Time Per Iteration}
\label{fig:fate2}
\end{minipage}
\end{tabular}
\label{fig:fate}
\end{figure}
\section{Conclusion}\label{sec:con}
In this paper, we have demonstrated the importance of accelerating homomorphic encryption and modular arithmetic. We explored a compact architecture for the Paillier cryptosystem with an HLS-based approach, investigated how to optimize its performance, and incorporated the FPGA framework into a federated learning system. Extensive experiments demonstrate the effectiveness and efficiency of our encryption framework.
\newpage
\bibliographystyle{named}
\section{Invariances of CNNs}
There are typically six types of layers in a deep CNN architecture: an input layer, convolution layers, non-linearity (ReLU) layers, (MAX) pooling layers, fully connected layers, and an output (Softmax) layer. Pixels in the input layer and units in the other layers are arranged in three dimensions: \textit{width} (denoted by $W$), \textit{height} (denoted by $H$), and \textit{channel} (denoted by $C$). Each layer before the fully connected layers maps a 3D input volume to a 3D output volume. The 3D output volume is the activation of the current layer and becomes the 3D input volume of the next layer.
\begin{figure}[ht!]
\centering
\includegraphics[width = 0.4\linewidth, bb=0 0 200 200]{figures/rf.pdf}
\caption{Computing the receptive field size.
}
\label{fig:rf}
\end{figure}
\subsection{Receptive Fields}
The \textit{receptive field} of a unit resides in a $WH$ slice of its previous layer and is a maximal 2D region that affects the activation of the unit. Any element at the outside of the receptive field does not affect the unit~\cite{LLUZ-NIPS-2016}. The receptive field of a unit \textit{in a specific layer} (not the layer to which the unit belongs) resides in a $WH$ slice of the layer and is the maximal region that can possibly affect the unit's activation.
Suppose that a unit in the $(i+1)$th layer has a $w_{i} \times h_{i}$ receptive field in the $i$th layer, and that the $i$th layer has a $w_{K_{i}} \times h_{K_{i}}$ filter with strides of $s_{W_{i}}$ in width and $s_{H_{i}}$ in height. Then, the unit's receptive field in the $(i-1)$th layer is $w_{i-1} \times h_{i-1}$, given by:
\[ w_{i-1} = s_{W_{i}} \cdot w_{i} + w_{K_{i}} - s_{W_{i}} \]
\[ h_{i-1} = s_{H_{i}} \cdot h_{i} + h_{K_{i}} - s_{H_{i}} \]
For example, assume that a unit has a $3 \times 3$ receptive field in its previous layer, and that the previous layer has a $3 \times 3$ filter with strides of $s_W=3$ and $s_H=2$. Then, the unit will have a $9 \times 7$ receptive field in its previous layer as shown in Figure~\ref{fig:rf}. Similarly, we can compute the input-image receptive field of a unit in any layer.
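The recurrence is convenient to run backwards through a stack of layers; the short Python helper below (our own illustration, not code from the CyCNN release) reproduces the $9 \times 7$ example:
\begin{verbatim}
def receptive_field(layers, w=1, h=1):
    # Propagate a unit's receptive field back through layers, each
    # given as (filter_w, filter_h, stride_w, stride_h).
    for fw, fh, sw, sh in reversed(layers):
        w = sw * w + fw - sw
        h = sh * h + fh - sh
    return w, h

# A 3x3 receptive field seen through a 3x3 filter with strides (3, 2):
print(receptive_field([(3, 3, 3, 2)], w=3, h=3))   # (9, 7)
\end{verbatim}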
Deep CNN models increase the size of a unit's input-image receptive field by adding more convolutional and pooling layers. However, units located at the boundaries of a $WH$ slice have a much smaller input-image receptive field than units in the middle of the slice.
\subsection{Invariances}
A function $h$ is \textit{equivariant} to a transformation $g$ of input $\bm{x}$ if a corresponding transformation $g'$ of the output $h(\bm{x})$ can be found for all input $\bm{x}$, \textit{i.e.}, $h(g(\bm{x})) = g'(h(\bm{x}))$. When $g'$ is an identity function, $h$ is \textit{invariant} to $g$~\cite{SCRO-CVPR-2012, COWE-ICML-2016, DFKA-ICML-2016}. An invariant transformation is also equivariant, but not \textit{vice versa}.
CNN models are known to be able to automatically extract invariant features to translation and small rotation/scaling using three mechanisms: \textit{local receptive fields}, \textit{parameter sharing}, and \textit{pooling}.
Pooling layers are approximately invariant to small translation, rotation, and scaling. In other words, pooling layers provide CNN models with spatial invariance to small changes in feature positions because of their filter size and selection mechanism.
Sliding window and parameter sharing mechanisms in a convolutional layer make each unit have a small local receptive field (the same size as its filter) that sweeps the input volume, resulting in translation equivariance of the convolutional layer. Thus, each unit in the same channel of the layer detects the same feature irrespective of its position. However, the resulting feature map is not translation invariant.
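As a small numerical illustration (ours, not an experiment from this paper), translation equivariance of a convolutional layer can be checked directly; with circular padding the equivariance to cyclic shifts is exact, whereas with zero padding it holds only away from the boundaries:
\begin{verbatim}
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 16, 16)
w = torch.randn(1, 1, 3, 3)

def conv(x):
    xp = F.pad(x, (1, 1, 1, 1), mode='circular')  # wrap both axes
    return F.conv2d(xp, w)

shift = lambda t: torch.roll(t, shifts=(3, 5), dims=(2, 3))
# conv(shift(x)) == shift(conv(x)) up to floating-point error
print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5))
\end{verbatim}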
Even though a convolutional layer is not translation invariant, it builds up higher-level features by combining lower-level features. After going through the deep hierarchy of convolutional and pooling layers, a CNN model can capture more complex features. That is, each pooling layer in the hierarchy picks up more complex features because of the previous convolutional layers, and its spatial invariance to small changes in feature positions is amplified because of the previous pooling layers.
As a result, the last pooling layer captures the highest-level features and has the strongest spatial invariance among the convolutional and pooling layers. Moreover, for a unit in the last pooling layer, the deep hierarchy makes its input-image receptive field the largest. Thus, the deep hierarchy of convolutional and pooling layers enables the CNN model to integrate features over a large spatial extent in the original input image and to have moderate translation invariance.
Using a polar coordinate system and the cylindrically sliding window mechanism, CyCNN exploits and enhances such moderate translation invariance that already exists in CNN models.
\section{Conclusions}
In this paper, we propose CyCNN that exploits polar coordinate mapping and cylindrical convolutional layers to achieve rotational invariance in image classification tasks. Basically, any CNN model can be converted to a CyCNN model by applying a polar mapping to the input image and by replacing a convolutional layer with a cylindrical convolutional layer. The experimental result indicates that when the training dataset is not augmented, CyCNN has significantly better rotated-image-classification accuracy than conventional CNNs. CyCNN models still achieve competitive accuracies when both rotation and translation augmentations are applied to the training images. To speed up computation in cylindrical convolutional layers, we also propose a Winograd algorithm for cylindrical convolution.
One major advantage of CyCNN is that the polar coordinate conversion and cylindrical convolution can be easily applied to any conventional CNN model without a significant slowdown or the need for more memory. We expect further studies to adopt CyCNN in various CNN models to enhance rotational invariance in their tasks.
Our implementation of CyCNN is publicly available on \url{https://github.com/mcrl/CyCNN}.
\section{Achieving Rotational Invariance}
CyCNN exploits the moderate translation invariance property of CNNs to achieve rotation invariance. CyCNN converts the rotation of an input image to a translation by converting the Cartesian representation of the input image to the polar representation. Then, it applies the \textit{cylindrically sliding window} (CSW) mechanism to convolutional layers to maximize the existing translation invariance of CNNs.
\subsection{Polar Coordinate System}
We assume each pixel in an input image is a point in the Cartesian coordinate system without occupying any physical area. A point $(x, y)$ in the Cartesian coordinate system is converted to a point $(\rho, \phi)$ in the polar coordinate system as follows:
\begin{small}
\[
\rho = \sqrt{x^2 + y^2}
\]
\[
\phi = \left\{
\begin{array}{ll}
\arctan(\frac{y}{x}) & \;\mbox{if}\; x > 0 \;\mbox{and}\; y \geq 0 \\
\frac{\pi}{2} & \;\mbox{if}\; x = 0 \;\mbox{and}\; y > 0 \\
\pi + \arctan(\frac{y}{x}) & \;\mbox{if}\; x < 0 \;\mbox{and}\; y \geq 0\\
\pi + \arctan(\frac{y}{x}) & \;\mbox{if}\; x < 0 \;\mbox{and}\; y < 0\\
\frac{3\pi}{2} & \;\mbox{if}\; x = 0 \;\mbox{and}\; y < 0 \\
2\pi + \arctan(\frac{y}{x}) & \;\mbox{if}\; x > 0 \;\mbox{and}\; y < 0 \\
\mbox{undefined} & \;\mbox{if}\; x = 0 \;\mbox{and}\; y = 0
\end{array}
\right.
\]
\end{small}
\begin{figure}[ht!]
\centering
\subfigure[]{\includegraphics[width = 0.5\linewidth, bb=0 0 550 550]{figures/cartesian3.PNG}}
~
\subfigure[]{\includegraphics[width = 0.45\linewidth, bb=0 0 550 550]{figures/polar3.PNG}}
\caption{Converting the Cartesian coordinate system to the polar coordinate system. The concentric circles in the Cartesian coordinate system in (a) are mapped to vertical lines in the polar coordinate system in (b).
}
\label{fig:polar-point}
\end{figure}
Rotation in the Cartesian coordinate system becomes vertical translation in the polar coordinate system. For example, a point $p=(x_p, y_p)$ (colored in red) in the Cartesian coordinate system in Figure~\ref{fig:polar-point} (a) corresponds to a point $p=(\rho_p, \phi_p)$ in the polar coordinate system in (b). A point $p'$ in Figure~\ref{fig:polar-point} (a) is obtained by rotating the point $p$ around the origin $(0, 0)$ by $\phi_{p'} - \phi_p$ radians. The polar coordinate conversion maps $p'$ to the point $(\rho_{p'}, \phi_{p'})$ in (b). Since $\rho_{p'} = \rho_p$, the rotation becomes translation along the $\phi$ axis by $\phi_{p'} - \phi_p$ radians in (b). Note that vertical translation in the polar coordinate system can go over the boundary of the image. Rotation of the point $q=(x_q, y_q)$ (colored in blue) in Figure~\ref{fig:polar-point} (a) shows this case. Since the rotation crosses the $(x>0, y=0)$ ray in the Cartesian coordinate system, its vertical translation in the polar coordinate system wraps around the $\phi=2\pi$ boundary as shown in Figure~\ref{fig:polar-point} (b).
The log-polar coordinate system is exactly the same as the polar coordinate system except that it takes a logarithm when calculating the distance from the origin. The calculation of $\rho$ becomes $\rho = \log(\sqrt{x^2 + y^2})$. The log-polar representation of an image is inspired by the structure of the human eye and is widely used in various vision tasks~\cite{traver_bernardino_2010, 670927}.
\subsection{Input Image Conversion}
In CyCNN, an input image is first converted to the polar or log-polar representation. Assuming that the object is placed at the center of the image, we take the center of the input image as the origin. The origin becomes the bottom-left corner in the polar and log-polar representations.
\begin{figure}[htbp]
\centering
\subfigure[]{\includegraphics[width = 0.3\linewidth, bb=0 0 240 240]{figures/cheetah-cartesian.png}}
~
\subfigure[]{\includegraphics[width = 0.3\linewidth, bb=0 0 240 240]{figures/cheetah-polar.png}}
~
\subfigure[]{\includegraphics[width = 0.298\linewidth, bb=0 0 240 240]{figures/cheetah-logpolar.png}}
\vskip -0.1in
\caption{Polar and log-polar representations of an image. The red circle in (a) shows the bounding circle that indicates the maximum radius ($\rho_{max}$) of the polar coordinate. This circle is transformed to a straight line in polar and log-polar representations as shown in (b) and (c).}
\label{fig:cheetah-polar-comparison}
\end{figure}
Figure~\ref{fig:cheetah-polar-comparison} shows an example of the polar and log-polar representations of an image. The polar representation already has some distortion. This is inevitable because the central and outer sections of the original image cannot preserve their area in the converted image. Furthermore, the log-polar representation of the image is more distorted because the logarithm makes the central section of the original image occupy more area.
Note that we cannot exactly map each pixel in the original image to a pixel in the polar representation because each pixel physically occupies an area. Hence, to avoid the aliasing problem that frequently occurs in image conversion, we use the bilinear interpolation technique.
The maximum radius ($\rho_{max}$) in the polar representation is a configurable parameter in the Cartesian to polar coordinate conversion. That is, we can vary the size of \textit{bounding circle} of the original image. An example of the bounding circle is shown in Figure~\ref{fig:cheetah-polar-comparison} (a). In this paper, we set the size of the bounding circle to maximally fit in the original image as shown in Figure~\ref{fig:cheetah-polar-comparison} (a).
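One plausible realization of this conversion with OpenCV (the library used in our implementation; the exact calls below are illustrative) is:
\begin{verbatim}
import cv2

def to_polar(img, log_polar=False):
    # Take the image center as the origin and the largest inscribed
    # circle as the bounding circle (rho_max); bilinear interpolation
    # avoids aliasing.
    h, w = img.shape[:2]
    center = (w / 2.0, h / 2.0)
    rho_max = min(w, h) / 2.0
    flags = cv2.INTER_LINEAR
    flags |= cv2.WARP_POLAR_LOG if log_polar else cv2.WARP_POLAR_LINEAR
    return cv2.warpPolar(img, (w, h), center, rho_max, flags)
\end{verbatim}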
\begin{figure}[htbp]
\vskip 0.4in
\centering
\subfigure[]{\includegraphics[width = 0.17\linewidth, bb=0 0 300 300]{figures/cheetah-transform-rot0.png}}
~
\subfigure[]{\includegraphics[width = 0.17\linewidth, bb=0 0 300 300]{figures/cheetah-transform-rot90.png}}
~
\subfigure[]{\includegraphics[width = 0.165\linewidth, bb=0 0 300 300]{figures/cheetah-transform-rot180.png}}
~
\subfigure[]{\includegraphics[width = 0.17\linewidth, bb=0 0 300 300]{figures/cheetah-transform-rot270.png}}
\vskip -0.1in
\caption{Images in the top row are generated by rotating the image in (a) by (b) 90$^{\circ}$, (c) 180$^{\circ}$, and (d) 270$^{\circ}$. Their corresponding polar coordinate representations are at the bottom row.}
\label{fig:cheetah-transform}
\end{figure}
Another example of the Cartesian to polar coordinate conversion is shown in Figure~\ref{fig:cheetah-transform}. The top row of Figure~\ref{fig:cheetah-transform} shows a lion image rotated by 90$^{\circ}$, 180$^{\circ}$, and 270$^{\circ}$ in the Cartesian representation. The bottom row shows their corresponding polar representations. We see that rotations in the Cartesian representation become cyclic vertical translations in the polar representation.
\subsection{Cylindrically Sliding Windows}
As mentioned before, a CNN model can integrate features over a large spatial extent in the original input image. This is because the deep hierarchy of convolution and pooling layers makes the effective receptive field of each unit larger. It also allows the CNN model to have moderate translation invariance.
\begin{figure}[ht!]
\centering
\subfigure[]{\includegraphics[width = 0.17\linewidth, bb=0 0 230 230]{figures/cheetah-frame0.PNG}}
~
\subfigure[]{\includegraphics[width = 0.17\linewidth, bb=0 0 230 230]{figures/cheetah-frame90.PNG}}
~
\subfigure[]{\includegraphics[width = 0.17\linewidth, bb=0 0 230 230]{figures/cheetah-frame180.PNG}}
~
\subfigure[]{\includegraphics[width = 0.17\linewidth, bb=0 0 230 230]{figures/cheetah-frame270.PNG}}
\vskip -0.1in
\caption{The effect of the input-image receptive field of a unit in CyCNN. (a) is the original image in the polar representation. (b), (c) and (d) are created by rotating the original image by 90$^{\circ}$, 180$^{\circ}$ and 270$^{\circ}$ respectively.}
\label{fig:erf}
\end{figure}
The input to CyCNN is an image that is in the polar representation. Consider the images (a), (b), (c), and (d) in Figure~\ref{fig:erf}. (a) is the original image represented in the polar coordinate system. (b), (c), and (d) are images created by rotating the original image by 90$^{\circ}$, 180$^{\circ}$, and 270$^{\circ}$ before being represented in the polar coordinate system. The red rectangle is the input-image receptive field of a unit in some pooling layer. When a CNN model is trained with the image in (a), the unit captures and learns important features (a pair of eyes in this example) in the lion's face. However, when the 270$^{\circ}$-rotated image in (d) is used as a test image, the CNN model may not recognize it as a lion because the two eyes are too far apart. Even if the receptive field captures the two eyes, the CNN model might not be able to infer them as the two eyes because their relative positions are switched.
\begin{figure}[htbp]
\centering
\centerline{\includegraphics[width=0.6\linewidth, bb=0 0 250 250]{figures/cylinder.pdf}}
\caption{Cylindrically Sliding Windows (CSW) in CyCNN.}
\label{fig:sliding-window}
\end{figure}
To solve this problem, we propose \textit{cylindrically sliding windows} (CSW) for units in convolutional layers. We call such a convolutional layer a \textit{cylindrical convolutional layer} (a \textsf{CyConv} layer in short). The CSW is illustrated in Figure~\ref{fig:sliding-window}. Instead of performing zero padding at the top and bottom boundaries of the input to the convolutional layer, pixels in the first row (row 0) are copied to the boundary at the bottom, and pixels in the last row (row 7) are copied to the boundary at the top of the input. As usual, zero padding is applied to the left and right boundaries of the input. This process is the same as rolling the input vertically to make the top boundary and the bottom boundary meet. Rolling the input in this way results in a cylinder-shaped input. The \textsf{CyConv} layer cyclically scans the surface of the cylindrical input with its filter.
Essentially, CSW vertically extends the size of a boundary unit's receptive field in the original input image. Conceptually, CSW wraps the input around and provides it to each convolutional layer. By combining CSW with the deep hierarchy of convolutional and pooling layers, CyCNN captures more relationships between features.
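A minimal PyTorch sketch of a \textsf{CyConv} layer built along these lines is shown below; the class and its internals are illustrative and stand in for our actual CUDA kernels (an odd, square filter is assumed, and the layer handles its own padding, so it should be constructed with the default \texttt{padding=0}):
\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class CyConv2d(nn.Conv2d):
    # Convolution with cylindrically sliding windows: the vertical
    # (phi) axis is padded by wrapping rows around, the horizontal
    # (rho) axis is zero-padded, then a plain convolution is applied.
    def forward(self, x):
        ph = self.kernel_size[0] // 2
        pw = self.kernel_size[1] // 2
        x = F.pad(x, (0, 0, ph, ph), mode='circular')  # wrap top/bottom
        x = F.pad(x, (pw, pw, 0, 0))                   # zeros left/right
        return F.conv2d(x, self.weight, self.bias, self.stride)
\end{verbatim}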
\subsection{Converting a CNN to a CyCNN}
We can transform any CNN model into a CyCNN model easily by applying the Cartesian to polar coordinate conversion to the input image and by replacing every convolutional layer with a \textsf{CyConv} layer. Most conventional CNNs use convolutional layers with paddings of size 1, which makes the input and output feature maps have the same $WH$ size. This allows us to keep all other layers in the same configuration.
Since \textsf{CyConv} layers only extend the size of the boundary units' receptive fields, a CyCNN model has exactly the same number of learnable parameters as the corresponding original CNN model. Also, optimizations used in convolutional layers, such as the Winograd convolution algorithm~\cite{7780804}, can be applied to \textsf{CyConv} layers. The Cartesian to polar coordinate conversion takes only a small portion of the overall computation. Hence, the CyCNN model requires the same amount of memory and runs at almost the same speed as the original CNN model.
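The conversion can be automated by walking the module tree of an existing model and swapping layers in place; the sketch below reuses the illustrative \texttt{CyConv2d} class from the previous subsection and assumes the original layers use padding of size 1, which \texttt{CyConv2d} reproduces internally:
\begin{verbatim}
import torch.nn as nn

def cylindrify(model):
    # Recursively replace every nn.Conv2d with a CyConv2d carrying
    # the same configuration and the same weights.
    for name, child in model.named_children():
        if isinstance(child, nn.Conv2d):
            cyconv = CyConv2d(child.in_channels, child.out_channels,
                              kernel_size=child.kernel_size,
                              stride=child.stride,
                              bias=child.bias is not None)
            cyconv.load_state_dict(child.state_dict())
            setattr(model, name, cyconv)
        else:
            cylindrify(child)
    return model
\end{verbatim}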
\subsection{Cylindrical Winograd Convolution}
\begin{figure}[htbp]
\vskip 0.3in
\centering
\centerline{\includegraphics[width=0.8\linewidth, bb=0 0 550 550]{figures/cywino.pdf}}
\caption{Data tiling phases of the Winograd algorithm and the \textsf{CyWino} algorithm.}
\label{fig:cywino}
\end{figure}
To train CyCNN in a reasonable time frame, we propose to implement a \textsf{CyConv} layer using the Winograd algorithm~\cite{7780804}. We call this layer a \textit{cylindrical Winograd convolutional} layer (a \textsf{CyWino} layer in short).
The Winograd algorithm consists of five steps: (1) $4\times4$ tiles are fetched with a stride of 2 from the padded input image (i.e., the data tiling phase), (2) $3\times3$ filters are fetched, (3) both input tiles and filters are transformed into $4\times4$ Winograd tiles, (4) element-wise multiplications are performed, and (5) the results are transformed back into the output features. The \textsf{CyWino} layer performs the same computation as the original Winograd convolution layer except for the first step.
Figure \ref{fig:cywino} describes the difference between the Winograd algorithm and the \textsf{CyWino} algorithm. In this example, a small $4\times4$ zero-padded input image is assumed to be convolved with a $3\times3$ filter, where the padding size is 1 (panel (A)). In this case, we fetch four $4\times4$ tiles as shown in the figure (panel (a)). In the case of \textsf{CyWino}, we fill the padding considering the nature of CSW to generate a new padded input (panel (B)) and fetch $4\times4$ tiles as usual (panel (b)). The rest of the computation is the same as that of the original Winograd algorithm.
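To make the difference concrete, the following single-channel numpy sketch implements the \textsf{CyWino} computation with the standard $F(2\times2, 3\times3)$ Winograd transform matrices; our actual CUDA kernels are of course batched and far more elaborate:
\begin{verbatim}
import numpy as np

# Winograd F(2x2, 3x3) transform matrices (Lavin and Gray, 2016).
BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0],
               [0, -1, 1, 0], [0, 1, 0, -1]], dtype=np.float64)
G = np.array([[1, 0, 0], [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5], [0, 0, 1]], dtype=np.float64)
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=np.float64)

def cywino_conv2d(x, g):
    # x: (H, W) polar-mapped input with H, W even; g: (3, 3) filter.
    H, W = x.shape
    xp = np.concatenate([x[-1:], x, x[:1]], axis=0)  # wrap rows (CSW)
    xp = np.pad(xp, ((0, 0), (1, 1)))                # zero-pad columns
    U = G @ g @ G.T                                  # 4x4 filter tile
    y = np.empty((H, W))
    for i in range(0, H, 2):                         # 4x4 tiles, stride 2
        for j in range(0, W, 2):
            V = BT @ xp[i:i + 4, j:j + 4] @ BT.T
            y[i:i + 2, j:j + 2] = AT @ (U * V) @ AT.T
    return y
\end{verbatim}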
\section{Experiments}
In this section, we evaluate CyCNN using four image datasets: MNIST~\cite{LBBH-IEEE-Proc-1998}, Street View House Numbers (SVHN)~\cite{Netzer2011}, CIFAR-10 and CIFAR-100~\cite{Krizhevsky09learningmultiple}.
We are aiming at showing the effectiveness of the polar mapping and cylindrically sliding windows by comparing CyCNN models with conventional CNN models.
\subsection{CNN Models}
We take VGG19 (with batch normalization)~\cite{SIZI-ICLR-2015} and ResNet56~\cite{HZRS-CVPR-2016} as our baseline models. By applying the polar transformation to the input image and replacing convolutional layers with \textsf{CyConv} layers, we obtain CyVGG19 and CyResNet56 models. Suffixes -P and -LP indicate that input images are transformed into polar and log-polar representations, respectively.
\subsection{Datasets}
\textbf{MNIST}. The MNIST dataset~\cite{LBBH-IEEE-Proc-1998} is an image database of handwritten digits. It consists of a training set of 60,000 images and a test set of 10,000 images. The digits in the images have been size-normalized and centered in a fixed-size $28 \times 28$ image. To match the size of images with other datasets, every image is resized to $32 \times 32$.
\textbf{SVHN}. The Street View House Numbers~\cite{Netzer2011} (SVHN) dataset consists of over 600,000 $32 \times 32$ color images of house numbers in Google Street View. The training set consists of 75,237 images, and the test set consists of 26,032 images. The remaining images are extra training data that are not used in this experiment. Unlike the MNIST dataset, the digits 6 and 9 are hardly distinguishable when images are rotated. Thus, we treat these two digits as the same class during training and testing.
\textbf{CIFAR-10 and CIFAR-100}. The CIFAR-10 dataset~\cite{Krizhevsky09learningmultiple} consists of 60,000 $32 \times 32$ colour images in 10 classes with 6,000 images per class. There are 50,000 training images (5,000 for each class) and 10,000 test images (1,000 images for each class) in CIFAR-10. The CIFAR-100 dataset is the same as the CIFAR-10 dataset except that it has 100 image classes. Thus, there are 500 training images and 100 test images for each class in CIFAR-100.
In every dataset, 10\% of the training set is used as the validation set. The only additional preprocessing we perform on input images is normalization.
\subsection{Implementation}
We use the PyTorch~\cite{NEURIPS2019_9015} library to implement and evaluate models. We manually implement \textsf{CyConv} layers in CUDA~\cite{cuda} to train CyCNN models on GPUs. We integrate the CUDA kernels into PyTorch. We use the OpenCV~\cite{opencv_library} library to implement image rotation and the polar coordinate conversion.
We manually implement the \textsf{CyWino} layer as well. When we use the \textsf{CyWino} layer, training CyVGG19~\cite{SIZI-ICLR-2015} becomes $15\times$ faster than when using the plain \textsf{CyConv} layer.
When we manually implement a convolutional layer using the original Winograd convolution algorithm and measure its execution time, it is almost the same as that of the \textsf{CyWino} layer. Thus, the \textsf{CyWino} algorithm can be integrated into highly optimized Winograd convolution implementations (\textit{e.g.}, cuDNN) with only negligible overhead.
\subsection{Training and Testing}
We train every model using the Stochastic Gradient Descent (SGD) optimizer with a weight decay of $1\times10^{-5}$ and a momentum of $0.9$. The cross-entropy loss is used to compare the output with the label. The learning rate is initialized to 0.05 and halved whenever the validation loss does not decrease for 5 epochs. Training stops when there is no improvement in validation accuracy for 15 epochs.
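In PyTorch, this schedule corresponds roughly to the following setup (a sketch; the stand-in model and variable names are ours):
\begin{verbatim}
import torch

model = torch.nn.Linear(10, 2)  # stand-in for VGG19 / ResNet56
optimizer = torch.optim.SGD(model.parameters(), lr=0.05,
                            momentum=0.9, weight_decay=1e-5)
criterion = torch.nn.CrossEntropyLoss()
# Halve the learning rate when the validation loss has not decreased
# for 5 epochs; call scheduler.step(val_loss) once per epoch. Training
# itself stops when the validation accuracy has not improved for 15
# epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=5)
\end{verbatim}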
In every experiment, models are tested with a rotated version of each dataset. That is, each image in the datasets is rotated by a random angle in $[0^{\circ}, 360^{\circ})$. The rotated datasets are denoted as MNIST-r, SVHN-r, CIFAR-10-r, and CIFAR-100-r.
We train each model with four different types of training data augmentation: no augmentation (original dataset), rotation (suffixed with -r), translation (suffixed with -t), and rotation+translation (suffixed with -rt). Rotation augmentation in training is done in the same way as for the test datasets. The translation augmentation randomly translates each image vertically by at most a quarter of the height of the image and horizontally by at most a quarter of the width of the image.
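With torchvision, the four augmentation variants can be composed along these lines (one plausible realization; the exact calls are not fixed by our implementation description):
\begin{verbatim}
from torchvision import transforms

rotate = transforms.RandomRotation(degrees=(0, 360))
translate = transforms.RandomAffine(degrees=0, translate=(0.25, 0.25))

augment = {
    'none': transforms.Compose([transforms.ToTensor()]),
    '-r':   transforms.Compose([rotate, transforms.ToTensor()]),
    '-t':   transforms.Compose([translate, transforms.ToTensor()]),
    '-rt':  transforms.Compose([rotate, translate,
                                transforms.ToTensor()]),
}
\end{verbatim}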
All experiments are done without extensive hyper-parameter tuning or per-model fine-tuning. We verified that the test accuracy is stable across multiple training runs.
\subsection{Accuracy}
\begin{table}[h]
\caption{Test accuracies on rotated datasets. Models are trained with original datasets without any data augmentation.}
\label{tab:result1}
\begin{center}
\begin{scriptsize}
\vskip 0.1in
\begin{tabular}{c||cccc}
\toprule
Train Dataset & MNIST & SVHN & CIFAR-10 & CIFAR-100\\
\hline
Test Dataset & MNIST-r & SVHN-r & CIFAR-10-r & CIFAR-100-r \\
\hline
VGG19 & 47.20\% & 36.12\% & 32.56\% & 16.73\%\\
VGG19-P & 55.53\% & 43.24\% & 38.21\% & 19.96\%\\
VGG19-LP & 55.38\% & 44.76\% & 37.3\% & 18.14\%\\
\textbf{CyVGG19-P} & \textbf{85.49\%} & \textbf{79.77\%} & \textbf{57.58\%} & \textbf{29.76\%} \\
\textbf{CyVGG19-LP} & \textbf{82.90\%} & \textbf{73.91\%} & \textbf{55.94\%} & \textbf{28.32\%} \\
ResNet56 & 44.11\% & 35.34\% & 32.05\% & 17.00\%\\
ResNet56-P & 58.95\% & 50.39\% & 38.74\% & 21.26\%\\
ResNet56-LP & 59.55\% & 48.95\% & 37.54\% & 20.06\%\\
\textbf{CyResNet56-P} & \textbf{96.71\%} & \textbf{80.25\%} & \textbf{61.27\%} & \textbf{34.10\%} \\
\textbf{CyResNet56-LP} & \textbf{96.84\%} & \textbf{76.71\%} & \textbf{57.08\%} & \textbf{29.15\%} \\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
Table \ref{tab:result1} shows the classification accuracies of the models trained on the original datasets without any data augmentation. Applying the polar coordinate conversion to input images increases classification accuracies for both VGG19 and ResNet56 models. This shows that applying the polar mapping to the input images is beneficial to conventional CNN models. CyCNN significantly improves classification accuracies by exploiting the cylindrical property of polar coordinates. This indicates that our approach is effective in achieving rotational invariance in CNNs.
\begin{table}[h]
\caption{Test accuracies on rotated datasets. Models are trained with rotation-augmented training datasets.}
\label{tab:result2}
\begin{center}
\begin{scriptsize}
\begin{tabular}{c||cccc}
\toprule
Train Dataset & MNIST-r & SVHN-r & CIFAR-10-r & CIFAR-100-r\\
\hline
Test Dataset & MNIST-r & SVHN-r & CIFAR-10-r & CIFAR-100-r \\
\hline
VGG19 & 99.61\% & 88.70\% & 85.61\% & 57.87\%\\
VGG19-P & 99.35\% & 88.19\% & 75.88\% & 44.83\%\\
VGG19-LP & 98.65\% & 87.80\% & 72.03\% & 38.73\%\\
\textbf{CyVGG19-P} & \textbf{99.43\%} & \textbf{88.16\%} & \textbf{75.06\%} & \textbf{41.36\%} \\
\textbf{CyVGG19-LP} & \textbf{98.14\%} & \textbf{87.20\%} & \textbf{71.65\%} & \textbf{37.16\%} \\
ResNet56 & 99.49\% & 89.35\% & 83.92\% & 57.94\%\\
ResNet56-P & 99.41\% & 87.87\% & 73.16\% & 41.99\%\\
ResNet56-LP & 98.71\% & 87.86\% & 68.05\% & 38.33\%\\
\textbf{CyResNet56-P} & \textbf{99.47\%} & \textbf{87.47\%} & \textbf{71.24\%} & \textbf{41.94\%} \\
\textbf{CyResNet56-LP} & \textbf{98.30\%} & \textbf{87.21\%} & \textbf{67.38\%} & \textbf{37.94\%} \\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
We would also like to see how training data augmentations affect the accuracies of the models. Table~\ref{tab:result2}, Table~\ref{tab:result3}, and Table~\ref{tab:result4} contain the experimental results.
\textbf{Rotation Augmentation.} A rotational augmentation of a dataset is more beneficial to conventional CNN models because they are strong under translation but weak under rotation. As expected, CyCNN fails to improve its classification accuracy over the baseline CNN models. CNN-P/LP models and the corresponding CyCNN models achieve almost the same classification accuracies. This implies that the loss of accuracy is a side-effect of the polar mapping: a translation in the original image does not remain a translation in the converted image.
\begin{table}[h]
\caption{Test accuracies on translated datasets. Models are trained with translation-augmented training datasets.}
\label{tab:result3}
\begin{center}
\begin{scriptsize}
\begin{tabular}{c||cccc}
\toprule
Train Dataset & MNIST-t & SVHN-t & CIFAR-10-t & CIFAR-100-t\\
\hline
Test Dataset & MNIST-r & SVHN-r & CIFAR-10-r & CIFAR-100-r \\
\hline
VGG19 & 46.98\% & 37.92\% & 37.15\% & 21.21\%\\
VGG19-P & 52.48\% & 46.58\% & 45.15\% & 30.08\%\\
VGG19-LP & 50.27\% & 49.22\% & 45.87\% & 30.13\%\\
\textbf{CyVGG19-P} & \textbf{80.30\%} & \textbf{81.60\%} & \textbf{66.12\%} & \textbf{45.61\%} \\
\textbf{CyVGG19-LP} & \textbf{81.69\%} & \textbf{84.26\%} & \textbf{67.99\%} & \textbf{41.59\%} \\
ResNet56 & 46.46\% & 36.80\% & 34.68\% & 23.53\%\\
ResNet56-P & 58.29\% & 54.62\% & 47.80\% & 35.64\%\\
ResNet56-LP & 56.71\% & 53.49\% & 47.29\% & 32.03\%\\
\textbf{CyResNet56-P} & \textbf{94.07\%} & \textbf{84.78\%} & \textbf{68.37\%} & \textbf{50.86\%} \\
\textbf{CyResNet56-LP} & \textbf{96.60\%} & \textbf{88.87\%} & \textbf{73.23\%} & \textbf{46.71\%} \\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
\textbf{Translation Augmentation.} In contrast to the rotational augmentation, a translation augmentation benefits CyCNN models more because they are weak to translations of an object in the input image. That is, a feature in the original image does not preserve its shape in the polar coordinates after a translation. By comparing the results of Table~\ref{tab:result1}, Table~\ref{tab:result2}, and Table~\ref{tab:result3}, we see that the classification accuracies of CyCNN models are improved significantly by the translation augmentation. The MNIST dataset is an exception because the digits are already positioned at the exact center of the image. Thus, the translation augmentation does not give any benefit to CyCNN on MNIST.
\begin{table}[h]
\caption{Test accuracies on rotated and translated datasets. Models are trained with rotation+translation-augmented training datasets.}
\label{tab:result4}
\begin{center}
\begin{scriptsize}
\begin{tabular}{c||cccc}
\toprule
Train Dataset & MNIST-rt & SVHN-rt & CIFAR-10-rt & CIFAR-100-rt\\
\hline
Test Dataset & MNIST-r & SVHN-r & CIFAR-10-r & CIFAR-100-r \\
\hline
VGG19 & 99.47\% & 93.20\% & 83.56\% & 58.93\%\\
VGG19-P & 99.29\% & 91.50\% & 81.90\% & 54.68\%\\
VGG19-LP & 96.83\% & 92.00\% & 80.08\% & 49.31\%\\
\textbf{CyVGG19-P} & \textbf{99.44\%} & \textbf{92.30\%} & \textbf{83.31\%} & \textbf{55.22\%} \\
\textbf{CyVGG19-LP} & \textbf{98.22\%} & \textbf{91.70\%} & \textbf{78.92\%} & \textbf{51.29\%} \\
ResNet56 & 99.40\% & 90.90\% & 82.85\% & 58.27\%\\
ResNet56-P & 99.33\% & 88.60\% & 79.76\% & 53.97\%\\
ResNet56-LP & 97.77\% & 87.60\% & 79.11\% & 52.99\%\\
\textbf{CyResNet56-P} & \textbf{99.38\%} & \textbf{91.60\%} & \textbf{80.24\%} & \textbf{51.25\%} \\
\textbf{CyResNet56-LP} & \textbf{97.41\%} & \textbf{91.10\%} & \textbf{80.30\%} & \textbf{50.78\%} \\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
\textbf{Rotation and Translation Augmentation.} As shown in Table~\ref{tab:result4}, when both rotation and translation augmentations are applied to the training images, CyCNN models achieve classification accuracies competitive with the baseline CNN models.
\subsection{Parameters and Training Time}
\begin{table}[htbp]
\caption{The number of parameters and the training time per epoch of each model on CIFAR-10 dataset. The training time is measured on a single NVIDIA Tesla V100 GPU.}
\label{tab:params}
\begin{center}
\begin{scriptsize}
\begin{tabular}{c||c|c}
\toprule
Model & \# Params & Training time per epoch\\
\hline
VGG19 & 20.6M & 10.1 sec\\
CyVGG19 & 20.6M & 14.8 sec\\
ResNet56 & 0.85M & 13.9 sec\\
CyResNet56 & 0.85M & 31.6 sec\\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
As shown in Table~\ref{tab:params}, a CyCNN model has exactly the same number of parameters as its baseline CNN model. The CyCNN models run slower than the baseline CNN models, especially for ResNet56. This is because our CUDA \textsf{CyWino} kernels (in CyVGG19 and CyResNet56) called by PyTorch are not fully optimized, while the cuDNN Winograd convolutions called by PyTorch in the baseline CNN models (VGG19 and ResNet56) are. As mentioned earlier, we can speed up CyCNN models further by applying more optimizations to the kernels of \textsf{CyConv} layers.
\section{Implementation}
\subsection{Cylindrical Winograd Convolution}
While our manually-implemented CUDA kernels are functionally correct, they are much slower than the cuDNN~\cite{cudnn} non-cylindrical convolution primitives. For instance, when we convert the convolutional layers of VGG19~\cite{SIZI-ICLR-2015} to \textsf{CyConv} layers, the elapsed time for training one epoch becomes $20\times$ larger.
When we use the \textsf{CyWino} layer, training one epoch of VGG19 is still $1.45\times$ slower than the cuDNN-based baseline. However, this is not because the \textsf{CyWino} algorithm is inefficient, but because cuDNN is highly optimized compared to our implementation. We manually implement the original Winograd convolution layer as well and observe that its performance is almost the same as that of the \textsf{CyWino} layer. This implies that our techniques can be integrated into other highly-optimized GPU kernels (e.g., cuDNN) without additional overhead.
\section{Introduction}
Convolutional Neural Networks (CNNs) have been very successful for various computer vision tasks in the past few years. CNNs are especially well suited to tackling problems of pattern and image recognition because of their use of learned convolution filters~\cite{LBBH-IEEE-Proc-1998, KSHI-NIPS-2012, ZEFE-ECCV-2014, SIZI-ICLR-2015, SLJS-CVPR-2015, HZRS-CVPR-2016}. Deep CNN models with some fine-tuning have achieved performance close to human-level performance for image classification on various datasets~\cite{yalniz2019billionscale, touvron2019fixing}.
CNN models are empirically known to be invariant to a moderate translation of their input image even though the invariance is not explicitly encoded in them~\cite{NIPS2009_3790, SCRO-CVPR-2012, HZRS-ECCV-2014, LEVE-CoRR-2014, COWE-ICLR-2015, JSZK-NIPS-2015, COWE-ICML-2016, DFKA-ICML-2016}. This invariance is a beneficial property of CNN models when we use them for image classification tasks. However, it is also known that conventional CNN models are not invariant to rotations of the input image. This weakness has led researchers to explicitly encode rotational invariance in the model by augmenting the training data or by adding new structures to the model.
In this paper, we propose a rotation-invariant CNN, called \textit{CyCNN}. CyCNN is based on the following key ideas to achieve rotational invariance:
\begin{itemize}
\item CyCNN converts an input image to a polar representation~\cite{SCHW-BC-1977, WECH-CGIP-1979, WIHO-CVIP-1992, BOLE-CVIU-1998}. Rotation of an image becomes translation in such a polar coordinate system.
\item To deal with the cylindrical property of the polar coordinate system, CyCNN uses \textit{cylindrical convolutional} (\textsf{CyConv}) layers. A \textsf{CyConv} layer exploits \textit{cylindrically sliding windows} (CSWs) to apply its convolutional filters to its inputs. Conceptually, the CSW mechanism wraps the input around, thus transforming the input into a cylindrical shape. Then, a \textsf{CyConv} layer makes its convolutional filters sweep the entire cylindrical input.
\end{itemize}
\begin{figure}[htbp]
\centering
\centerline{\includegraphics[width=0.8\linewidth, bb=100 0 650 550]{figures/cycnn2.png}}
\caption{The structure of CyCNN.}
\label{fig:cycnn}
\end{figure}
Figure~\ref{fig:cycnn} shows the structure of a CyCNN model. It first converts an input image into a polar coordinate representation. Then the converted image is processed through multiple \textsf{CyConv} layers, non-linearity layers, and pooling layers to extract feature maps of the image. Finally, fully connected layers take the resulting feature map to produce a classification result. Note that any conventional CNN can be easily transformed into a CyCNN by applying the polar transformation to the input and replacing its convolutional layers with \textsf{CyConv} layers.
We evaluate some conventional CNN models and corresponding CyCNN models for classification tasks on rotated MNIST, SVHN, CIFAR-10, and CIFAR-100 datasets. We show the rotational invariance of CyCNN models by comparing their classification accuracies with those of the baseline CNN models.
\section{Related Work}
There are several approaches to give invariance properties to CNN models. It is a common practice to augment the training set with many transformed versions of an input image to encode invariance in CNN models. Spatial transformer networks (STN)~\cite{JSZK-NIPS-2015} explicitly allow the spatial transformation of feature maps or input images to reduce pose variations in subsequent layers. The transformation is learned by the STN module in the CNN without any extra training supervision. TI-pooling layers~\cite{laptev2016ti} can efficiently handle nuisance variations in the input image caused by rotation or scaling. The layer accumulates all of the branches of activations caused by multiple transformed versions of the original input image and takes the maximum. The maximum allows the following fully connected layer to choose transformation-invariant features. RIFD-CNN~\cite{CZHA-CVPR-2016} introduces two extra layers: a rotation-invariant layer and a Fisher discriminative layer. The rotation-invariant layer enforces rotation invariance on features. The Fisher discriminative layer makes the features have small within-class scatter but large between-class separation. It uses several rotated versions of an input image for training. These approaches often result in a significant slowdown due to their computational complexities.
Another way is transforming input images or feature maps. Polar Transformer Networks (PTN)~\cite{esteves2018polar} combines ideas from the Spatial Transformer Network and canonical coordinate representations. PTN consists of a polar origin predictor, a polar transformer module, and a classifier to achieve invariance to translations and equivariance to the group of dilations/rotations.
Polar Coordinate CNN (PC-CNN)~\cite{8802940} transforms input images to polar coordinates to achieve rotation-invariant feature learning. The overall structure of the model is identical to traditional CNNs except that it adopts the center loss function to learn rotation-invariant features. PC-CNN outperforms AlexNet, TI-Pooling, and Ri-CNN~\cite{7560644} on a rotated image classification test when the trained dataset is also rotation-augmented. Amorim \textit{et al.}~\cite{8489295} and Remmelzwaal \textit{et al.}~\cite{remmelzwaal2019human} analyze the effectiveness of applying the log-polar coordinate conversion to input images. Both of the approaches focus on the property that the global rotation of the original image becomes translation in the log-polar coordinate system.
Finally, by transforming convolution filters~\cite{SCRO-CVPR-2012, SOLE-ICML-2012, NIPS2014_5424, DWDA-MNRAS-2015, COWE-ICML-2016, DFKA-ICML-2016, marcos2016, worrall2017harmonic}, we can give invariance properties to CNN models. Sohn and Lee~\cite{SOLE-ICML-2012} propose a transformation-invariant restricted Boltzmann machine. It achieves the invariance of feature representation using probabilistic MAX pooling. Schmidt and Roth~\cite{SCRO-CVPR-2012} propose a general framework for incorporating transformation invariance into product models. It predicts how feature activations change as the input image is being transformed. SymNet~\cite{NIPS2014_5424} forms feature maps over arbitrary symmetry groups. It applies learnable filter-based pooling operations to achieve invariance to such symmetries. Dieleman \textit{et al.}~\cite{DWDA-MNRAS-2015} exploit rotation symmetry by rotating feature maps to solve the galaxy morphology problem. Dieleman \textit{et al.}~\cite{DFKA-ICML-2016} further extend this idea to cyclic symmetries. G-CNN~\cite{COWE-ICML-2016} shows how CNNs can be generalized to exploit larger symmetry groups including rotations and reflections. Marcos \textit{et al.}~\cite{marcos2016} propose a method for learning discriminative filters in a shallow CNN. They tie the weights of groups of filters to several rotated versions of the canonical filter of the group to extract rotation-invariant features. Harmonic Networks~\cite{worrall2017harmonic} achieve rotational invariance by replacing regular CNN filters with harmonics. These filter-transformation approaches share the limitation that their mechanisms are not easy to adapt to the structure of existing models.
CyCNN neither relies on data augmentation nor transforms convolution filters. While it applies a polar conversion to input images, it also replaces the original convolutional layers with cylindrical convolutional layers to extend their receptive fields. There do exist some recent studies~\cite{esteves2018polar,8802940,8489295,remmelzwaal2019human} that apply a polar conversion to input images. However, none of them considers the cylindrical property of the polar representation.
\section{Introduction}
It is well known that in certain disordered media wave propagation can be completely halted by back-scattering from randomly distributed impurities.
This phenomenon, known as Anderson localization~\cite{Anderson:LocAnderson:PR58}, has been reported for different kinds of waves, such as light waves in diffusive media~\cite{Wiersma:LightLoc:N97,Maret:AndersonTransLight:PRL06} or in disordered photonic crystals~\cite{Segev:LocAnderson2DLight:N07,Lahini:AndersonLocNonlinPhotonicLattices:PRL08}, ultrasound~\cite{vanTiggelen:AndersonSound:NP08}, microwaves~\cite{Chabanov:StatisticalSignaturesPhotonLoc:N00} and atomic matter waves~\cite{Billy:AndersonBEC1D:N08,Roati:AubryAndreBEC1D:N08}.
Its occurrence is ruled by the spatial dimension of the system and by the symmetries of the model, which determine its universality class~\cite{Altland:PRB1997}.
When both spin-rotational and time-reversal symmetries are preserved, notably in the absence of magnetic fields and spin-orbit couplings, all wave-functions are exponentially localized in one and two dimensions. In three and higher dimensions the system possesses both localized and extended states, separated in energy by a critical point, dubbed the mobility edge, where the system
undergoes a metal-insulator transition~\cite{Evers:AndersonTransitions:RMP08}.
Anderson transitions have recently been detected using noninteracting atomic quantum gases~\cite{Kondov:ThreeDimensionalAnderson:S11,Jendrzejewski:AndersonLoc3D:NP12,Semeghini:2014} exposed to three-dimensional (3D) speckle potentials. Theoretical predictions for the mobility edge of atoms have also been reported~\cite{Yedjour:2010,Piraud:PRA2014,Delande:MobEdgeSpeckle:PRL2014,Pilati:LevelStats:2015,Pasek:3DAndersonSpeckle:PRA2015,Pilati:3DAndersonSpeckle:2015,Pasek:PRL2017,Orso:SpinOrbit:PRL2017} and compared with the experimental data.
Interactions can nevertheless significantly perturb the single-particle picture of Anderson localization. Puzzling metal-insulator transitions~\cite{Kravchenko:PRB1994}, discovered in high-mobility 2D electron systems in silicon, were later interpreted theoretically
in terms of a two-parameter scaling theory of localization, which combines disorder and strong electron-electron interactions~\cite{Punnoose:Science2005,Knyazev:PRL2008}.
In more recent years a growing interest has emerged
around the concept of many-body localization~\cite{GornyiPRL2005,Altshuler:MetalInsulator:ANP06} (MBL), namely the generalization of Anderson localization to disordered interacting quantum systems at finite particle density (for recent reviews see Refs.~\cite{Nandkishore2015,ALET2018498,Abanin:RMP2019}).
In analogy with the single-particle problem, MBL phases are separated from (ergodic) thermal phases by critical points situated at finite energy density, known as many-body mobility edges.
While MBL has been largely explored in one dimensional systems with short range interactions,
both experimentally~\cite{Schreiber:Science2015,Rispoli:Nature2019} and
theoretically~\cite{PhysRevB.75.155111,PhysRevB.91.081103,Michal:PRL2014,Andraschko:PRL2014,Mondaini:PRA2015,Reichl:PRA2016,Prelovsek:PRB2016,Zakrzewski:PRB2018,Hopjan:PRA2020,krause2019nucleation,yao2020manybody}, its very existence
in systems with higher dimensions remains unclear.
In particular it has been suggested~\cite{DeRoeck:PRB2016,DeRoeck:PRB2017} that the MBL is inherently unstable against thermalization in large enough samples. This prediction contrasts with subsequent experimental~\cite{Choi1547} and numerical~\cite{WahlNatPhys2019,geiler2019manybody,De_Tomasi_2019,Thomson:PRB2018} studies of 2D systems of moderate sizes, showing evidence of a many-body mobility edge.
It must be emphasized that thorough numerical investigations, including a finite-size scaling analysis, are computationally challenging beyond one dimension~\cite{theveniaut2019manybody}.
In the light of the above difficulties, it is interesting to focus on the localization properties of few interacting particles in large (ideally infinite) disordered lattices.
Although these systems may represent overly simplified examples of MBL states,
they can show similar effects, including interaction-induced delocalization transitions with genuine mobility edges\cite{Stellin:PRB2019,stellin2020twobody}.
In a seminal paper~\cite{Shepelyansky:AndLocTIP1D:PRL94}, Shepelyansky showed that two particles moving in a one-dimensional lattice and coupled by contact interactions can travel over a distance much larger than the single-particle localization length, before being localized by the disorder. This intriguing effect was confirmed by several numerical studies~\cite{Weinmann:PRL1995,vonOppen:AndLocTIPDeloc:PRL96,Frahm1999,Roemer:PhysicaE2001,Krimer:JETP2011,Dias:PhysicaA2014,Lee:PRA2014,Krimer:InterConnDisord2PStates:PRB15,Frahm:EigStructAL1DTIP16,Thongjaomayum:PRB2019,thongjaomayum2020multifractality}, trying to identify the explicit dependence of the pair localization length on the interaction strength. Quantum walk dynamics of two interacting particles moving in a disordered one-dimensional lattice has also been explored, revealing subtle correlation effects~\cite{Lahini:PRL2010,Chattaraj:PRA2016,Dariusz:PRA2017,Toikka:PRB2020,Malishava:PRB2020}.
Interacting few-body systems with more than two particles have also been studied numerically in one dimension, confirming the stability of the localized phase. In particular Ref.~\cite{Mujal:PRA2019} investigated a model of up to three bosonic atoms with mutual contact interactions and subject to a spatially correlated disorder generated by laser speckles, while Ref.~\cite{Schmidtke:PRB2017} addressed
the localization in the few-particle regime of the XXZ spin-chain with a random magnetic field.
The localization of two interacting particles has been much less explored in dimensions higher than one. Based on analytical arguments, it was suggested~\cite{Imry:CohPropTIP:EPL95, Borgonovi:NonLinearity1995} that all two-particle states are localized by the disorder in two dimensions, whereas in three dimensions a delocalization transition for the pair could occur even if all single-particle states are localized.
Nevertheless subsequent numerical investigations~\cite{Ortugno:AndLocTIPDeloc:EPL99,Cuevas:PRL1999,Roemer1999} in two dimensions reported evidence of an Anderson transition for the pair, providing explicit results for the corresponding position of the mobility edge and the value of the critical exponent.
Using large-scale numerics, we recently investigated~\cite{Stellin:PRB2019,stellin2020twobody}
Anderson transitions for a system of two interacting particles (either bosons or fermions with opposite spins), obeying the 3D Anderson-Hubbard model. We showed that the phase diagram in the energy-interaction-disorder space contains multiple metallic and insulating regions, separated by two-body mobility edges. In particular we observed metallic pair states for relatively strong disorder, where all single-particle states are localized, which can be thought of as a proxy for interaction-induced many-body delocalization. Importantly, our numerical data for the metal-insulator transition were found to be consistent with the (orthogonal) universality class of the noninteracting model. This feature is not unique to our model, since single-particle excitations in a disordered many-body electronic system also undergo a metal-insulator transition belonging to the noninteracting universality class~\cite{Burmistrov:PRB2014}.
In this work we revisit the Shepelyansky problem in two dimensions and shed light on the controversy. We find that no mobility edge exists for a single pair in an infinite lattice, although interactions can dramatically enhance the pair localization length. In particular we show that previous claims~\cite{Ortugno:AndLocTIPDeloc:EPL99,Cuevas:PRL1999,Roemer1999} of 2D interaction-driven Anderson transitions
were plagued by strong finite-size effects.
The paper is organized as follows. In Sec.~\ref{sec:theory} we revisit the theoretical approach based on the exact mapping
of the two-body Schr{\"o}dinger equation onto an effective single-particle problem for the center-of-mass motion.
The effective model allows to recover the entire energy spectrum of orbitally symmetric pair states and is therefore equivalent to the exact diagonalization of the full Hamiltonian in the same subspace; an explicit proof for a toy
Hamiltonian is given in Sec.~\ref{sec:equivalence}.
In Sec.~\ref{sec:absence} we present the
finite-size scaling analysis used to discard the existence of the 2D Anderson transition for the pair, while in Sec.~\ref{sec:loclength}
we discuss the dependence of the two-body localization length on the interaction strength. The generality of the obtained results
is discussed in Sec.~\ref{general} while in Sec.~\ref{sec:conclusions} we provide
a summary and an outlook.
\section{Effective single-particle model for the pair}
\label{sec:theory}
The Hamiltonian of the two-body system can be written as $\hat H=\hat H_0 + \hat U$, whose noninteracting part $\hat H_0$ can be decomposed as $\hat H^\textrm{sp} \otimes \hat{\mathds{1}} +\hat{\mathds{1}} \otimes \hat H^\textrm{sp}$. Here $\hat{\mathds{1}}$ refers to the one-particle identity operator, while $\hat H^\textrm{sp}$ denotes the single-particle Anderson Hamiltonian:
\begin{equation}
\label{Anderson3D}
\hat H^\textrm{sp}= -J \sum_{\langle \mathbf n, \mathbf m\rangle} |\mathbf m \rangle \langle \mathbf n| + \sum_{\mathbf n}V_\mathbf n |\mathbf n\rangle \langle \mathbf n|,
\end{equation}
where $J$ is the tunneling amplitude between nearest neighbor sites $\mathbf{m}$ and $\mathbf{n}$, whereas $V_{\mathbf{n}}$ represents the value of the random potential at site $\mathbf{n}$.
In the following we consider a random potential which is spatially uncorrelated $\langle V_\mathbf n V_{\mathbf n^\prime} \rangle= \langle V_\mathbf n^2\rangle \delta_{\mathbf n \mathbf n^\prime}$ and obeys a uniform on-site distribution, as in Anderson's original work~\cite{Anderson:LocAnderson:PR58}:
\begin{equation}\label{randombox}
P(V)=\frac{1}{W}\Theta(W/2-|V|),
\end{equation}
where $\Theta(x)$ is the Heaviside (unit-step) function and $W$ is the disorder strength. The two particles are coupled together by contact (Hubbard) interactions described by
\begin{equation}\label{intro1}
\hat U=U\sum_{\mathbf m}|{\mathbf m},{\mathbf m}\rangle \langle {\mathbf m},{\mathbf m}|,
\end{equation}
where $U$ represents the corresponding strength. We start by writing the two-particle Schr{\"o}dinger equation as $(E -\hat H_0)|\psi\rangle=\hat U|\psi\rangle$, where $E$ is the total energy of the pair.
If $\hat U|\psi\rangle =0$, then $E$ must belong to the energy spectrum of the
noninteracting Hamiltonian $\hat H_0$. This occurs for instance if the two-particles correspond to fermions in the spin-triplet state, as in this
case the orbital part of the wave-function is antisymmetric and therefore
$\langle {\mathbf m},{\mathbf m}|\psi\rangle=0$.
Interactions are instead relevant for orbitally symmetric wave-functions, describing either bosons or fermions with opposite spins in the singlet state.
In this case from Eq.~(\ref{intro1}) we find that the wave-function obeys the following self-consistent equation
\begin{equation}
\label{formalism2}
|\psi\rangle=\sum_{\mathbf m} U \hat G(E) |{\mathbf m},{\mathbf m}\rangle \langle {\mathbf m},{\mathbf m}|\psi\rangle,
\end{equation}
where $\hat G(E)=(E \hat I -\hat H_0)^{-1}$ is the non-interacting two-particle Green's function. Eq.~(\ref{formalism2}) shows that
for contact interactions the wave-function of the pair can be completely determined once its diagonal amplitudes
$f_{\mathbf m}=\langle {\mathbf m},{\mathbf m}|\psi\rangle$ are known.
By projecting Eq.~(\ref{formalism2}) onto the state
$|{\mathbf n},{\mathbf n}\rangle$, we see that these terms obey a closed equation~\cite{Stellin:PRB2019,Dufour:PRL2012,Orso:PRL2005}:
\begin{equation}
\label{integral}
\sum_{\mathbf m} K_{\mathbf n \mathbf m} f_{\mathbf m} = \frac{1}{U}f_{\mathbf n},
\end{equation}
where $K_{\mathbf n \mathbf m} =\langle {\mathbf n},{\mathbf n }|\hat G(E) |{\mathbf m},{\mathbf m}\rangle$. Eq.(\ref{integral}) is then interpreted as an effective single-particle problem with Hamiltonian matrix $K$ and pseudoenergy $\lambda=1/U$, corresponding to the inverse of the interaction strength.
In the following we will address the localization properties of this effective
model for the pair.
To this respect, we notice that the matrix elements of $K$ are unknown and must be calculated explicitly in terms of the eigenbasis of the single-particle model, $\hat H^\textrm{sp} | \phi_r\rangle=\varepsilon_r | \phi_r\rangle$, as
\begin{equation}\label{KE0}
K_{\mathbf n \mathbf m} = \sum_{r,s=1}^N \frac{\phi_{\mathbf n r} \phi_{\mathbf m r}^* \phi_{\mathbf n s} \phi_{\mathbf m s}^*}{E-\varepsilon_r-\varepsilon_s},
\end{equation}
where $N$ is the total number of lattice sites in the grid and $\phi_{\mathbf n r} =\langle \mathbf n | \phi_r\rangle$ are the amplitudes of the one-particle wave-functions.
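For modest grids, the matrix $K$ can be evaluated directly from the eigenpairs of the single-particle Hamiltonian. A dense numpy sketch (real wave-functions are assumed, as appropriate for the orthogonal class; feasible only for small $N$) reads:
\begin{verbatim}
import numpy as np

def pair_matrix_K(H_sp, E):
    # Spectral representation of K at total energy E, built from the
    # eigenpairs of the real symmetric N x N matrix H_sp.
    eps, phi = np.linalg.eigh(H_sp)   # phi[n, r] = <n|phi_r>
    G = 1.0 / (E - eps[:, None] - eps[None, :])
    return np.einsum('nr,ns,rs,mr,ms->nm', phi, phi, G, phi, phi,
                     optimize=True)

# The pseudoenergies lambda_r(E) are the eigenvalues of K:
# lam = np.linalg.eigvalsh(pair_matrix_K(H_sp, E))
\end{verbatim}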
\section{Equivalence with exact diagonalization of the full model}
\label{sec:equivalence}
The effective single-particle model of the pair, Eq.~(\ref{integral}), allows to
reconstruct the entire energy spectrum of orbitally symmetric states for a given interaction strength $U$.
At first sight this is not obvious because the matrix $K$ is $N\times N$, and therefore possesses $N$ eigenvalues, while the dimension of the Hilbert space of orbitally symmetric states is $N(N+1)/2$, which is much larger.
The key point is that one needs to compute the matrix $K$ and the associated eigenvalues $\lambda_{r}=\lambda_{r}(E)$, with $r=1,2 ...N$, for different values of the energy $E$. The energy levels for fixed $U$
are then obtained by solving the equations $\lambda_{r}(E)=1/U$ via
standard root-finding algorithms.
Let us illustrate the above point for a toy model with $N=2$ lattice sites in the absence of disorder.
In this case the Hilbert space of symmetric states is spanned by the three vectors $|1,1\rangle$,
$(|1,2\rangle +|2,1\rangle)/\sqrt{2}$ and $|2,2\rangle$, in this order.
The corresponding energy levels of the pair can be found from the exact diagonalization of the $3\times 3$ matrix of the projected Hamiltonian:
\begin{equation}
H_{ed}=
\begin{pmatrix}
U & -\sqrt{2} & 0 \\
-\sqrt{2} & 0 & -\sqrt{2} \\
0 & -\sqrt{2} & U
\end{pmatrix}.
\end{equation}
An explicit calculation yields $E=U$ and $E=(U\pm \sqrt{U^2+16})/2$.
Let us now show that we recover exactly the same results using our effective model.
The single-particle Hamiltonian is represented by the matrix
\begin{equation}\label{example}
H^{sp}=\begin{pmatrix}
0 & -1\\
-1 & 0
\end{pmatrix},
\end{equation}
whose eigenvalues are given by $\varepsilon_1=-1$ and $\varepsilon_2=1$. The associated eigenvectors are $| \phi_1\rangle =(|1\rangle +|2\rangle)/\sqrt{2}$ and
$| \phi_2 \rangle =(|1\rangle -|2\rangle)/\sqrt{2}$.
From Eq.(\ref{KE0}) we immediately find
\begin{equation}\label{example2}
K=\begin{pmatrix}
A & B\\
B & A
\end{pmatrix},
\end{equation}
where $A=(E/(E^2-4)+1/E)/2$ and $B=(E/(E^2-4)-1/E)/2$. The corresponding eigenvalues of $K$ are given by $\lambda_1(E)=A-B=1/E$ and $\lambda_2(E)=A+B=E/(E^2-4)$. The condition $\lambda_1=1/U$ yields $E=U$, while
$\lambda_2=1/U$ admits two solutions, $E=(U\pm \sqrt{U^2+16})/2$, allowing to recover the exact-diagonalization energy spectrum.
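The agreement can be verified numerically in a few lines (our check; any standard root-bracketing routine works):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

U = 1.0
# Exact diagonalization in the ordered symmetric basis
# {|1,1>, (|1,2>+|2,1>)/sqrt(2), |2,2>}.
H_ed = np.array([[U, -np.sqrt(2), 0.0],
                 [-np.sqrt(2), 0.0, -np.sqrt(2)],
                 [0.0, -np.sqrt(2), U]])
E_exact = np.sort(np.linalg.eigvalsh(H_ed))

# Effective model: solve lambda_r(E) = 1/U on each branch.
f1 = lambda E: 1.0 / E - 1.0 / U            # lambda_1(E) = 1/E
f2 = lambda E: E / (E**2 - 4.0) - 1.0 / U   # lambda_2(E) = E/(E^2-4)
E_eff = np.sort([brentq(f1, 0.5, 2.0),
                 brentq(f2, -1.999, -0.5),
                 brentq(f2, 2.001, 4.0)])
print(E_exact)  # [-1.56155  1.  2.56155]
print(E_eff)    # identical
\end{verbatim}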
In Fig.~\ref{fig:example} we plot the energy dependence of the two eigenvalues of $K$ for our toy model. Intersecting the curves
with the horizontal line $\lambda=1/U$ (dashed red line) graphically yields the three sought energy levels of the orbitally symmetric states.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{exampleK.eps}
\caption{Eigenvalues of the matrix $K$ of the effective model of the pair, Eq.~(\ref{integral}), for a toy model of $N=2$ coupled sites with no disorder, plotted as a function of the energy $E$ of the pair (blue curves).
For a given interaction strength $U$, the entire spectrum of $N(N+1)/2$ energy levels of orbitally symmetric states of the pair can be obtained by intersecting the data curves with the horizontal line, $\lambda=1/U$, here shown for $U=1$ (dashed red line). The corresponding three energy levels are $E=-1.56155$, $E=1$ and $E=2.56155$. }
\label{fig:example}
\end{figure}
We stress that extracting the full energy spectrum of the pair based on the effective model, for a fixed value of the interaction strength $U$, is computationally demanding as $N$ becomes large.
The effective model is instead very efficient, as compared to the exact diagonalization, when we look at the properties of the pair as a function of the interaction strength $U$, for a fixed value of the total energy $E$. This is exactly the situation that we will be interested in below.
\section{Absence of 2D delocalization transitions for the pair}
\label{sec:absence}
Numerical evidence of 2D Anderson transition for two particles obeying the Anderson-Hubbard model in two dimensions
was first reported~\cite{Ortugno:AndLocTIPDeloc:EPL99} on the basis of transmission-amplitude calculations~\cite{McKinnonKramer:TransferMatrix:ZPB83} performed on
rectangular strips of length $L=62$ and variable width up to $M=10$. For a pair with zero total energy and for interaction strength $U=1$, the delocalization transition was found to occur for $W=9.3\pm 0.5$.
The result was also confirmed~\cite{Cuevas:PRL1999} from the analysis of the energy-level statistics, although with slightly different numbers.
The existence of a 2D mobility edge for the pair was also reported in Ref.~\cite{Roemer1999}, where a decimation method was employed to compute the critical disorder strength as a function of the interaction strength $U$, based on lattices of similar sizes.
For $U=1.59$, a pair with zero total energy was shown to undergo an Anderson transition at $W=9\pm 0.13$.
Below we verify the existence of the 2D delocalization transition of the pair, following the procedure developed in Ref.~\cite{Stellin:PRB2019}. In order to compare with the previous numerical predictions, we set $E=0$ and $W=9$.
We consider a rectangular strip of dimensions $L, M$, with $L\gg M$, containing $N=ML$ lattice sites. In order to minimize finite-size effects, the boundary conditions on the single-particle Hamiltonian $H^{sp}$ are chosen periodic in the orthogonal direction ($y$) and open along the transmission axis ($x$).
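For concreteness, the single-particle Hamiltonian on such a strip, with box-distributed disorder and the stated mixed boundary conditions, can be assembled as a sparse matrix (an illustrative sketch, with $J=1$ setting the energy unit and $M\geq 3$ assumed):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def anderson_strip(M, L, W, J=1.0, seed=0):
    # M x L strip: periodic along y (width M), open along x (length L).
    rng = np.random.default_rng(seed)
    H = sp.lil_matrix((M * L, M * L))
    H.setdiag(rng.uniform(-W / 2, W / 2, size=M * L))  # box disorder
    idx = lambda x, y: x * M + y
    for x in range(L):
        for y in range(M):
            H[idx(x, y), idx(x, (y + 1) % M)] = -J     # periodic in y
            H[idx(x, (y + 1) % M), idx(x, y)] = -J
            if x + 1 < L:                              # open in x
                H[idx(x, y), idx(x + 1, y)] = -J
                H[idx(x + 1, y), idx(x, y)] = -J
    return H.tocsr()
\end{verbatim}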
We rewrite the
rhs of Eq.~(\ref{KE0}) as
\begin{equation}\label{KE0bis}
K_{\mathbf n \mathbf m} =\sum_{r=1}^{N}\phi_{\mathbf n r} \phi_{\mathbf m r}^* \langle \mathbf{n}|G^{\mathrm{sp}}(E-\varepsilon_{r})|\mathbf{m}\rangle,
\end{equation}
where $G^{\mathrm{sp}}(\varepsilon)=(\varepsilon I - H^{\mathrm{sp}})^{-1}$ is the Green's function (i.e.\ the resolvent) of the single-particle Anderson Hamiltonian (\ref{Anderson3D}), $I$ being the identity matrix.
Due to the open boundary conditions along the longitudinal direction, the Anderson
Hamiltonian possesses a block tridiagonal structure, each block corresponding
to a transverse section of the grid. This structure can be exploited to efficiently compute the
Green's function $G^{\mathrm{sp}}(\varepsilon)$ in Eq.~(\ref{KE0bis}) via matrix inversion.
In this way the
total number of elementary operations needed to compute the matrix $K$ scales as $M^{4}L^{3}$, instead of $M^{4}L^{4}$, as naively expected from the rhs of Eq.~(\ref{KE0}).
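To make this point concrete, the following Python sketch (hypothetical code; it relies on a generic sparse LU factorization rather than on the block recursion described above) builds the single-particle Hamiltonian of a disordered strip with the stated boundary conditions and extracts one column of $G^{\mathrm{sp}}(\varepsilon)$ with a single sparse solve:
\begin{verbatim}
# Schematic construction of the single-particle Anderson Hamiltonian
# on an M x L strip -- periodic along y, open along x -- and of one
# column of its Green's function via a sparse LU solve. The block
# recursion used in the text is a further optimisation of the same
# idea (illustrative code, M > 2 assumed).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

M, L, W = 8, 40, 9.0
rng = np.random.default_rng(0)
site = lambda x, y: x * M + y              # linear index of site (x, y)

rows, cols = [], []
for x in range(L):
    for y in range(M):
        rows.append(site(x, y)); cols.append(site(x, (y + 1) % M))
        if x + 1 < L:                      # open boundary along x
            rows.append(site(x, y)); cols.append(site(x + 1, y))
hop = sp.coo_matrix((-np.ones(len(rows)), (rows, cols)),
                    shape=(M * L, M * L))
H = (hop + hop.T).tocsc()                  # hermitian hopping term
H = H + sp.diags(rng.uniform(-W/2, W/2, M * L), format="csc")

def greens_column(eps, m):
    """Column <n|G(eps)|m> of (eps*I - H)^(-1), one sparse solve."""
    A = eps * sp.identity(M * L, format="csc") - H
    e_m = np.zeros(M * L); e_m[m] = 1.0
    return spla.splu(A).solve(e_m)

g = greens_column(0.3, site(0, 0))         # example column at eps = 0.3
\end{verbatim}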
Once the matrix $K$ of the effective model has been computed, we use it to evaluate the logarithm of the transmission amplitude between two transverse sections of the strip as a function of their relative distance $n_x$:
\begin{equation}\label{logT}
F(n_x)=\ln \sum_{ m_y,n_y} |\langle 1,m_y| G^{\textrm p}(\lambda )| n_x,n_y \rangle |^2.
\end{equation}
In Eq.~(\ref{logT}) $G^{\textrm p}(\lambda)=(\lambda I -K)^{-1}$ is the Green's function
of $K$ evaluated at $\lambda=1/U$, and the sum is taken over the sites $m_y,n_{y}$ of the two transverse sections.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{AL2D_LambdaM-U_SmallM.eps}
\caption{ Reduced localization length of the pair plotted as a function of the interaction strength for increasing values of the transverse size $M=8, 10, 12, 16, 20$ of the grid. The results are obtained by averaging over $N_{tr}$ different disorder realizations, varying from $N_{tr}=600\; (M=8)$ to $N_{tr}=1000\; (M=20)$. The disorder strength is fixed to $W=9$ and the pair has zero total energy, $E=0$,
implying that $\Lambda(-U)=\Lambda(U)$.
The different curves cross in the interval $0.75<U<1.1$, indicating a possible 2D delocalization transition, as claimed in previous investigations~\cite{Ortugno:AndLocTIPDeloc:EPL99,Roemer1999}. This apparent 2D Anderson transition is actually a finite-size effect, as the crossing points disappear for larger values of $M$, see Fig.\ref{fig:TIP_2D_U-LM_HighM}.}
\label{fig:TIP_2D_U-LM_SmallM}
\end{figure}
For each disorder realization, we evaluate $F(n_x)$ at regular intervals along the bar and apply a linear fit to the data, $f_{\mathrm{fit}}(n_x)=p\, n_x+q$. For a given value of the interaction strength, we evaluate the disorder-averaged Lyapunov exponent $\gamma=\gamma(M,U)$ as $\gamma=-\overline{p}/2$, where $\overline{p}$ is the average of the slope over disorder realizations.
We then infer the localization properties of the system from the behavior of the reduced localization length, which is defined as $\Lambda=(M \gamma)^{-1}$. In the metallic phase $\Lambda$ increases as $M$ increases, whereas in the insulating phase the opposite trend is seen. At the critical point, $\Lambda$ becomes constant for values of $M$ sufficiently large. Hence the critical point $U=U_c$ of the Anderson transition can be identified by plotting the reduced localization length versus $U$ for different values of the transverse size $M$ and looking at their common crossing points.
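A minimal Python sketch of this extraction step, applied to synthetic transmission data in place of the actual Green's-function output, could read:
\begin{verbatim}
# Sketch of the Lyapunov-exponent extraction: fit F(n_x) = p*n_x + q
# for each disorder realization, average the slopes, and form the
# reduced localization length Lambda = 1/(M*gamma). Synthetic data.
import numpy as np

def reduced_loc_length(F_samples, n_x, M):
    """F_samples: (N_tr, len(n_x)) array of log-transmissions."""
    slopes = [np.polyfit(n_x, F, 1)[0] for F in F_samples]
    gamma = -0.5 * np.mean(slopes)     # disorder-averaged exponent
    return 1.0 / (M * gamma)

rng = np.random.default_rng(1)
n_x = np.arange(10, 400, 10)
true_gamma = 0.05                      # decay rate of the test data
F = -2*true_gamma*n_x + rng.normal(0.0, 0.5, (600, n_x.size))
print(reduced_loc_length(F, n_x, M=20))   # close to 1/(20*0.05) = 1
\end{verbatim}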
In Fig. \ref{fig:TIP_2D_U-LM_SmallM} we show the reduced localization length
$\Lambda$ as a function of the interaction strength for increasing values of
the strip width, ranging from $M=8$ to $M=20$. The length
of the grid is fixed to $L=400$. Notice that, since $E=0$, the reduced localization length is an even function of the interaction strength,
$\Lambda(-U)=\Lambda(U)$.
We see that $\Lambda$ exhibits a nonmonotonic dependence on $U$, as previously found
in one~\cite{Frahm:EigStructAL1DTIP16} and in three~\cite{Stellin:PRB2019} dimensions. In particular, interactions favor the
delocalization of the pair, the effect being more pronounced near $U=6$.
We also notice from Fig. \ref{fig:TIP_2D_U-LM_SmallM} that the curves corresponding to different values of $M$ intersect each other around $U=1$, suggesting a possible phase transition, as previously reported in Refs.~\cite{Ortugno:AndLocTIPDeloc:EPL99,Roemer1999}. A closer inspection of the data, however, reveals that the crossing points are spread out in the interval $0.73 \lesssim U \lesssim 1.1$; in particular, they drift to stronger interactions as the system size increases, in analogy with the three-dimensional case~\cite{Stellin:PRB2019}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{AL2D_LambdaM-U_HighM.eps}
\caption{Same plot as in Fig.\ref{fig:TIP_2D_U-LM_SmallM} but for larger grids with transverse sizes $M=30, 40, 50$,
obtained by averaging over $N_{tr}=3600\; (M=30)$, $N_{tr}=4400\; (M=40)$, and $N_{tr}=2850\; (M=50)$ different disorder realizations.
Notice that all crossing points have disappeared, indicating that the pair is ultimately localized by the disorder for any value of the
interaction strength.}
\label{fig:TIP_2D_U-LM_HighM}
\end{figure}
A key question is whether a further increase of the strip's width $M$ will only cause a (possibly large) shift of the critical point, or
rather, the localized phase will ultimately take over for any value of the interaction strength. To answer this question, we have performed additional calculations using larger grids, corresponding to $M=30, 40, 50$. In order to guarantee a sufficiently large aspect ratio, the
length of the bar was fixed to $L=500$. The obtained results are displayed in Fig.\ref{fig:TIP_2D_U-LM_HighM}.
We notice that the crossing points have completely disappeared and the pair localizes in an infinite lattice irrespective of the specific value of $U$.
This leads us to conclude that the results of Refs.~\cite{Ortugno:AndLocTIPDeloc:EPL99,Roemer1999} were plagued by severe finite-size effects, due to the limited computational
resources available at the time, and that no Anderson transition actually takes place for a pair in a disordered lattice of infinite size.
\section{Pair localization length}
\label{sec:loclength}
Although the pair cannot fully delocalize in two dimensions,
interactions can lead to a drastic enhancement of the two-particle localization length.
This quantity can be estimated using the one-parameter scaling
ansatz $\Lambda=f(\tilde \xi/M)$, stating that the reduced localization length
depends solely on the ratio between two quantities: the width $M$ of the strip and a characteristic length $\tilde \xi=\tilde \xi(U,W,E)$, which instead depends on the model parameters and on the total energy of the pair (but not on the system sizes $L, M$). This latter quantity coincides, up to a multiplicative numerical constant $a$, with the pair localization length, $\xi=a\tilde \xi$.
We test the scaling ansatz for our effective model (\ref{integral}) using the numerical data for $M=30,40, 50$ displayed in Fig.\ref{fig:TIP_2D_U-LM_HighM}, corresponding to the largest system sizes.
Let $U_j$, with $j=1,2,\dots,N_U$, be the values of the interaction strength
used to compute the reduced localization length (in our case $N_U=44$).
We then determine the corresponding unknown parameters $\tilde \xi(U=U_j)$ through a least-squares procedure,
following the approach developed in Ref.~\cite{McKinnonKramer:TransferMatrix:ZPB83}.
Plotting our data in the form $\ln \Lambda(M,U)$ vs $\ln M$ results in multiple data curves, each of them containing three data points connected by straight lines (corresponding to linear interpolation).
Let $\Lambda_i$ be one of the $(3N_U)$ numerical values available for the reduced localization length. The horizontal line $\ln \Lambda=\ln \Lambda_i$ will generally intersect some of these curves. We find it convenient to introduce
a matrix $\eta$ which keeps track of such events: if the curve $U=U_j$ is crossed,
we set $\eta_{ij}=1$ and call $\ln M_{ij}$ the corresponding crossing point; otherwise we set $\eta_{ij}=0$.
The unknown parameters are then obtained by minimizing the variance of the difference $\ln M-\ln \tilde \xi$, yielding the
following set of equations (see Ref.~\cite{McKinnonKramer:TransferMatrix:ZPB83} for a detailed derivation):
\begin{multline}
\label{eqn:scaling}
\sum_{j}\left [\sum_{i}\eta_{ij}\biggl(\frac{1}{N_{i}^{2}}-\frac{\delta_{jk}}{N_{i}}\biggr)\right ]\ln{\tilde \xi (U_{j})}\\
=\sum_{j}\left [\sum_{i}\eta_{ij}\biggl(\frac{1}{N_{i}^{2}}-\frac{\delta_{jk}}{N_{i}}\biggr) \ln M_{ij} \right ] ,
\end{multline}
where $N_i=\sum_j \eta_{ij}$ is the total number of crossing points obtained for each value $\Lambda_i$, and the equation holds for each $k=1,\dots,N_U$.
Equation (\ref{eqn:scaling}) is of the form $AX=B$ and can be easily solved. Notice however that the solution is not unique because
the matrix $A$ is singular. Indeed the correlation length $\tilde \xi(U)$ is defined up to a multiplicative constant,
$\tilde \xi\rightarrow a \tilde \xi$, implying that $\ln \tilde \xi$ is defined up to an \emph{additive}
constant, $\ln \tilde \xi \rightarrow \ln \tilde \xi +\ln a$.
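In practice, the system can be solved with any least-squares routine that returns the minimum-norm solution, which implicitly fixes this free additive constant. The following Python sketch (illustrative code, equivalent to minimizing the variance of $\ln M - \ln \tilde \xi$ over the horizontal cuts) shows one possible implementation:
\begin{verbatim}
# Sketch of the finite-size-scaling fit: determine ln(xi_j) by
# minimising the variance of ln M - ln xi over all horizontal cuts.
# lstsq returns the minimum-norm solution of the singular system.
import numpy as np

def fss_solve(lnM, eta):
    """lnM, eta: (n_cuts, N_U) arrays; eta[i,j] = 1 if horizontal
    cut i crosses curve U = U_j (lnM ignored where eta = 0)."""
    n_cuts, N_U = eta.shape
    N_i = eta.sum(axis=1)
    rows, rhs = [], []
    for i in range(n_cuts):
        if N_i[i] < 2:
            continue                 # a single crossing carries no info
        mean_lnM = (eta[i] * lnM[i]).sum() / N_i[i]
        for k in range(N_U):
            if eta[i, k] == 0:
                continue
            row = -eta[i] / N_i[i]   # -(1/N_i) * sum_j ln xi_j term
            row[k] += 1.0            # + ln xi_k term
            rows.append(row)
            rhs.append(lnM[i, k] - mean_lnM)
    lnxi, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs),
                               rcond=None)
    return lnxi                      # up to an additive constant
\end{verbatim}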
\begin{figure}
\includegraphics[width=\columnwidth]{scaling2D.eps}
\caption{ Double logarithmic plot of the reduced localization length
as a function of the ratio $\tilde \xi/M$, where $\tilde \xi$ is the unnormalized localization length obtained from the solution of Eq.~(\ref{eqn:scaling}) and $M$ is the width of the strip.
The different symbols correspond to the data for $M=30$ (up triangles), $M=40$ (circles) and $M=50$ (diamonds), shown in Fig.~\ref{fig:TIP_2D_U-LM_HighM}. All data approximately collapse on a single curve, verifying the scaling ansatz $\Lambda=f(\tilde \xi/M)$. }
\label{fig:TIP_2D_xiInvM-LM}
\end{figure}
In Fig.\ref{fig:TIP_2D_xiInvM-LM} we verify the correctness of the scaling ansatz, by plotting the reduced localization length as a function of the ratio
$\tilde \xi/M$, where $\tilde \xi$ is obtained from the solution of Eq.~(\ref{eqn:scaling}). We see that our numerical data for different values of the interaction strength and system size do collapse on a single curve, thus confirming the scaling hypothesis.
In the main panel of Fig. \ref{fig:TIP_2D_xi-U} we plot the unnormalized localization length of the pair as a function of the interaction strength. We see that $\tilde \xi$ varies over more than three orders of magnitude in the interval of $U$ values considered.
In particular, for weak interactions the growth is approximately exponential in $U$, as highlighted by the semi-logarithmic plot.
Based on analytical arguments, Imry suggested~\cite{Imry:CohPropTIP:EPL95} that the localization length of the pair in the weakly interacting regime should obey the relation $\xi \propto \xi_{\mathrm{sp}}\mathrm{e}^{b(U\xi_{\mathrm{sp}})^{2}}$,
where $\xi_{\mathrm{sp}}$ is the single-particle localization length of the Anderson model and $b$ is a numerical factor.
This prediction is not consistent with our data, whose growth is compatible with a simple exponential in $U$ rather than in $U^2$.
A possible reason for the discrepancy is that the cited formula might apply only for relatively modest
values of the interaction strength, which were not explored in our numerics.
Further work will be needed to address this point explicitly.
\begin{figure}
\includegraphics[width=\columnwidth]{xi2Dv3.eps}
\caption{Unnormalized localization length $\tilde \xi$ of the pair plotted as a function of the interaction strength.
Notice the logarithmic scale on the $y$ axis, showing
that interactions can enhance the 2D localization length of the pair by more than three orders of magnitude. The inset displays the estimate of the multiplicative constant $a$, which fixes the absolute scale of the localization length, plotted as a function of the interaction strength. The estimate is obtained by fitting the numerical data of Fig.\ref{fig:TIP_2D_U-LM_HighM} corresponding to weak interactions with Eq.~(\ref{eqn:finda}), from which we extract $a_\textrm{est}=\xi/\tilde \xi$. This quantity keeps increasing as $U$ diminishes, signaling that the strongly localized regime is not fully reached in our simulations.
}
\label{fig:TIP_2D_xi-U}
\end{figure}
The constant $a$, which fixes the absolute scale of the localization length of the pair,
is independent of the interaction strength. Its numerical value can in principle be inferred by fitting the data in the strongly localized regime,
according to
\begin{equation}
\label{eqn:finda}
\Lambda =\frac{\xi}{M}+c\biggl(\frac{\xi}{M}\biggr)^{2},
\end{equation}
where $c$ is a numerical constant.
In our case the most localized states are those at weak interactions, where the reduced localization length takes its minimum value.
For each value $U=U_j$ falling in this region, we fit our numerical data according to Eq.~(\ref{eqn:finda}), yielding $\xi=\xi(U)$.
The estimate of the multiplicative constant, which is defined as $a_\textrm{est}=\xi(U)/\tilde \xi (U)$, is displayed
in the inset of Fig.~\ref{fig:TIP_2D_xi-U}.
Since the estimate of $a$ does not saturate for small $U$, we conclude that, even for the weakest interactions and the largest system sizes considered, the pair has not yet entered the strongly localized regime underlying Eq.~(\ref{eqn:finda}). This asymptotic regime is typically reached for $\Lambda \lesssim 0.1$, whereas our smallest value of the reduced localization length is $\Lambda(M=50,U=0.5)\simeq 0.2929$.
From the inset of Fig.~\ref{fig:TIP_2D_xi-U} we also see that $a_\textrm{est}$ increases as $U$ diminishes, suggesting that the result obtained for $U=0.5$ actually provides a lower bound for the multiplicative constant. This allows us to conclude that $a \geq 18.2$.
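For concreteness, a minimal Python sketch of this fitting step, with placeholder numbers in place of our data, is:
\begin{verbatim}
# Sketch of the extraction of the constant a: fit the strong-
# localization form Lambda = xi/M + c*(xi/M)^2 at fixed U, then
# compare with the unnormalized xi_tilde from the scaling fit.
# The Lambda values below are placeholders, not our actual data.
import numpy as np
from scipy.optimize import curve_fit

def strong_loc(M, xi, c):
    return xi / M + c * (xi / M) ** 2

M_vals = np.array([30.0, 40.0, 50.0])
Lam = np.array([0.47, 0.36, 0.29])         # placeholder data
(xi, c), _ = curve_fit(strong_loc, M_vals, Lam, p0=(10.0, 1.0))
xi_tilde = 1.0                             # from the scaling fit
a_est = xi / xi_tilde                      # estimate of a at this U
\end{verbatim}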
\section{Generality of the obtained results}
\label{general}
\begin{figure}
\includegraphics[width=\columnwidth]{Evar_2D_W9_M12_N400_Ns1000.eps}
\caption{Reduced localization length of the pair as a function of the interaction strength for $W=9$ and for different values of the total energy going from
$E=0$ (top curve) to $E=-12$ (bottom curve). The dimensions of the strip
are $M=12$ and $L=400$, while the number of disorder realizations is $N_{tr}=1000$.
The data show that the pair state with zero total energy possesses the largest reduced localization length, see Eq.~(\ref{nonzeroE}), implying that for $W=9$ the pair remains localized for any nonzero total energy. }
\label{fig:Efinite}
\end{figure}
In Sec.~\ref{sec:absence} we have shown that all pair states with total energy $E=0$ are localized for $W=9$. A natural question is whether
the localization scenario changes at nonzero energy or at weak disorder.
Let us consider the two cases separately.
Our numerical results indicate that, for any value of $U$, $W$ and system size $M$, the reduced localization length always takes its maximum value for $E=0$:
\begin{equation}\label{nonzeroE}
\Lambda (E,M,U,W)\leq \Lambda(0,M,U,W).
\end{equation}
As an example, in Fig.\ref{fig:Efinite} we plot $\Lambda$ as a function of the interaction strength,
for $W=9$ and for different negative values of the energy (results for positive energies are simply obtained from
the corresponding data at energy $-E$ by reversing the sign of the interaction strength, $U\rightarrow -U$).
All calculations are performed on a strip with constant sizes $M=12$ and $L=400$.
When combined with the finite-size scaling analysis, the inequality~(\ref{nonzeroE}) implies that the pair remains localized for \emph{any} nonzero energy
with an even shorter localization length, thus excluding a delocalization transition.
The above inequality expresses the general fact that the pair spreads most easily when its total energy lies in the middle of the noninteracting two-particle
energy spectrum. For instance, in three dimensions, where genuine Anderson transitions for the pair do occur, we found~\cite{stellin2020twobody} that
metallic regions in the
interaction-disorder plane become progressively insulating as the energy of the pair departs from zero.
We note from Fig.\ref{fig:Efinite} that all data curves with $|E|\leq 8$ have an absolute minimum at $U=0$. Moreover, the largest enhancement of the reduced localization length
takes place for weaker interactions as $|E|$ increases. These are specific features of scattering states, whose energy lies inside the noninteracting two-body energy spectrum,
as already observed in one~\cite{Frahm:EigStructAL1DTIP16} and in three~\cite{stellin2020twobody} dimensions.
In the asymptotic regime $|E|\gg W$, pairs behave as pointlike molecules and the effective model $K$ takes the form of a single-particle Anderson model,
as discussed in Ref.~\cite{stellin2020twobody}, which again precludes the possibility of a delocalization transition in two dimensions.
Let us now discuss whether an Anderson transition for the pair can appear for weak disorder at fixed total energy, $E=0$. The effective single-particle model $K$ possesses both time reversal and spin rotational symmetries, suggesting that $K$ belongs to the same (orthogonal) universality class as the Anderson model $\hat H^\textrm{sp}$.
In Ref.~\cite{Stellin:PRB2019} we showed numerically that, in three dimensions, the Anderson transition for a pair with zero energy yields critical exponents in agreement with the predictions of the orthogonal class.
Since 2D Anderson transitions are generally forbidden in the orthogonal class, one expects that the pair is localized for \emph{any} finite disorder. For this reason,
the previous claims of 2D delocalization transitions for two particles are puzzling. Our numerics show explicitly that these results were
biased by strong finite-size effects, and there is no evidence of a violation of the conventional localization scenario.
From the numerical point of view, the observation of the asymptotic 2D scaling behavior for $W=9$ required large system sizes as compared to
the 3D case studied in Ref.~\cite{Stellin:PRB2019}, where the finite-size scaling analysis was limited to system sizes up to $M=17$.
Verifying numerically the absence of the 2D transition for weaker disorder is very challenging, because
the reduced localization length will exhibit an apparent crossing for even larger values of $M$ as $W$ diminishes. To appreciate this point, we have repeated the
same finite-size scaling analysis for $W=10$ and plotted the results in Fig.\ref{fig:W=10}. We see that, already for $M=22$, the pair is localized for any value of the interaction strength, whereas for $W=9$
the same asymptotic behavior is reached for larger system sizes, between $M=30$ and $M=40$.
\begin{figure}
\includegraphics[width=\columnwidth]{U-LM_E0_W10.eps}
\caption{Finite-size scaling analysis for $W=10$ and $E=0$. The reduced localization length is plotted as a function of the interaction strength
for different system sizes $M=8$ (squares), $10$ (circles), $13$ (up triangles), $22$ (down triangles), and $38$ (right triangles). The length of the strip is $L=400$ for $M\leq 13$
and $L=500$ otherwise. Notice that the two-particle system exhibits an insulating
behavior already for $M=22$. The number of different disorder realizations is $N_{tr}=600$ for $M=38$ and $N_{tr}=1000$ otherwise. }
\label{fig:W=10}
\end{figure}
\section{Conclusion and outlook}
\label{sec:conclusions}
Based on an efficient mapping of the two-body Schr\"odinger equation, we have addressed the localization properties of two bosons, or two spin-1/2 fermions in a singlet state, obeying the 2D Anderson-Hubbard model.
We have found that no interaction-induced Anderson transition occurs for disordered lattices of infinite size, in contrast with previous numerical works, which we have shown to be biased by finite-size effects. In this way we
reconcile the numerics with the one-parameter scaling theory of localization, which predicts the absence of a one-particle Anderson transition in two dimensions in the presence of both time reversal and spin rotational symmetries. Moreover, we found that the pair localization length exhibits a nonmonotonic behavior as a function of $U$, characterized by an exponential
growth for weak interactions.
We point out that the absence of the 2D mobility edge for the two-particle system has been demonstrated for the case of contact interactions; similar conclusions should also apply to short- but finite-range interactions. The case of true long-range (e.g.\ Coulomb) interactions is conceptually different and can lead to opposite conclusions~\cite{Cuevas:PRL1999,Shepelyanski:PRB2000}.
From the above discussion, we also expect that the 2D delocalization transition will appear when the two particles are exposed to spin-orbit couplings, driving the system towards the symplectic universality class, where single-particle metal-insulator transitions are generally allowed even in two dimensions~\cite{Evers:AndersonTransitions:RMP08}.
An interesting and compelling problem is to investigate the implications of our results for a 2D system at finite density of particles, where many-body delocalization transitions have instead been observed, both numerically and experimentally, in the strongly interacting regime.
We expect that, in the zero density limit, the many-body mobility edge disappears, irrespective of the bosonic or fermionic
statistics of the two particles.
Another interesting direction is to generalize our numerical approach to study the effect of disorder on the transport and spectral properties of excitons in 2D
semiconductors~\cite{C9CP04111G}.
\section*{ACKNOWLEDGEMENTS}
We acknowledge D. Delande, K. Frahm, C. Monthus, S. Skipetrov and T. Roscilde for fruitful discussions.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the
Marie Sklodowska-Curie grant agreement No 665850. This work was granted access to the HPC resources of CINES (Centre Informatique National de l'Enseignement Sup\'erieur) under the allocations 2017-A0020507629, 2018-A0040507629, 2019-A0060507629 and 2020-A0080507629 supplied by GENCI (Grand Equipement National de Calcul Intensif).
\bibliographystyle{apsrev}
\usepackage[top=3cm, bottom=3cm, left=3cm, right=3cm]{geometry}
\usepackage{booktabs}
\usepackage[table]{xcolor}
\usepackage{color}
\usepackage{latexsym}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{amsthm}
\usepackage{multirow}
\usepackage{tikz}
\usetikzlibrary{arrows,positioning,shapes,bayesnet}
\usepackage{tcolorbox}
\usepackage{xspace}
\usepackage{float}
\usepackage{bbold}
\usepackage{cuted}
\usepackage{algorithm, algorithmic}
\restylefloat{table}
\renewcommand{\labelitemi}{$\bullet$}
\newenvironment{theopargself}{}
\renewcommand{\qedsymbol}{}
\usepackage[english]{babel}
\RequirePackage[%
pdfstartview=FitH,%
breaklinks=true,%
bookmarks=true,%
colorlinks=true,%
linkcolor= blue,
anchorcolor=blue,%
citecolor=blue,
filecolor=blue,%
menucolor=blue,%
urlcolor=blue%
]{hyperref}
\AtBeginDocument{%
\hypersetup{%
pdfauthor={},%
colorlinks = true,%
urlcolor = blue,%
linkcolor = blue,%
citecolor = orange,%
pdftitle={Compilation: \today}%
}
}
\usepackage[mathcal]{eucal}
\usepackage{cleveref}
\crefname{assumption}{Assumption}{Assumptions}
\crefname{equation}{Eq.}{Eqs.}
\crefname{figure}{Fig.}{Figs.}
\crefname{table}{Table}{Tables}
\crefname{section}{Sec.}{Secs.}
\crefname{theorem}{Thm.}{Thms.}
\crefname{lemma}{Lemma}{Lemmas}
\crefname{corollary}{Cor.}{Cors.}
\crefname{example}{Example}{Examples}
\crefname{appendix}{Appendix}{Appendixes}
\crefname{remark}{Remark}{Remark}
\renewenvironment{proof}[1][\proofname]{{\bfseries #1.}}{\qed \\ }
\makeatother
\newcommand{\note}[1]{{\textbf{\color{red}#1}}}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{definition}[theorem]{Definition}
\newtheorem{attempt}[theorem]{Attempt}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{property}[theorem]{Property}
\newtheorem{properties}[theorem]{Properties}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{warning}[theorem]{\textcolor{red}{Warning}}
\newtheorem{example}[theorem]{Example}
\newtheorem{examples}[theorem]{Examples}
\bibliographystyle{plainnat}
\input{our_files/macros}
\begin{document}
\title{MAGMA: Inference and Prediction with Multi-Task Gaussian Processes}
\author{
\textbf{Arthur Leroy} \\ [2ex]
Université de Paris, CNRS, MAP5 UMR 8145, \\
F-75006 Paris, France \\
arthur.leroy.pro@gmail.com \\\\
\textbf{Pierre Latouche} \\ [2ex]
Université de Paris, CNRS, MAP5 UMR 8145, \\
F-75006 Paris, France \\
pierre.latouche@math.cnrs.fr \\\\
\textbf{Benjamin Guedj} \\ [2ex]
Inria, France and \\
University College London, United Kingdom \\
benjamin.guedj@inria.fr \\\\
\textbf{Servane Gey} \\ [2ex]
Université de Paris, CNRS, MAP5 UMR 8145, \\
F-75006 Paris, France \\
servane.gey@parisdescartes.fr
}
\date{\today}
\maketitle
\begin{abstract}
We investigate the problem of multiple time series forecasting, with the objective of improving multiple-step-ahead predictions.
We propose a multi-task Gaussian process framework to simultaneously model batches of individuals with a common mean function and a specific covariance structure.
This common mean is defined as a Gaussian process for which the hyper-posterior distribution is tractable.
Therefore an EM algorithm can be derived for simultaneous hyper-parameter optimisation and hyper-posterior computation.
Unlike previous approaches in the literature, we account for uncertainty and handle uncommon grids of observations while maintaining explicit formulations, by modelling the mean process in a non-parametric probabilistic framework.
We also provide predictive formulas integrating this common mean process.
This approach greatly improves the predictive performance far from observations, where information shared across individuals provides a relevant prior mean.
Our overall algorithm is called \textsc{Magma} (standing for Multi tAsk Gaussian processes with common MeAn), and is publicly available as an R package.
The quality of the mean process estimation, predictive performances, and comparisons to alternatives are assessed in various simulated scenarios and on real datasets.
\textbf{Keywords:} Multi-task learning, Gaussian process, EM algorithm, Common mean process, Functional data analysis.
\end{abstract}
\section{Introduction}
\label{sec:intro}
\input{our_files/intro}
\section{The model}
\label{sec:model}
\input{our_files/model}
\section{Inference}
\label{sec:inference}
\input{our_files/inference}
\section{Prediction}
\label{sec:prediction}
\input{our_files/prediction}
\section{Complexity analysis for training and prediction}
\label{sec:complexity}
\input{our_files/complexity}
\section{Experimental results}
\label{sec:exp}
\input{our_files/experiments}
\section{Discussion}
\label{sec:conclusion}
\input{our_files/conclusion}
\section{Proofs}
\label{sec:proofs}
\input{our_files/proof}
\subsection*{Availability of data}
The synthetic data and table of results are available at \url{https://github.com/ArthurLeroy/MAGMA/tree/master/Simulations}
\subsection*{Code availability}
The R code associated with the present work is available at \url{https://github.com/ArthurLeroy/MAGMA}
\subsection{Notation}
\label{sec:notation}
While GPs can be used for many types of data, their continuous nature makes them particularly well suited to study temporal phenomena.
Throughout, we use the term \emph{individual} as a synonym of task or batch, and adopt the notation and vocabulary of time series to remain consistent with the real-dataset application we provide in \Cref{sec:simu_real_data}, which addresses the forecasting of young swimmers' performances.
These time series are considered as pointwise observations of functions that we aim to reconstruct through the following generative model.
\newline
We are provided with functional data coming from $M$ different individuals, indexed by a set $\mathcal{I} \subset \mathbb{N}$.
For each individual $i$, we observe a set of inputs and outputs $\left\{ \left(t_i^1, y_i(t_i^1) \right), \dots, \left(t_i^{N_i}, y_i(t_i^{N_i}) \right) \right\}$, where $N_i$ is the number of data points for the $i$-th individual.
Since many objects are defined for all individuals, we
shorten our notation as follows: for any object $x$ existing
for all $i$, we denote $\acc{x_i}_i = \acc{x_1, \dots, x_M}$.
Moreover, as we work in a temporal context, the inputs
$\acc{t_i^{k}}_{i,k}$ are referred to as \textit{timestamps}.
In the specific case where all individuals are observed at the same timestamps, we say that the grid of observations is \textit{common}.
On the contrary, a grid of observations is \textit{uncommon} if the timestamps are different in number and/or location among the individuals.
Some convenient notation:
\begin{itemize}
\item $\ti = \{ t_i^1,\dots,t_i^{N_i} \}$, the set of timestamps for the $i$-th individual,
\item $\yi = y_i(\ti)$, the vector of outputs for the $i$-th individual,
\item $\Ut = \bigcup\limits_{i = 1}^M \ti$, \ the pooled set of timestamps among individuals,
\item $N = \#(\Ut)$, the total number of observed timestamps.
\end{itemize}
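As a minimal illustration of this notation, with hypothetical timestamps for $M=2$ individuals, the pooled grid can be computed as:
\begin{verbatim}
# Minimal illustration (Python) of the pooled grid of timestamps,
# with hypothetical data for M = 2 individuals.
import numpy as np

t = [np.array([1.0, 2.0, 4.0]),            # t_1
     np.array([2.0, 3.0, 4.0, 6.0])]       # t_2
t_pooled = np.union1d(t[0], t[1])          # union of all timestamps
N = t_pooled.size                          # here N = 5
\end{verbatim}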
\subsection{Model and hypotheses}
\label{sec:model_hypo}
Suppose that the functional data arise as the sum of a mean process, common to all individuals, and an individual-specific centred process.
To clarify relationships in the generative model, we illustrate our graphical model in \Cref{graph_model}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node[obs] (y) {$y_i$};
\node[latent, above=of y, xshift=-1.35cm](mu){$\mu_0$};
\node[const, above=of mu, xshift=-0.5cm] (m) {$m_0$};
\node[const, above=of mu, xshift= 0.5cm] (t0) {$\theta_0$};
\node[latent, above=of y, xshift=1.35cm] (f) {$f_i$};
\node[const, above=of f, xshift= 0cm] (ti) {$\theta_i$};
\node[latent, right= 1cm of y] (e) {$\epsilon_i$};
\node[const, right= 1cm of e] (s) {$\sigma_i^2$};
\factor[above=of mu] {mu-f} {left:$\mathcal{N}$} {m,t0} {mu} ;
\factor[above=of f] {f-f} {left:$\mathcal{N}$} {ti} {f} ;
\factor[right=of e] {e-f} {above:$\mathcal{N}$} {s} {e} ;
\edge {f,mu,e} {y} ;
\plate {} {(f)(y)(e)(ti)(s)} {$\forall i \in \mathcal{I}$} ;
\end{tikzpicture}
\caption{Graphical model of dependencies between variables in the Multi-task Gaussian Process model.}
\label{graph_model}
\end{center}
\end{figure}
Let $\mathcal{T}$ be the input space; our model is
\begin{equation}
y_i(t) = \mu_0(t) + f_i(t) + \epsilon_i(t), \ \ t \in \mathcal{T}, \ \ i = 1, \dots, M,
\end{equation}
where $\mu_0(\cdot) \sim \mathcal{GP} (m_0(\cdot), K_{\theta_0}(\cdot,\cdot))$ is the common mean process and $f_i(\cdot) \sim \mathcal{GP} (0,\Sigma_{\theta_i}(\cdot,\cdot))$ the individual-specific process.
Moreover, the error term is assumed to be $\epsilon_i(\cdot) \sim \mathcal{GP} (0,\sigma_i^2 I)$.
The following notation is used for parameters:
\begin{itemize}
\item $K_{\theta_0}(\cdot, \cdot)$, a covariance function of hyper-parameters $\theta_0$,
\item $\forall i, \ \Sigma_{\theta_i}(\cdot, \cdot)$, a covariance function of hyper-parameters $\theta_i$,
\item $\sigma_i^2 \in \mathbb{R}^{+}$, the noise term for individual $i$,
\item $\forall i, \ \Psii(\cdot,\cdot) = \Sigma_{\theta_i}(\cdot,\cdot) + \sigma_i^2 I_d$,
\item $\Theta = \{\theta_0, \acc{\theta_i}_i, \acc{\sigma_i^2}_i \}$, the set of all hyper-parameters of the model.
\end{itemize}
\noindent We also assume that
\begin{itemize}
\item $\{ f_i \}_{i}$ are independent,
\item $\{ \epsilon_i \}_{i}$ are independent,
\item $\forall i, \ \mu_0$ and $f_i$ are independent.
\end{itemize}
It follows that $\{ y_i \vert \mu_0 \}_{i = 1,\dots,M}$ are independent from one another, and for all $i \in \mathcal{I}$:
\begin{equation}
y_i(\cdot) \vert \mu_0(\cdot) \sim \mathcal{GP}(\mu_0(\cdot), \Psii(\cdot, \cdot)).
\end{equation}
Although this model is based on infinite-dimensional GPs, the inference will be conducted on a finite grid of observations.
According to the aforementioned notation, we observe $\{ (\ti, \yi) \}_{i}$, and the corresponding likelihoods are Gaussian:
\begin{equation}
\yi \vert \mu_0(\ti) \sim \mathcal{N}(\yi; \mu_0(\ti), \Psii^{\ti}),
\end{equation}
\noindent where $\ \Psii^{\ti} = \Psii (\ti, \ti) = \left[ \Psii(k, \ell) \right]_{k, \ell \in \ti}$ is a $N_i \times N_i$ covariance matrix.
Since $\ti$ might be different among individuals, we also need to evaluate $\mu_0$ on the pooled grid $\Ut$:
\begin{equation}
\mu_0(\Ut) \sim \mathcal{N} \left(\mu_0(\Ut) ; m_0(\Ut) , K_{\theta_0}^{\Ut} \right),
\end{equation}
\noindent where $K_{\theta_0}^{\Ut} = K_{\theta_0}(\Ut, \Ut) = \left[ K_{\theta_0}(k, \ell) \right]_{k,\ell \in \Ut}$ is a $N \times N$ covariance matrix.
\newline
An alternative hypothesis consists in assuming the hyper-parameters $\thetaii$ and $\sigmaii$ to be equal for all individuals.
We call this hypothesis \emph{Common HP} in \Cref{sec:exp}.
This particular case models a context where individuals represent different trajectories of the same process, whereas different hyper-parameters indicate different covariance structures and thus a more flexible model.
For the sake of generality, the remainder of the paper is written with $\theta_i$ and $\sigma_i^2$ notation, when there are no differences in the procedure.
Moreover, the model above and the subsequent algorithm may use any covariance function parametrised by a finite (usually small) set of hyper-parameters. For example, a common kernel in the GP literature is the \emph{Exponentiated Quadratic} kernel (sometimes also called the Squared Exponential or Radial Basis Function kernel). It depends on only two hyper-parameters $\theta = \acc{v, \ell}$ and is defined as:
\begin{equation}
\label{eq:kernel}
k_{\mathrm{EQ}}\left(x, x^{\prime}\right)= v^{2} \exp \left(-\frac{\left(x-x^{\prime}\right)^{2}}{2 \ell^{2}}\right).
\end{equation}
The \emph{Exponentiated Quadratic} kernel is simple and enjoys useful smoothness properties. This is the kernel we use in our implementation (see \Cref{sec:exp} for details). Note that there is a rich literature on kernel choice, their construction and properties, which is beyond the scope of the paper: we refer to \cite{RasmussenGaussianprocessesmachine2006a} or \cite{DuvenaudAutomaticmodelconstruction2014} for comprehensive studies.
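For illustration, a direct implementation of this kernel (a hypothetical Python helper, not the code of our R package) could read:
\begin{verbatim}
# Sketch of the Exponentiated Quadratic kernel of the equation
# above, evaluated between two vectors of timestamps.
import numpy as np

def k_eq(x, xp, v, ell):
    d2 = (x[:, None] - xp[None, :]) ** 2   # squared distances
    return v**2 * np.exp(-d2 / (2.0 * ell**2))

K = k_eq(np.linspace(0, 10, 5), np.linspace(0, 10, 5),
         v=1.0, ell=2.0)                   # 5 x 5 covariance matrix
\end{verbatim}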
\subsection{Posterior inference on mean process}
\label{sec:Posterior_mu0}
As mentioned above, we observe a new individual at timestamps $\tst$. GP regression consists in arbitrarily choosing a vector $\tpred$ of timestamps on which we wish to make a prediction.
Since a GP is an infinite-dimensional object, we can pick a finite-dimensional vector at any new location.
Then, we define new notation for the pooled vector of timestamps $\tpst =
\begin{bmatrix}
\tpred \\
\tst
\end{bmatrix}$, which will serve as a working grid to define the prior and posterior distributions involved in the prediction process.
One can note that, although not mandatory in theory, it is often a good idea to include the observed timestamps of the training individuals, $\Ut$, within $\tpst$, since these locations carry information from the mean process that helps the prediction.
In particular, if $\tpst = \Ut$, the computation of $\mu_0$'s hyper-posterior distribution is not necessary since $p(\mu_0(\Ut) \vert \yii)$ has previously been obtained with the EM algorithm.
However, in general, it is necessary to compute the hyper-posterior $p(\mu_0(\tpst) \vert \yii)$ at the new timestamps.
The idea remains similar to the E step aforementioned, and we obtain the following result.
\begin{theopargself}
\begin{proposition}[]
\label{prop:post_mu}
\noindent Let $\tpst$ be a vector of timestamps of size $\tilde{N}$.
The hyper-posterior distribution of $\mu_0$ remains Gaussian:
\begin{equation}
p\paren{\mu_0(\tpst) \vert \yii} = \mathcal{N} \paren{\mu_0(\tpst); \mhat(\tpst), \Khat_{*}^{p}},
\end{equation}
with:
\begin{align*}
\Khat_{*}^{p} &= \paren{\tilde{K}^{-1} + \sum\limits_{i = 1}^{M}{\tilde{\Psi}_i^{-1}}}^{-1} , \\
\mhat(\tpst) &= \Khat_{*}^{p} \paren{ \tilde{K}^{-1} m_0\paren{\tpst} + \sum\limits_{i = 1}^{M}{\tilde{\Psi}_i^{-1} \tilde{\mathbf{y}}_i } },
\end{align*}
where we used the shortening notation:
\begin{itemize}
\item $\tilde{K} = K_{\hat{\theta}_0} \paren{\tpst, \tpst}$ ($\tilde{N}\times\tilde{N}$ matrix),
\item $\tilde{\mathbf{y}}_i = \paren{\mathbb{1}_{ [t \in \ti ]} \times y_i(t)}_{t \in \tpst}$ ($\tilde{N}$-size vector),
\item $\tilde{\Psi}_i = \croch{ \mathbb{1}_{ [t, t' \in \ti]} \times \Psiihat \paren{t, t'} }_{t, t' \in \tpst}$ ($\tilde{N}\times\tilde{N}$ matrix).
\end{itemize}
\end{proposition}
\end{theopargself}
\begin{proof}
The sketch of the proof is similar to that of \Cref{prop:E_step} for the E step. The only technicality consists in dealing carefully with the dimensions of the vectors and matrices involved and, whenever relevant, in defining augmented versions of $\yi$ and $\Psiihat$ with 0 elements at the positions of the timestamps unobserved for the $i$-th individual.
Note that if we pick a vector $\tpst$ including only some of the timestamps from $\ti$, information coming from $y_i$ at the remaining timestamps is ignored. We defer details to \Cref{proof:E_step}.
\qed
\end{proof}
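To make the role of these augmented objects concrete, the following Python sketch (illustrative names, not the \textsc{Magma}\xspace package code) computes the hyper-posterior moments of \Cref{prop:post_mu} by embedding each individual's precision matrix at the positions of its own timestamps; it assumes that each $\ti$ is included in the working grid:
\begin{verbatim}
# Sketch of the hyper-posterior of mu_0 on a working grid, with each
# individual's precision embedded at the positions of its own
# timestamps (the "augmented" objects of the proof). Illustrative
# code; assumes t_i is a subset of the sorted grid t_grid.
import numpy as np

def hyper_posterior(t_grid, m0, K, data):
    """data: list of (t_i, y_i, Psi_i) triplets per individual."""
    N = t_grid.size
    prec = np.linalg.inv(K)                    # K_tilde^{-1}
    rhs = prec @ m0
    for t_i, y_i, Psi_i in data:
        idx = np.searchsorted(t_grid, t_i)     # positions in the grid
        P = np.zeros((N, N))                   # padded Psi_i^{-1}
        P[np.ix_(idx, idx)] = np.linalg.inv(Psi_i)
        y_pad = np.zeros(N); y_pad[idx] = y_i  # padded y_i
        prec += P
        rhs += P @ y_pad
    K_hat = np.linalg.inv(prec)
    return K_hat @ rhs, K_hat                  # m_hat, K_hat
\end{verbatim}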
\subsection{Computing the prior distribution for a new individual}
\label{sec:prior_new_indiv}
According to our generative model, given the mean process, any new individual $*$ is modelled as:
\begin{equation}
y_*(\cdot) \vert \mu_0(\cdot) \sim \mathcal{GP} \paren{ \mu_0(\cdot), \Psi_{\theta_*, \sigma_*^2}( \cdot, \cdot) }.
\end{equation}
Therefore, for any finite-dimensional vector of timestamps, and in particular for $\tpst$, $p(y_*(\tpst) \vert \mu_0(\tpst))$ is a multivariate Gaussian vector. Moreover, from this distribution and $\mu_0$'s hyper-posterior, we can derive the multi-task prior distribution over $y_*(\tpst)$.
\begin{theopargself}
\begin{proposition}[]
\label{prop:integrate_mu}
For a set of timestamps $\tpst$, the multi-task prior distribution of $y_*$ is given by
\begin{equation}
\label{eq:prior}
p(y_*(\tpst) \vert \yii) = \mathcal{N} \paren{ y_*(\tpst); \mhat(\tpst), \Khat_*^p + \Psi_{\theta_*, \sigma_*^2}^{\tpst}}.
\end{equation}
\end{proposition}
\end{theopargself}
\begin{proof}
Let $\mathcal{P}$ denote the multi-task prior distribution for the new individual at timestamps $\tpst$.
To compute this prior, we need to integrate $p(y_* \vert \mu_0, \yii)$ over the mean process $\mu_0$, while the multi-task aspect remains through the conditioning on $\yii$.
We omit timestamps in the notation, writing $\mu_0$ and $y_*$ instead of $\mu_0(\tpst)$ and $y_*(\tpst)$, respectively.
We first use the assumption that $\{ y_i \vert \mu_0 \}_{i \in \{ 1,\dots,M \}} \perp \!\!\! \perp y_* \vert \mu_0$, \emph{i.e.}, the individuals are independent conditionally on $\mu_0$. Then, one can notice that the two distributions involved within the integral are Gaussian, which leads to an explicit Gaussian target distribution after integration.
\begin{align*}
\mathcal{P}
&= p(y_* \vert \yii) \\
&= \int\limits p \paren{y_*, \mu_0 \vert \yii} \mathop{}\!\mathrm{d} \mu_0 \\
&= \int\limits p \paren{y_*\vert \mu_0, \yii} p \paren{\mu_0 \vert \yii} \mathop{}\!\mathrm{d} \mu_0 \\
&= \int\limits \underbrace{p \paren{y_*\vert \mu_0)}}_{\mathcal{N} \paren{y_*; \mu_0, \Psi_{\theta_*, \sigma_*^2}^{\tpst}} } \underbrace{p(\mu_0 \vert \yii)}_{\mathcal{N} \paren{\mu_0; \mhat, \Khat_*^p} } \mathop{}\!\mathrm{d} \mu_0.
\end{align*}
This convolution of two Gaussians remains Gaussian \citep[][Chapter 2.3.3]{BishopPatternrecognitionmachine2006}.
For any random variable $X \in \Omega$, and $A_X$ depending on $X$, let $\mathbb{E}_{A_X} \croch{X} = \int_{\Omega} x \ p\paren{ A_X } \mathop{}\!\mathrm{d} x $.
The mean parameter is then given by
\begin{align*}
\mathbb{E}_{y_*\vert \yii}\croch{y_*}
&= \int y_* \ p \paren{ y_* \vert \yii } \mathop{}\!\mathrm{d} y_* \\
&= \int y_* \int p \paren{y_*\vert \mu_0} p(\mu_0 \vert \yii) \mathop{}\!\mathrm{d} \mu_0 \mathop{}\!\mathrm{d} y_* \\
&= \int \paren{ \int y_* p \paren{y_*\vert \mu_0} \mathop{}\!\mathrm{d} y_*} p(\mu_0 \vert \yii) \mathop{}\!\mathrm{d} \mu_0 \\
&= \int \mathbb{E}_{y_* \vert \mu_0} \croch{ y_* } p(\mu_0 \vert \yii) \mathop{}\!\mathrm{d} \mu_0 \\
&= \mathbb{E}_{\mu_0 \vert \yii} \croch{ \mathbb{E}_{y_* \vert \mu_0} \croch{ y_* } } \\
&= \mathbb{E}_{\mu_0 \vert \yii} \croch{ \mu_0 } \\
&= \mhat.
\end{align*}
Following the same idea, the second-order moment is given by
\begin{align*}
\mathbb{E}_{y_*\vert \yii}\croch{y_*^2}
&= \mathbb{E}_{\mu_0 \vert \yii} \croch{ \mathbb{E}_{y_* \vert \mu_0} \croch{ y_*^2 } } \\
&= \mathbb{E}_{\mu_0 \vert \yii} \croch{ \mathbb{V}_{y_* \vert \mu_0} \croch{y_*} + \mathbb{E}_{y_* \vert \mu_0} \croch{ y_* }^2 } \\
&= \Psi_{\theta_*, \sigma_*^2} + \mathbb{E}_{\mu_0 \vert \yii} \croch{ \mu_0^2 } \\
&= \Psi_{\theta_*, \sigma_*^2} + \mathbb{V}_{\mu_0 \vert \yii} \croch{\mu_0} + \mathbb{E}_{\mu_0 \vert \yii} \croch{ \mu_0 }^2 \\
&= \Psi_{\theta_*, \sigma_*^2} + \Khat + \mhat^2,
\end{align*}
\noindent hence
\begin{align*}
\mathbb{V}_{y_*\vert \yii}\croch{y_*}
&= \mathbb{E}_{y_*\vert \yii}\croch{y_*^2} - \mathbb{E}_{y_*\vert \yii}\croch{y_*}^2 \\
&= \Psi_{\theta_*, \sigma_*^2} + \Khat + \mhat^2 - \mhat^2 \\
&= \Psi_{\theta_*, \sigma_*^2} + \Khat.
\end{align*}
\qed
\end{proof}
Note that the process $y_*(\cdot) \vert \yii$ is not a GP, although its finite-dimensional evaluation above remains Gaussian.
The covariance structure cannot be expressed as a kernel that could be directly evaluated on any vector: the process is known as a \emph{degenerate GP}.
In practice, however, this is of little consequence, as an arbitrary vector of timestamps $\tau$ can still be chosen; we then compute the hyper-posterior $p(\mu_0(\tau) \vert \yii)$, which yields the Gaussian distribution $p(y_*(\tau) \vert \yii)$ as above.
For the sake of simplicity, we now rename the covariance matrix of the prior distribution as
\begin{equation}
\Khat_*^p + \Psi_{\theta_*, \sigma_*^2}^{\tpst} = \Gamma_*^p =
\begin{pmatrix}
\Gamma_{pp} & \Gamma_{p*} \\
\Gamma_{*p} & \Gamma_{**}
\end{pmatrix} ,
\end{equation}
\noindent where $\Gamma_{x, x'} = \Gamma(\Ut_x, \Ut_{x'}) = \Khat_x^{x'} + \Psi_{\theta_*, \sigma_*^2}(\Ut_x, \Ut_{x'})$, for any $\Ut_x, \Ut_{x'} \subset \mathcal{T}$.
\subsection{Learning hyper-parameters of a new individual}
\label{sec:Learning_new_hp}
When we collect data points for a new individual, as in the single-task GP setting, we need to learn the hyper-parameters of its covariance function before making predictions. A salient fact in our multi-task approach is that we include this step in the prediction process, for the two following reasons.
First, the model is already trained for individuals $i = 1,\dots, M$, and this training is general and independent of the future individual $*$ or of the choice of prediction timestamps. Since learning these new hyper-parameters requires knowledge of $\mu_0(\tpst)$, and thus of the prediction timestamps, we cannot compute them beforehand.
Secondly, learning these hyper-parameters with the empirical Bayes approach only requires the maximisation of a Gaussian likelihood, which is negligible in computing time compared to the EM algorithm.
As for single-task GP, we have the following estimates for hyper-parameters:
\begin{align*}
\hat{\Theta}_*
&= \argmax_{\Theta_*} p(y_*(\tst) \vert \yii, \Theta_*) \\
&= \argmax_{\Theta_*} \mathcal{N}\paren{ y_*(\tst); \mhat(\tst), \Gamma_{**}^{\Theta_*} }.
\end{align*}
Note that this step is optional depending on the model: in the common hyper-parameters model (i.e. $\ (\theta, \sigma^2) = (\theta_i, \sigma_i^2)$), any new individual shares the same hyper-parameters, and we already have $\hat{\Theta}_* = (\hat{\theta}_*, \hat{\sigma}_*^2) = (\hat{\theta}, \hat{\sigma}^2)$ from the EM algorithm.
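A hedged Python sketch of this empirical-Bayes step, assuming the \emph{Exponentiated Quadratic} kernel and illustrative variable names, is:
\begin{verbatim}
# Sketch of the hyper-parameter fit for a new individual: maximise
# the Gaussian likelihood N(y_*; m_hat(t_*), Gamma_**) over
# (v, ell, sigma^2), parametrised on the log scale for positivity.
# Illustrative code, assuming the EQ kernel.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_log_lik(log_params, t, y, m_hat, K_hat):
    v, ell, sig2 = np.exp(log_params)
    d2 = (t[:, None] - t[None, :]) ** 2
    Psi = v**2 * np.exp(-d2 / (2*ell**2)) + sig2 * np.eye(t.size)
    Gamma = K_hat + Psi                    # Gamma_** of the prior
    return -multivariate_normal.logpdf(y, mean=m_hat, cov=Gamma)

# res = minimize(neg_log_lik, x0=np.zeros(3),
#                args=(t_star, y_star, m_hat_star, K_hat_star))
# theta_star_hat = np.exp(res.x)
\end{verbatim}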
\subsection{Prediction}
\label{sec:GP_pred}
We can write the prior distribution, separating observed and prediction timestamps, as:
\begin{align}
p( y_*(\tpst) \vert \yii)
&= p( y_*(\tpred), y_*(\tst) \vert \yii) \\
&= \mathcal{N} \paren{y_*(\tpst); \mhat(\tpst), \Gamma_*^p } \\
&= \mathcal{N} \paren{
\begin{bmatrix}
y_*(\tpred) \\
y_*(\tst)
\end{bmatrix};
\begin{bmatrix}
\mhat(\tpred) \\
\mhat(\tst)
\end{bmatrix},
\begin{pmatrix}
\Gamma_{pp} & \Gamma_{p*} \\
\Gamma_{*p} & \Gamma_{**}
\end{pmatrix} }.
\end{align}
\noindent The conditional distribution remains Gaussian \citep[][]{BishopPatternrecognitionmachine2006}, and the predictive distribution is given by
\begin{equation}
p( y_*(\tpred) \vert y_*(\tst) , \yii) = \mathcal{N} \paren{ y_*(\tpred); \hat{\mu}_0^p, \hat{\Gamma}^p },
\end{equation}
\noindent where:
\begin{itemize}
\item $\hat{\mu}_0^p = \mhat(\tpred) + \Gamma_{p*} \Gamma_{**}^{-1} \paren{ y_*(\tst) - \mhat(\tst)},$
\item $\hat{\Gamma}^p = \Gamma_{pp} - \Gamma_{p*} \Gamma_{**}^{-1} \Gamma_{*p}.$
\end{itemize}
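These formulas are standard Gaussian conditioning on the partitioned covariance $\Gamma_*^p$; a minimal Python sketch with illustrative names reads:
\begin{verbatim}
# Sketch of the predictive step: Gaussian conditioning on the
# partitioned prior covariance Gamma (illustrative names).
import numpy as np

def predict(m_pred, m_obs, y_obs, G_pp, G_ps, G_ss):
    """Mean and covariance of y_*(t_pred) | y_*(t_*)."""
    A = G_ps @ np.linalg.inv(G_ss)
    mu = m_pred + A @ (y_obs - m_obs)      # predictive mean
    Sigma = G_pp - A @ G_ps.T              # predictive covariance
    return mu, Sigma
\end{verbatim}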
\subsection{Learning: An EM algorithm}
\label{sec:EM_algo}
Several approaches for learning the hyper-parameters of Gaussian processes have been proposed in the literature; we refer to \cite{RasmussenGaussianprocessesmachine2006a} for a comprehensive study.
One classical approach, called \emph{empirical Bayes} \citep{CasellaIntroductionEmpiricalBayes1985}, is based on the maximisation of an explicit likelihood to estimate hyper-parameters.
This procedure avoids sampling from intractable distributions, which usually entails additional computational cost and complicates practical use for moderate to large sample sizes.
However, since the likelihood of the model depends on $\mu_0$, we cannot maximise it directly.
Therefore, we propose an EM algorithm (see the pseudocode in \Cref{alg:algo_EM}) to learn the hyper-parameters $\Theta$.
The procedure alternately computes the hyper-posterior distribution $p(\mu_0 \vert (\yi)_i, \hat{\Theta})$ with the current hyper-parameters, and then optimises $\Theta$ according to this hyper-posterior distribution.
This EM algorithm converges to local maxima \citep{DempsterMaximumLikelihoodIncomplete1977}, typically in a handful of iterations.
\paragraph{E step}
~\par
For the sake of simplicity, we assume in this section that for all $i,j, \ \ti = \mathbf{t}_j = \Ut$, i.e.\ the individuals are observed on a common grid of timestamps. The E step then consists in computing the hyper-posterior distribution of $\mu_0(\Ut)$.
\begin{theopargself}
\begin{proposition}[]
\label{prop:E_step}
Assume the hyper-parameters $\hat{\Theta}$ known from initialisation or estimated from a previous M step. The hyper-posterior distribution of $\mu_0$ remains Gaussian:
\begin{equation}
p\paren{\mu_0(\Ut) \vert \yii, \hat{\Theta}} = \mathcal{N} \paren{\mu_0(\Ut) ; \mhat(\Ut), \Khat^{\Ut}},
\end{equation}
with
\begin{itemize}
\item $\Khat^{\Ut} = \paren{ { \Kthetahat^{\Ut}}^{-1} + \sumi {\Psiihat^{\Ut}}^{-1}}^{-1},$
\item $\mhat(\Ut) = \Khat^{\Ut} \paren{ { \Kthetahat^{\Ut}}^{-1} m_0\paren{\Ut} + \sumi { \Psiihat^{\Ut}}^{-1} \yi }.$
\end{itemize}
\end{proposition}
\end{theopargself}
\begin{proof}
We omit specifying timestamps in what follows since each process is evaluated on $\Ut$.
\begin{align*}
p \paren{ \mu_0 \vert \yii, \hat{\Theta} }
& \propto p \paren{ \yii \vert \mu_0 , \hat{\Theta}} p \paren{ \mu_0 \vert \hat{\Theta} } \\
& \propto \left\{ \displaystyle \prod_{i = 1}^{M} p \paren{ \yi \vert \mu_0 , \hat{\theta}_i, \hat{\sigma}_i^2} \right\} p \paren{ \mu_0 \vert \hat{\theta}_0 } \\
& \propto \left\{ \displaystyle \prod_{i = 1}^{M} \mathcal{N} \paren{\yi; \mu_0, \Psiihat)} \right\}\mathcal{N} \paren{ \mu_0; m_0, \Kthetahat }.
\end{align*}
The term $\mathcal{L}_1 = - 2 \log p ( \mu_0 \vert \yii, \hat{\Theta})$ may then be written as
\begin{align*}
\mathcal{L}_1
&= - 2 \log p ( \mu_0 \vert \yii, \hat{\Theta}) \\
&= \sumi \paren{ y_i - \mu_0 }^{\intercal} \Psiihat^{-1} \paren{ y_i - \mu_0 } \\
& \ \ \ + \paren{ \mu_0 - m_0 }^{\intercal} \Kthetahat^{-1} \paren{ \mu_0 - m_0 } + C_1 \\
&= \sumi \mu_0^{\intercal} \Psiihat^{-1} \mu_0 - 2 \mu_0^{\intercal} \Psiihat^{-1} \yi \\
& \ \ \ + \mu_0^{\intercal} \Kthetahat^{-1} \mu_0 - 2 \mu_0^{\intercal} \Kthetahat^{-1} m_0 + C_2 \\
&= \mu_0^{\intercal} \paren{ \Kthetahat^{-1} + \sumi \Psiihat^{-1} } \mu_0 \\
& \ \ \ - 2 \mu_0^{\intercal} \paren{ \Kthetahat^{-1} m_0 + \sumi \Psiihat^{-1} \yi} + C_2.
\end{align*}
Identifying terms in the quadratic form with the Gaussian likelihood, we get the desired result.
\qed
\end{proof}
Let us stress here that the above result assumes common timestamps among individuals, which is a simplified setting.
We provide a generalisation of this proposition in \Cref{sec:prediction}: \Cref{prop:post_mu} holds with uncommon grids of timestamps $\ti$.
\newline
The maximisation step depends on the assumptions on the generative model, resulting in two versions for the EM algorithm (the E step is common to both, the branching point is here).
\paragraph{M step: different hyper-parameters}
~\par
Assuming each individual has its own set of hyper-parameters $\{ \theta_i, \sigma_i^2 \}$, the M step is given by the following.
\begin{theopargself}
\begin{proposition}[]
\label{prop:M_step_diff}
Assume $p(\mu_0 \vert \yii) \\ = \mathcal{N} \paren{\mu_0(\Ut); \mhat(\Ut), \Khat^{\Ut} }$ given by a previous E step.
For a set of hyper-parameters $\Theta = \{ \theta_0, \thetaii, \sigmaii \}$, optimal values are given by
\begin{align*}
\hat{\Theta}
&= \argmax_{\Theta} \mathbb{E}_{\mu_0 \vert \yii} \croch{ \log p(\yii, \mu_0(\Ut) \vert \Theta) },
\end{align*}
\noindent inducing $M +1$ independent maximisation problems:
\begin{align*}
\hat{\theta}_0 &= \argmax\limits_{\theta_0} \ \mathcal{L}^{\Ut} \paren{\mhat(\Ut); m_0(\Ut), K_{\theta_0}^{\Ut} } , \\
( \hat{\theta}_i, \hat{\sigma}_i^2 ) &= \argmax\limits_{\theta_i, \sigma_i^2} \ \mathcal{L}^{\ti} ( \yi; \mhat(\Ut), \Psii^{\ti} ), \ \forall i,
\end{align*}
\noindent where
\begin{equation*}
\mathcal{L}^{\Ut} \paren{ \mathbf{x}; \mathbf{m}, S } = \log \mathcal{N} \paren{\mathbf{x}; \mathbf{m}, S } - \dfrac{1}{2} \mathrm{Tr} \paren{ \Khat^{\Ut} {S}^{-1}}.
\end{equation*}
\end{proposition}
\end{theopargself}
\begin{proof}
One simply has to distribute the conditional expectation in order to obtain the right likelihood to maximise, and then notice that the function can be written as a sum of $M+1$ terms that are independent with respect to the hyper-parameters.
Moreover, by rearranging, one can observe that each independent term is the sum of a Gaussian log-likelihood and a correction trace term. See \Cref{proof:Proof_M_step} for details.
\end{proof}
\paragraph{M step: common hyper-parameters}
~\par
Alternatively, assuming all individuals share the same set of hyper-parameters $\{ \theta, \sigma^2 \}$, the M step is given by the following.
\begin{theopargself}
\begin{proposition}[]
\label{prop:M_step_common}
Assume $p(\mu_0 \vert \yii) \\ = \mathcal{N} \paren{\mu_0(\Ut); \mhat(\Ut), \Khat^{\Ut} }$ given by a previous E step.
For a set of hyper-parameters $\Theta = \{ \theta_0, \theta, \sigma^2 \}$, optimal values are given by
\begin{equation*}
\hat{\Theta} = \argmax_{\Theta} \mathbb{E}_{\mu_0 \vert \yii} \croch{ \log p(\yii, \mu_0(\Ut) \vert \Theta) },
\end{equation*}
\noindent inducing two independent maximisation problems:
\begin{align*}
\hat{\theta}_0 &= \argmax\limits_{\theta_0} \ \mathcal{L}^{\Ut} \paren{\mhat(\Ut); m_0(\Ut), K_{\theta_0}^{\Ut} } , \\
( \hat{\theta}, \hat{\sigma}^2 ) &= \argmax\limits_{\theta, \sigma^2} \ \mathcal{L}_M ( \theta, \sigma^2 ),
\end{align*}
\noindent where
\begin{equation*}
\mathcal{L}_M ( \theta, \sigma^2 ) = \sumi \mathcal{L}^{\ti} ( \yi; \mhat(\Ut), \Psi_{\theta, \sigma^2}^{\ti} ).
\end{equation*}
\end{proposition}
\end{theopargself}
\begin{proof}
We use the same strategy as for \Cref{prop:M_step_diff}, see \Cref{proof:Proof_M_step} for details.
\end{proof}
In both cases, explicit gradients associated with the likelihoods to maximise are available, facilitating the optimisation with gradient-based methods.
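As an illustration of the $\theta_0$ update, the following Python sketch maximises $\mathcal{L}^{\Ut}$ with a generic optimiser (illustrative code assuming the \emph{Exponentiated Quadratic} kernel; finite-difference gradients are used for brevity, whereas the explicit gradients mentioned above would be preferable in practice):
\begin{verbatim}
# Sketch of the theta_0 update of the M step: maximise
# log N(m_hat; m_0, K_theta0) - 0.5*Tr(K_hat @ K_theta0^{-1}).
# Illustrative code; EQ kernel and log-scale parametrisation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_L(log_theta, t, m_hat, K_hat, m0):
    v, ell = np.exp(log_theta)
    d2 = (t[:, None] - t[None, :]) ** 2
    K = v**2 * np.exp(-d2 / (2*ell**2)) + 1e-6 * np.eye(t.size)
    val = multivariate_normal.logpdf(m_hat, mean=m0, cov=K)
    val -= 0.5 * np.trace(K_hat @ np.linalg.inv(K))
    return -val

# theta0_hat = np.exp(minimize(neg_L, x0=np.zeros(2),
#                     args=(t_pooled, m_hat, K_hat, m0)).x)
\end{verbatim}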
\subsection{Initialisation}
\label{sec:initialisation}
To implement the EM algorithm described above, several constants must be (appropriately) initialised:
\begin{itemize}
\item $m_0(\cdot)$, the mean parameter from the hyper-prior distribution of the mean process $\mu_0(\cdot)$.
A classical choice for GPs is to set its value to a constant, typically $0$ in the absence of external knowledge.
Notice that, in our multi-task framework, the influence of $m_0(\cdot)$ in hyper-posterior computation decreases quickly as $M$ grows.
\item Initial values for kernel parameters $\theta_0$ and $\thetaii$.
Those strongly depend on the chosen kernel and its properties.
We advise initialising $\theta_0$ and $\thetaii$ with close values, as too large a difference might induce a nearly singular covariance matrix and result in numerical instability.
In such a pathological regime, the influence of a specific individual tends to overtake the others in the computation of $\mu_0$'s hyper-posterior distribution.
\item Initial values for the variance of the error terms $\sigmaii$.
This choice mostly depends on the context and properties of the dataset.
We suggest avoiding initial values that differ by more than an order of magnitude from the variability of the data.
In particular, too high a value might result in a model that mostly captures noise.
\end{itemize}
As a final note, let us stress that the EM algorithm depends on the
initialisation and is only
guaranteed to converge to a local maximum of the likelihood
function \citep{krishnan1997algorithm}. Several strategies have been considered in the literature to
tackle this issue, such as simulated annealing and the use of multiple
initialisations \citep{Biernacki2001strategies}. In this paper, we choose the latter option.
\subsection{Pseudocode}
\label{sec:pseudo_code}
We wrap up this section with the pseudocode of the EM component of our complete algorithm, which we call \textsc{Magma}\xspace (standing for Multi tAsk Gaussian processes with common MeAn). The corresponding code is available at \url{https://github.com/ArthurLeroy/MAGMA}.
\begin{algorithm}
\caption{\textsc{Magma}\xspace: EM component}
\label{alg:algo_EM}
\begin{algorithmic}
\STATE Initialise $m_0$ and $\Theta = \acc{ \theta_0 , \thetaii , \sigmaii}$.
\WHILE{not converged}
\STATE E step: Compute the hyper-posterior distribution
\STATE \hspace{1.1cm} $p(\mu_0 \vert \yii, \Theta) = \mathcal{N}(\mhat, \Khat).$
\newline
\STATE M step: Estimate hyper-parameter by maximising
\STATE \hspace{1.1cm} $\hat{\Theta} = \argmax\limits_{\Theta} \mathbb{E}_{\mu_0 \vert \yii} \croch{ \log p(\mu_0, \yii \vert \Theta) } .$
\ENDWHILE
\RETURN $\hat{\Theta}$, $\mhat$, $\Khat$.
\end{algorithmic}
\end{algorithm}
\subsection{Discussion of EM algorithms and alternatives}
\label{sec:related_work}
Let us stress that even though we focus on prediction in this paper, the output of the EM algorithm already provides results for related FDA problems.
The generative model in \cite{YangSmoothingMeanCovariance2016} describes a Bayesian framework, resembling ours, for smoothing multiple curves simultaneously.
However, modelling the variance structure with an inverse-Wishart process forces the use of an MCMC algorithm for inference, or the introduction of a more tractable approximation in \cite{YangEfficientBayesianhierarchical2017a}.
One can think of learning through \textsc{Magma}\xspace and then applying a single-task GP regression to each individual as an \textit{empirical Bayes} counterpart to their approach.
Meanwhile, $\mu_0$'s hyper-posterior distribution also provides the probabilistic estimation of a mean curve from a set of functional data.
The closest method to our approach can be found in \cite{ShiGaussianProcessFunctional2007} and the subsequent book \cite{ShiGaussianProcessRegression2011}, although in several respects the authors dealt with more general features, such as multidimensional or non-functional inputs.
The authors also work in the context of a multi-task GP model, and one can retrieve the idea of defining a mean function $\mu_0$ to overcome the weaknesses of classic GPs in making predictions far from observed data.
Since their model uses B-splines to estimate this mean function, thanks to information from multiple individuals, this method only works if all individuals share the same grid of observation, and does not account for uncertainty over $\mu_0$.
\subsection{Illustration on a simple example}
\label{sec:simu_illustration}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{images/Figure_1.png}
\caption{Prediction curves (blue) for a new individual with associated 95\% credible intervals (grey) for GP regression (left) and \textsc{Magma}\xspace (right). The dashed line represents the mean function from the mean process's hyper-posterior $p(\mu_0 \vert \yii)$. Observed data points are in black, and testing data points are in red. The coloured points in the background are the observations from the training dataset, each colour corresponding to a different individual.}
\label{fig:example}
\end{center}
\end{figure*}
To illustrate the multi-task approach of \textsc{Magma}\xspace, \Cref{fig:example} displays a comparison between single-task GP regression and \textsc{Magma}\xspace on a simple example, from a dataset simulated according to the scheme above.
Given the observed data (in black), values on a fine grid of unobserved timestamps are predicted and compared, in particular, with the true test values (in red).
As expected, GP regression provides a good fit close to the data points and then dives rapidly towards the prior mean 0, with increasing uncertainty.
Conversely, although the initialisation for the prior mean was also $0$ in \textsc{Magma}\xspace, the hyper-posterior distribution of $\mu_0$ (dashed line) is estimated thanks to all individuals in the training dataset.
This process acts as an informed prior helping GP prediction for the new individual, even far from its own observations.
More precisely, three phases can be distinguished according to the level of information coming from the data: in the first, close to the observed data ($t \in \croch{1,7}$), the two processes behave similarly, except for a slight increase in the variance for \textsc{Magma}\xspace, which is logical since the prediction also takes uncertainty over $\mu_0$ into account (see \Cref{eq:prior});
in the second, on intervals of unobserved timestamps containing data points from the training dataset ($t \in \croch{0,1} \cup \croch{7,10}$), the prediction is guided by the information coming from other individuals through $\mu_0$.
In this context, the mean trajectory remains coherent and the uncertainty increases only slightly.
In the third, where no observations are available from either the new individual or the training dataset ($t \in \croch{10,12}$), the prediction behaves as expected, slowly drifting back to the prior mean 0 with rapidly increasing variance.
Overall, the multi-task framework provides reliable probabilistic predictions on a wider range of timestamps, potentially outside of the usual scope for GPs.
\subsection{Performance comparison on simulated datasets}
\label{sec:simu_comparison}
\begin{table*}
\begin{center}
\caption{Average MSE (sd) and average $CI_{95}$ coverage (sd) on 100 runs for GP, GPFDA and \textsc{Magma}\xspace. ($\star$ : 99.6 (2.8), the measure of uncertainty from the GPFDA package is not a genuine credible interval)}
\label{tab:compare_algo}
\begin{tabular}{c|cc|cc|}
\cline{2-5}
& \multicolumn{2}{c|}{Prediction} & \multicolumn{2}{c|}{Estimation $\mu_0$} \\
& MSE & $CI_{95}$ & MSE & $CI_{95}$ \\ \hline
\multicolumn{1}{|c|}{\textsc{Magma}\xspace} & \textbf{18.7 (31.4)} & \textbf{93.8 (13.5)} & \textbf{1.3 (2)} & \textbf{94.3 (11.3)} \\
\multicolumn{1}{|c|}{GPFDA} & 31.8 (49.4) & 90.4 (18.1) & 2.4 (3.6) & $\star$ \\
\multicolumn{1}{|c|}{GP} & 87.5 (151.9) & 74.0 (32.7) & \cellcolor[HTML]{9B9B9B} & \cellcolor[HTML]{9B9B9B} \\ \hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width= \textwidth]{images/Figure_2.png}
\caption{MSE with respect to the number $M$ of training individuals (100 runs in each case). \emph{Left}: prediction error on 10 testing points. \emph{Right}: estimation error of the true mean process $\mu_0$.}
\label{fig:varying_M}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width= .5\textwidth]{images/Figure_3.png}
\caption{MSE prediction error on the 10 last testing points with respect to the increasing number N of observed timestamps, among the first 20 points (100 runs in each case).}
\label{fig:varying_N}
\end{center}
\end{figure}
We compare the performance of \textsc{Magma}\xspace with that of alternative methods in several situations and on different datasets.
In the first place, the classical GP regression (GP), GPFDA and \textsc{Magma}\xspace are compared through their performance in prediction and estimation of the true mean process $\mu_0$.
In the prediction context, the performances are evaluated according to the following indicators:
\begin{itemize}
\item the mean squared error (MSE), which compares the predicted values to the true test values at the last 10 timestamps:
$$ \dfrac{1}{10} \sum\limits_{k = 21}^{30} \paren{ y_i^{pred} (t_i^k) - y_i^{true} (t_i^k) }^2 , $$
\item the ratio of $CI_{95}$ coverage, i.e. the percentage of unobserved data points effectively lying within the 95\% credible interval defined from the predictive posterior distribution $p(y_*(\tpred) \vert y_*(\tst), \yii)$:
$$ 100 \times \dfrac{1}{10} \sum\limits_{k = 21}^{30} \mathbb{1}_{ \{ y_i^{true}(t_i^k) \in CI_{95} \} }.$$
\end{itemize}
The ratio of $CI_{95}$ coverage approximates the reliability of the predictive variance and should be as close as possible to the expected 95\% value.
Other values would indicate a tendency to underestimate or overestimate the uncertainty.
Let us recall that GPFDA uses B-splines to estimate the mean process and does not account for its uncertainty, contrary to a probabilistic framework such as \textsc{Magma}\xspace.
However, a measure of uncertainty based on an empirical variance estimated from training curves is proposed \citep[see][Section 3.2.1]{ShiGaussianProcessFunction2014a}.
In practice, this measure consistently overestimates the true variance, and the $CI_{95}$ coverage is generally equal to or close to 100\%.
\newline
In the estimation context, performance is evaluated through another MSE, which compares the estimations to the true values of $\mu_0$ at all timestamps:
$$ \dfrac{1}{M} \sumi \dfrac{1}{N} \sum\limits_{k = 1}^{N} \paren{ \mu_0^{pred} (t_i^k) - \mu_0^{true} (t_i^k) }^2 .$$
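In code, these indicators reduce to a few lines; the sketch below (NumPy-based, with our own function names) assumes Gaussian predictive distributions summarised by their pointwise means and variances at the test timestamps.
\begin{verbatim}
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between predicted and true test values."""
    return np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)

def ci95_coverage(y_true, y_pred, var_pred):
    """Percentage of true test values lying inside the pointwise
    95% credible interval mean +/- 1.96 * sd (Gaussian predictive)."""
    half_width = 1.96 * np.sqrt(np.asarray(var_pred))
    inside = np.abs(np.asarray(y_true) - np.asarray(y_pred)) <= half_width
    return 100.0 * np.mean(inside)
\end{verbatim}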
\Cref{tab:compare_algo} presents the results obtained over 100 datasets, where the model is trained on $M = 20$ individuals, each of them observed on $N = 30$ common timestamps.
As expected, both multi-task methods lead to better results than GP.
However, \textsc{Magma}\xspace outperforms GPFDA, both in estimation of $\mu_0$ and in prediction performance.
In terms of error as well as in uncertainty quantification, \textsc{Magma}\xspace provides more accurate results, in particular with a $CI_{95}$ coverage close to the 95\% expected value.
Each method presents a rather high standard deviation for the prediction MSE, which is due to a few datasets with particularly difficult values to predict, although most cases lead to small errors.
This behaviour is expected, since forecasting 10 timestamps ahead can sometimes be difficult.
It can also be noticed in \Cref{fig:varying_M} that \textsc{Magma}\xspace consistently provides lower errors as well as less pathological behaviour, such as may sometimes occur with the B-spline modelling used in GPFDA.
To highlight the effect of the number of individuals $M$ on the performance, \Cref{fig:varying_M} provides the same 100 runs trial as previously, for different values of $M$.
The boxplots exhibit, for each method, the behaviour of the prediction and estimation MSE as information is added in the training dataset.
Let us mention the absence of discernible changes as soon as $M > 200$.
As expected, we notice on the right panel that adding information from new individuals improves the estimation of $\mu_0$, leading to very low errors for high values of $M$, in particular for \textsc{Magma}\xspace.
Meanwhile, the left panel exhibits prediction performance that remains reasonably unchanged with respect to the values of $M$, except for some random fluctuations.
This property is expected for GP regression, since no external information is used from the training dataset in this context.
For both multi-task algorithms though, the estimation of $\mu_0$ improves the prediction, with errors one order of magnitude below the typical GP errors, even with only a few training individuals.
Furthermore, since a new individual behaves independently through $f_*$, it is natural for a 10-point-ahead forecast to present intrinsic variations, despite an adequate estimation of the shared mean process.
\newline
To illustrate the advantage of multi-task methods, even for $M = 20$, we display on \Cref{fig:varying_N} the evolution of MSE according to the number of timestamps $N$ that are assumed to be observed for the new individual on which we make predictions.
These predictions are still computed on the last 10 timestamps, although in this experiment we only observe the first 5, 10, 15, or 20 timestamps, in order to vary the amount of information and the distance from training observations to targets.
We observe in \Cref{fig:varying_N} that, as expected in a GP framework, the closer the observations are to the targets, the better the results.
However, for multi-task approaches, and in particular for \textsc{Magma}\xspace, the prediction remains consistently adequate even with few observations.
Once more, sharing information across individuals significantly helps the prediction, even for small values of $M$ or few observed data.
\subsection{\textsc{Magma}\xspace specific settings}
\label{sec:simu_settings}
As we previously discussed, different settings are available for \textsc{Magma}\xspace according to the nature of data and the model hypotheses.
First, the \emph{Common grid} setting corresponds to cases where all individuals share the same timestamps, whereas \emph{Uncommon grid} is used otherwise.
Moreover, \textsc{Magma}\xspace makes it possible to consider either identical hyper-parameters for all individuals or individual-specific ones, as previously discussed in \Cref{sec:model_hypo}.
To evaluate the effect of the different settings, performances in prediction and $\mu_0$'s estimation are evaluated in the following cases in \Cref{tab:settings_mtgp}:
\begin{itemize}
\item \emph{Common HP}, when data are simulated with a common set of hyper-parameters for all individuals, and \Cref{prop:M_step_common} is used for inference in \textsc{Magma}\xspace,
\item \emph{Different HP}, when data are simulated with a specific set of hyper-parameters for each individual, and \Cref{prop:M_step_diff} is used for inference in \textsc{Magma}\xspace,
\item \emph{Common HP on different HP data}, when data are simulated with a specific set of hyper-parameters for each individual, and \Cref{prop:M_step_common} is used for inference in \textsc{Magma}\xspace.
\end{itemize}
Note that the first line (\emph{Common grid / Common HP}) of \Cref{tab:settings_mtgp} is identical to the corresponding results in \Cref{tab:compare_algo}, providing reference values that are significantly better than those of the other methods.
The results in \Cref{tab:settings_mtgp} indicate that \textsc{Magma}\xspace's performance is not significantly altered by the setting used or by the nature of the simulated data.
In order to confirm the robustness of the method, the setting \emph{Common HP} was applied to data generated by drawing different values of hyper-parameters for each individual (\emph{Different HP data}).
In this case, performance in prediction and estimation of $\mu_0$ deteriorates slightly, although \textsc{Magma}\xspace still provides quite reliable forecasts.
This experiment also highlights a particularity of the \emph{Different HP} setting: looking at the $\mu_0$ estimation performance, we observe a significant decrease in the $CI_{95}$ coverage, due to numerical instability in some pathological cases.
Numerical issues, in particular during matrix inversions, are classical problems in the GP literature and, because of the potentially large number of different hyper-parameters to train, the probability for at least one of them to lead to a nearly singular matrix increases.
In this case, one individual might overwhelm the others in the computation of $\mu_0$'s hyper-posterior (see \Cref{prop:post_mu}), and thus lead to an underestimated posterior variance.
This problem does not occur in the \emph{Common HP} setting, since sharing the same hyper-parameters prevents any individual covariance matrix from dominating the others.
Thus, unless one specifically wants to smooth multiple curves presenting really different behaviours, keeping \emph{Common HP} as the default setting appears to be a reasonable choice.
Let us note that the estimation of $\mu_0$ is slightly better for a common grid than for an uncommon one, since the estimation problem on the union of different sets of timestamps is generally more difficult.
However, this feature only depends on the nature of data.
\begin{table*}
\begin{center}
\caption{Average MSE (sd) and average $CI_{95}$ coverage (sd) on 100 runs for the different settings of \textsc{Magma}\xspace.}
\label{tab:settings_mtgp}
\begin{tabular}{cl|ll|ll|}
\cline{3-6}
\multicolumn{1}{l}{} & & \multicolumn{2}{c|}{Prediction} & \multicolumn{2}{c|}{Estimation of $\mu_0$} \\
\multicolumn{1}{l}{} & & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c|}{$CI_{95}$} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c|}{$CI_{95}$} \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Common HP}} & Common grid & 18.7 (31.4) & 93.8 (13.5) & 1.3 (2) & 94.3 (11.3) \\
\multicolumn{1}{|c|}{} & Uncommon grid & 19.2 (43) & 94.6 (13.1) & 2.9 (2.6) & 93.6 (9.2) \\ \cline{1-2}
\multicolumn{1}{|c|}{\multirow{2}{*}{Different HP}} & Common grid & 19.9 (54.7) & 91.6 (17.8) & 0.5 (0.4) & 70.8 (24.3) \\
\multicolumn{1}{|c|}{} & Uncommon grid & 14.5 (22.4) & 89.1 (17.9) & 2.5 (4.5) & 81.1 (15.9) \\ \cline{1-2}
\multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Common HP on\\ different HP data\end{tabular}}} & Common grid & 21.7 (36) & 91 (19.8) & 1.5 (1.2) & 91.1 (13) \\
\multicolumn{1}{|c|}{} & Uncommon grid & 18.1 (33) & 92.5 (15.9) & 3.2 (4.5) & 93.4 (9.8) \\ \hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Running times comparisons}
\label{sec:simu_burden}
The counterpart of the more accurate and general results provided by \textsc{Magma}\xspace is a natural increase in running time.
\Cref{tab:simu_burden_compare} exhibits the raw and relative training times for GPFDA and \textsc{Magma}\xspace (prediction times are negligible and comparable in both cases), with varying values of $M$ on a \emph{Common grid} of $N = 30$ timestamps.
The algorithms were run under R version 3.6.1, on a laptop with a dual-core processor clocked at 2.90 GHz and 8 GB of RAM.
The reported computing times are in seconds, and for small to moderate datasets ($N \simeq 10^3$, $M \simeq 10^4$) the procedures ran in a few minutes to a few hours.
The difference between the two algorithms is due to GPFDA modelling $\mu_0$ as a deterministic function through B-splines smoothing, whereas \textsc{Magma}\xspace accounts for uncertainty.
The ratio of computing times between the two methods tends to decrease as $M$ increases, and stabilises around $2$ for higher numbers of training individuals.
This behaviour comes from the E step in \textsc{Magma}\xspace, whose cost is irreducible and quite insensitive to the value of $M$.
Roughly speaking, one needs to pay twice the computing price of GPFDA for \textsc{Magma}\xspace to provide (significantly) more accurate predictions and uncertainty over $\mu_0$.
\Cref{tab:burden_diff_settings} provides running times of \textsc{Magma}\xspace according to its different settings, with $M=20$.
Because the complexity is linear in $M$ in each case, the ratio in running times would remain roughly similar no matter the value of $M$.
Prediction time appears negligible compared to training time, and generally takes less than one second to run.
Besides, the \emph{Different HP} setting increases the running time, since in this context $M$ maximisations (instead of one for \emph{Common HP}) are required at each EM iteration.
In this case, the prediction also takes slightly longer because of the necessity to optimise hyper-parameters for the new individual.
Although the nature of the grid of timestamps does not matter in itself, a key limitation lies in the dimension $N$ of the pooled set of timestamps, which tends to get bigger when individuals have different timestamps from one another.
\begin{table}[H]
\caption{Average (sd) training time (in seconds) for \textsc{Magma}\xspace and GPFDA for different numbers $M$ of individuals in the training dataset. The relative running time between {\textsc{Magma}\xspace} and GPFDA is provided on the line \emph{Ratio}.}
\label{tab:simu_burden_compare}
\begin{tabular}{c|cccc|}
\cline{2-5}
& 5 & 10 & 50 & 100 \\ \hline
\multicolumn{1}{|c|}{\textsc{Magma}\xspace} & 5.2 (2.7) & 7.6 (3.2) & 24.2 (11.1) & 42.8 (10) \\
\multicolumn{1}{|c|}{GPFDA} & 1 (0.3) & 2.1 (0.6) & 10.7 (2.4) & 23.1 (5.3) \\ \hline
\multicolumn{1}{|c|}{Ratio} & \textbf{5.2} & \textbf{3.6} & \textbf{2.3} & \textbf{1.9} \\ \hline
\end{tabular}
\end{table}
\begin{table}[H]
\caption{Average (sd) training and prediction time (in seconds) for different settings of \textsc{Magma}\xspace.}
\label{tab:burden_diff_settings}
\begin{tabular}{cc|cc|}
\cline{3-4}
& & Train & Predict \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Common HP}} & Common grid & 12.6 (3.5) & 0.1 (0) \\
\multicolumn{1}{|c|}{} & Uncommon grid & 16.5 (11.4) & 0.2 (0.1) \\ \cline{1-2}
\multicolumn{1}{|c|}{\multirow{2}{*}{Different HP}} & Common grid & 42.6 (20.5) & 0.6 (0.1) \\
\multicolumn{1}{|c|}{} & Uncommon grid & 40.2 (17) & 0.6 (0.1) \\ \hline
\end{tabular}
\end{table}
\subsection{Application of \textsc{Magma}\xspace on swimmers' progression curves}
\label{sec:simu_real_data}
\paragraph{Data and problematic}
~\par
We consider the problem of performance prediction in competition for French swimmers.
The French Swimming Federation (FFN) provided us with an anonymised dataset, compiling the age and results of its members between 2000 and 2016. For each competitor, the race times are registered for competitions of 100m freestyle (50m swimming-pool).
The database contains results from 1731 women and 7876 men, with an average of 22.2 data points per woman (min = 15, max = 61) and 12 data points per man (min = 5, max = 57).
In the following, the age of the $i$-th swimmer is considered as the input variable (timestamp $t$), and the performance (in seconds) on a 100m freestyle as the output ($y_i(t)$).
For reasons of confidentiality and property, the raw dataset cannot be published.
The analysis focuses on the youth period, from 10 to 20 years, where the progression is the most noticeable.
In order to get relevant time series, we retained only individuals having a sufficient number of data points on the considered time period.
For a young swimmer observed during their first years of competition, we aim at modelling their progression curve and making predictions about their future performances in the subsequent years.
Since we consider a decision-making problem involving irregular time series, the probabilistic GP framework is a natural choice.
Thereby, assuming that each swimmer in the database is a realisation $y_i$ defined as previously, we expect \textsc{Magma}\xspace to provide multi-task predictions for a new young swimmer, benefiting from the information of other swimmers already observed at older ages.
To study such a modelling approach, and validate its efficiency in practice, we split the individuals into training and testing datasets with respective sizes:
\begin{itemize}
\item $M_{train}^F = 1039$, for the female training set,
\item $M_{test}^F = 692$, for the female testing set,
\item $M_{train}^M = 4726$, for the male training set,
\item $M_{test}^M = 3150$, for the male testing set.
\end{itemize}
\noindent Inference on the hyper-parameters is performed using the training dataset in both cases.
Considering the different timestamps and the relative monotonicity of the progression curves, the \emph{Uncommon grid}/\emph{Common HP} setting has been used for \textsc{Magma}\xspace.
The overall training lasted around 2 hours with the same hardware configuration as for the simulations.
To compute the MSE and the $CI_{95}$ coverage, the data points of each individual in the testing set have been split into \emph{observed} and \emph{testing} timestamps.
Since each individual has a different number of data points, the first 80\% of timestamps are taken as \emph{observed}, while the remaining 20\% are considered as \emph{testing} timestamps.
{\textsc{Magma}\xspace}'s predictions are compared with the true values of $y_i$ at testing timestamps.
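This per-individual split can be expressed as the following minimal sketch (the 80/20 convention stated above; the function name is ours):
\begin{verbatim}
def split_observed_testing(timestamps, frac_observed=0.8):
    """Chronological split of one individual's timestamps into
    observed (first 80%) and testing (last 20%) points."""
    ts = sorted(timestamps)
    n_obs = int(frac_observed * len(ts))
    return ts[:n_obs], ts[n_obs:]

# Example: a swimmer observed at five ages
observed, testing = split_observed_testing([10.1, 11.0, 12.4, 13.2, 14.7])
\end{verbatim}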
\newline
As previously, both GP and \textsc{Magma}\xspace have been initialised with a constant 0 mean function.
Initial values for the hyper-parameters are also identical for all $i$: $\theta_0^{ini} = \theta_i^{ini} = (e^1, e^1)$ and $\sigma_i^{ini} = 0.4$.
Those values are the default in \textsc{Magma}\xspace and remain adequate in the context of these datasets.
\paragraph{Results and interpretation}
The overall performance and comparison are summarised in \Cref{tab:real_data}.
\begin{table}[H]
\caption{Average MSE (sd) and average $CI_{95}$ coverage (sd) for prediction on french swimmer testing datasets.}
\label{tab:real_data}
\begin{center}
\begin{tabular}{cc|cc|}
\cline{3-4}
\multicolumn{1}{l}{} & & MSE & $CI_{95}$ Cover \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Women}} & \textsc{Magma}\xspace & \textbf{3.8 (10.3)} & \textbf{95.3 (15.9)} \\
\multicolumn{1}{|c|}{} & GP & 25.3 (97.6) & 72.7 (37.1) \\ \cline{1-2}
\multicolumn{1}{|c|}{\multirow{2}{*}{Men}} & \textsc{Magma}\xspace & \textbf{3.7 (5.3)} & \textbf{93.9 (15.3)} \\
\multicolumn{1}{|c|}{} & GP & 22.1 (94.3) & 78.2 (30.4) \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width= \textwidth]{images/Figure_4_bis.png}
\caption{Prediction curves (blue) for a testing individual with associated 95\% credible intervals (grey) for GP regression (left) and \textsc{Magma}\xspace (right), for both women (top) and men (bottom). The dashed lines represent the mean functions of the hyper-posterior mean process $\mu_0 \vert \yii$. Observed data points are in black, and testing data points are in red. The coloured background points are observations from the training dataset, each colour corresponding to a different individual.}
\label{fig:real_data}
\end{center}
\end{figure*}
We observe that \textsc{Magma}\xspace still provides excellent results in this context, and naturally outperforms the predictions provided by single-task GP regression.
Since the progression curves present relatively monotonic variations, thus avoiding the pathological behaviours that could occur with synthetic data, the prediction MSE remains very low.
The $CI_{95}$ coverage sticks close to the 95\% expected value for \textsc{Magma}\xspace, indicating an adequate quantification of uncertainty.
To illustrate these results, an example is displayed on \Cref{fig:real_data} for both men and women.
For a randomly chosen testing individual, we plot its predicted progression curve (in blue), using its first 15 data points as observations (in black), while the remaining true data points (in red) are displayed for comparison purposes.
As previously observed in the simulation study, the simple GP prediction quickly drifts back to the prior 0 mean as soon as data are lacking.
However, for both men and women, the \textsc{Magma}\xspace predictions remain close to the true data, which also lie within the 95\% credible interval.
Even for long-term forecasts, where the mean prediction curve tends to overlap the mean process (dashed line), the true data remain within our range of uncertainty, as the credible interval widens far from the observations.
For clarity, we displayed only a few individuals from the training dataset (colourful points) in the background.
The mean process (dashed line) seems to represent the main trend of progression among swimmers correctly, even though we cannot numerically compare $\mu_0$ to any real-life analogous quantity.
From a more sport-related perspective, we can note that both genders present similar patterns of progression.
However, while performances are roughly similar in mean trend before the age of 14, they start to differentiate afterwards and then converge to average times with approximately a 5-second gap.
Interestingly, the difference between the men's and women's world records in the 100m freestyle is currently 4.8 seconds (46.91 versus 51.71).
These results, obtained under reasonable hypotheses on several hundred swimmers, seem to indicate that \textsc{Magma}\xspace would give quite reliable predictions for a new young swimmer.
Furthermore, the uncertainty provided through the predictive posterior distribution offers an adequate degree of caution in a decision-making process.
\section{Introduction}
Dielectric multilayers constitute one of the simplest and most common classes of optical metamaterials \cite{Capolino:2009vr,Cai:2010om}, and can be fabricated with high precision via well-established deposition processes. In the regime of {\em deeply subwavelength} layers, spatial-dispersion (nonlocal) effects tend to be negligibly weak, so that these materials can be accurately modeled via
macroscopic effective parameters that do not depend on the specific geometrical order and thickness of the layers, but only on their constitutive properties and filling fractions.
This {\em effective-medium-theory} (EMT) model \cite{Sihvola:1999em} is known to capture the macroscopic optical response quite accurately. However, recent theoretical \cite{Sheinfux:2014sm} and experimental studies \cite{Zhukovsky:2015ed} on periodic arrangements have pointed out that nonlocal effects may be counterintuitively amplified within certain critical parameter regimes mixing evanescent and propagating light transport, thereby leading to the {\em breakdown} of the EMT approximation. Follow-up studies \cite{Andryieuski:2015ae,Popov:2016oa,Lei:2017rt,Maurel:2018so,Castaldi:2018be,Gorlach:2020bc} have provided alternative interpretations of these effects, and have suggested possible corrections to the conventional EMT model in order to capture them. These corrections typically include frequency- and wavenumber-dependent terms to account for nonlocality, and possibly magneto-electric coupling to ensure self-consistency.
In essence, the above results indicate that the optical response of finite-size, fully dielectric multilayered metamaterials may exhibit an anomalous sensitivity to geometrical features at deeply subwavelength scales, which may find intriguing applications in numerous fields, ranging from optical sensing to switching and lasing.
A fascinating and substantially uncharted implication of the above outcomes is that spatial order (or disorder) may play a key role not only in the {\em diffractive} regime of wavelength-sized layers (typical, e.g, of photonic crystals \cite{Joannopoulos:2008pc}), but also at much smaller scales. For instance, theoretical \cite{Sheinfux:2016cr} and experimental \cite{Sheinfux:2017oo} studies in {\em randomly disordered} dielectric multilayers have demonstrated the occurrence of anomalous Anderson-type localization effects in stark contrast with the EMT prediction of an essentially transparent behavior. Within this framework, we have recently initiated a systematic exploration of {\em aperiodically ordered} geometries \cite{Macia:2006tr,DalNegro:2011da}, which constitute the middle ground between perfect periodicity and random disorder. These geometries have been extensively studied in the diffractive regime of photonic ``quasicrystals''
\cite{Poddubny:2010pc,Vardeny:2013hj,Ghulinyan:2014od}, but their interplay with mixed evanescent/propagating light transport at deeply subwavelength scales remains largely unexplored. In particular, we have studied the Thue-Morse \cite{Coppolaro:2018ao} and Golay-Rudin-Shapiro \cite{Coppolaro:2020eo} geometries, characterized by {\em singular-continuous} and {\em absolutely continuous} spatial spectra, respectively \cite{Grimm:2015ac}; from a measure-theoretic viewpoint (Lebesgue decomposition theorem), these represent two of the three distinctive spectral traits of aperiodic order \cite{Grimm:2015ac}.
For these geometries, we have explored the critical parameter regimes leading to the occurrence of the EMT-breakdown phenomenon, highlighting some similarities and fundamental differences with respect to what is observed in the periodic and random scenarios.
To close the loop, here we focus on {\em quasiperiodic} geometries characterized by {\em discrete} spatial spectra, representing the remaining one of the aforementioned distinctive traits \cite{Grimm:2015ac}, which has never been explored in connection with deeply subwavelength dielectric multilayers. In this context, the quintessential representative geometries are based on Fibonacci-type sequences \cite{Albuquerque:2004pa}. Specifically, here we consider a
modified-Fibonacci geometry \cite{Buczek:2005pd} characterized by a scale-ratio parameter that can be exploited to study the transition from periodic to quasiperiodic order, so as to identify and elucidate the anomalous light-transport effects genuinely induced by quasiperiodic order.
Accordingly, the rest of the paper is organized as follows. In Sec. \ref{Sec:Formulation}, we outline the problem and describe its geometry and main observables. In Sec. \ref{Sec:Results},
we illustrate some representative results from a comprehensive parametric study, indicating the occurrence of anomalous light-transport effects (in terms of transmittance, field enhancement, absorption, and lasing) that are in striking contrast with the predictions from conventional EMT and with what observable in periodic counterparts. We also address the development of nonlocal corrections that can capture some of these effects.
Finally, in Sec. \ref{Sec:Conclusions}, we draw some conclusions and outline some possible directions for further research.
\section{Problem Formulation}
\label{Sec:Formulation}
\subsection{Geometry}
The geometry of interest is schematically illustrated in Fig. \ref{Figure1}. Our multilayered metamaterial is composed of dielectric layers with alternating high and low relative permittivity ($\varepsilon_H$ and $\varepsilon_L$, respectively), and generally different thicknesses $d_a$ and $d_b$ distributed according to the Fibonacci sequence. The structure is assumed of infinite extent along the $x$ and $y$ directions, and is embedded in a homogeneous background with relative permittivity $\varepsilon_e$. We assume that all materials are nonmagnetic (relative permeability $\mu=1$) and, for now, we neglect optical losses.
The quasiperiodic Fibonacci geometry can be equivalently generated in several ways. One possibility is to iterate the well-known inflation rules \cite{Albuquerque:2004pa}
\beq
a \rightarrow a b, \quad b \rightarrow a,
\label{eq:itrule}
\eeq
associating the thicknesses $d_a$ and $d_b$ to the symbols $a$ and $b$, respectively, in the obtained sequence. Equivalently, one can exploit a {\em cut-and-project} approach, and calculate directly the positions of the layer interfaces as \cite{Buczek:2005pd}
\beq
z_{n}=d_a\left\|\frac{n}{\varphi}\right\|+d_b\left(n-\left\|\frac{n}{\varphi}\right\|\right),
\label{eq:Fibseq}
\eeq
where $\varphi \equiv(1+\sqrt{5}) / 2\approx1.618$ is the Golden Mean and
\beq
\|x\|=\left\{\begin{array}{ll}
n, & n \leq x<n+\frac{1}{2},\\
n+1, & n+\frac{1}{2} \leq x \leq n+1.
\end{array}\right.
\eeq
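For concreteness, the rule in (\ref{eq:Fibseq}) takes only a few lines of Python; the sketch below (ours, for illustration) also checks numerically the asymptotic ratio discussed next.
\begin{verbatim}
import numpy as np

PHI = (1 + np.sqrt(5)) / 2                     # Golden Mean

def interface_positions(n_layers, d_a, d_b):
    """Interface positions z_n from the cut-and-project rule;
    ||x|| rounds to the nearest integer, halves rounded up."""
    n = np.arange(1, n_layers + 1)
    nearest = np.floor(n / PHI + 0.5)          # ||n / phi||
    return d_a * nearest + d_b * (n - nearest)

z = interface_positions(1024, d_a=1.0, d_b=0.4)   # nu = 0.4
thick = np.diff(np.concatenate(([0.0], z)))       # each in {d_a, d_b}
n_a = np.isclose(thick, 1.0).sum()
n_b = np.isclose(thick, 0.4).sum()
print(n_a / n_b)    # -> approx 1.618, the Golden Mean
\end{verbatim}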
It can be shown that, in the asymptotic limit of an infinite sequence, the ratio between the numbers of symbols $a$ and $b$ approaches the Golden Mean \cite{Buczek:2005pd}, viz.,
\beq
\lim _{N \rightarrow \infty} \frac{N_{a}}{N_{b}}=\varphi, \quad N_{a}+N_{b}=N.
\label{eq:aslim}
\eeq
It is important to note that, at variance with typical Fibonacci-type multilayer geometries in the literature
\cite{Vasconcelos:1999tf}, here we assume only the layer thicknesses to be distributed
according to the Fibonacci sequence, whereas the relative permittivities are simply alternated; this implies that, for each layer, there are four possible combinations of thickness and relative permittivity. This modified scheme facilitates the comparison with the EMT predictions as well as with a periodic reference structure. Accordingly, we generally assume $d_a\ge d_b$, and define the scale-ratio parameter
\beq
\nu=\frac{d_b}{d_a}, \quad 0<\nu \leq 1.
\label{eq:nu}
\eeq
By changing $\nu$, we can study the transition between perfect periodicity ($\nu=1$) and variable shades of quasiperiodic order ($\nu<1$). Within this framework, it is expedient to define the average layer thickness $\bar{d}=L/N$, with $L$ denoting the total thickness of the multilayer (see Fig. \ref{Figure1}). By exploiting the result in (\ref{eq:aslim}), it can be readily shown that, in the asymptotic limit of an infinite sequence,
\beq
\bar{d}=\frac{\varphi d_a+d_b}{1+\varphi}.
\label{eq:dav}
\eeq
As previously mentioned, the spatial spectrum associated with our modified-Fibonacci geometry is discrete \cite{Albuquerque:2004pa}. Specifically, it can be shown that, in the asymptotic limit of an infinite sequence, there is a double infinity of spectral peaks localized at wavenumbers \cite{Buczek:2005pd}
\beq
k_{zpq}=\frac{2 \pi}{\bar{d}} \frac{\left(p+q \varphi\right)}{(\varphi+1)},
\label{eq:kzpq}
\eeq
with amplitudes
\beq
S_{pq}=\frac{\sin W_{pq}}{W_{pq}},
\eeq
where
\beq
W_{pq}=\frac{\pi}{\bar{d}}\left(p d_a-q d_b\right)=\frac{\pi(1+\varphi)\left(p-q \nu\right)}{\nu+\varphi}.
\label{eq:Wpq}
\eeq
As typical of quasiperiodicity, the above spectrum is generally characterized by pairwise-incommensurate harmonics \cite{Buczek:2005pd}. Quite interestingly, it can be shown \cite{Buczek:2005pd} (see also Appendix \ref{Sec:AppA} for details) that, for commensurate scales (i.e., rational values of the scale ratio), the spatial spectrum is periodic, even though the geometry remains aperiodic in space. Moreover, it can be verified that for the periodic case ($\nu=1$, i.e., $d_a=d_b$), the conventional periodic spatial spectrum is recovered (see Appendix \ref{Sec:AppA} for details).
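For quick numerical exploration, the peak positions and amplitudes in (\ref{eq:kzpq})--(\ref{eq:Wpq}) can be evaluated directly; the following is an illustrative sketch of ours, with $\bar{d}$ taken as the unit length.
\begin{verbatim}
import numpy as np

PHI = (1 + np.sqrt(5)) / 2

def spectral_peak(p, q, nu, d_bar=1.0):
    """Position k_zpq and amplitude S_pq of the (p, q) peak of the
    discrete spatial spectrum."""
    k = (2 * np.pi / d_bar) * (p + q * PHI) / (PHI + 1)
    W = np.pi * (1 + PHI) * (p - q * nu) / (nu + PHI)
    S = 1.0 if np.isclose(W, 0.0) else np.sin(W) / W
    return k, S

print(spectral_peak(1, 1, nu=0.4))   # an example low-order peak
\end{verbatim}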
For illustration, Fig. \ref{Figure2} shows some representative spatial spectra pertaining to a finite-size ($N=1024$) structure, for rational and irrational values of $\nu$. By focusing on the dominant spectral peaks, as $\nu$ decreases we observe a progressive weakening of the harmonics at integer values of $2\pi/\bar{d}$ (typical of periodicity) and the appearance of new dominant harmonics at intermediate positions.
The above modified-Fibonacci geometry has been studied in connection with antenna arrays \cite{Galdi:2005pq,Castaldi:2007rf} but, to the best of our knowledge, has never been applied to optical multilayers.
In all examples considered in our study below, the Fibonacci sequence is generated via (\ref{eq:Fibseq}), and the relative permittivity distribution starts with $\varepsilon_H$.
\subsection{Statement and Observables}
As shown in Fig. \ref{Figure1}, the structure under study is obliquely illuminated by a plane wave with transverse-electric (TE) polarization.
Specifically, we assume an implicit $\exp\left(-i\omega t\right)$ time-harmonic dependence, and a $y$-directed, unit-amplitude
electric field
\beq
E_y^{(i)}\left(x,z\right)=\exp\left[ik_e \left(x\sin\theta_i+z\cos\theta_i\right)\right],
\eeq
where $\theta_i$ is the angle of incidence, $k_e=k\sqrt{\varepsilon_e}$ is the ambient wavenumber in the exterior medium, and $k=\omega/c=2\pi/\lambda$ is the vacuum wavenumber (with $c$ and $\lambda$ denoting the corresponding speed of light and wavelength, respectively).
Starting from some pioneering experimental \cite{Merlin:1985qp} and theoretical \cite{Kohmoto:1987lo} studies in the 1980s,
prior works on quasiperiodic Fibonacci-type multilayers have essentially focused on the diffractive regime of photonic quasicrystals ($d_{a,b}\lesssim\lambda$),
and have elucidated the physical mechanisms underpinning the localization \cite{Gellermann:1994lo}, photonic dispersion \cite{Hattori:1994pd}, perfect transmission \cite{Huang:2001pt,Peng:2002si,Nava:2009pl}, bandgap \cite{Kaliteevski:2001bs} and field-enhancement \cite{Hiltunen:2007mo} properties, as well as the
multifractal \cite{Fujiwara:1989mw}, critical \cite{Macia:1999pn} and band-edge states \cite{DalNegro:2003lt}.
Besides the aforementioned differences in the geometrical model, a key aspect of our investigation is the focus on the {\em deeply subwavelength} regime $d_{a,b}\ll \lambda$. In this regime, for the assumed TE polarization, the optical response of the multilayer tends to be accurately modeled via conventional EMT in terms of an effective relative permittivity \cite{Sihvola:1999em}
\beq
{\bar \varepsilon}_{\parallel}=L^{-1}
\sum_{n=1}^{N}\varepsilon^{(n)}d^{(n)},
\label{eq:EMT}
\eeq
where $\varepsilon^{(n)}$ and $d^{(n)}$ represent the relative permittivity ($\varepsilon_{H,L}$) and thickness ($d_{a,b}$), respectively, of the $n$-th layer. For the modified-Fibonacci geometry under study, it can be shown (see Appendix \ref{Sec:AppB} for details) that the following approximation holds with good accuracy
\beq
{\bar \varepsilon}_{\parallel}\approx \frac{\varepsilon_H+\varepsilon_L}{2},
\label{eq:EMTapp}
\eeq
{\em irrespective} of the scale-ratio parameter. By virtue of this remarkable property, we can explore the transition from perfect periodicity to quasiperiodicity while maintaining the same effective properties; in other words, by varying the scale ratio $\nu$, the multilayer maintains the same proportions of high- and low-permittivity constituents, so that the only difference lies in their spatial arrangement.
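This property can be verified numerically in a few lines; the sketch below (ours, reusing the cut-and-project construction with $d_a=1$ and $d_b=\nu$, and the representative values $\varepsilon_H=5$, $\varepsilon_L=1$ used later in the paper) evaluates the thickness-weighted average in (\ref{eq:EMT}) for several scale ratios and returns values close to $3$ in all cases.
\begin{verbatim}
import numpy as np

PHI = (1 + np.sqrt(5)) / 2

def emt_permittivity(N, nu, eps_H=5.0, eps_L=1.0):
    """Thickness-weighted average permittivity of the modified-
    Fibonacci stack; permittivities alternate starting with eps_H."""
    n = np.arange(0, N + 1)
    nearest = np.floor(n / PHI + 0.5)
    z = nearest + nu * (n - nearest)    # interface positions (d_a = 1)
    d = np.diff(z)                      # layer thicknesses
    eps = np.where(np.arange(N) % 2 == 0, eps_H, eps_L)
    return np.sum(eps * d) / z[-1]

for nu in (1.0, 0.8, 1 / PHI, 0.4, 0.2):
    print(f"nu = {nu:.3f} -> eps = {emt_permittivity(1024, nu):.4f}")
\end{verbatim}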
As we will show hereafter, contrary to conventional wisdom, the spatial order may play a key role also at deep subwavelength scales in co-action with mixed evanescent/propagating light transport. To elucidate this mechanism, we rely on a rigorous solution of the boundary-value problem based on the well-established transfer-matrix formalism \cite{Born:1999un} (see Appendix \ref{Sec:AppC} for details). Specifically, we calculate the transmission coefficient
\beq
\tau_N=\frac{\left.E_{y}^{(t)}\right|_{z=L}}{\left.E_{y}^{(i)}\right|_{z=0}}=\frac{2}{\chi_N+i\upsilon_N},
\label{eq:tauN}
\eeq
where $\chi_N$ and $\upsilon_N$ denote the {\em trace} and {\em anti-trace}, respectively, of the transfer matrix associated to a $N$-layer structure (see Appendix \ref{Sec:AppC} for details).
Other meaningful observables of interest are the reflection (and absorption, in the presence of losses) coefficient, as well as the field distribution in the multilayer.
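For readers wishing to reproduce the computations, the sketch below implements a standard characteristic-matrix calculation of the TE transmission coefficient; it is an illustrative implementation (function names are ours), and its matrix normalisation differs from the trace/anti-trace convention of (\ref{eq:tauN}), although the resulting transmittance is the same.
\begin{verbatim}
import numpy as np

def transmission_TE(eps_layers, d_layers, eps_e, theta_i, lam):
    """TE transmission coefficient of a dielectric multilayer
    embedded in a homogeneous medium eps_e."""
    k = 2 * np.pi / lam                          # vacuum wavenumber
    kx = k * np.sqrt(eps_e) * np.sin(theta_i)    # conserved along x
    M = np.eye(2, dtype=complex)
    for eps, d in zip(eps_layers, d_layers):
        kz = np.sqrt(complex(eps) * k**2 - kx**2)   # may be imaginary
        p, delta = kz / k, kz * d
        M = M @ np.array([[np.cos(delta), -1j * np.sin(delta) / p],
                          [-1j * p * np.sin(delta), np.cos(delta)]])
    pe = np.sqrt(complex(eps_e) * k**2 - kx**2) / k
    return 2 * pe / ((M[0, 0] + M[0, 1] * pe) * pe
                     + M[1, 0] + M[1, 1] * pe)
\end{verbatim}
Feeding it the layer permittivities and thicknesses generated from (\ref{eq:Fibseq}), one call per pair $(\bar{d}/\lambda,\theta_i)$ suffices to build transmittance maps such as those discussed below.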
\section{Representative Results}
\label{Sec:Results}
\subsection{Parametric Study}
To gain a comprehensive view of the phenomenology and identify the critical parameters, we carry out a parametric study of the transmission response of the multilayered metamaterial by varying the incidence direction, the electrical thickness and number of layers, and the scale ratio. In what follows we assume the same constitutive parameters for the layers ($\varepsilon_L=1$, $\varepsilon_H=5$) and exterior medium ($\varepsilon_e=4$) utilized in previous studies on periodic and aperiodic (either orderly or random)
geometries \cite{Sheinfux:2014sm,Sheinfux:2016cr,Sheinfux:2017oo,Castaldi:2018be,Coppolaro:2018ao,Coppolaro:2020eo}, so as to
facilitate direct comparison of the results. Recalling the approximation in (\ref{eq:EMTapp}), this corresponds to an effective medium with ${\bar \varepsilon}_{\parallel}\approx3$; we stress that this value is essentially independent of the scale ratio, and therefore holds for all examples considered in our study. In the same spirit, although we are not bound with specific sequence lengths, we assume power-of-two values for the number of layers $N$, similar to our previous studies on Thue-Morse \cite{Coppolaro:2018ao} and Golay-Rudin-Shapiro \cite{Coppolaro:2020eo} geometries.
Moreover, to ensure meaningful comparisons among different geometries, we parameterize the electrical thickness in terms of the average thickness $\bar{d}$ in (\ref{eq:dav}), so that structures with same number of layers have same electrical size. In order to maintain the average thickness for different values of the scale ratio, it readily follows from (\ref{eq:nu}) and (\ref{eq:dav}) that the layer thicknesses need to be adjusted as
\beq
d_a=\frac{(1+\varphi)}{(\nu+\varphi)} \bar{d}, \quad d_b=\nu d_a.
\eeq
Our study below is focused on the deeply subwavelength regime $0.01\lambda<\bar{d}<0.05\lambda$, with incidence angles $30^o<\theta_i\lesssim 60^o$. This last condition implies that, for the assumed constitutive parameters, the field is evanescent in the low-permittivity layers and propagating in the high-permittivity ones. Prior studies on periodic and aperiodic configurations \cite{Sheinfux:2014sm,Sheinfux:2016cr,Sheinfux:2017oo,Castaldi:2018be,Coppolaro:2018ao,Coppolaro:2020eo} have shown that the anomalous phase-accumulation mechanism underlying this mixed light-transport regime can induce a large amplification of the nonlocal effects, so that the optical response exhibits a strongly enhanced sensitivity to geometrical variations at deeply subwavelength scales. The maximum angle of incidence is chosen near the critical angle ${\bar \theta}_c=\arcsin\left(\sqrt{{\bar \varepsilon}_{\parallel}/\varepsilon_e}\right)\approx60^o$, which defines the effective-medium total-internal-reflection condition.
Figures \ref{Figure3}, \ref{Figure4} and \ref{Figure5} show the transmittance response ($\left|\tau_N\right|^2$) as a function of the average electrical thickness of the layers and angle of incidence, for $N=128, 256$ and $512$ layers, respectively. Each figure is organized in six panels, pertaining to representative values of the scale ratio transitioning from perfect periodicity ($\nu=1$) to different degrees of quasiperiodicity, with both rational ($\nu=0.8, 0.4, 0.2$) and irrational ($\nu=1/\varphi\approx 0.618$) values; also shown is the reference EMT response pertaining to the effective relative permittivity in (\ref{eq:EMTapp}).
At a qualitative glance, we observe a generally good agreement between the EMT and periodic configurations. As intuitively expected, both cases exhibit a regime of substantial transmission (with Fabry-P\'erot-type fringes) within most of the observation range, with an abrupt transition to opaqueness in the vicinity of the critical angle ${\bar \theta}_c\approx60^o$. Although it is somewhat hidden by the graph scale, a closer look around the transition region would in fact reveal significant differences between the EMT and periodic responses, as extensively studied in \cite{Sheinfux:2014sm,Andryieuski:2015ae,Popov:2016oa,Lei:2017rt,Maurel:2018so,Castaldi:2018be,Gorlach:2020bc}. The quasiperiodic configurations display instead visible differences from the EMT and periodic counterparts, also away from the critical-incidence condition, manifested as the appearance of medium- and low-transmission regions whose extent and complex interleaving increase with increasing size and decreasing values of the scale-ratio parameter. In what follows, we carry out a systematic, quantitative analysis of these differences and investigate the underlying mechanisms.
\subsection{Near-Critical Incidence}
As previously highlighted, near the critical angle $\theta_i\approx60^o$, substantial departures of the optical response from the EMT predictions can be observed also in the case of periodic geometries \cite{Sheinfux:2014sm,Andryieuski:2015ae,Popov:2016oa,Lei:2017rt,Maurel:2018so,Castaldi:2018be,Gorlach:2020bc}. However, the geometry under study exhibits different types of anomalies that are distinctive of quasiperiodic order. As an illustrative example, Fig. \ref{Figure6} compares the transmittance cuts at $\theta_i=60.6^o$, for varying sizes and scale ratios. For these parameters, the field in the EMT and periodic cases is evanescent and, although some differences are visible between the two responses, the transmission is consistently very low. Conversely, for increasing departures from periodicity, we start observing a general increase in the transmittance, with the occasional appearance of near-unit transmission peaks. As a general trend, for decreasing values of the scale-ratio parameter and increasing size, these peaks tend to narrow down, increase in number, and move toward smaller values of the electrical layer thickness. Perfect transmission peaks have been observed in previous studies on Fibonacci multilayers in the diffractive (quasicrystal) regime \cite{Huang:2001pt,Peng:2002si,Nava:2009pl}. From the theoretical viewpoint, they are a manifestation of extended optical states that can exist in quasiperiodic geometries as a consequence of enforced or hidden symmetries \cite{Nava:2009pl}. From the mathematical viewpoint, these peaks correspond to conditions where the trace of the transfer matrix is equal to two and the anti-trace vanishes [see (\ref{eq:tauN})].
Quite remarkably, in our case, these peaks may be observed even for electrical thicknesses as small as $\bar{d}\sim 0.01\lambda$, and relatively small ($N=128$) sizes. For basic illustration, Figs. \ref{Figure6}d and \ref{Figure6}e show two representative geometries associated with near-unit transmission peaks.
Figures \ref{Figure7}a--\ref{Figure7}c show the field distributions (inside the multilayer) pertaining to three representative high-transmission peaks. Typical common features that can be observed include self-similarity and field enhancement; these characteristics have also been observed in the diffractive (photonic-quasicrystal) regime \cite{Fujiwara:1989mw,Hiltunen:2007mo}. In fact, for larger (but still deeply subwavelength) electrical thicknesses, field-enhancement factors up to $\sim 300$ can be observed for near-critical incidence, as exemplified in Figs. \ref{Figure7}d--\ref{Figure7}f.
\subsection{Non-Critical Incidence}
Away from critical-incidence conditions, the differences between the quasiperiodic and the periodic/EMT configurations become even more pronounced. Figures \ref{Figure8} and \ref{Figure9} show some representative transmittance cuts at $\theta_i=50.1^o$ and $40.1^o$, respectively. For these parameter configurations, the EMT and periodic responses are near-unit and hardly distinguishable. As the scale ratio decreases, we observe the appearance of a rather wide bandgap at the upper edge of the electric-thickness range, and the progressive formation of secondary bandgaps at increasingly smaller values of the electrical thickness. For increasing sizes, these bandgaps tend to become denser and more pronounced. Quite interestingly, the position of certain bandgaps at particularly small values of the electrical thickness ($\bar{d}\sim 0.01\lambda$) seems to be rather robust with respect to the scale ratio.
To gain some insight in the effect of the structure size, Fig. \ref{Figure10} shows the transmittance cuts for a fixed value of the scale ratio ($\nu=1/\varphi$) for the number of layers $N$ ranging from 128 to 1024. As the size grows, we observe an increasing complexity with fractal-type structure. This is not surprising, as the fractal nature of the band structure is a well-known distinctive trait of Fibonacci-type photonic quasicrystals \cite{Kohmoto:1987lo,Kaliteevski:2001bs}, but it is still noteworthy that such complex behavior is visible at the deeply subwavelength scales of interest here.
To elucidate the role played by the scale ratio, Fig. \ref{Figure11} compares the field distributions for fixed size ($N=128$), non-critical incidence conditions ($\theta_i=54^o$) and electrical thickness $\bar{d}=0.024\lambda$, and various values of $\nu$. As can be observed, the field gradually transitions from a standing-wave, high-transmission character for the periodic case (in fair agreement with the EMT prediction), to a progressively decaying, low-transmission behavior as the scale ratio decreases. It is quite astounding that these marked differences emerge for layers as thin as $\bar{d}=0.024\lambda$ and a relatively small ($\sim 3\lambda$) structure.
For the periodic \cite{Castaldi:2018be} and Thue-Morse \cite{Coppolaro:2018ao} geometries, it was shown that the EMT breakdown could be effectively interpreted and parameterized in terms of error propagation in the evolution of the trace and antitrace of the multilayer transfer-matrix, which is directly related to the transmission coefficient via (\ref{eq:tauN}). Interestingly, for the periodic case, it is possible to calculate analytically some closed-form bounds for the error propagation so as to identify the critical parameter regimes. Although for standard Fibonacci-type geometries (with both permittivity and thickness distributed according to the Fibonacci sequence) the trace and antitrace evolution can be studied via simple iterated maps \cite{Kohmoto:1987lo,Wang:2000ta}, these unfortunately cannot be applied to our modified geometry. Nevertheless, they can be studied numerically from the transfer-matrix cascading (see Appendix \ref{Sec:AppC} for details). For $\theta_i=50^o$ and $\bar{d}=0.015\lambda$, Fig. \ref{Figure12} illustrates the evolution of the trace, antitrace and transmission-coefficient errors
\beq
\Delta\chi_N=\left|\chi_N-\bar{\chi}_N\right|,\quad
\Delta\upsilon_N=\left|\upsilon_N-\bar{\upsilon}_N\right|,\quad
\Delta\tau_N=\left|\tau_N-\bar{\tau}_N\right|,
\label{eq:errors}
\eeq
where the overbar indicates the EMT prediction; the evolution is shown as a function of the number of layers $N$, for representative values of the scale-ratio parameter. As a general trend, we observe fast, oscillatory behaviors with envelopes that grow with the multilayer size. For these parameters, the periodic case exhibits the slowest increase, with errors that remain below $\sim 0.1$; the reader is referred to Ref. \cite{Castaldi:2018be} for a detailed analytical study. As the geometry transitions to quasiperiodicity ($\nu<1$), we observe that the errors tend to grow increasingly faster with the number of layers, reaching values $\sim 10$ for the trace and antitrace, and approaching the maximum value of 2 for the transmission coefficient. These results quantitatively summarize at a glance the effects of quasiperiodicity in the EMT breakdown or, in other words, its visibility at deep subwavelength scales. Moreover, they also illustrate the important differences with respect to metallo-dielectric structures, which also feature a mixed (evanescent/propagating) light transport. In fact, for metallo-dielectric structures such as hyperbolic metamaterials, the errors in the trace and anti-trace can be significant even for a very small number of deeply subwavelength layers, thereby leading to visible ``bulk effects'', such as additional extraordinary waves \cite{Orlov:2011eo}. Conversely, in the fully dielectric case, the mechanism is essentially based on boundary effects, with errors that tend to be negligibly small for few layers, but, under certain critical conditions, may accumulate and grow (though non-monotonically) as the structure size increases.
Strong field enhancement can also be observed for noncritical incidence. In this case, the most sizeable enhancements are exhibited by edge modes around the bandgap appearing for $\bar{d}\sim 0.04\lambda$, still well within the deeply subwavelength regime. Figure \ref{Figure13} illustrates three representative modes, for different sizes, scale ratios and incidence conditions. For increasing size, we observe that the field distributions tend to exhibit self-similar, fractal-like structures, with enhancements of over two orders of magnitude. Such levels of enhancement are in line with what is observed in prior studies on aperiodic geometries \cite{Coppolaro:2018ao,Coppolaro:2020eo}, and in substantial contrast with the EMT prediction (see \cite{Coppolaro:2018ao} for details)
\beq
\bar{\gamma} =\frac{\sqrt{\varepsilon_{e}} \cos \theta_{i}}{\sqrt{\bar{\varepsilon}_{\|}-\varepsilon_{e} \sin ^{2} \theta_{i}}},
\eeq
which, for the parameters in Fig. \ref{Figure13}, is $\lesssim 2$.
\subsection{Nonlocal Corrections}
\label{Sec:NL}
For the periodic case ($\nu=1$), it was shown \cite{Castaldi:2018be} that the error-propagation phenomenon illustrated in Fig. \ref{Figure12} could be significantly mitigated by resorting to suitable nonlocal corrections
(and possibly magneto-electric coupling \cite{Popov:2016oa}) in the effective-medium model, which could be computed analytically in closed form. In principle, such a strategy could be applied to the quasiperiodic scenario ($\nu<1$) of interest here, but there is no simple analytical expression for the nonlocal corrections. For a basic illustration, we resort to a fully numerical approach, by parameterizing the effective relative permittivity as
\beq
{\hat \varepsilon}_{\parallel}\left(k_x\right)=\frac{a_0 \left(1+a_2 k_x^2+a_4 k_x^4\right)}{1+b_2 k_x^2+b_4 k_x^4},
\label{eq:NLe}
\eeq
where the wavenumber dependence implies the nonlocal character (with only even powers of $k_x$ in view of the inherent symmetry), and the coefficients $a_0$, $a_2$, $a_4$, $b_2$, $b_4$ generally depend on the frequency and on the multilayer geometrical and constitutive parameters. These coefficients are computed numerically by minimizing the mismatch with the exact transmission response at selected wavenumber values (or, equivalently, incidence directions). Specifically, for a given multilayer and electrical thickness, we compute the coefficient $a_0$ by minimizing the mismatch for normal incidence ($k_x=0$), and the remaining four coefficients by minimizing the root-mean-square error for incidence angle $\theta_i$ varying from $1^o$ to $60^o$ (with step of $1^o$, and $k_x=k_e\sin\theta_i$). For the numerical optimization, we utilize a Python-based implementation of the Nelder-Mead method available in the SciPy optimization library \cite{SciPy}.
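A minimal sketch of this fitting procedure is given below; the effective-medium response is modelled as a single homogeneous slab of permittivity ${\hat \varepsilon}_{\parallel}(k_x)$, the coefficient $a_0$ is assumed fixed beforehand from the normal-incidence match, and \texttt{tau\_exact} would come from the full transfer-matrix solution (function names are ours).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def eps_nl(kx, a0, c):
    """Rational nonlocal permittivity; c = (a2, a4, b2, b4)."""
    a2, a4, b2, b4 = c
    return a0 * (1 + a2 * kx**2 + a4 * kx**4) \
              / (1 + b2 * kx**2 + b4 * kx**4)

def slab_tau(eps, L, eps_e, kx, k):
    """TE transmission of a homogeneous slab of permittivity eps."""
    kz = np.sqrt(complex(eps) * k**2 - kx**2)
    pe = np.sqrt(complex(eps_e) * k**2 - kx**2) / k
    p, cd, sd = kz / k, np.cos(kz * L), np.sin(kz * L)
    return 2 * pe / ((cd - 1j * sd / p * pe) * pe - 1j * p * sd + cd * pe)

def fit_nonlocal(theta_deg, tau_exact, a0, L, eps_e, k):
    """Nelder-Mead fit of (a2, a4, b2, b4) minimising the RMS
    mismatch with the exact transmission over incidence angles."""
    kx = k * np.sqrt(eps_e) * np.sin(np.radians(theta_deg))
    def cost(c):
        tau = np.array([slab_tau(eps_nl(x, a0, c), L, eps_e, x, k)
                        for x in kx])
        return np.sqrt(np.mean(np.abs(tau - tau_exact) ** 2))
    return minimize(cost, np.zeros(4), method="Nelder-Mead")
\end{verbatim}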
Figure \ref{Figure14} illustrates some representative results, for $N=128$ layers, $\nu=0.4$ and ${\bar d}=0.015\lambda$. Specifically, we compare the transmission coefficient error $\Delta\tau_N$ in (\ref{eq:errors}) for the conventional EMT and the nonlocal effective model in (\ref{eq:NLe}) as a function of the incidence angle. As can be observed, a significant reduction is attained. Qualitatively similar results (not shown for brevity) are obtained for different lengths, frequencies and scale-ratio parameters. Stronger error reductions can in principle be obtained by resorting to higher-order and/or more sophisticated models that also account for magneto-electric coupling \cite{Popov:2016oa}.
\subsection{Anomalous Absorption and Lasing}
Our previous studies on the Thue-Morse \cite{Coppolaro:2018ao} and Golay-Rudin-Shapiro \cite{Coppolaro:2020eo} geometries have shown that, in the presence of small losses or gain, field-enhancement levels like those illustrated above can lead to anomalous absorption or lasing effects, respectively. To illustrate these phenomena, we assume a complex-valued
relative permittivity $\varepsilon_H=5+i\delta$, where the imaginary part $\delta$ parameterizes the presence of loss or gain (for $\delta>0$ and $\delta<0$, respectively, due to the assumed time-harmonic convention).
For a very low level of losses ($\delta=10^{-4}$), Fig. \ref{Figure15} shows some representative absorbance responses, for different parameter configurations, from which we observe the presence of sharp peaks of significant (sometimes near-unit) amplitude. The corresponding field distributions (not shown for brevity) are qualitatively similar to those in Figs. \ref{Figure7} and \ref{Figure13}.
As a benchmark, for these parameters, the EMT prediction for the absorbance is $\lesssim 0.3$, whereas the result for the periodic reference configuration is $\lesssim 0.5$.
Finally, we consider the presence of small gain ($\delta=-10^{-3}$), and study the possible onset of lasing conditions.
Figure \ref{Figure16} shows some representative reflectance responses for different parameter configurations, which display sharp peaks with amplitude exceeding $\sim 1000$. This indicates the presence of pole-type singularities that are distinctive of lasing, in spite of the quite low level of gain considered. To give an idea, taking as a reference the lasing peak at $\bar{d}/\lambda=0.036$ in Fig. \ref{Figure16}a, obtaining comparable results in the EMT scenario would require increasing the gain coefficient or, equivalently, the structure size by a factor $\sim 12$ (see \cite{Coppolaro:2018ao} for details).
These results provide further evidence of the potentially useful applications of
aperiodic order to the design of innovative absorbers and low-threshold lasers.
\section{Conclusions and Outlook}
\label{Sec:Conclusions}
In summary, we have studied the effects of quasiperiodic order at deeply subwavelength scales in multilayered dielectric metamaterials. With specific reference to a modified-Fibonacci geometry, we have shown that the interplay with mixed evanescent/propagating light transport may induce anomalous optical responses (in terms of transmission, field-enhancement, absorption and lasing) that deviate substantially from the conventional EMT predictions. Moreover, by varying the scale-ratio parameter available in our model, we have explored and elucidated the transition from perfect periodicity to different shades of quasiperiodicity, identifying the critical parameter regimes and possible nonlocal corrections that can capture some of the effects. We highlight that, although our results here are restricted to TE polarization and a relatively high-contrast scenario, previous studies on the periodic case have shown that the EMT breakdown can also be observed for transverse-magnetic and/or lower-contrast configurations \cite{Zhukovsky:2015ed}, but its visibility may be reduced.
This investigation closes the loop with our previous studies on aperiodically (dis)ordered geometries, by adding to the already studied singular-continuous \cite{Coppolaro:2018ao} and absolutely continuous \cite{Coppolaro:2020eo} scenarios
a representative geometry with {\em discrete-spectrum} characteristics which had not been previously explored. These three characteristic spectra are fully representative of the generic aspects of aperiodic order.
Overall, these results indicate that deterministic spatial (dis)order may play a significant role even at deeply subwavelength scales. Besides providing a new geometrical degree of freedom in the design of optical devices (such as absorbers or lasers), this also opens up intriguing possibilities in the optical probing of the microstructure of a (meta)material and the sensing of its variations at scales much smaller than a wavelength.
Of particular interest for future studies is the exploration of similar effects
in non-Hermitian \cite{Dikopoltsev:2019co} and time-varying \cite{Sharabi:2019lp} scenarios,
as well as the extension to 2-D geometries such as rod-type dielectric metamaterials.
\section{Introduction}
Observation of the gravitational wave signal from the binary neutron star (BNS) merger event GW170817 by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations (LVC) [\citealt{2017PhRvL.119p1101A}], and the follow-up observation campaign across the electromagnetic (EM) spectrum, marked the dawn of the era of multi-messenger astronomy (\citealt{2017ApJ...848L..13A}).
One of the most important findings was the association of GW170817 with the short Gamma-Ray Burst (\textit{s}GRB) 170817A, whose prompt emission was detected $\sim 1.7$ s after the GW signal (\citealt{2017ApJ...848L..13A}).
Clear evidence of a relativistic jet has also been obtained from radio observations at later times (\citealt{2018Natur.561..355M}).
As GRB observations show, relativistic jets are very important in time-domain astronomy, especially because of the EM emission over a wide spectrum (i.e., the prompt and the afterglow emission).
This scenario, in which the BNS merger powers a relativistic jet, was theoretically suggested for \textit{s}GRBs in the past (\citealt{1986ApJ...308L..43P}; \citealt{1986ApJ...308L..47G}; \citealt{1989Natur.340..126E}).
Recent studies of numerical relativity show that after the merger,
a central engine is formed and is surrounded by $\sim 10^{-3} - 10^{-2} M_\odot$ of matter that has been ejected during the merger, referred to as ``the dynamical ejecta'', which expands at a speed of $\sim 0.2c$, where $c$ is the speed of light (\citealt{1999PhRvD..60j4052S}; \citealt{2000PhRvD..61f4001S}; \citealt{2013PhRvD..87b4001H}; \citealt{2013ApJ...773...78B}; \citealt{2018ApJ...869..130R}; etc.).
Also, later on ($< 10$ s after the merger), matter gets ejected from the torus surrounding the merger remnant in the form of wind (\citealt{2017PhRvL.119w1102S,2018ApJ...869L..35R,2018ApJ...860...64F,2020arXiv200700474F,2020arXiv200804333N,2020arXiv200411298C}; etc.).
Multi-messenger observations of the BNS merger event GW170817 indicate that
the relativistic jet was launched from the central engine within $\sim 1.7$ s after the merger (more precisely, within $\sim 1.3$ s after the merger according to \citealt{2020MNRAS.491.3192H}, see their Figure 9; within $0.4$ s according to \citealt{2020arXiv200410210L}).
Therefore, the jet must have propagated through the dense surrounding medium (i.e., the dynamical ejecta), and successfully broke out of it for the \textit{s}GRB 170817A to be emitted (as it has been observed).
This is because the jet outflow is not observable unless it propagates up to the outer edge of the medium and eventually breaks out of it, as is the case for long GRBs, where the relativistic jet propagates through the stellar envelope of a massive star.
During its propagation,
the jet continuously injects energy into the expanding ejecta material.
This produces the hot cocoon.
The cocoon immediately surrounds the jet, interacts with it, and collimates it.
Although this phase, where the jet is confined inside the ejecta, is short, it is critical, as it shapes the jet (and the cocoon) structure (see \citealt{2020arXiv200602466G}).
After the breakout, the jet (and the cocoon) is the source of different EM counterparts over a wide band,
and it is the key to interpreting them (\citealt{2017ApJ...834...28N,2017ApJ...848L...6L,2018MNRAS.473..576G,2018PhRvL.120x1103L,2018ApJ...855..103P,2018ApJ...867...18N,2018PTEP.2018d3E02I}; etc.).
Also, this connection between the jet and the EM counterparts allows us to make use of observational data to extract crucial information and better understand the phenomenon of \textit{s}GRBs (e.g., the jet angular structure, \citealt{2019MNRAS.489.1919T,2020MNRAS.tmp.2102T};
the property of the jet outflow, \citealt{2019MNRAS.487.4884I}; the central engine, \citealt{2019ApJ...876..139G,2020MNRAS.491.3192H,2020arXiv200410210L,2020arXiv200607376S}; the physics of neutron density matter, \citealt{2019ApJ...881...89L}; the viewing angle, \citealt{2020arXiv200501754N}; etc.).
Therefore,
the jet propagation through the ejecta surrounding the BNS merger remnant, until the breakout, is a key process in \textit{s}GRBs (as it is in collapsars and long GRBs).
The propagation of astrophysical jets through dense ambient media has been the subject of intensive theoretical works; mostly, in the context of Active Galactic Nuclei (AGNs) and collapsars (\citealt{1989ApJ...345L..21B}; \citealt{1997ApJ...479..151M}; \citealt{1999ApJ...524..262M}; \citealt{2003MNRAS.345..575M}; \citealt{2011ApJ...740..100B}; \citealt{2013ApJ...777..162M}; \citealt{2018MNRAS.477.2128H}; etc.).
One critical difference in the context of BNS mergers is that the ambient medium expands at substantial velocities (\citealt{2013PhRvD..87b4001H}), while it is static in AGNs and collapsars (\citealt{1999ApJ...524..262M}).
This further complicates the problem of modeling the jet propagation in BNS mergers.
There have been an increasing number of studies dedicated to solving jet propagation in BNS mergers through numerical simulations, especially after the discovery of GW170817 (\citealt{2014ApJ...784L..28N}; \citealt{2014ApJ...788L...8M}; \citealt{2015ApJ...813...64D}; \citealt{2018MNRAS.475.2971B}; \citealt{2018ApJ...866....3D}; \citealt{2017ApJ...848L...6L}; \citealt{2018MNRAS.473..576G}; \citealt{2018MNRAS.479..588G}; \citealt{2018ApJ...863...58X}; \citealt{2020MNRAS.495.3780N}; \citealt{2020arXiv200602466G}; etc.).
However, the subject is still far from being well understood.
Using ideas from the modeling of the jet-cocoon in collapsars (e.g., \citealt{2011ApJ...740..100B}; \citealt{2013ApJ...777..162M}; \citealt{2018MNRAS.477.2128H}), several studies presented analytic modeling of the jet-cocoon in an expanding medium (\citealt{2017MNRAS.471.1652L}; \citealt{2018MNRAS.475.2659M}; \citealt{2018ApJ...866....3D}; \citealt{2018ApJ...866L..16M}; \citealt{2019ApJ...876..139G}; \citealt{2020A&A...636A.105S}; \citealt{2020MNRAS.491.3192H}; \citealt{2020MNRAS.491..483L}; \citealt{2020ApJ...895L..33B}; etc.).
Although some of these works offer promising results, many of them overlooked important aspects, such as the jet collimation, the expansion of the ejecta and its effect on the cocoon (e.g., on the cocoon pressure and on the cocoon radius), etc.
Still, there is no analytic model for jet propagation in an expanding medium that is both simple to use and robust (as is necessary to investigate other related topics,
such as the emission from the cocoon).
This work presents analytic modeling of jet propagation in an expanding medium.
This model is an upgrade of the model presented in \citet{2020MNRAS.491.3192H}.
The main improvement is that the jet collimation (i.e., the jet opening angle) is calculated in a self-consistent manner using the cocoon pressure, rather than relying on the assumption of a constant opening angle and on the free parameter $f_j$.
Another addition is the semi-analytic model presented here, where results are found through numerical integration of ordinary differential equations, relying less on approximations.
The aim of our work is to present a physical model that accurately describes the jet-cocoon system in an expanding medium, consistent with numerical simulations.
A crucial point in our study is that our modeling is based on rigorous analysis of the jet-cocoon system in numerical simulations (in both expanding and static media), which shows that the expansion of the medium does intrinsically affect the jet-cocoon (e.g., the energy composition of the cocoon, the expansion velocity of the cocoon, etc.).
We show that the jet-cocoon system, in an expanding medium, can be described by a set of equations that can be solved numerically (referred to as the ``semi-analytic" solution).
We also show that, with some reasonable approximations, the system of equations can be simplified and solved analytically (the ``analytic" solution).
Both solutions are rigorously compared to the results from the numerical simulations and found consistent.
This paper is organized as follows.
In Section \ref{sec:2}, physical modeling of the jet-cocoon system in both expanding and static media is presented, and two (semi-analytic and analytic) solutions are derived.
In Section \ref{sec:3}, numerical simulations are presented and compared to both solutions.
A conclusion is presented in Section \ref{sec:conclusion}.
\section{The jet-cocoon physical model}
\label{sec:2}
The jet-cocoon model presented here is an upgrade of previous models, in particular in \citet{2011ApJ...740..100B}; \citet{2013ApJ...777..162M}; \citet{2018MNRAS.477.2128H}; \citet{2020MNRAS.491.3192H}; etc.
Unlike previous models, this model allows us to treat the jet collimation by the cocoon in expanding media.
Therefore, this model can be applied not only to the case of collapsar jets (where the jet propagates through the static stellar envelope of a massive star) but also to the case of BNS merger jets (where the jet propagates through the expanding dynamical ejecta).
This model is also an upgrade of the model presented in \citet{2020MNRAS.491.3192H};
it takes into account the cocoon and its pressure on the jet (i.e., collimation), hence allowing us to derive the jet opening angle in a self-consistent manner.
\subsection{Jump conditions}
\label{sec:Jump conditions}
The jet head dynamics in a dense ambient medium can be determined by the shock jump conditions at the jet's head (e.g., \citealt{1989ApJ...345L..21B}; \citealt{1997ApJ...479..151M}; \citealt{2003MNRAS.345..575M}):
\begin{eqnarray}
h_j \rho_j c^2 (\Gamma\beta)_{jh}^2 + P_j = h_a \rho_a c^2 (\Gamma\beta)_{ha}^2 + P_a ,
\label{eq:jump}
\end{eqnarray}
where $h$, $\rho$, $\Gamma=(1-\beta^2)^{-1/2}$, and $P$ are enthalpy, density, Lorentz factor (with $\beta$ being the velocity normalized by the light speed), and pressure of each fluid element, all measured in the fluid's rest frame.
The subscripts $j$, $h$, and $a$ refer to three domains: the relativistic jet, the jet head, and the cold ambient medium, respectively. Typically, both $P_a$ and $P_j$ in equation (\ref{eq:jump}) are negligible terms;
hence, we can write the jet head velocity as (for more details see \citealt{2018PTEP.2018d3E02I}; \citealt{2020MNRAS.491.3192H}):
\begin{eqnarray}
\beta_h = \frac{\beta_j - \beta_a}{1 + \tilde{L}^{-1/2}} + \beta_a ,
\label{eq:beta_h 1}
\end{eqnarray}
where $\tilde{L}$ is the ratio of energy density between the jet and the ejecta,
$\tilde{L} = \frac{h_j \rho_j \Gamma_j^2}{h_a \rho_a \Gamma_a^2}$,
which can be approximated as:
\begin{eqnarray}
\tilde{L} \simeq \frac{L_j}{\Sigma_j(t) \rho_a c^3 \Gamma_a^2} ,
\label{eq:L expression approx}
\end{eqnarray}
where $L_j$ is the (true) jet luminosity (per one jet), $\Sigma_j(t)=\pi\theta_j^2(t) r_h^2(t)$ is the jet head cross section, with $r_h(t)$ being the jet head radius [$r_h(t)=\int^t_{t_0}c\beta_h\,dt+r_0$], and $\theta_j(t)$ being the jet opening angle (\citealt{2003MNRAS.345..575M}; \citealt{2011ApJ...740..100B}).
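To make these relations concrete, the following minimal Python sketch (ours, for illustration only; the input values are arbitrary illustrative choices, not fitted parameters) evaluates the jet head velocity of equation (\ref{eq:beta_h 1}) using the approximation of equation (\ref{eq:L expression approx}):
\begin{verbatim}
import numpy as np

C = 2.998e10  # speed of light [cm/s]

def beta_head(L_j, Sigma_j, rho_a, beta_a, beta_j=1.0):
    # Energy-density ratio, as in equation (eq:L expression approx):
    # Ltilde ~ L_j / (Sigma_j * rho_a * c^3 * Gamma_a^2)
    Gamma_a2 = 1.0 / (1.0 - beta_a**2)
    Ltilde = L_j / (Sigma_j * rho_a * C**3 * Gamma_a2)
    # Jet head velocity, equation (eq:beta_h 1)
    return (beta_j - beta_a) / (1.0 + Ltilde**-0.5) + beta_a

# Illustrative example: a one-sided jet of 2e48 erg/s with a head
# cross-section of radius 1e8 cm, in ejecta of density 1 g/cm^3
# expanding at beta_a = 0.2.
print(beta_head(L_j=2e48, Sigma_j=np.pi * (1e8)**2,
                rho_a=1.0, beta_a=0.2))   # -> ~0.68
\end{verbatim}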
\subsection{Main approximations}
\label{sec:Main approximations}
Here, in our analytic modeling, we consider a similar set of approximations to that in \citet{2020MNRAS.491.3192H}.
In summary:
\begin{enumerate}
\item The analytic treatment presented here is limited to the case of a non-relativistic jet head where:
\begin{equation}
\tilde{L}\ll (1-\beta_a)^2.
\end{equation}
\item In BNS mergers, during the merger, matter is dynamically ejected from the system (i.e., the dynamical ejecta) and surrounds the later formed central engine.
After the merger, another plausible source of mass ejection is the wind.
The mass of matter driven by the wind could be substantial at early times (e.g., $\sim 0.01 M_\odot$ in the case of magnetically driven wind; see \citealt{2020arXiv200411298C}); however, the launch time (of the wind) may be later than that of the jet, depending on the type of wind (see \citealt{2018ApJ...860...64F} for the case of viscous wind).
For simplicity, we consider the case where the central engine is only surrounded by the dynamical ejecta.
Note that, here, the ambient medium (often referred to as ``the ejecta'') is defined as the medium surrounding the central engine through which the jet propagation takes place; regardless of whether this inner region is gravitationally bound to the central engine or not.
Hence, for simplicity, the total mass of the ambient medium, $M_{a}$, is defined by the ambient medium's density through which the jet head propagates, $\rho_a(r,t)$ [i.e., in the polar direction]\footnote{Note that, in the case of BNS mergers, the density throughout the dynamical ejecta is angle dependent, with the density in the polar region being much lower than that near the equatorial region (see Figure 8 in \citealt{2020MNRAS.491.3192H}). This effect results in equation (\ref{eq:M_a}) giving a mass, $M_a$, of $\sim 0.002 M_\odot$ if accounting for a total dynamical ejecta mass of $\sim 0.01 M_\odot$ (for more details see \citealt{2020MNRAS.491.3192H}). Note that this value for the mass $M_a$ can be scaled up to account for the contribution from the wind.}:
\begin{equation}
M_{a}=\int_{r_0}^{r_{m}(t)}4\pi r^2 \rho_a(r,t) {\rm d}r,
\label{eq:M_a}
\end{equation}
where $r_0$ is the inner boundary of the ambient medium, which is of the order of $10^6-10^7$ cm, and $r_m(t)$ is the outer radius of the ambient medium.
This definition of $M_a$ is also used for the collapsar case.
\item In the case of an expanding medium (BNS merger), the approximation of a homologous expansion is used (see Figure 8 in \citealt{2020MNRAS.491.3192H}). Hence, the radial velocity of the ambient medium as a function of radius $r$ and time $t$ is approximated as:
\begin{equation}
v_a(r,t)=\left[\frac{r}{r_m(t)}\right]v_{m},
\label{eq:v_a}
\end{equation}
with $r_m(t)$ being the outer radius of the ambient medium, and $v_{m}$ being the maximum velocity of the ambient medium at the radius $r_m(t)$.
In the case of collapsars, the velocity is negligible, i.e., $v_m=0$.
\item The ambient medium's density profile is approximated to a power-law function, with ``$n$" being its index. Hence, considering the homologous expansion of the ambient medium, the density can be written as:
\begin{equation}
\rho_a(r,t)=\rho_0 \left[\frac{r_0}{r}\right]^n \left[\frac{r_{m,0}}{r_m(t)}\right]^{3-n},
\label{eq:rho_a}
\end{equation}
where $\rho_0=\rho_a(r_0,t_0) = \left[\frac{M_a}{4\pi r_0^n}\right]\left[\frac{3-n}{r_{m,0}^{3-n}-r_0^{3-n}}\right]$, and $r_{m,0}=r_m(t_0)$, with $t_0$ being the jet launch time.
This expression is simpler in the collapsar case, where $r_m(t)=r_{m,0}$ ($\equiv r_m$).
Also, the value $n=2$ is assumed for the density profile of the ambient medium, for both the BNS merger case and the collapsar case.
It should be noted that this is a simplification as, ideally, $n\sim 2-3.5$ in the BNS merger case (see Figure 8 in \citealt{2020MNRAS.491.3192H}), and $n\sim 1.5-3$ in the collapsar case (see Figure 2 in \citealt{2013ApJ...777..162M}).
Also, it should be noted that the analytic modeling presented here is limited to the case $n<3$ (\citealt{2011ApJ...740..100B}; \citealt{2020MNRAS.491.3192H}); a minimal numerical sketch of this medium model is given right after this list.
\item The pressure in the cocoon, $P_c$, is dominated by radiation pressure. Hence, it can be written as:
\begin{equation}
P_c= \frac{E_i}{3 V_c},
\label{eq:P_c}
\end{equation}
where $E_i$ is the cocoon's internal energy and $V_c$ is the cocoon's volume.
\item Based on rigorous analysis of the cocoon in numerical simulations, we suggest that the cocoon's shape is better approximated by an ellipsoid (see \citealt{2020MNRAS.491.3192H}; also see Figure \ref{fig:maps} below), where the ellipsoid's semi-major axis and semi-minor axis at a time $t$ are $\frac{1}{2}r_h(t)$ and $r_c(t)$, respectively, with $r_c(t)$ being the cocoon's lateral width (from the jet axis) at the radius $\frac{1}{2}r_h(t)$ [see also equation (\ref{eq:S1})].
Hence, the volume of the cocoon (in one hemisphere) can be written as:\footnote{Ideally, the jet volume should be subtracted from the above expression of $V_c$ to give a more accurate expression of the cocoon volume. However, as long as the jet opening angle is not very large (as it is the case here), the jet volume can be neglected.}
\begin{equation}
V_c=\frac{2\pi}{3}r_c^2(t) r_h(t).
\label{eq:V_c}
\end{equation}
Note that this presents one of the differences compared to previous works -- typically assuming a cylindrical cocoon shape (e.g., \citealt{2011ApJ...740..100B}; \citealt{2013ApJ...777..162M}; \citealt{2020A&A...636A.105S}).
\item As previously explained in \citet{2013ApJ...777..162M}, and more rigorously in \citet{2018MNRAS.477.2128H}, the analytic description of $\tilde{L}$ in equation (\ref{eq:L expression approx}) needs to be calibrated by numerical simulations.
The parameter $N_s$ is introduced to calibrate the analytic value of $\tilde L$ so that:
\begin{equation}
\tilde{L}_c = N_s^2 \tilde{L},
\label{eq:N_s}
\end{equation}
where $\tilde{L}_c$ is the calibrated counterpart of $\tilde{L}$.
Here, the value of $N_s$ for the analytic (or semi-analytic) solution is chosen so that the analytic (or semi-analytic) breakout time is calibrated to the breakout time measured in numerical simulations (see Table \ref{Table:sim}).
As previously noted in \citet{2020MNRAS.491.3192H} (also see \citealt{2013ApJ...777..162M,2018MNRAS.477.2128H}), $N_s$ accounts for the part of the energy that goes into the jet head's meandering, without contributing to its forward motion.
However, it should be noted that the value of $N_s$ depends on the parameter space (in particular on the value of $\tilde{L}$, see Figure 12 in \citealt{2018MNRAS.477.2128H}; and on $v_m$, see Appendix \ref{sec:C}).
Therefore, the values of $N_s$ used here are specific to the parameter space considered, and should not be taken at face value (for more details on $N_s$ refer to Appendix \ref{sec:C}).
\end{enumerate}
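As a concrete illustration of approximations (iii) and (iv), the following Python sketch (ours, not code used for any of the results) implements the homologous medium model; the quick check at the end uses T03-H-like parameters from Table \ref{Table:sim} and assumes \texttt{scipy} is available. Setting $v_m=0$ recovers the static collapsar medium.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C, MSUN = 2.998e10, 1.989e33   # [cm/s], [g]

def r_m(t, t0, r_m0, v_m):
    # Outer radius of the homologously expanding medium.
    return r_m0 + v_m * (t - t0)

def v_a(r, t, t0, r_m0, v_m):
    # Homologous velocity profile: v_a = [r / r_m(t)] v_m.
    return (r / r_m(t, t0, r_m0, v_m)) * v_m

def rho_a(r, t, t0, r_m0, v_m, M_a, r0, n=2):
    # Power-law density profile, normalized so that the mass
    # integral over [r0, r_m0] at t = t0 gives M_a.
    rho0 = (M_a * (3 - n)) / (4 * np.pi * r0**n
                              * (r_m0**(3 - n) - r0**(3 - n)))
    return rho0 * (r0 / r)**n * (r_m0 / r_m(t, t0, r_m0, v_m))**(3 - n)

# Quick check with T03-H-like parameters (Table 1): the mass integral
# of rho_a at t = t0 should recover M_a.
M_a, r0, r_m0 = 0.002 * MSUN, 1.2e8, 1.67e9
v_m, t0 = (np.sqrt(3) / 5) * C, 0.0
mass, _ = quad(lambda r: 4 * np.pi * r**2
               * rho_a(r, t0, t0, r_m0, v_m, M_a, r0), r0, r_m0)
print(mass / M_a)   # -> ~1.0
\end{verbatim}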
Following the introduction of the calibration coefficient $N_s$, $\tilde{L}$ is substituted by $\tilde{L}_c$ in equation (\ref{eq:beta_h 1}), and with $\beta_j\simeq 1$, the jet head velocity can be written as:
\begin{eqnarray}
\beta_h = \left[ (1 - \beta_a)(1 + \tilde{L}_c^{1/2})^{-1} \right] \tilde{L}_c^{1/2} + \beta_a ,
\label{eq:beta_h 2}
\end{eqnarray}
where $\tilde{L}_c$ can be found from equations (\ref{eq:L expression approx}) and (\ref{eq:N_s}).
Given the jet luminosity $L_j$, the ambient medium's velocity $\beta_a$ [$=v_a(r,t)/c$; see equation (\ref{eq:v_a})], and the density $\rho_a(r,t)$ [see equation (\ref{eq:rho_a})], the only unknown quantity for $\tilde{L}_c$ (i.e., $\beta_h$) to be determined is the jet head cross-section $\Sigma_j(t)$.
The jet head opening angle $\theta_j(t)$, and hence $\Sigma_j(t)$, will be determined in Section \ref{sec:cocoon collimation} by considering the collimation of the jet by the cocoon.
The jet head velocity [i.e., $\beta_h$ in equation (\ref{eq:beta_h 2})] will be solved in two different ways: Semi-analytically and analytically (details are given in Sections \ref{sec:Semi-analytic solution} and \ref{sec:Analytic solution}, respectively).
In the semi-analytic solution, the expression of $\beta_h$ in equation (\ref{eq:beta_h 2}) is used as it is, and is solved through numerical integration.
In the analytic solution, the above expression of $\beta_h$ is further approximated so that it is solved analytically (see Section \ref{sec:The approximated analytic jet head velocity}).
\subsection{The cocoon and jet collimation}
\label{sec:cocoon collimation}
\subsubsection{The system of equations}
We follow the same treatment of \citet{2011ApJ...740..100B}.
The unshocked jet's height $\hat{z}$ can be written as a function of the jet luminosity $L_j$ and the cocoon's pressure $P_c$:
\begin{eqnarray}
\hat{z} = \sqrt{\frac{L_j}{\pi c P_c}} + z_*.
\label{eq:z}
\end{eqnarray}
With $r_{in}$ being the radius at which the jet is injected into the medium, $z_* = \max[r_{in},z(P_c = P_{j0})]$ is the radius at which the pressure of the injected jet and the pressure of the cocoon are balanced; beyond $z_*$ the pressure of the cocoon is higher than the pressure of the injected jet (\citealt{2011ApJ...740..100B}).
In our simulations, $z_*$ is typically of the same order as $r_{in}$; hence, for simplicity, we take $z_* \approx r_{in}$.
At a certain time $t$, the jet is uncollimated if the jet head's radius, $r_h(t)$, is below $\hat{z}/2$, and collimated if it is beyond $\hat{z}/2$ (see Figure 2 in \citealt{2011ApJ...740..100B}).
Hence, the jet head's cross-section can be found for the two modes as follows:
\begin{equation}
\Sigma_j(t) =
\begin{cases}
\pi r_h^2(t)\theta_0^2 & \text{if $r_h(t)<\hat{z}/2$ (uncollimated jet)} ,\\
\pi r_h^2(t)\theta_j^2(t) & \text{if $r_h(t)>\hat{z}/2$ (collimated jet)} ,
\end{cases}
\label{eq:Sigma}
\end{equation}
where $\theta_0$ is the initial opening angle of the jet\footnote{The initial opening angle is given by $\theta_0 \approx \theta_{inj} + 1/\Gamma_0$ where $\theta_{inj}$ is the opening angle of the injected jet at $t=t_0$ and $r=r_{in}$, and $\Gamma_0$ is its initial Lorentz factor.}, and $\theta_j(t)$ is the opening angle of the jet head at a given time $t$.
Since the cocoon shape is approximated by an ellipsoid [see (vi) in Section \ref{sec:Main approximations}], $r_c(t)$ is the cocoon's lateral width at the radius $\frac{1}{2}r_h(t)$.
$r_c(t)$ is determined by integrating the lateral velocity, $\beta_{\perp}$, with which the cocoon expands into the ambient medium at the radius $\sqrt{[r_h(t)/2]^2 + r_c^2(t)}\approx \frac{1}{2}r_h(t)$ [since $r_h(t)\gg r_c(t)$; see Figures \ref{fig:BNS case}, \ref{fig:collapsar case}, and \ref{fig:maps}].
At this radius, since the ambient medium's velocity $v_a(r_h/2)$ is $\leq v_m/2$, and considering the value of $v_m$ (see Table \ref{Table:sim}), $\Gamma_a(r_h/2)$ is $\approx 1$ and a non-relativistic treatment is reasonable.
$\beta_{\perp}$ is therefore determined by the ram pressure balance between the cocoon and the ambient medium at the radius $\frac{1}{2}r_h(t)$, giving:
\begin{eqnarray}
P_c \approx \rho_a(r_h/2,t) c^2 [\beta_{\perp} - \beta_{a,\perp}]^2 ,
\label{eq:jump and beta_perp}
\end{eqnarray}
where $\beta_{a,\perp}$ is the ambient medium's expansion velocity [see equation (\ref{eq:v_a})] in the lateral direction:
\begin{eqnarray}
\beta_{a,\perp} =\left[\frac{r_c(t)}{r_m(t)}\right]\frac{v_{m}}{c}.
\label{eq:beta_perp}
\end{eqnarray}
In summary, the equations describing the jet-cocoon system can be found as follows:
\begin{align}
\label{eq:S1}
\frac{dr_c(t)}{dt} =& c\beta_\perp , \\
\label{eq:S2}
\beta_{\perp} =&
\sqrt{\frac{P_{c}}{{\rho}_{a}(r_h/2,t) c^{2}}} +\left[\frac{r_c(t)}{r_m(t)}\right]\frac{v_{m}}{c}, \\
\label{eq:S3}
P_{{c}} =& \:\:\:\:\:\: \frac{E_{i}}{3\:V_{{c}}} \:\:\:\:\:\:\:= \eta\frac{L_j\left(1-\langle{\beta_h}\rangle \right) \:(t-t_0)}{2 \pi r_c^{2}(t) r_{{h}}(t)} , \\
\label{eq:S4}
\Sigma_j(t) =& \pi r_h^2(t) \theta_j^2(t) = \frac{L_j \theta_0^2}{4 c P_c} ,
\end{align}
where $P_c$ and $V_c$ are defined as in equations (\ref{eq:P_c}) and (\ref{eq:V_c}), respectively;
and $\left<\beta_h\right>=\frac{1}{c}\frac{r_h(t)-r_0}{t-t_0}$ is the time-averaged jet head velocity, which is a term that takes into account the fact that a part of the injected energy [$=L_j\langle{\beta_h}\rangle(t-t_0)$] is contained in the jet and does not make its way into the cocoon instantly.
The last equation (\ref{eq:S4}) is determined by the pressure balance between the post-collimated jet and the cocoon (\citealt{2011ApJ...740..100B}).
The expression of $\beta_\perp$ [and eventually $r_c(t)=\int_{t_0}^t c \beta_\perp dt + r_0\theta_{0}$] here is different from the original
collapsar case where the medium is static (\citealt{2011ApJ...740..100B}; \citealt{2018MNRAS.477.2128H}); it is instead applicable to both the case of static medium and the case of expanding medium.
The term $\left[\frac{r_c(t)}{r_m(t)}\right]\frac{v_{m}}{c}$ in equation (\ref{eq:S2}) is new and is the result of the homologous expansion of the medium.
It is worth mentioning that, in the case of an expanding medium as in BNS mergers, the term $\left[\frac{r_c(t)}{r_m(t)}\right]\frac{v_{m}}{c}$ strongly dominates (in $\beta_\perp$) over the term $\sqrt{\frac{P_{c}}{\rho_{a}(r_h/2,t) c^{2}}}$, and hence it is important.
\subsubsection{The parameters $\eta$ and $\eta'$}
\label{sec:eta definition}
$\eta$ in equation (\ref{eq:S3}) is a parameter that expresses the fraction of internal energy in the total energy delivered into the cocoon (by the engine and through the jet) at a given time $t$ (\citealt{2011ApJ...740..100B}; \citealt{2013ApJ...777..162M}). It takes values between 0 and 1, and it can be expressed as:
\begin{equation}
\eta= \frac{3P_c V_c}{L_j\left(1-\langle{\beta_h}\rangle \right) \:(t-t_0)} ,
\label{eq:eta}
\end{equation}
with $E_i= 3P_cV_c$ [see equation (\ref{eq:P_c})].
For convenience, we define the parameter $\eta'=\eta[1-\langle{\beta_h}\rangle]$;
it relates to the fraction of internal energy in the cocoon out of the total energy delivered by the central engine, at a given time $t$.
Hence:
\begin{equation}
\eta'= \frac{3P_c V_c}{L_j \:(t-t_0)}.
\label{eq:eta'}
\end{equation}
$\eta$ and $\eta'$ can be easily deduced from numerical simulations by measuring both $P_c$ and $V_c$, or by measuring the internal energy in the cocoon $E_i$.
In Section \ref{sec:Mesurements of internal energy}, using numerical simulations' results, we will show that, on average, $\langle{\eta'}\rangle \sim 1/2$ for the collapsar case, and $\langle{\eta'}\rangle \sim 1/4$ for the BNS merger case [see Figure \ref{fig:eta} and equation (\ref{eq: eta cases})], where:
\begin{equation}
\langle{\eta'}\rangle = \frac{1}{t_b-t_0}\int_{t_0}^{t_b}\eta'dt.
\label{eq:eta average}
\end{equation}
These fiducial values will be adopted to solve the jet head motion (see Table \ref{Table:sim}).
\subsection{The semi-analytic solution}
\label{sec:Semi-analytic solution}
Here, the system of equations [equations (\ref{eq:L expression approx}), (\ref{eq:N_s}), (\ref{eq:beta_h 2}), (\ref{eq:z}), (\ref{eq:Sigma}), (\ref{eq:S1}), (\ref{eq:S2}), (\ref{eq:S3}), and (\ref{eq:S4})] is solved through numerical integration.
At every time step, the time is updated (from $t$ to $t+dt$, where $dt$ is sufficiently small).
The density ${\rho}_a(r_h/2,t)$ in equation (\ref{eq:S2}) is calculated using equation (\ref{eq:rho_a}).
Then, using equation (\ref{eq:S3}) the pressure is calculated;
the parameter $\eta'$ [as defined in equation (\ref{eq:eta'})] is represented by its time-averaged value $\langle{\eta'}\rangle$ [see equation (\ref{eq:eta average})] as measured in numerical simulations (see Table \ref{Table:sim} for the values of $\langle{\eta'}\rangle$ used).
Next, $\beta_\perp$ is derived using equation (\ref{eq:S2}).
The jet head's cross-section and opening angle are found by calculating $\hat{z}$ first, using (\ref{eq:z}), and then determining the collimation mode and the opening angle of the jet, using equation (\ref{eq:Sigma}) together with equation (\ref{eq:S4}).
$\tilde{L}_c$ is then calculated using equations (\ref{eq:L expression approx}) and (\ref{eq:N_s}).
Finally, at the end of each time step, the jet head radius $r_h(t)$, the cocoon's lateral width $r_c(t)$, and the cocoon's volume $V_c$, for the next time step are calculated using equations (\ref{eq:beta_h 2}), (\ref{eq:S1}) and (\ref{eq:V_c}), respectively.
These processes are repeated until the jet breaks out of the ambient medium [i.e., the following condition is met: $r_h(t) \geqslant r_m(t)$].
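For concreteness, this integration loop can be sketched in a few lines of Python. The sketch below is ours and only illustrates the algorithm: it uses the T03-H parameters of Table \ref{Table:sim}, a first-order Euler update, and ad hoc choices for the time step and the initialization [$r_c(t_0)=r_0\theta_0$], so the resulting breakout time should only be expected to fall near, not exactly at, the tabulated value.
\begin{verbatim}
import numpy as np

# Constants and T03-H parameters (Table 1)
C, MSUN = 2.998e10, 1.989e33
theta0 = np.radians(6.8)
L_j    = 5e50 * theta0**2 / 4.0       # true (one-sided) luminosity
M_a    = 0.002 * MSUN
r0 = r_in = 1.2e8                     # inner boundary / injection radius
r_m0   = 1.67e9                       # outer ejecta radius at t0
v_m    = (np.sqrt(3) / 5) * C         # maximum ejecta velocity
eta_p  = 0.25                         # <eta'> (Table 1)
N_s    = 0.75                         # semi-analytic calibration (Table 1)
n      = 2

rho0 = (M_a * (3 - n)) / (4 * np.pi * r0**n * (r_m0**(3-n) - r0**(3-n)))

dt, t = 1e-5, 0.0                     # t is measured from the launch time
r_h, r_c = r0, r0 * theta0            # initial head radius, cocoon width

while True:
    r_m = r_m0 + v_m * t              # homologous expansion of the medium
    if r_h >= r_m:                    # breakout condition
        break
    rho = lambda r: rho0 * (r0 / r)**n * (r_m0 / r_m)**(3 - n)
    # Cocoon pressure, equation (eq:S3), with eta' -> <eta'>
    P_c = eta_p * L_j * max(t, dt) / (2 * np.pi * r_c**2 * r_h)
    # Collimation mode, equations (eq:z), (eq:Sigma), (eq:S4)
    zhat = np.sqrt(L_j / (np.pi * C * P_c)) + r_in
    Sigma = np.pi * r_h**2 * theta0**2 if r_h < zhat / 2 \
        else L_j * theta0**2 / (4 * C * P_c)
    # Calibrated energy-density ratio and head velocity,
    # equations (eq:L expression approx), (eq:N_s), (eq:beta_h 2)
    beta_a = (r_h / r_m) * v_m / C
    Lt = N_s**2 * L_j / (Sigma * rho(r_h) * C**3 / (1.0 - beta_a**2))
    beta_h = (1 - beta_a) * np.sqrt(Lt) / (1 + np.sqrt(Lt)) + beta_a
    # Lateral cocoon velocity, equation (eq:S2)
    beta_p = np.sqrt(P_c / (rho(r_h / 2) * C**2)) + (r_c / r_m) * v_m / C
    r_h += C * beta_h * dt
    r_c += C * beta_p * dt
    t   += dt

print(f"breakout at t - t0 = {t:.3f} s")  # Table 1 lists 0.231 s
\end{verbatim}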
\subsection{The analytic solution}
\label{sec:Analytic solution}
Here, the system of equations (\ref{eq:L expression approx}), (\ref{eq:N_s}), (\ref{eq:beta_h 2}), (\ref{eq:Sigma}), (\ref{eq:S1}), (\ref{eq:S2}), (\ref{eq:S3}), and (\ref{eq:S4}) [in Sections \ref{sec:Jump conditions}, \ref{sec:Main approximations}, and \ref{sec:cocoon collimation}] is simplified using several additional approximations, and then solved analytically.
In summary, the jet head's velocity [equation (\ref{eq:beta_h 2})] is simplified to equation (\ref{eq:beta_h 2 approx}), which can be written as a function of $t$, $r_h(t)$, and $\theta_j(t)$ using equations (\ref{eq:L expression approx}), (\ref{eq:N_s}), and (\ref{eq:Sigma}) [see Section \ref{sec:The approximated analytic jet head velocity}].
The expression of the cocoon's lateral width, $r_c(t)$, is simplified from equation (\ref{eq:S1}) to equation (\ref{eq:S1 approx}) [$\langle{\chi}\rangle$ can be found with equations (\ref{eq:chi}) and (\ref{eq:chi average}); see Section \ref{sec:The approximated cocoon's lateral width $r_c$}], and with equation (\ref{eq:S3 approx}) the expression of the cocoon pressure, $P_c$, is derived analytically in Section \ref{sec:The system of equations and the analytic solution} [in equation (\ref{eq:P_c approx}) as a function of $t$ and $r_h(t)$].
Next, equation (\ref{eq:S4}) is used to find the analytic expression of the jet opening angle $\theta_j(t)$ [equation (\ref{eq:theta_j/theta_0 app}) as a function of $r_h(t)$ and $t$], which allows us to derive an analytically solvable equation of motion of the jet head [equation (\ref{eq:dif dynamic coll v0})], and to determine the solution, $r_h(t)$, as a function of the initial parameters and $t$ (see Section \ref{sec:BNS merger case}).
The same logic can be used in the collapsar, and the equation of motion of the jet head can be found accordingly [equation (\ref{eq:dif static v0}); see Section \ref{sec:The collapsar case}].
For reference, Table \ref{Table:sim} presents a summary of the relevant parameters and the values they take.
\subsubsection{Approximated jet head velocity $\beta_h$}
\label{sec:The approximated analytic jet head velocity}
In the analytic solution, two additional approximations are used for the jet head velocity.
Firstly, in the case of BNS mergers where the medium is expanding (i.e., $\Gamma_a>1$), the term $\frac{1}{\Gamma_a^2}$ in the expression of $\tilde{L}_c$ in equation (\ref{eq:L expression approx}) is considered as constant and is absorbed into $N_s$.
Secondly, in the analytic solution, the term $\left[ (1 - \beta_a)(1 + \tilde{L}_c^{1/2})^{-1} \right]$ in equation (\ref{eq:beta_h 2}) is also approximated as roughly constant over time
and is also effectively absorbed into the calibration coefficient $N_s$.
The result is the following expression:
\begin{eqnarray}
\beta_h \approx \tilde{L}_c^\frac{1}{2} + \beta_a .
\label{eq:beta_h 2 approx}
\end{eqnarray}
In the case of BNS mergers, and for typical parameters ($\beta_a \sim 0.2$ and ${\tilde L}_c \sim 0.1$--$0.4$), these approximations would result in a factor of $\sim 0.5$ being absorbed in $N_s$
[values of $N_s$ are given in the caption of Table \ref{Table:sim}; for details refer to Appendix \ref{sec:C2} and equation (\ref{eq:N_s analytic BNS})].
In the case of collapsars ($\beta_a = 0$) the above expression is even simpler [see equation (\ref{eq:beta_h 2 approx collapsar}) in Section \ref{sec:The collapsar case}], and this approximation results in a factor of $\sim 0.7$ being absorbed in $N_s$ [for details refer to Appendix \ref{sec:C2} and equation (\ref{eq:N_s analytic collaspar})].
\citet{2018MNRAS.477.2128H} showed that $N_s$ depends on the actual value of $\tilde{L}$ (i.e., $\tilde{L}_c$), but overall $N_s \sim 0.3-0.4$ for the case of a non-relativistic collapsar jet.
As a remark, since $N_s$ here is used to absorb the above two approximations, its value differs depending on the type of the jet (BNS merger case or collapsar case) and on the type of the solution (semi-analytic or analytic; see Sections \ref{sec:Semi-analytic solution} and \ref{sec:Analytic solution}).
Even for the case of a collapsar jet, the values of $N_s$ here do differ slightly from those in \citet{2018MNRAS.477.2128H}
[see the caption of Table \ref{Table:sim} for the values of $N_s$].
This is because additional difference in $N_s$ emerges as a result of the difference in the modeling [e.g., difference in the modeling of the cocoon's lateral width, volume, and in the value of $E_i/E_c$ (or $\eta$) compared to \citealt{2018MNRAS.477.2128H}].
\subsubsection{The approximated cocoon's lateral width $r_c$}
\label{sec:The approximated cocoon's lateral width $r_c$}
Here, the expression of $r_c(t)$ is simplified based on the approximation that the term $\sqrt{\frac{P_{c}}{\rho_{a}(r_h/2,t) c^{2}}}$ in the expression of $\beta_\perp$ [in equation (\ref{eq:S2})] is roughly constant over time.
This approximation is justified later by comparison with numerical simulations.
This allows us to write equation (\ref{eq:S1}) as:
\begin{equation}
\frac{dr_c(t)}{dt} + \left[-\frac{v_{m}}{r_m(t)}\right]r_c(t) = \sqrt{\frac{P_{c}}{\rho_{a}(r_h/2,t) }}.
\label{eq:S1 no jet}
\end{equation}
This is integrated, with $r_c(t=t_0)$ being negligibly small, as
\begin{equation}
r_c(t) \approx \chi(t) \sqrt{\frac{P_{c}}{\rho_{a}(r_h/2,t)}} (t-t_0),
\end{equation}
where
$\chi(t)$ is given by:
\begin{equation}
\chi(t) = \frac{r_m(t)}{r_m(t)-r_{m,0}} \ln{\left[\frac{r_m(t)}{r_{m,0}}\right]}\approx \frac{t-t_m}{t-t_0} \ln{\left[\frac{t-t_m}{t_0-t_m}\right]} ,
\label{eq:chi}
\end{equation}
with $t_m$ being the time of the merger in the case of BNS mergers.
The value of $\chi(t)$ in BNS mergers depends on the time since the merger, the time since the jet launch, and the time delay between the merger and the jet launch.
Typically $\chi(t)$ is found to take values as follows (also, see Table \ref{Table:sim} for the average values):
\begin{equation}
\chi(t)
\begin{cases}
=1 & \text{if $\beta_a=0$ (Collapsar case)} ,\\
\sim 1-2 & \text{if $\beta_a \sim 0.2-0.3$ (BNS merger case)} .
\end{cases}
\label{eq: chi cases}
\end{equation}
Since $\chi(t) \propto \ln{t}$, its evolution over time is very limited. Therefore, in order to further simplify the expression of $r_c(t)$, we consider the time-averaged value of $\chi(t)$:
\begin{equation}
\langle{\chi}\rangle = \frac{1}{t_b-t_0}\int_{t_0}^{t_b}\chi(t)dt ,
\label{eq:chi average}
\end{equation}
so that $r_c(t)$ is simplified to:
\begin{equation}
r_c(t) \approx \langle{\chi}\rangle \sqrt{\frac{P_{c}}{\rho_{a}(r_h/2,t)}} (t-t_0).
\end{equation}
See Table \ref{Table:sim} for the typical values of $\langle{\chi}\rangle$.
When deriving the breakout time $t_b$, $\langle{\chi}\rangle$ and $t_b$ depend on each other [see equation (\ref{eq:t_b BNS}) for the expression of $t_b$, with $A_1 \propto 1/\sqrt{\langle{\chi}\rangle}$ as in equation (\ref{eq:A1})].
However, this dependency is very weak, and a small variation in the value of $t_b$ hardly affects the value of $\langle{\chi}\rangle$.
Therefore, both $t_b$ and $\langle{\chi}\rangle$ can be determined iteratively\footnote{Initially, a typical value is assumed for $t_b$ and $\langle{\chi}\rangle$ based on a guess on $t_b$ which can be guided by numerical simulations. Then $t_b$ is found using equation (\ref{eq:t_b BNS}) and a new value of $\langle{\chi}\rangle$ is found by inserting $t_b$ in equation (\ref{eq:chi average}). This new value of $\langle{\chi}\rangle$ results in a slightly different $t_b$, which is used again (to find a more accurate $\langle{\chi}\rangle$). This process is repeated $\sim 2-3$ times until the values of $t_b$ and $\langle{\chi}\rangle$ converge.}, as illustrated in the sketch below.
\subsubsection{The system of equations and the analytic solution}
\label{sec:The system of equations and the analytic solution}
The system of equations (\ref{eq:S1}), (\ref{eq:S2}), (\ref{eq:S3}), and (\ref{eq:S4}) can be simplified to the following:
\begin{align}
\label{eq:S1 approx}
r_c(t) \approx& \langle{\chi}\rangle \sqrt{\frac{P_{c}}{\rho_{a}(r_h/2,t)}} (t-t_0) ,\\
\label{eq:S2 approx}
\beta_{\perp} =&
\sqrt{\frac{P_{c}}{{\rho}_{a}(r_h/2,t) c^{2}}} +\left[\frac{r_c(t)}{r_m(t)}\right]\frac{v_{m}}{c}, \\
\label{eq:S3 approx}
P_{{c}} =& \:\:\:\:\:\: \frac{E_{i}}{3\:V_{{c}}} \:\:\:\:\:\:\:= \langle{\eta'}\rangle \frac{L_j \:(t-t_0)}{2 \pi r_c^{2}(t) r_{{h}}(t)} , \\
\label{eq:S4 approx}
\Sigma_j(t) =& \pi r_h^2(t) \theta_j^2(t) = \frac{L_j \theta_0^2}{4 c P_c} .
\end{align}
From equation (\ref{eq:rho_a}), ${\rho}_{a}(r_h/2,t)$ can be found.
Then, replacing $r_c(t)$ in equation (\ref{eq:S3 approx}), and using $r_h(t)\gg r_0$ and $r_{m,0}\gg r_0$, $P_c$ can be written as:
\begin{equation}
P_c = \sqrt{ \frac{\langle{\eta'}\rangle}{\langle{\chi}\rangle^2} \frac{(3-n) L_j M_{a} }{2^{3-n}\: \pi^2(t-t_0)} \frac{r_m^{n-3}(t)}{r_h^{n+1}(t)}} .
\label{eq:P_c approx}
\end{equation}
Finally, substituting equation (\ref{eq:P_c approx}) in equation (\ref{eq:S4 approx}) gives the expression of the opening angle of the collimated jet as:
\begin{equation}
\frac{\theta_j(t)}{\theta_0} =
\left[\frac{\langle{\chi}\rangle^2}{\langle{\eta'}\rangle} \frac{L_j}{M_{a}c^2} \frac{1}{(3-n)2^{n+1}} \right]^{\frac{1}{4}}\: \left[\frac{r_h(t)}{r_m(t)} \right]^{\frac{n-3}{4}} \: [t-t_0]^{\frac{1}{4}} .
\label{eq:theta_j/theta_0 app}
\end{equation}
Notice the weak dependence of the jet opening angle on time, which has already been pointed out in \citet{2020MNRAS.491.3192H}.
The opening angle of the jet depends on the two parameters $\langle{\chi}\rangle$ and $\langle{\eta'}\rangle$.
In the BNS merger case, the ratio $\langle{\chi}\rangle^2/\langle{\eta'}\rangle$ can take values up to $\sim 10$; hence, these two parameters are important and should not be overlooked.
The expressions of $P_c$ and $\theta_j(t)$ [equations (\ref{eq:P_c approx}) and (\ref{eq:theta_j/theta_0 app})] are valid for both the BNS merger jet case and the collapsar jet case [where $r_m(t)\equiv r_m$ and $\langle{\chi}\rangle=1$].
\subsubsection{Analytic solution for the BNS merger case}
\label{sec:BNS merger case}
The jet head velocity [equation (\ref{eq:beta_h 2 approx})] with equations (\ref{eq:L expression approx}), (\ref{eq:v_a}), (\ref{eq:rho_a}), (\ref{eq:N_s}), and (\ref{eq:theta_j/theta_0 app}), with further simplifications, gives the following differential equation:
\begin{eqnarray}
\frac{dr_h(t)}{dt} + \left[ -\frac{v_{m}}{r_{m}(t)}\right]r_h(t) = {A(t)}\:{r_m(t)}^\frac{3-n}{2}{r_h(t)}^\frac{n-2}{2} ,
\label{eq:dif dynamic coll v0}
\end{eqnarray}
where:
\begin{equation}
A(t)= N_s\sqrt{ \left(\frac{r_{m,0}^{3-n}-r_0^{3-n}}{(3-n)\:r_{m,0}^{3-n}}\right)\left(\frac{4\:L_j}{\theta_0^2 M_{a}\:c}\right) } \times\left[\frac{\theta_0}{\theta_j(t)}\right] .
\label{eq:A(t)}
\end{equation}
The jet is initially uncollimated until the jet head's radius, $r_h(t)$, crosses the radius $\hat{z}/2$ [see equation (\ref{eq:Sigma})].
Since this initial phase is very short (relative to the jet propagation timescale until the breakout; see Figure \ref{fig:BNS case}), we treat the jet as if it were in the collimated mode from the start ($t=t_0$).
Therefore, the jet opening angle can be found using equation (\ref{eq:theta_j/theta_0 app}), and after inserting it in the expression of $A(t)$ [equation (\ref{eq:A(t)})], the equation of motion [equation (\ref{eq:dif dynamic coll v0})] can be found as:
\begin{equation}
\left[\frac{r_h(t)}{r_m(t)}\right]^{\frac{5-n}{4}} = A_1 \frac{5-n}{4} \int {r_m^{-3/4}(t)[1-r_{m,0}/r_m(t)]^{-1/4} dt},
\end{equation}
where $A_1$, a constant, can be found as follows:
\begin{align}
\label{eq:A1}
A_1=& N_s \left[\frac{\langle{\eta'}\rangle}{\langle{\chi}\rangle^2}\right]^\frac{1}{4}\left[ \left(\frac{r_{m,0}^{3-n}-r_0^{3-n}}{r_{m,0}^{3-n}}\right)^2\frac{2^{n+5}}{3-n}\frac{\:L_j\:v_{m}}{\theta_0^4 M_{a}} \right]^{\frac{1}{4}} , \\
\label{eq:A1 approx}
\approx& N_s \left[\frac{\langle{\eta'}\rangle}{\langle{\chi}\rangle^2}\right]^\frac{1}{4}\left[\frac{2^{n+5}}{3-n}\frac{\:L_j\:v_{m}}{\theta_0^4 M_{a}} \right]^{\frac{1}{4}} .
\end{align}
In the case where the delay between the merger time and the jet launch time, $t_0-t_m$, is much smaller than the breakout time $t_b-t_m$:
$t_0-t_m\ll t_b - t_m$, we have $r_m(t) \gg r_{m,0}$, hence the following approximation can be made:\footnote{The approximation $[1-r_{m,0}/r_m(t)]\approx 1$ is not good in the early phase of jet propagation where $r_{m,0} \simeq r_m(t)$.
Still, since this early phase's timescale is very short (relative to the whole jet propagation timescale; see Figure \ref{fig:BNS case}) this approximation is reasonable as long as $t_b-t_m\gg t_0-t_m$.}
\begin{equation}
\int {r_m^{-3/4}(t)[1-r_{m,0}/r_m(t)]^{-1/4} dt} \simeq \int{ r_m^{-3/4}(t)dt}.
\label{eq:r_m>>r_m,0 app}
\end{equation}
With the boundary conditions $r_m(t_0)=r_{m,0}$ and $r_h(t_0)=r_0$ at $t=t_0$, the integration gives:
\begin{equation}
r_h(t) = \left\{ \frac{(5-n)A_1 }{v_{m}}(r_m^{\frac{1}{4}}(t) - r_{m,0}^\frac{1}{4}) +\left[\frac{r_0}{r_{m,0}}\right]^\frac{5-n}{4} \right\}^\frac{4}{5-n}r_m(t) .
\label{eq:r_h analytic}
\end{equation}
The jet head velocity can be deduced from equation (\ref{eq:r_h analytic}) as:
\begin{equation}
v_h(t)=v_{m}\left[ \frac{r_h(t)}{r_m(t)}\right] +A(t)\left[ \frac{r_h(t)}{r_m(t)}\right]^\frac{n-2}{2}[r_m(t)^\frac{1}{4}(r_m(t)-r_{m,0})^\frac{1}{4}] .
\label{eq:v_h BNS}
\end{equation}
Finally, the breakout time can be derived by taking $r_h(t_b)/r_m(t_b)=1$ in equation (\ref{eq:r_h analytic}):
\begin{equation}
t_b - t_0 =\left\{ \frac{v_{m}^\frac{3}{4}}{(5-n)A_1}\left[1-\left[\frac{r_0}{r_{m,0}}\right]^\frac{5-n}{4}\right]+ \left(\frac{r_{m,0}}{v_m}\right)^\frac{1}{4}\right\}^{4} - \frac{r_{m,0}}{v_{m}} .
\label{eq:t_b BNS}
\end{equation}
\subsubsection{Analytic solution for the collapsar case}
\label{sec:The collapsar case}
This case is a special case from the previous one (in Section \ref{sec:BNS merger case}) where $v_m=0$ [i.e., the ambient medium is static: $\beta_a = 0$, $\chi(t)=1$ and $r_m(t) \equiv r_{m}$].
Therefore, the equation of motion [equation (\ref{eq:beta_h 2})], after being approximated to equation (\ref{eq:beta_h 2 approx}) (see Section \ref{sec:The approximated analytic jet head velocity}),
can be further simplified to the following:
\begin{eqnarray}
\beta_h \approx \tilde{L}_c^\frac{1}{2}.
\label{eq:beta_h 2 approx collapsar}
\end{eqnarray}
Hence, the equation of motion for the jet head can be found as:
\begin{eqnarray}
\frac{dr_h(t)}{dt} = {A(t)}\:{r_m}^\frac{3-n}{2}{r_h(t)}^\frac{n-2}{2} ,
\label{eq:dif static v0}
\end{eqnarray}
where $A(t)$ here is:
\begin{equation}
A(t)= N_s\sqrt{ \left(\frac{r_{m}^{3-n}-r_0^{3-n}}{(3-n)\:r_{m}^{3-n}}\right)\left(\frac{4\:L_j}{\theta_0^2 M_{a}\:c}\right) } \times\left[\frac{\theta_0}{\theta_j(t)}\right] ,
\label{eq:A(t) cc}
\end{equation}
which is the same expression as in equation (\ref{eq:A(t)}) [where here $r_{m,0} \equiv r_{m}$].
As in Section \ref{sec:BNS merger case}, the initial uncollimated phase is neglected for simplicity.
Then the expression of $\theta_j(t)/\theta_0$ can be found using equation (\ref{eq:theta_j/theta_0 app}) [with $r_{m}(t)=r_m$ in the collapsar case].
Inserting $\theta_j(t)/\theta_0 $ in the above expression of $A(t)$ [equation (\ref{eq:A(t) cc})],
integrating equation (\ref{eq:dif static v0}), and using the boundary condition $r_h(t_0)=r_0$, gives the following expression for the jet head radius:
\begin{equation}
r_h(t) = \left\{\frac{(5-n)A_1'}{3} (t - t_0)^\frac{3}{4} +\left[\frac{r_0}{r_{m}}\right]^\frac{5-n}{4} \right\}^\frac{4}{5-n} r_m,
\label{eq:r_h collpsar}
\end{equation}
and the jet head velocity:
\begin{equation}
v_h(t) = A_1'r_m\left\{\frac{(5-n)A_1'}{3} (t - t_0)^\frac{3}{4} +\left[\frac{r_0}{r_{m}}\right]^\frac{5-n}{4} \right\}^\frac{n-1}{5-n}(t-t_0)^{-\frac{1}{4}},
\label{eq:v_h collapsar}
\end{equation}
where $A_1'$ is a constant that can be written as:
\begin{align}
\label{eq:A1'}
A_1'=& N_s\left[\frac{\langle{\eta'}\rangle}{\langle{\chi}\rangle^2}\right]^\frac{1}{4}\left[ \left(\frac{r_{m}^{3-n}-r_0^{3-n}}{r_{m}^{4-n}}\right)^2\frac{2^{n+5}}{3-n}\frac{L_j}{\theta_0^4 M_{a}} \right]^{\frac{1}{4}} , \\
\label{eq:A1' approx}
\approx& N_s\left[\frac{\langle{\eta'}\rangle}{\langle{\chi}\rangle^2}\right]^\frac{1}{4}\left[ \frac{1}{r_{m}^{2}}\frac{2^{n+5}}{3-n}\frac{L_j}{\theta_0^4 M_{a}} \right]^{\frac{1}{4}} .
\end{align}
The breakout time can be found for $r_h(t_b)=r_m$ as:
\begin{equation}
t_b - t_0 = \left\{ \frac{3}{(5-n)A_1'}\left[ 1-\left(\frac{r_0}{r_m}\right)^\frac{5-n}{4}\right]\right\}^\frac{4}{3}.
\label{eq:t_b collapsar}
\end{equation}
\section{Comparison with numerical simulations}
\label{sec:3}
\subsection{Numerical simulations}
In addition to the analytic (and semi-analytic) modeling presented above, we carried out a series of 2D relativistic hydrodynamical simulations.
In total, the series includes over a hundred models covering a wide parameter space (see Table 1 in \citealt{2020MNRAS.491.3192H}).
The essential aim of carrying out numerical simulations here is to test the semi-analytic (Section \ref{sec:Semi-analytic solution}) and the analytic (Section \ref{sec:Analytic solution}) solutions, and calibrate them if necessary.
These tests and calibrations are presented for both, the case of BNS merger jet, and the case of collapsar jet.
We select four models, as a subsample, out of our sample of numerical simulations.
As presented in Table \ref{Table:sim}, two out of four are BNS merger models, with different initial opening angles (T03-H and T13-H); and the other two are collapsar models, with different initial opening angles as well (A and B).
The parameters of the stellar envelope in the collapsar simulations (models A and B) follow the widely used model 16TI in \citet{2006ApJ...637..914W}.
However, for simplicity, the density profile is approximated by a power-law function with an index $n=2$ [see (iv) in Section \ref{sec:Main approximations}].
This allows the analytic results to be tested fairly against simulations.
The injection radius, $r_{in}$, is set at $1.2\times 10^8$ cm for the BNS merger case, and $10^9$ cm for the collapsar case (see Table \ref{Table:sim}).
This might seem quite large;
ideally the injection radius should be of the order of $10^7$ cm.
However, since the density profile in the inner region can be approximated to a power-law function with an index $n<3$ ($n=2$ for the dynamical ejecta of BNS mergers, see Figure 8 in \citealt{2020MNRAS.491.3192H}; $n\approx 1.5$ for the collapsar case, see Figure 2 in \citealt{2013MNRAS.428..729M}),
the mass contained in the inner region ($<10^8$ cm in BNS mergers; $<10^9$ cm in collapsars) is negligible, in comparison to the total ambient medium mass, as long as $r_{in} \ll r_{m,0}$ (or $r_{in} \ll r_{m}$), which is the case in our simulations [see equation (\ref{eq:M_a})].
Hence, this inner region is expected to have a very limited effect on the overall jet dynamics.
For an estimate of the effect of these values on the jet dynamics, refer to the analytic model [in particular to equations (\ref{eq:t_b BNS}) and (\ref{eq:t_b collapsar}) for the effect of the value of the inner boundary, $r_0$, on the jet breakout time].
Note that the motivation for taking such large values of $r_{in}$ is that smaller injection radii make numerical simulations extremely expensive in terms of computation time.
Further details about the numerical code are presented in \citet{2017MNRAS.469.2361H}.
For more information about the setup of the numerical simulations, refer to Section 3.1 in \citet{2020MNRAS.491.3192H}.
\subsection{Measurement of the internal energy in the cocoon and $\eta'$}
\label{sec:Mesurements of internal energy}
Figure \ref{fig:eta} shows the time evolution of the fraction of internal energy in the cocoon $E_i/E_c$, and the two parameters $\eta$ and $\eta'$ [previously defined in equations (\ref{eq:eta}) and (\ref{eq:eta'}); see Section \ref{sec:eta definition}], from the jet launch time ($t=t_0$) to the jet breakout time ($t=t_b$), as measured from numerical simulations\footnote{The data were deduced by measuring $E_i$ (or the cocoon pressure $P_c$ as presented in Figure \ref{fig:P_c}, combined with the total volume of the cocoon $V_c$), the total energy contained in the cocoon $E_c$, and the jet head radius [$r_h(t)$ or $\langle{\beta_h}\rangle$], all from numerical simulations.}.
For comparison, both the case of collapsar jet and the case of BNS merger jet are shown.
In the collapsar jet case (model A and B), the fraction of internal energy in the cocoon, $E_i/E_c$, is high ($\sim 0.7 - 0.8$).
The values of $\eta$ and $\eta'$ (in the range $\sim 0.5 - 1$) are also high.
On the other hand, in the BNS merger jet case (model T03-H and T13-H), where the medium is expanding, the values of $E_i/E_c$, $\eta$, and $\eta'$ are significantly lower ($< 0.5$).
This contrast is related to the adiabatic expansion of the cocoon, which is very effective in the case where the medium is expanding;
as the medium's expansion velocity (i.e., the dynamical ejecta's radial velocity) is comparable to the jet head velocity, up to the breakout time, this expansion enhances the volume of the (over-pressurized) cocoon significantly, while depleting its internal energy.
Through this process, in the BNS merger case, the inner region of the cocoon,
where initially the density is high and the expansion velocity is small,
is propelled further outward within the cocoon, up to velocities of the order of the homologous expansion of the medium, gaining kinetic energy at the expense of internal energy.
Note that in the collapsar case, the same process happens but to a lesser extent;
the inner cocoon is propelled outward, but as a result of the initially static medium,
the gained velocity (and fraction of kinetic energy) is much less significant.
Another reason is the high jet head velocity ($\langle{\beta_h}\rangle$) in the BNS merger jet case (roughly $ \sim 2 v_m/c$; see \citealt{2018PTEP.2018d3E02I}), implying that the fraction of the injected energy that reaches the cocoon [$\propto (1-\langle{\beta_h}\rangle)$] is less significant [relative to the case of a collapsar jet; see equation (\ref{eq:E_in})].
For more details refer to Appendix \ref{sec:A}.
To the best of our knowledge, this is the first time that the fraction of internal energy in the cocoon, and the parameter $\eta$ (and $\eta'$), have been measured from simulations, and that such a significant difference between the case of a collapsar jet and the case of a BNS merger jet has been found.
The parameter $\eta$ has been discussed in the literature in the case of collapsar jets, and it has been suggested to take a value of $\sim 1$ (e.g., in \citealt{2011ApJ...740..100B}).
Our results show that this assumption is quite reasonable.
On the other hand,
several recent works naively assumed the same value, $\eta \sim 1$, for the case of BNS merger jet (e.g., \citealt{2018ApJ...866L..16M}; \citealt{2019ApJ...876..139G}; \citealt{2020A&A...636A.105S}).
Here, we show that such an assumption is off
by a factor of $\sim 2$; $\eta$ is rather smaller in the case of a BNS merger jet, as can be seen in Figure \ref{fig:eta} (unless $t_b -t_m \sim t_0 - t_m$).
In summary, $\eta'$ is found to take values as follows:
\begin{equation}
\eta'
\begin{cases}
\sim 0.5-1 & \text{if $\beta_{a} = 0$ (Collapsar jet case)} ,\\
\sim 0.1-0.5 & \text{if $\beta_{a} \gg 0$ (BNS merger jet case)} .
\end{cases}
\label{eq: eta cases}
\end{equation}
As a remark, for typical cases, naively assuming $\eta \sim 1$ for the BNS merger case incorrectly gives a factor of $\sim \sqrt{2}$ stronger collimation [i.e., a $\sim \sqrt{2}$ times higher jet head velocity, and hence a much shorter breakout time; see equations (\ref{eq:theta_j/theta_0 app}), (\ref{eq:A1}), and (\ref{eq:t_b BNS})].
\begin{figure}
\vspace{4ex}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.995\linewidth]{Ei.png}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.995\linewidth]{eta.png}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.995\linewidth]{eta2.png}
\end{subfigure}
\caption{The fraction of the internal energy to the total energy of the cocoon (top); and the parameters $\eta$ (middle; the fraction of the cocoon's internal energy to the energy injected into the cocoon) and $\eta'$ (bottom; the fraction of the cocoon's internal energy to the injected jet energy), as measured in our 2D simulations [see equations (\ref{eq:eta}) and (\ref{eq:eta'})]. The red and dark red lines are for collapsar jet models (models A and B in Table \ref{Table:sim}). The blue and dark blue lines are for BNS merger models (models T03-H and T13-H in Table \ref{Table:sim}).}
\label{fig:eta}
\end{figure}
\begin{figure*}
\vspace{4ex}
\begin{subfigure}{0.495\linewidth}
\centering
\includegraphics[width=\linewidth]{T03-rh.png}
\end{subfigure}
\begin{subfigure}{0.495\linewidth}
\centering
\includegraphics[width=\linewidth]{T13-rh.png}
\end{subfigure}
\begin{subfigure}{0.495\linewidth}
\centering
\includegraphics[width=\linewidth]{T03-rc.png}
\end{subfigure}
\begin{subfigure}{0.495\linewidth}
\centering
\includegraphics[width=\linewidth]{T13-rc.png}
\end{subfigure}
\begin{subfigure}{0.495\linewidth}
\centering
\includegraphics[width=\linewidth]{T03-theta.png}
\end{subfigure}
\begin{subfigure}{0.495\linewidth}
\centering
\includegraphics[width=\linewidth]{T13-theta.png}
\end{subfigure}
\caption{Results for the case of BNS mergers showing the jet's and the cocoon's evolution over time, as measured in numerical simulations (black dotted line with filled squares), and as inferred with the analytic (solid blue line) and the semi-analytic (solid red line) solutions [see Sections \ref{sec:Semi-analytic solution} and \ref{sec:BNS merger case}].
The black dotted line in the top two panels shows the outer radius of the expanding ejecta.
From top to bottom, the jet head radius, the cocoon's lateral width, and the jet opening angle relative to the initial opening angle, are shown respectively. From left to right, results for the narrow jet case (model T03-H) and the wide jet case (model T13-H) are shown respectively. We take $N_s=0.46$ in the analytic solution, and $N_s=0.75$ in the semi-analytic solution (see Section \ref{sec:The approximated analytic jet head velocity} and Table \ref{Table:sim}).}
\label{fig:BNS case}
\end{figure*}
\begin{figure*}
\vspace{4ex}
\begin{subfigure}{0.494\linewidth}
\centering
\includegraphics[width=\linewidth]{A-rh.png}
\end{subfigure}
\begin{subfigure}{0.494\linewidth}
\centering
\includegraphics[width=\linewidth]{B-rh.png}
\end{subfigure}
\begin{subfigure}{0.494\linewidth}
\centering
\includegraphics[width=\linewidth]{A-rc.png}
\end{subfigure}
\begin{subfigure}{0.494\linewidth}
\centering
\includegraphics[width=\linewidth]{B-rc.png}
\end{subfigure}
\begin{subfigure}{0.494\linewidth}
\centering
\includegraphics[width=\linewidth]{A-theta.png}
\end{subfigure}
\begin{subfigure}{0.494\linewidth}
\centering
\includegraphics[width=\linewidth]{B-theta.png}
\end{subfigure}
\caption{Same as Figure \ref{fig:BNS case} for the collapsar case, showing the results from numerical simulations (black dotted line with filled squares), and results of the analytic (solid blue line) and the semi-analytic (solid red line) solutions [see Sections \ref{sec:Semi-analytic solution} and \ref{sec:The collapsar case}].
The horizontal black dotted line in the top two panels shows the radius of the stellar envelope.
From left to right, the results for the narrow jet case (model A) and the results for the wide jet case (model B) are shown, respectively. We take $N_s=0.38$ in the analytic solution, and $N_s=0.53$ in the semi-analytic solution (see Section \ref{sec:The approximated analytic jet head velocity} and Table \ref{Table:sim}).
}
\label{fig:collapsar case}
\end{figure*}
\begin{table*}
\caption {A subsample showing the simulated models and the corresponding parameters.
From the left:
The model name;
the type of jet (BNS merger or collapsar);
the ambient medium's mass;
the jet initial opening angle;
the engine's isotropic equivalent luminosity [$L_{iso,0} = \frac{2 L_j}{1-\cos\theta_0}\simeq \frac{4 L_j}{\theta_0^2}$] where $L_j$ is the jet true luminosity (one sided);
the inner radius at which the jet is injected in simulations;
the ambient medium's outer radius at the start of the simulation;
the maximum expansion velocity of the ambient medium;
the time-averaged value of $\eta'$ [see equation (\ref{eq:eta average})] estimated from simulations (see Figure \ref{fig:eta});
the time-averaged value of $\chi(t)$ used in the analytic solution [using equation (\ref{eq:chi average}); see Section \ref{sec:The approximated cocoon's lateral width $r_c$}];
the breakout time measured in numerical simulations;
the inferred breakout time by the analytic solution [using equation (\ref{eq:t_b BNS}) for the BNS merger jet case, and equation (\ref{eq:t_b collapsar}) for the collapsar jet case], and by the semi-analytic solution (see Section \ref{sec:Semi-analytic solution}).
The values of the calibration coefficient $N_s$ are
$0.46$ and $0.75$ in the BNS merger case (for the analytic and the semi-analytic solutions, respectively), and $0.38$ and $0.53$ in the collapsar case (for the analytic and the semi-analytic solutions, respectively).
The density profile of the ambient medium in all models is approximated as a power law with index $n=2$ [see (iv) in Section \ref{sec:Main approximations}].}
\label{Table:sim}
\begin{tabular}{llllllllll|l|l|l}
\hline
& & $M_{a}$ & $\theta_0$ & $L_{iso,0}$ & $r_{in}$ & $r_m(t_0)$ & $v_{m}$ & $\langle{\eta'}\rangle$ & $\langle{\chi}\rangle$ & $t_b-t_0$ [s] & $t_b-t_0$ [s] & $t_b-t_0$ [s] \\
& & & & & & & & & & & & (Semi-\\
Model & Type & [$M_\odot$] & [deg] & [erg s$^{-1}$] & [cm] & [cm] & [c] & & & (Simulation) & (Analytic) & analytic)\\
\hline
T03-H & BNS & $0.002$ & 6.8 & $5\times10^{50}$ & $1.2\times 10^8$ & $1.67\times10^9$ & $\frac{\sqrt{3}}{5}$ & $1/4$ & $1.25$ & 0.221 & 0.203 & 0.231\\
T13-H & BNS & $0.002$ & 18.0 & $5\times10^{50}$ & $1.2\times 10^8$ & $1.67\times10^9$ & $\frac{\sqrt{3}}{5}$ & $1/4$ & $1.48$ & 0.429 & 0.456 & 0.408\\
\hline
A & Collapsar & $13.950$ & 9.2 & $7.83\times10^{52}$ & $10^9$ & $4\times10^{10}$ & 0 & $1/2$ & $1.00$ & 3.804 & 3.282 & 3.947\\
B & Collapsar & $13.950$ & 22.9 & $1.27\times10^{52}$ & $10^9$ & $4\times10^{10}$ & 0 & $1/2$ & $1.00$ & 9.930 & 11.137 & 9.681 \\
\hline
\hline
\end{tabular}
\end{table*}
\subsection{Time evolution of the jet propagation}
\label{sec:Time evolution of the jet propagation}
\subsubsection{BNS merger's case}
\label{BNS merger's case}
In Figure \ref{fig:BNS case} we show results for the two models of jet propagation in the BNS merger ejecta [T03-H and T13-H with $\theta_0=6.8^\circ$ (left) and $18.0^\circ$ (right), respectively; see Table \ref{Table:sim}].
Three different quantities are shown (from top to bottom): the jet head radius $r_h(t)$, the cocoon's lateral width $r_c(t)$, and the opening angle of the jet head $\theta_j(t)$.
The calibration coefficient $N_s$ has been used to calibrate the analytic solution (with $N_s = 0.46$) and the semi-analytic solution (with $N_s = 0.75$); the value of $N_s$ is set so that the breakout time in the analytic (or semi-analytic) solution matches the breakout time in numerical simulations [refer to equation (\ref{eq:N_s}) and the explanation that follows].
Also, as noted in Section \ref{sec:The approximated analytic jet head velocity}, the different values of $N_s$ are due to the additional approximations in the analytic solution.
It should be noted that the value of $N_s$ for the analytic model here (0.46) is slightly different from the one in the analytic model presented in \citet{2020MNRAS.491.3192H} (0.40).
This difference reflects the main difference between the two models:
in \citet{2020MNRAS.491.3192H} the jet opening angle is fixed using the parameter $f_j$ (measured from numerical simulations; see Figure 3 in \citealt{2020MNRAS.491.3192H}),
while here the jet opening angle is determined self-consistently (by calculating the jet collimation by the cocoon), and should therefore be more robust (see the bottom two panels in Figure \ref{fig:BNS case}).
For more details on $N_s$, refer to Appendix \ref{sec:C}.
The time evolution of the analytic and the semi-analytic jet head radius, $r_h(t)$, in Figure \ref{fig:BNS case} shows a very good agreement with simulations (within $\sim10\%$).
Analytic and semi-analytic results hold fairly well for both models (T03-H and T13-H) showing that the jet-cocoon model here works well regardless of the initial jet opening angle $\theta_0$.
The time evolution of the analytic and semi-analytic cocoon's lateral width $r_c(t)$ is also consistent with simulations, especially for the case of small $\theta_0$ (within $\sim 10 \%$; see T03-H in Figure \ref{fig:BNS case}).
For the case of large $\theta_0$ (T13-H), the agreement with simulations is weaker, but still roughly within $\sim 30\%$, with $r_c(t)$ slightly underestimated by the analytic and semi-analytic models (relative to numerical simulations).
In simulations, the jet head's opening angle has been estimated by taking the average opening angle from $r = \frac{1}{2}r_h(t)$ to $r = r_h(t)$ [see equation (34) in \citealt{2020MNRAS.491.3192H}].
This average opening angle is compared with the analytic and the semi-analytic jet opening angles in Figure \ref{fig:BNS case}.
Except for the early time evolution of the jet opening angle, during which the jet-cocoon is highly inhomogeneous in simulations [in particular, in terms of entropy and Lorentz factor, which are used to discriminate the jet from the cocoon; for more details see Section 3.1.2 in \citet{2020MNRAS.491.3192H}],
the analytic and semi-analytic jet opening angles are consistent with the average opening angle in simulations within $\sim 30\%$.
\subsubsection{Collapsar's case}
\label{Collapsar's case}
Figure \ref{fig:collapsar case} shows a comparison of the analytic and the semi-analytic results with simulations for three quantities $r_h(t)$, $r_c(t)$, and $\theta_j(t)$, in the same manner as in Figure \ref{fig:BNS case} (and Section \ref{BNS merger's case}), but for the collapsar jet case.
We present two models with different initial opening angles [A (left) and B (right), with $\theta_0=9.2^\circ$ and $22.9^\circ$, respectively; see Table \ref{Table:sim}].
The calibration coefficient is found as $N_s = 0.38$ for the analytic solution, and $N_s = 0.53$ for the semi-analytic solution (see Section \ref{sec:The approximated analytic jet head velocity} for more details about the origin of this difference).
Although slightly larger, these values of $N_s$ are fairly consistent with those found by \citet{2018MNRAS.477.2128H}, despite several differences in the jet-cocoon modeling (such as for the cocoon's lateral width, volume, $\langle{\eta'}\rangle$, etc.).
The analytic and the semi-analytic solutions for the time evolution of the jet head radius, $r_h(t)$, show a clear agreement with simulations (within $\sim10$--$20\%$) for both models (A and B).
The same can be said about the time evolution of the cocoon's lateral width $r_c(t)$ (within $\sim10$--$20\%$).
The time evolution of the average opening angle of the jet head $\theta_j(t)$ in simulations can be divided into two phases: the first phase shows relatively large opening angles and unstable behavior, while the second phase shows collimated opening angles and relatively stable behavior.
In the first phase, the effects of the initial conditions are still present.
However, since this phase is relatively short, its contribution to the jet structure and propagation, up to the breakout, is limited.
In the second phase, which represents most of the jet propagation time, the jet head's opening angle in the analytic and semi-analytic solutions is, overall, consistent with the average opening angle in simulations (well within $\sim 50\%$).
\subsection{The cocoon pressure $P_c$}
\label{sec:The cocoon pressure}
Figure \ref{fig:P_c} shows the average cocoon pressure in simulations (measured throughout the cocoon's grid in numerical simulations and volume-averaged) compared with the cocoon pressure as inferred from our analytic and semi-analytic solutions [see Section \ref{sec:Semi-analytic solution}; and equation (\ref{eq:P_c approx}) in Section \ref{sec:Analytic solution}].
For both cases (BNS merger jet and collapsar jet), and for both the analytic and the semi-analytic solutions, the time evolution of the cocoon pressure up to the breakout agrees well with numerical simulations.
This agreement indicates that the modeling presented here is a good representation of the cocoon and its interaction with the jet and the ambient medium.
Note that in the analytic solution, as the early uncollimated jet phase is not taken into account, the cocoon pressure diverges at $t\sim t_0$ due to the approximation in equation (\ref{eq:r_m>>r_m,0 app}).
The semi-analytic solution shows
no such anomaly.
\begin{figure*}
\vspace{4ex}
\begin{subfigure}{0.455\linewidth}
\centering
\includegraphics[width=\linewidth]{T03-pc.png}
\end{subfigure}
\begin{subfigure}{0.455\linewidth}
\centering
\includegraphics[width=\linewidth]{T13-pc.png}
\end{subfigure}
\begin{subfigure}{0.455\linewidth}
\centering
\includegraphics[width=\linewidth]{A-pc.png}
\end{subfigure}
\begin{subfigure}{0.455\linewidth}
\centering
\includegraphics[width=\linewidth]{B-pc.png}
\end{subfigure}
\caption{The pressure of the cocoon $P_c$ from the jet launch time $t_0$ to the breakout time. The top two panels show two BNS merger models (T03-H and T13-H, from left to right), and the bottom two panels show two collapsar models (A and B, from left to right; see Table \ref{Table:sim}). The black dotted line with filled squares shows the average pressure in the cocoon as measured in numerical simulation. The blue line shows the cocoon pressure according to the analytic model [equation (\ref{eq:P_c approx})]. The red solid line shows the cocoon pressure according to the semi-analytic model (calculated numerically; see Section \ref{sec:Semi-analytic solution}).}
\label{fig:P_c}
\end{figure*}
\begin{figure*}
\vspace{4ex}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{T03-snap-v6.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{T13-snap-v6.pdf}
\end{subfigure}
\vspace{4ex}\\
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{A-snap-v6.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{B-snap-v6.pdf}
\end{subfigure}
\caption{Density maps showing the jet-cocoon system inside the ambient medium just before the jet breakout. Four models are shown (see Table \ref{Table:sim}), where the top two models are for jet propagation in BNS merger ejecta (T03-H and T13-H), and the bottom two are for jet propagation in a stellar envelope (collapsar jets; model A and B).
The black filled square shows the inferred jet head radius $r_h$ by our semi-analytic model, and the two black filled circles show the inferred cocoon's lateral width $r_c$.
The ellipsoidal shape (solid black line) shows the jet-cocoon's shape as predicted by our modeling (the semi-analytic solution). }
\label{fig:maps}
\end{figure*}
\subsection{Morphology of the jet-cocoon system}
\label{sec:Morphology of the jet-cocoon system}
Figure \ref{fig:maps} presents snapshots from our numerical simulations showing the density map of the jet-cocoon system just before it breaks out of the ambient medium.
The four models in Table \ref{Table:sim} are shown for BNS mergers (top) and collapsars (bottom).
The jet head radius and the cocoon's lateral width,
as inferred from our semi-analytic solution, are also shown for comparison with simulations (with a black filled square, and two black filled circles, respectively).
Also, the inferred jet-cocoon morphology using the approximation of an ellipsoidal shape is shown (with a solid black line) where the semi-major axis is $\frac{1}{2}r_h$ and the semi-minor axis is $r_c$.
In Figure \ref{fig:maps}, we see a clear similarity between the morphology of the whole jet-cocoon system as inferred from our modeling and that found in numerical simulations.
With $r_h$ in good agreement with simulations, and with an error on the cocoon's lateral width of the order of $\sim 20\%$ [at $r\sim \frac{1}{2}r_h(t)$], our modeling can be expected to recover the cocoon volume within $\sim 50\%$.
Together with the modeling of the cocoon pressure (see Section \ref{sec:The cocoon pressure}), this allows us to conclude that all aspects of the jet-cocoon system are fairly well reproduced by our modeling.
\section{Conclusion}
\label{sec:conclusion}
In this paper we present a new jet-cocoon model.
The model builds on previous works on collapsar jet-cocoon modeling, in particular the models in \citet{2003MNRAS.345..575M}, \citet{2011ApJ...740..100B}, \citet{2013ApJ...777..162M}, and \citet{2018MNRAS.477.2128H}.
Based on the analysis of jet propagation in numerical simulations over a wide parameter range,
the model has been generalized to enable a proper treatment of jet propagation in the case of BNS mergers, where the ambient medium is expanding.
For each jet case, equations have been solved through numerical integration (semi-analytic solution), or analytically (analytic solution) after adding some approximations (see Section \ref{sec:The approximated analytic jet head velocity} and Section \ref{sec:The approximated cocoon's lateral width $r_c$}). Table \ref{Table:2.works} presents an overview of previous works on the modeling of GRB-jet propagation, and a comparison with this work.
Our results can be summarized as follows:
\begin{enumerate}
\item Comparisons with numerical simulations show that our model's results are in clear agreement with numerical simulations (overall, within $\sim 20\%$).
The time evolution of the following quantities has been shown to be consistent with measurements from numerical simulations: the jet head radius $r_h(t)$, the cocoon radius $r_c(t)$, the jet head opening angle $\theta_j(t)$, and the cocoon pressure $P_c$ (see Figures \ref{fig:BNS case}, \ref{fig:collapsar case} and \ref{fig:P_c}).
The cocoon's morphology and volume, as inferred with our model, are also consistent with numerical simulations (see Figure \ref{fig:maps}).
This is the first time that results from the modeling of jet propagation in an expanding medium (as in BNS mergers) have been compared with numerical simulations over such a large set of parameters, and found to be consistent to such an extent (see Table \ref{Table:sim}).
\item The results of our jet-cocoon model are proven to be consistent with numerical simulations regardless of the jet case (collapsar jet or BNS merger jet), and regardless of the jet initial opening angle (see Figures \ref{fig:BNS case}, \ref{fig:collapsar case}, \ref{fig:P_c}, and \ref{fig:maps}).
\item In addition to the semi-analytic solution, where equations are solved through numerical integration, our model offers an analytic solution, where equations are solved analytically after being approximated and simplified (see Section \ref{sec:Analytic solution}).
Still, we showed that, under certain conditions (e.g., $t_0-t_m \ll t_b-t_m$), the analytic solution's results are almost as consistent with numerical simulations as those of the semi-analytic solution, despite being much simpler.
This analytic modeling, with its simplified but fairly robust equations [e.g., equation (\ref{eq:t_b BNS}) for the breakout time], is very useful for further investigations; for instance, on the cocoon's cooling emission (Hamidani et al. in prep).
\item The composition of the cocoon energy has been measured for both jet cases (collapsar and BNS merger), thanks to numerical simulations.
Results show a clear contrast between the two cases;
the cocoon energy in the case of a collapsar jet is overwhelmingly dominated by internal energy, while in the case of a BNS merger jet it is overwhelmingly dominated by kinetic energy.
This is the first time that such a difference has been revealed.
As a result of this difference in internal energy, we showed that the parameter $\eta$ [see equation (\ref{eq:S3})] is smaller by a factor of $\sim 2$ in the case of BNS mergers than in the case of collapsars (see Figure \ref{fig:eta}).
This difference has not been taken into account in previous works, although it substantially affects every aspect of the jet propagation.
This difference in internal energy (in the BNS merger case) is also very important when estimating the cooling emission of the cocoon; hence the importance of this result (previously mentioned in \citealt{2019ApJ...887L..16K}).
\end{enumerate}
It should be noted that the analytic modeling presented here includes several limitations.
The most important limitation is that the jets here are assumed to be unmagnetized.
Other notable limitations are that effects from neutrinos, the r-process, viscous winds, general relativity, stellar rotation, stellar magnetic fields, etc., have been neglected for the sake of simplicity.
Future works are likely to update our results.
Finally, it should also be noted that, due to limited computational resources, the numerical simulations presented here use the approximation of axially symmetric jets (2D), and jets are injected at relatively large radii.
This may result in some numerical artifacts.
Therefore, results such as the value of $N_s$, and the overall agreement of $\sim 20\%$ between analytic results and numerical simulations, should not be taken at face value.
We expect these values to be updated in the future once more refined numerical simulations are available.
\begin{table*}
\caption{Comparison of jet-cocoon models in the literature. }
\label{Table:2.works}
\begin{tabular}{*6c}
\hline
& \multicolumn{2}{c}{Context (medium):} & & Consistency & \\
& BNS mergers & Collapsars & Analytic & with & \\
Work & (expanding) & (static) & solution & simulations & Comment\\
\hline
\citet{2011ApJ...740..100B} & No & \checkmark & \checkmark & \checkmark & Limited to the collapsar case\\
\citet{2013ApJ...777..162M} & No & \checkmark & \checkmark & \checkmark & Limited to the collapsar case \\
\citet{2018MNRAS.475.2659M} & No & \checkmark$^*$ & \checkmark & ? & $^*$Jet propagation in SLSN ejecta.\\
\citet{2018MNRAS.477.2128H} & No & \checkmark & \checkmark & \checkmark & Limited to the collapsar case\\
\citet{2018ApJ...866....3D} & \checkmark & No & \checkmark & \checkmark & No treatment for jet collimation.\\
\citet{2018ApJ...866L..16M} & \checkmark & No & \checkmark & No & Overlooks $\eta$ and $\chi$.\\
\citet{2019ApJ...881...89L} & \checkmark & No & \checkmark & ? & Describes the jet-wind interaction.\\
\citet{2019ApJ...876..139G} & \checkmark & No & \checkmark & ? & Overlooks $\eta$ and $\chi$.\\
Salafia et al. (2020) & \checkmark & \checkmark & No & \checkmark & The effect of the expansion was not included \\
&&&&& [in $\beta_\perp$, second term in equation (\ref{eq:S2}); in $\eta$; etc.].\\
\citet{2020MNRAS.491.3192H} & \checkmark & \checkmark & \checkmark & \checkmark & Simplified, after showing that $\theta_j(t)\sim \rm{Constant}$.\\
\citet{2020MNRAS.491..483L} & \checkmark & No & \checkmark & No & No treatment for jet collimation.\\
\citet{2020ApJ...895L..33B} & \checkmark & No & \checkmark & No & No treatment for jet collimation.\\
\hline
This work & \checkmark & \checkmark & \checkmark & \checkmark & \\
\hline
\end{tabular}
\end{table*}
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
We thank Amir Levinson, Atsushi Taruya, Bing Zhang, Christopher M. Irwin, Hendrik van Eerten, Hirotaka Ito, Kazumi Kashiyama, Kazuya Takahashi, Kenta Kiuchi, Kohta Murase, Koutarou Kyutoku, Masaomi Tanaka, Masaru Shibata, Ore Gottlieb, Tomoki Wada, Toshikazu Shigeyama, Tsvi Piran, and Yudai Suwa, for fruitful discussions and comments.
We thank the participants and the organizers of the workshops with the identification number YITP-T-19-04, YITP-W-18-11 and YITP-T-18-06, for their generous support and helpful comments.
Numerical computations were achieved thanks to the following: Cray XC50 of the Center for Computational Astrophysics at the National Astronomical Observatory of Japan, and Cray XC40 at the Yukawa Institute Computer Facility.
This work is partly supported by JSPS KAKENHI nos. 20H01901, 20H01904, 20H00158, 18H01213, 18H01215, 17H06357, 17H06362, 17H06131 (KI).
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:introduction}
In recent years, radar has been utilized in numerous civilian applications such as traffic control, remote sensing, car cruise control, and collision avoidance. At the same time, there is a tremendous demand for additional bandwidth from the wireless sector. Thus, the coexistence of radar and communication systems using shared spectrum has attracted significant attention (e.g.
\cite{ZhengLopsEldar2019,AubryCarotenuto2016,LiPetropulu2015ICASSP,BicaKoivune16ICASSP,LiuMasouros18,WangLiGovoni2019,KangMongaRangaswamy19}). The coexistence, if improperly implemented, can cause
significant interference and performance degradation for both systems
\cite{ShajaiahClancy15,NartasilpaErricolo16,ChiriyathBliss16}. Extensive research has been directed to employing various signal processing techniques, i.e., interference mitigation, power and/or subcarrier allocation, precoding, and waveform design, to allow both radar and communication systems to efficiently share the spectrum.
Depending on the number of systems involved in the design, the
research can be classified into two types. The first is based on
\emph{joint design} that addresses the radar and communication
coexistence by jointly optimizing performance metrics and simultaneously adjusting parameters for both systems \cite{LiPetropulu2017,ZhengLopsWangTSP2018,ChengLiaoHe19,WangLi2019}, e.g., the throughput for the communication system and the signal-to-interference-plus-noise ratio (SINR) for the radar. Specifically, \cite{LiPetropulu2017} considered the co-design of the communication transmit covariance matrix and radar sampling scheme for multiple-input multiple-output (MIMO) radar and communication systems, where the design was formulated as a nonconvex problem that was solved via an alternating optimization algorithm. In \cite{ZhengLopsWangTSP2018}, the radar pulse and communication encoding matrix were jointly designed for the coexistence of a communication system and a pulsed radar. The coexistence of MIMO radar and downlink multiple-input single-output (MISO) communication systems was
considered in \cite{ChengLiaoHe19} by minimizing the Cram\'er-Rao bound of direction-of-arrival estimation while imposing an appropriate constraint on the communication quality of service. In addition, \cite{WangLi2019} studied the joint design problem of radar and communication coexistence by considering both radar-centric and communication-centric formulations.
The second type is based on \emph{unilateral design} from either the radar or communication perspective \cite{DengHimed13,DengHimed15,ZhengLopsWang2018,ShiSellathurai2018,AubryMaio2015,AubryCarotenuto2016spl}, i.e., only parameters of one system are adjusted by using the information of the other system. One standard approach for the unilateral design is to mitigate mutual interference via either signal processing \cite{DengHimed13,DengHimed15,ZhengLopsWang2018} or constrained optimization \cite{ShiSellathurai2018,AubryMaio2015,AubryCarotenuto2016spl} techniques. Specifically, \cite{DengHimed13} employed receiving beamforming at the radar to cancel the sidelobe interference, while \cite{DengHimed15} proposed a spatial filtering technique to mitigate the wireless interference in both mainlobe and sidelobe directions in coherent MIMO radar. An uncoordinated radar and communication coexistence scenario was considered
in \cite{ZhengLopsWang2018}, where compressed sensing based radar parameter estimation was used in the communication demodulation process to remove the radar interference. Meanwhile, \cite{ShiSellathurai2018} proposed a unilateral design scheme by minimizing the total radar transmission power for an orthogonal frequency division multiplexing (OFDM) radar. Radar waveform design using constrained optimization
techniques to control the amount of radar-to-communication
interference and other waveform properties was investigated in \cite{AubryMaio2015,AubryCarotenuto2016spl}.
Given the wide use of multicarrier signals in communication systems, multicarrier waveforms have become increasingly popular in radar as well due to several advantages such as frequency diversity, waveform diversity, and easy implementation \cite{SenNehorai2011,BicKoivunen2016}. At any time instant, since the desired subcarriers can be digitally selected at the transmitter, narrowband jamming/interference mitigation can be achieved by simply turning off the affected subcarriers. OFDM waveforms with pulse-to-pulse agility were investigated in \cite{LellouchGenderen2008} for Doppler processing from the radar point of view. In \cite{TanBlunt2016}, a sparse spectrum allocation algorithm for an OFDM-type radar was presented by using the integrated sidelobe level (ISL) as an optimization metric. Multicarrier waveforms were also employed by radar and communication systems to tackle coexistence applications \cite{BicaKoivunen2019}.
We consider spectrum sharing between a multicarrier radar and a communication system operating in a cluttered environment, where the communication or radar receiver observes not only the cross-interference from its counterpart but also a multi-path or clutter signal, which arises from the system's own transmission. While multi-path can be exploited for communication, clutter is a self-interference to the radar system and must be adequately mitigated in order to expose weak targets. Clutter is also a \emph{signal-dependent} interference, i.e., it depends on the transmitted signal that is yet to be designed, which makes the design problem considerably more challenging \cite{AubryMaioStoica14,QianLopsZheng18,GrossiLops20}. It is noted that multi-path and clutter were neglected in earlier multicarrier-based spectrum sharing studies to ease the development of the proposed solutions (e.g., \cite{BicaKoivune16ICASSP,WangLiGovoni2019,BicaKoivunen2019}).
Specifically, we propose a joint design approach to jointly optimize the radar and communication transmission power allocated to each subcarrier. The optimum power allocation strategies are obtained for both systems by maximizing the radar output SINR while maintaining a minimum communication throughput constraint, along with a total transmission power constraint and subchannel peak power constraints for each system. The joint power allocation problem is highly nonconvex with respect to (w.r.t.) the design variables. To address this challenge, we reformulate the problem by combining the radar and communication power variables into a single stacked variable. This allows us to bypass a conventional alternating optimization procedure, which is computationally intensive. The resulting problem is then solved by using a quadratic transform method along with a sequential convex programming (SCP) technique.
In addition, we also propose a unilateral design from the radar perspective for the case when the communication system is a primary user of the frequency band, while the radar joins occasionally as a secondary user. The unilateral design optimizes the radar transmission power with throughput and power constraints under the condition that the communication transmission power is known a priori and fixed \cite{ShiSellathurai2018}. The communication system employs a waterfilling solution to allocate subchannel power based on its channel condition when the radar is absent. The unilateral design is solved by a Taylor expansion based iterative SCP procedure. Simulation results validate the effectiveness of the proposed joint and unilateral designs over a subcarrier-allocation based method.
The remainder of the paper is organized as follows. The signal model and problem of interest are introduced in Section \ref{sec:system_model}. The proposed designs along with their solutions are developed in Section \ref{sec:proposed_approach}. Section \ref{sec:simulationresults} contains numerical results and discussions, followed by conclusions in Section \ref{sec:conclusion}.
\emph{Notations}: We use boldface symbols for vectors (lower case) and matrices (upper case). $(\cdot)^T$ denotes the transpose, $\mathbb{E}\{\cdot\}$
represents the statistical expectation, and $\mathcal{O}(\cdot)$ denotes the Landau notation for complexity.
\section{Signal Model}
\label{sec:system_model}
\begin{figure}
\centering
\includegraphics[width=3.1in]{Fig1}
\caption{A radar and communication coexistence scenario in a cluttered environment.}
\label{fig:configuration}
\end{figure}
Consider a radar system that coexists with a communication system in a cluttered environment as depicted in Fig.\,\ref{fig:configuration}. Both systems share a frequency band of bandwidth $B$ Hz and employ multicarrier waveforms with $N$ subcarriers, where the subcarrier spacing $\Delta f=B/N$. Under the considered set-up, the communication or radar receiver (RX) receives not only the direct useful signal (indicated by the solid lines in Fig.\,\ref{fig:configuration}), but the direct cross interference (dashed lines in Fig.\,\ref{fig:configuration}) and reflections from the environment (dotted lines in Fig.\,\ref{fig:configuration}) as well.
Let $\pbf_\text{c}=[p_{\text{c},1},\cdots,p_{\text{c},N}]^T$ denote the communication powers allocated to the $N$ subcarriers, which are to be determined. Then, the transmitted communication signal can be represented as
\begin{equation}\label{equ:transmission_c}
x_\text{c}(t)=q_\text{c}(t)\sum_{n=1}^Nd_n\sqrt{p_{\text{c},n}}e^{j2\pi (f_\text{c}+n\Delta f)t}\triangleq\sum_{n=1}^Nx_{\text{c},n}(t),
\end{equation}
where $f_\text{c}$ is the carrier frequency, $q_{\text{c}}(t)$ the communication waveform with a duration $T_{\text{c}}$, and $d_n$ the symbol carried by the $n$-th subcarrier. Without loss of generality, we assume $\mathbb{E}\{\vert d_n\vert^2\}=1$.
The multicarrier radar system uses the same carrier frequency $f_\text{c}$ and intercarrier spacing $\Delta f$, and a radar waveform $q_{\text{r}}(t)$ with a duration $T_\text{r}$. For simplicity, we assume $T_\text{r}=T_\text{c}=T$. Then, the transmitted radar signal can be written as
\begin{equation}\label{equ:transmission_r}
x_\text{r}(t)=q_\text{r}(t)\sum_{n=1}^{N}\sqrt{p_{\text{r},n}}e^{j2\pi (f_\text{c}+n\Delta f)t}\triangleq\sum_{n=1}^Nx_{\text{r},n}(t),
\end{equation}
where $\pbf_\text{r}=[p_{\text{r},1},\cdots,p_{\text{r},N}]^T$ denotes the vector of radar powers that are to be determined.
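For concreteness, the following minimal Python sketch generates the baseband versions of the multicarrier signals in \eqref{equ:transmission_c} and \eqref{equ:transmission_r}; it assumes NumPy, and the rectangular waveforms, uniform powers, and QPSK symbols are illustrative placeholders rather than choices made in this paper:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 16                              # number of subcarriers
B = 10e6                            # shared bandwidth [Hz] (illustrative)
df = B / N                          # subcarrier spacing
T = 1 / df                          # common pulse duration (T_r = T_c = T)
t = np.arange(0.0, T, 1 / (4 * B))  # oversampled time grid (f_c = 0 here)

p_c = np.full(N, 1.0)               # communication powers (to be optimized)
p_r = np.full(N, 1.0)               # radar powers (to be optimized)
# unit-power QPSK symbols, so that E{|d_n|^2} = 1
d = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)

q_c = np.ones_like(t)               # rectangular communication waveform
q_r = np.ones_like(t)               # rectangular radar waveform

# row n of `sub` holds exp(j 2 pi n df t)
sub = np.exp(2j * np.pi * np.arange(1, N + 1)[:, None] * df * t)
x_c = q_c * np.sum(d[:, None] * np.sqrt(p_c)[:, None] * sub, axis=0)
x_r = q_r * np.sum(np.sqrt(p_r)[:, None] * sub, axis=0)
\end{verbatim}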
As illustrated in Fig.\,\ref{fig:configuration}, the signal received at the communication RX on the $n$-th subcarrier is given by\footnote{The channel coefficients are represented using the following convention: $\alpha'$ denotes a desired (e.g., communication-to-communication or radar-to-radar) channel, while $\beta'$ denotes an interference (e.g., radar-to-communication) channel. The subscripts ``cc'' (or ``rc'') indicate that the channel starts from the communication (or radar) TX and ends at the communication RX.}
\begin{align}\label{equ:y_c1s}
y_{\text{c},n}(t)&=\sum_{k=1}^{K_{\text{cc}}}\alpha'_{\text{cc},n,k}x_{\text{c},n}(t-\tau_{\text{cc},k})\notag\\&+\sum_{k=1}^{K_{\text{rc}}}\beta'_{\text{rc},n,k}x_{\text{r},n}(t-\widetilde{\tau}_{\text{rc},k})+w'_{\text{c},n}(t),
\end{align}
where $\alpha'_{\text{cc},n,k}$ is the channel coefficient of the $k$-th communication path with propagation delay $\tau_{\text{cc},k}$, $K_{\text{cc}}$ denotes the total number of communication paths, $\beta'_{\text{rc},n,k}$ is the channel coefficient from the radar transmitter (TX) to the communication RX due to the $k$-th clutter scatterer with propagation delay $\widetilde{\tau}_{\text{rc},k}$, $K_{\text{rc}}$ denotes the total number of clutter scatterers, and $w'_{\text{c},n}(t)$ is the additive channel noise. Note that the channel coefficients $\alpha'_{\text{cc},n,k}$ and $\beta'_{\text{rc},n,k}$ are frequency dependent as indicated by the subscript $n$, which is standard in multicarrier systems.
In the first sum of \eqref{equ:y_c1s}, $\alpha'_{\text{cc},n,1}$ refers to the direct desired signal depicted in Fig.\,\ref{fig:configuration}, i.e., the line of sight (LOS) path between the communication TX and RX. Meanwhile, in the second sum of \eqref{equ:y_c1s}, $\beta'_{\text{rc},n,1}$ refers to the direct cross interference from the radar TX to the communication RX. This is usually the strongest interference to the communication RX induced by spectrum sharing.
\emph{Assumption 1:} The propagation delay spread $\Delta\tau$, i.e., the difference between the smallest and largest delays, from the communication/radar TXs to the communication/radar RX is small w.r.t. the pulse duration $T$.
Assumption 1 is usually satisfied in a multicarrier system since each subcarrier is a narrowband system with a bandwidth $\Delta f\ll1/\Delta\tau$ \cite[Section 12.1]{Goldsmith2005}. In other words, Assumption 1 implies $\vert\tau_{\text{cc},k}-\tau_{\text{cc},1}\vert\ll T$, for $k>1$, and $\vert\widetilde{\tau}_{\text{rc},k}-\tau_{\text{cc},1}\vert\ll T$, $\forall k$, in \eqref{equ:y_c1s}.
After down-conversion, $y_{\text{c},n}(t)$ passes through a matched filter (MF) matched to the LOS communication waveform $q_\text{c}(t-\tau_{\text{cc},1})$ and is sampled at the symbol rate, which yields \cite[Section 5.1]{proakis2001digital}
\begin{equation}
\label{equ:y_c2s}
y_{\text{c},n}=\alpha_{\text{cc},n}d_n\sqrt{p_{\text{c},n}}+\beta_{\text{rc},n}\sqrt{p_{\text{r},n}}+w_{\text{c},n},
\end{equation}
where
\begin{align}
\alpha_{\text{cc},n}&=\int_{T}\sum_{k=1}^{K_{\text{cc}}}\alpha'_{\text{cc},n,k}q_\text{c}(t-\tau_{\text{cc},k})q_\text{c}^{\ast}(t-\tau_{\text{cc},1})dt,\\
\beta_{\text{rc},n}&=\int_{T}\sum_{k=1}^{K_{\text{rc}}}\beta'_{\text{rc},n,k}q_\text{r}(t-\widetilde{\tau}_{\text{rc},k})q_\text{c}^{\ast}(t-\tau_{\text{cc},1})dt,
\end{align}
and $w_{\text{c},n}$ is a zero-mean noise with
variance $\sigma_\text{c}^2$.
Now consider the radar received signal. Although the target is illuminated by both the radar TX and communication TX, we assume
\emph{Assumption 2:} The target echo due to the illumination from the communication source is negligible.
This assumption is valid because the communication source usually employs an omni-directional antenna for transmission, which leads to a much weaker target reflection compared with the target reflection due to the illumination from a directional radar TX.
Suppose there is a moving target located at range $R$ from the radar with a target radial velocity $v$. The round-trip delay between the radar and target is $\tau_{\text{rr}}=2R/c$, where $c$ is the speed of light. Then, the received signal at the radar RX on the $n$-th subcarrier can be written as \cite{SenNehorai2011}
\begin{align}
\label{equ:y_r1s}
y_{\text{r},n}(t)&=\bar{\alpha}\alpha'_{\text{rr},n}x_{\text{r},n}\big(\varepsilon(t-\tau_{\text{rr}})\big)+\sum_{k=1}^{K_{\text{rr}}}\beta'_{\text{rr},n,k}x_{\text{r},n}(t-\widetilde{\tau}_{\text{rr},k})\notag\\
&+\sum_{k=1}^{K_{\text{cr}}}\beta'_{\text{cr},n,k}x_{\text{c},n}(t-\widetilde{\tau}_{\text{cr},k})+w'_{\text{r},n}(t),
\end{align}
where $\bar{\alpha}$ is the radar cross-section (RCS), $\alpha'_{\text{rr},n}$ is a complex quantity representing the channel coefficient of the target path, $\varepsilon=1+\frac{2v}{c}$ is a scaling factor for the target Doppler shift, $\beta'_{\text{rr},n,k}$ denotes the complex scattering coefficient of the \mbox{$k$-th} clutter scatterer due to radar illumination with propagation delay $\widetilde{\tau}_{\text{rr},k}$, $\beta'_{\text{cr},n,k}$ and $\widetilde{\tau}_{\text{cr},k}$ are the scattering coefficient and, respectively, propagation delay associated with the $k$-th clutter scatterer due to the communication illumination, $K_{\text{rr}}$ and $K_{\text{cr}}$ are the total numbers of clutter scatterers observed at the radar RX due to the illumination of the radar TX and, respectively, communication TX, and $w'_{\text{r},n}(t)$ is the additive channel noise. Note that the direct-path interference from the communication TX to the radar RX is included as the first term of the second sum in \eqref{equ:y_r1s}, with $\widetilde{\tau}_{\text{cr},1}$ corresponding to the propagation delay between the communication TX and radar RX.
The radar signal $y_{\text{r},n}(t)$ is down-converted, Doppler compensated, filtered by a MF matched to the radar waveform $q_\text{r}(t-\tau_{\text{rr}})$, and sampled at the pulse rate. Like in the communication system, the propagation spread is assumed to be relatively small compared with the pulse duration $T$. The MF output can be written as \cite[Section 4.2]{richards2014fundamentals}:
\begin{align}
\label{equ:y_r2s}
y_{\text{r},n}=\alpha_{\text{rr},n}\sqrt{p_{\text{r},n}}+\beta_{\text{rr},n}\sqrt{p_{\text{r},n}}+\beta_{\text{cr},n}d_n\sqrt{p_{\text{c},n}}+w_{\text{r},n},
\end{align}
where
\begin{align}
\alpha_{\text{rr},n}=&\bar{\alpha}\int_{T}\alpha'_{\text{rr},n}q_\text{r}(t-\tau_{\text{rr}})q_\text{r}^{\ast}(t-\tau_{\text{rr}})dt,\\
\beta_{\text{rr},n}=&\int_{T}\sum_{k=1}^{K_{\text{rr}}}\beta'_{\text{rr},n,k}q_\text{r}(t-\widetilde{\tau}_{\text{rr},k})q_\text{r}^{\ast}(t-\tau_{\text{rr}})dt,\\
\beta_{\text{cr},n}=&\int_{T}\sum_{k=1}^{K_{\text{cr}}}\beta'_{\text{cr},n,k}q_\text{c}(t-\widetilde{\tau}_{\text{cr},k})q_\text{r}^{\ast}(t-\tau_{\text{rr}})dt,
\end{align}
and $w_{\text{r},n}$ is the output noise with zero mean and variance $\sigma_\text{r}^2$.
In this paper, the problem of interest is to jointly design the power allocation vectors $\pbf_\text{r}$ and $\pbf_\text{c}$ based on the radar-communication coexistence model in \eqref{equ:y_c2s} and \eqref{equ:y_r2s}.
\section{Proposed Approaches}
\label{sec:proposed_approach}
In this section, we propose two power allocation designs for the coexistence problem. The first one is a joint design, which considers the case when the radar and communication systems are fully cooperative, i.e., parameters of both systems are jointly designed to tackle the cross-interference induced by coexistence. The second one is a unilateral design, which is useful when the communication system is the primary user of the frequency band and the radar system wants to join and co-exist as a secondary user.
\subsection{Joint Design}
\label{subsec:jointdesign}
The figure of merit for the communication system is the achievable channel throughput, which is given by
\begin{gather}
C(\pbf_\text{r},\pbf_\text{c})=\sum_{n=1}^N\log_2\Big(1+\frac{\gamma_{\text{cc},n}p_{\text{c},n}}{\eta_{\text{rc},n}p_{\text{r},n}+1}\Big),
\end{gather}
where $\gamma_{\text{cc},n}=\frac{\mathbb{E}\{\vert\alpha_{\text{cc},n}\vert^2\}}{\sigma_\text{c}^2}$ and
$\eta_{\text{rc},n}=\frac{\mathbb{E}\{\vert\beta_{\text{rc},n}\vert^2\}}{\sigma_\text{c}^2}$ denote the \emph{normalized
signal-to-noise ratio} (SNR) and \emph{normalized
interference-to-noise ratio} (INR) at the communication
receiver, which are effectively the SNR and INR per unit transmission power.
For the radar system, the figure of merit is the SINR
\begin{gather}
\text{SINR}(\pbf_\text{r},\pbf_\text{c})=\sum_{n=1}^N\frac{\gamma_{\text{rr},n}p_{\text{r},n}}{\eta_{\text{rr},n}p_{\text{r},n}+\eta_{\text{cr},n}p_{\text{c},n}+1},
\end{gather}
where $\gamma_{\text{rr},n}=\frac{\mathbb{E}\{\vert\alpha_{\text{rr},n}\vert^2\}}{\sigma_\text{r}^2}$, $\eta_{\text{rr},n}=\frac{\mathbb{E}\{\vert\beta_{\text{rr},n}\vert^2\}}{\sigma_\text{r}^2}$, and
$\eta_{\text{cr},n}=\frac{\mathbb{E}\{\vert\beta_{\text{cr},n}\vert^2\}}{\sigma_\text{r}^2}$ are the normalized SNR, \emph{clutter-to-noise ratio} (\mbox{CNR}), and INR, respectively. The joint power allocation problem is formulated as maximizing the radar SINR under throughput and power constraints:
\begin{subequations}
\label{equ:P1}
\begin{gather}
\label{equ:sharing_abj}
\max\limits_{\pbf_\text{r},\pbf_\text{c}}~~\text{SINR}(\pbf_\text{r},\pbf_\text{c}),
\\
\label{equ:sharing_c1}
\text{s.t.}~\sum_{n=1}^Np_{\text{r},n}\leq P_\text{r},~\sum_{n=1}^N p_{\text{c},n}\leq P_\text{c},
\\
\label{equ:sharing_c2}
0\leq p_{\text{r},n}\leq\xi_\text{r},~0\leq p_{\text{c},n}\leq \xi_\text{c},~\forall~n,
\\
\label{equ:sharing_c3}
C(\pbf_\text{r},\pbf_\text{c})\geq \kappa,
\end{gather}
\end{subequations}
where \eqref{equ:sharing_c1} represents the total transmission power constraint for each system, \eqref{equ:sharing_c2} denotes subchannel peak power constraints, and \eqref{equ:sharing_c3} is a communication throughput constraint.
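As a concrete illustration, the two figures of merit can be evaluated for given power vectors as in the following Python sketch (NumPy assumed; the normalized SNR, INR, and CNR arrays are hypothetical channel inputs, and the function names are ours):
\begin{verbatim}
import numpy as np

def throughput(p_r, p_c, gamma_cc, eta_rc):
    # sum rate over the N subcarriers
    return np.sum(np.log2(1.0 + gamma_cc * p_c / (eta_rc * p_r + 1.0)))

def radar_sinr(p_r, p_c, gamma_rr, eta_rr, eta_cr):
    # radar SINR; the clutter term eta_rr * p_r in the denominator is
    # signal-dependent, i.e., it grows with the radar's own power
    return np.sum(gamma_rr * p_r / (eta_rr * p_r + eta_cr * p_c + 1.0))
\end{verbatim}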
The joint design problem \eqref{equ:P1} is nonconvex since the objective function and the constraint \eqref{equ:sharing_c3} are both nonconvex. The above problem may be solved by employing an alternating optimization procedure \cite{AubryMaioTSP18}. The idea is to iteratively solve \eqref{equ:P1} w.r.t. $\pbf_r$ while keeping $\pbf_c$ fixed, and vice versa, until convergence is reached. However, this alternating maximization method is computationally intensive and does not guarantee convergence. This is particularly so for the considered cluttered environment, where the clutter term in the SINR depends on the power allocation variable $\pbf_\text{r}$, which makes the optimization problem significantly more challenging even with fixed $\pbf_\text{c}$. To address these challenges, we consider a different approach that is described next.
Specifically, let us define $\etabf_{\text{c},n}=[\eta_{\text{rc},n},\ 0]^T$, $\etabf_{\text{r},n}=[\eta_{\text{rr},n},\ \eta_{\text{cr},n}]^T$, $\gammabf_{\text{r},n}=[\gamma_{\text{rr},n},\ 0]^T$, $\gammabf_{\text{c},n}=[0,\ \gamma_{\text{cc},n}]^T$, and let $\Pbf=[\pbf_{\text{r}}^T;\ \pbf_{\text{c}}^T]$ denote the $2\times N$ matrix obtained by stacking the radar and communication power vectors. Then, \eqref{equ:P1} can be rewritten as
\begin{subequations}
\label{equ:NonP1}
\begin{gather}
\label{equ:sharingNon_abj}
\max\limits_{\Pbf}~~\sum_{n=1}^N\frac{\gammabf_{\text{r},n}^T\Pbf\sbf_n}{\etabf_{\text{r},n}^T\Pbf\sbf_n+1},
\\
\label{equ:sharingNon_c1}
\text{s.t.}~\eqref{equ:sharing_c1},~\eqref{equ:sharing_c2},
\\
\label{equ:sharingNon_c3}
\sum_{n=1}^N\log_2\Big(1+\frac{\gammabf_{\text{c},n}^T\Pbf\sbf_n}{\etabf_{\text{c},n}^T\Pbf\sbf_n+1}\Big)\geq \kappa,
\end{gather}
\end{subequations}
where $\sbf_n$ is an $N\times1$ selection vector given by
\begin{equation}
\sbf_n(i)=
\begin{cases}
1,~i=n,\\
0,~\text{otherwise}.
\end{cases}
\end{equation}
Note that \eqref{equ:NonP1} is a fractional programming (FP) problem whose objective function is a sum of multiple ratios.
\emph{Remark 1}: The conventional alternating optimization approach usually decomposes the original nonconvex problem \eqref{equ:P1} into two subproblems in $\pbf_r$ and $\pbf_c$, respectively \cite{LiPetropuluTSP16,LiPetropulu2017,ZhengLopsWangTSP2018,AubryMaioTSP18,RihanHuang18,ChengLiaoHe19}. Although the subproblems are simpler than the original problem, they are still nonconvex and require convex relaxation techniques to solve. Specifically, when $\pbf_c$ is fixed, the subproblem in $\pbf_r$ has a similar form to \eqref{equ:NonP1}, which is a multiple-ratio FP problem. On the other hand, when fixing $\pbf_r$, the resulting subproblem is also nonconvex. The alternating approach needs to solve both nonconvex subproblems multiple times until convergence is reached or a fixed number of iterations is completed. To bypass the alternating procedure, we combine the design variables into a single stacked variable and transform the original problem into a simplified form. A direct benefit is computational savings, since we need to solve the multiple-ratio FP problem only once. Simulation results show that the complexity of the proposed algorithm is considerably lower than that of the alternating procedure.
The multiple-ratio FP problem \eqref{equ:NonP1} is nonconvex since the objective function is a sum of ratios, which is nonconvex, and the throughput constraint \eqref{equ:sharingNon_c3} imposes a nonconvex feasible set. To solve \eqref{equ:NonP1}, we can reformulate the objective function and employ an inner iteration based on convex relaxation for the throughput constraint. First, for the objective function, a quadratic transform can be used \cite{ShenYu2018}. This approach introduces a set of slack variables $\lambdabf=[\lambda_1,\cdots,\lambda_N]^T$ to deal with the nonconvexity. Specifically, problem \eqref{equ:NonP1} is equivalent to
\begin{subequations}
\label{equ:NonP2}
\begin{gather}
\label{equ:optNon_sr_new}
\max\limits_{\Pbf,\lambdabf}~~F(\lambdabf,\Pbf),
\\
\text{s.t.}~\eqref{equ:sharing_c1},~\eqref{equ:sharing_c2},~\eqref{equ:sharingNon_c3},
\end{gather}
\end{subequations}
where
\begin{align}
F(\lambdabf,\Pbf)=\sum_{n=1}^N\Big(2\lambda_n\sqrt{\gammabf_{\text{r},n}^T\Pbf\sbf_n}-\lambda_n^2\big(\etabf_{\text{r},n}^T\Pbf\sbf_n+1\big)\Big).
\end{align}
Let $\lambda_n^{(\ell-1)}$ and $\tilde{\Pbf}^{(\ell-1)}$ denote the solutions obtained from the $(\ell-1)$-st iteration. Then, $\lambda_n^{(\ell)}$ can be updated by solving the following problem:
\begin{equation}
\max\limits_{\lambdabf}~~F(\lambdabf,\tilde{\Pbf}^{(\ell-1)}),
\end{equation}
which has a closed-form solution:
\begin{equation}\label{equ:lambdacomp}
\lambda_n^{(\ell)}=\frac{\sqrt{\gammabf_{\text{r},n}^T\tilde{\Pbf}^{(\ell-1)}\sbf_n}}{\etabf_{\text{r},n}^T\tilde{\Pbf}^{(\ell-1)}\sbf_n+1}.
\end{equation}
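For illustration, with $\Pbf$ stored as a $2\times N$ array, the update \eqref{equ:lambdacomp} reduces to elementwise operations, as in the following Python sketch (NumPy assumed; the function name is ours):
\begin{verbatim}
import numpy as np

def update_lambda(P, gamma_rr, eta_rr, eta_cr):
    # gamma_{r,n}^T P s_n = gamma_rr[n] * p_r[n], and
    # eta_{r,n}^T P s_n = eta_rr[n] * p_r[n] + eta_cr[n] * p_c[n]
    p_r, p_c = P[0], P[1]
    return np.sqrt(gamma_rr * p_r) / (eta_rr * p_r + eta_cr * p_c + 1.0)
\end{verbatim}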
In turn, $\tilde{\Pbf}^{(\ell)}$ can be obtained by solving
\begin{subequations}
\label{equ:NonP3}
\begin{gather}
\label{equ:opt_sr_Nonp3}
\max\limits_{\Pbf}~~F(\lambdabf^{(\ell)},\Pbf),
\\
\text{s.t.}~\eqref{equ:sharing_c1},~\eqref{equ:sharing_c2},~\eqref{equ:sharingNon_c3}.
\end{gather}
\end{subequations}
Note that the above problem is nonconvex since \eqref{equ:sharingNon_c3} imposes a nonconvex set. We can use an SCP process to relax constraint \eqref{equ:sharingNon_c3} by converting it into a convex set, along with an inner iteration to solve \eqref{equ:NonP3}. Specifically, \eqref{equ:sharingNon_c3} can be relaxed into the following convex form:
\begin{equation}\label{equ:consrate}
\sum_{n=1}^{N}\log_2\big(\gammabf_{\text{c},n}^T\Pbf\sbf_n+\etabf_{\text{c},n}^T\Pbf\sbf_n+1\big)-G(\Pbf,\hat{\Pbf}^{(\ell_\text{s}-1)})\geq\kappa,
\end{equation}
where $\hat{\Pbf}^{(\ell_\text{s}-1)}$ is the power vector from the $(\ell_\text{s}-1)$-st inner SCP iteration and
\begin{align}\label{equ:consscpinner}
G(\Pbf,\hat{\Pbf}^{(\ell_\text{s}-1)})&\triangleq\log_2(\etabf_{\text{c},n}^T\hat{\Pbf}^{(\ell_\text{s}-1)}\sbf_n+1)\notag\\
&+\frac{\etabf_{\text{c},n}^T(\Pbf-\hat{\Pbf}^{(\ell_\text{s}-1)})\sbf_n}{\ln2(\etabf_{\text{c},n}^T\hat{\Pbf}^{(\ell_\text{s}-1)}\sbf_n+1)}.
\end{align}
Thus, during the $\ell_\text{s}$-th inner SCP iteration, the following convex optimization problem is solved to obtain $\hat{\Pbf}^{(\ell_\text{s})}$:
\begin{subequations}
\label{equ:NonP4}
\begin{gather}
\label{equ:opt_sr_Nonp4}
\max\limits_{\Pbf}~~F(\lambdabf^{(\ell)},\Pbf),
\\
\text{s.t.}~\eqref{equ:sharing_c1},~\eqref{equ:sharing_c2},~\eqref{equ:consrate}.
\end{gather}
\end{subequations}
After convergence, $\tilde{\Pbf}^{(\ell)}=\hat{\Pbf}^{(\ell_\text{s})}$ is used in \eqref{equ:lambdacomp} to compute $\lambda_n$ for the next quadratic transform iteration. Our proposed solution to the joint design problem is summarized in $\textbf{Algorithm~\ref{alg:Joint}}$.
The computational complexity of $\textbf{Algorithm~\ref{alg:Joint}}$ depends on the number of quadratic transform iterations $L$ as well as the number of SCP iterations $L_\text{s}$. Simulations show that the required numbers of inner and outer iterations are relatively small. In addition, the convex problem \eqref{equ:NonP4} inside the iteration has a complexity of $\mathcal{O}(N^{3.5})$ when an interior-point method is used \cite{Boyd2004}. Thus, the overall complexity of the proposed solution is $\mathcal{O}(LL_\text{s}N^{3.5})$.
\begin{algorithm}[t]
\caption{Proposed Joint Design}
\begin{algorithmic}
\label{alg:Joint}
\STATE \textbf{Input:} Channel SNRs $\gamma_{\text{rr},n}$ and $\gamma_{\text{cc},n}$, channel INRs $\eta_{\text{rc},n}$ and $\eta_{\text{cr},n}$, CNR $\eta_{\text{rr},n}$, total powers $P_\text{r}$ and $P_\text{c}$, peak power constraints $\xi_\text{r}$ and $\xi_\text{c}$, throughput constraint $\kappa$, and tolerance $\epsilon$.
\STATE \textbf{Output:} Radar and communication powers $\Pbf$.\\
\STATE \textbf{Initialization:} Initialize $\tilde{\Pbf}^{(0)}$ and set iteration index $\ell=0$.\\
\REPEAT
\STATE
\begin{enumerate}
\item Set $\ell=\ell+1$.
\item Solve problem \eqref{equ:lambdacomp} to obtain $\lambda_n^{(\ell)}$.
\item Initialization: $\ell_\text{s}=0$ and $\hat{\Pbf}^{(\ell_\text{s})}=\tilde{\Pbf}^{(\ell-1)}$.
\STATE \textbf{repeat}
\begin{enumerate}
\item Set $\ell_\text{s}=\ell_\text{s}+1$.
\item Solve problem \eqref{equ:NonP4} with fixed $\hat{\Pbf}^{(\ell_\text{s}-1)}$ and $\lambda_n^{(\ell)}$ to obtain $\hat{\Pbf}^{(\ell_\text{s})}$.
\end{enumerate}
\STATE \textbf{until} convergence.
\item Update $\tilde{\Pbf}^{(\ell)}=\hat{\Pbf}^{(\ell_\text{s})}$.
\end{enumerate}
\UNTIL convergence.
\RETURN $\Pbf=\tilde{\Pbf}^{(\ell)}$.
\end{algorithmic}
\end{algorithm}
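For reference, the following Python sketch outlines $\textbf{Algorithm~\ref{alg:Joint}}$. It assumes the CVXPY package; the uniform initialization, tolerances, and iteration caps are illustrative choices, so it should be read as a guide rather than a definitive implementation:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def joint_design(g_rr, g_cc, e_rr, e_rc, e_cr, Pr, Pc, xi_r, xi_c,
                 kappa, eps=1e-4, max_outer=30, max_inner=20):
    N, ln2 = len(g_rr), np.log(2.0)
    pr_hat = np.full(N, min(Pr / N, xi_r))   # uniform initialization
    pc_hat = np.full(N, min(Pc / N, xi_c))
    sinr_old = -np.inf
    for _ in range(max_outer):
        # closed-form slack update (quadratic transform)
        lam = np.sqrt(g_rr * pr_hat) / (e_rr * pr_hat
                                        + e_cr * pc_hat + 1.0)
        for _ in range(max_inner):           # inner SCP iterations
            p_r = cp.Variable(N, nonneg=True)
            p_c = cp.Variable(N, nonneg=True)
            obj = cp.sum(
                2 * cp.multiply(lam, cp.sqrt(cp.multiply(g_rr, p_r)))
                - cp.multiply(lam ** 2, cp.multiply(e_rr, p_r)
                              + cp.multiply(e_cr, p_c) + 1.0))
            # throughput constraint linearized around pr_hat
            lin = (np.log2(e_rc * pr_hat + 1.0)
                   + cp.multiply(e_rc / (ln2 * (e_rc * pr_hat + 1.0)),
                                 p_r - pr_hat))
            rate = cp.sum(cp.log(cp.multiply(g_cc, p_c)
                                 + cp.multiply(e_rc, p_r) + 1.0) / ln2
                          - lin)
            prob = cp.Problem(cp.Maximize(obj),
                              [cp.sum(p_r) <= Pr, cp.sum(p_c) <= Pc,
                               p_r <= xi_r, p_c <= xi_c, rate >= kappa])
            prob.solve()
            done = np.max(np.abs(p_r.value - pr_hat)) < eps
            pr_hat, pc_hat = p_r.value, p_c.value
            if done:
                break
        sinr = np.sum(g_rr * pr_hat / (e_rr * pr_hat
                                       + e_cr * pc_hat + 1.0))
        if abs(sinr - sinr_old) < eps:
            break
        sinr_old = sinr
    return pr_hat, pc_hat
\end{verbatim}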
\subsection{Unilateral Design}
\label{subsec:UniDesign}
The above joint design requires mutual cooperation of both radar and communication systems. However, in some scenarios, the communication system may be the primary and pre-existing user of the frequency band, while the radar occasionally joins and co-exists with the primary user. Thus, we consider a second spectrum sharing framework based on a unilateral design from the radar perspective that optimizes the radar transmission power $\pbf_{\text{r}}$ when the communication transmission power is known and fixed \cite{ShiSellathurai2018}.
Specifically, suppose the communication system pre-exists and employs a waterfilling approach to allocate subchannel power before the radar enters the channel:
\begin{subequations}
\label{equ:P_u}
\begin{gather}
\label{equ:sharing_abj_u}
\tilde{\pbf}_\text{c}=\arg~\max\limits_{\pbf_\text{c}}~~\sum_{n=1}^N\log_2\Big(1+\gamma_{\text{cc},n}p_{\text{c},n}\Big),
\\
\label{equ:sharing_c1_u}
\text{s.t.}~\sum_{n=1}^N p_{\text{c},n}\leq P_\text{c},~0\leq p_{\text{c},n}\leq \xi_\text{c},~\forall~n,
\end{gather}
\end{subequations}
which is convex and can be solved by waterfilling.
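For illustration, the capped waterfilling solution of \eqref{equ:P_u} can be obtained by bisection on the water level, as in the following Python sketch (NumPy assumed; the tolerance is an illustrative choice):
\begin{verbatim}
import numpy as np

def waterfilling(gamma_cc, Pc, xi_c, tol=1e-9):
    # p_n = min(max(mu - 1/gamma_n, 0), xi_c), with the water level mu
    # chosen so that the total-power constraint is met with equality
    inv = 1.0 / gamma_cc
    N = len(gamma_cc)
    if N * xi_c <= Pc:                # peak constraints bind everywhere
        return np.full(N, xi_c)
    lo, hi = 0.0, np.max(inv) + xi_c  # at hi, every p_n equals xi_c
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.sum(np.clip(mu - inv, 0.0, xi_c)) > Pc:
            hi = mu
        else:
            lo = mu
    return np.clip(0.5 * (lo + hi) - inv, 0.0, xi_c)
\end{verbatim}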
When the radar needs to access the channel, it acquires knowledge of the communication power allocation and selects a strategy to maximize its SINR, subject to a minimum communication throughput constraint and power constraints:
\begin{subequations}
\label{equ:P4}
\begin{gather}
\label{equ:opt_ad-scp}
\max\limits_{\pbf_\text{r}}~~\sum_{n=1}^N\frac{\gamma_{\text{rr},n}p_{\text{r},n}}{\eta_{\text{rr},n}p_{\text{r},n}+\eta_{\text{cr},n}\tilde{p}_{\text{c},n}+1},
\\
\label{equ:c1_ad-scp}
\text{s.t.}~0\leq p_{\text{r},n}\leq \xi_\text{r},~\forall~n,~\sum_{n=1}^Np_{\text{r},n}\leq P_\text{r},
\\
\label{equ:c2_ad-scp}
\sum_{n=1}^N\log_2\Big(1+\frac{\gamma_{\text{cc},n}\tilde{p}_{\text{c},n}}{\eta_{\text{rc},n}p_{\text{r},n}+1}\Big)\geq \kappa.
\end{gather}
\end{subequations}
The objective function can be rewritten as
\begin{equation}
\sum_{n=1}^{N}\frac{\gamma_{\text{rr},n}}{\eta_{\text{rr},n}}-\sum_{n=1}^{N}\frac{\gamma_{\text{rr},n}(\eta_{\text{cr},n}\tilde{p}_{\text{c},n}+1)}{\eta_{\text{rr},n}^2p_{\text{r},n}+\eta_{\text{rr},n}(\eta_{\text{cr},n}\tilde{p}_{\text{c},n}+1)}.
\end{equation}
Thus, problem \eqref{equ:P4} is equivalent to
\begin{subequations}
\label{equ:P6}
\begin{gather}
\label{equ:opt_sr_p6}
\min\limits_{\pbf_\text{r}}~~\sum_{n=1}^{N}\frac{\gamma_{\text{rr},n}(\eta_{\text{cr},n}\tilde{p}_{\text{c},n}+1)}{\eta_{\text{rr},n}^2p_{\text{r},n}+\eta_{\text{rr},n}(\eta_{\text{cr},n}\tilde{p}_{\text{c},n}+1)},
\\
\text{s.t.}~\eqref{equ:c1_ad-scp},~\eqref{equ:c2_ad-scp}.
\end{gather}
\end{subequations}
Note that while the objective \eqref{equ:opt_sr_p6} is convex, the above problem is nonconvex since \eqref{equ:c2_ad-scp} is a nonconvex set. We can use the first-order Taylor expansion to convert the nonconvex constraint into a convex one and solve the relaxed problem using an SCP process. Specifically, rewrite the left side of \eqref{equ:c2_ad-scp} as
\begin{equation}
\sum_{n=1}^Nf(p_{\text{r},n}),
\end{equation}
where
\begin{equation}\label{equ:dc_obj}
f(p_{\text{r},n})\triangleq F_1(p_{\text{r},n}\vert \tilde{p}_{\text{c},n})-\log_2\big(\eta_{\text{rc},n}p_{\text{r},n}+1\big),
\end{equation}
and
\begin{equation}
F_1(p_{\text{r},n}\vert \tilde{p}_{\text{c},n})=\log_2\big(\eta_{\text{rc},n}p_{\text{r},n}+1+\gamma_{\text{cc},n}\tilde{p}_{\text{c},n}\big).
\end{equation}
It can be shown that $f(p_{\text{r},n})$ is nonconvex w.r.t. $p_{\text{r},n}$ since it is a difference of two concave functions. The second concave function in \eqref{equ:dc_obj} can be upper bounded by a first-order Taylor expansion at $\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}$:
\begin{equation}\label{equ:F2approx}
\begin{split}
&\log_2\big(\eta_{\text{rc},n}p_{\text{r},n}+1\big)\leq F_2(p_{\text{r},n}\vert\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)})\\&\triangleq\log_2\big(\eta_{\text{rc},n}\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}+1\big)
+\frac{\eta_{\text{rc},n}(p_{\text{r},n}-\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)})}{\ln 2(\eta_{\text{rc},n}\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}+1)},
\end{split}
\end{equation}
where $\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}$ is the radar power from the $(\ell_\text{r}-1)$-st inner iteration. Clearly, the bound is tight at $p_{\text{r},n}=\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}$:
\begin{equation}\label{equ:tightbound}
\log_2\big(\eta_{\text{rc},n}\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}+1\big)= F_2(\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}\vert\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)}).
\end{equation}
Substituting \eqref{equ:F2approx} back into \eqref{equ:dc_obj} gives the lower bound of $f(p_{\text{r},n})$: $\tilde{f}(p_{\text{r},n})=F_1(p_{\text{r},n}\vert \tilde{p}_{\text{c},n})-F_2(p_{\text{r},n}\vert\tilde{p}_{\text{r},n}^{(\ell_\text{r}-1)})$. We can see that $\tilde{f}(p_{\text{r},n})$ is now a concave function of $p_{\text{r},n}$ (a concave $F_1$ minus an affine $F_2$), and constraint \eqref{equ:c2_ad-scp} becomes
\begin{equation}\label{equ:relaxc}
\sum_{n=1}^N\tilde{f}(p_{\text{r},n})\geq \kappa,
\end{equation}
which is a convex set. Thus, during the $\ell_\text{r}$-th inner SCP iteration, the following convex problem is solved for $\tilde{p}_{\text{r},n}^{(\ell_\text{r})}$ until convergence:
\begin{subequations}
\label{equ:P7}
\begin{gather}
\label{equ:opt_sr_p7}
\min\limits_{\pbf_\text{r}}~~\sum_{n=1}^{N}\frac{\gamma_{\text{rr},n}(\eta_{\text{cr},n}\widetilde{p}_{\text{c},n}+1)}{\eta_{\text{rr},n}^2p_{\text{r},n}+\eta_{\text{rr},n}(\eta_{\text{cr},n}\widetilde{p}_{\text{c},n}+1)},
\\
\text{s.t.}~\eqref{equ:c1_ad-scp},~\eqref{equ:relaxc}.
\end{gather}
\end{subequations}
The proposed solution is summarized in $\textbf{Algorithm~\ref{alg:Unilateral}}$.
Similar to $\textbf{Algorithm~\ref{alg:Joint}}$, the complexity of $\textbf{Algorithm~\ref{alg:Unilateral}}$ depends on the number of the SCP iterations $L_\text{r}$ required for convergence, and the overall computational complexity is $\mathcal{O}(L_\text{r}N^{3.5})$.
\begin{algorithm}[t]
\caption{Proposed Unilateral Design}
\begin{algorithmic}
\label{alg:Unilateral}
\STATE \textbf{Input:} $\gamma_{\text{rr},n}$, $\gamma_{\text{cc},n}$, $\eta_{\text{rc},n}$, $\eta_{\text{cr},n}$, $\eta_{\text{rr},n}$, $P_\text{r}$, $\widetilde{\pbf}_\text{c}$, $\xi_\text{r}$, and $\kappa$ (same as in $\textbf{Algorithm~\ref{alg:Joint}}$).
\STATE \textbf{Output:} Radar powers $\pbf_\text{r}$.\\
\STATE \textbf{Initialization:} Initialize $\tilde{p}_{\text{r},n}^{(0)}$ and set iteration index $\ell_\text{r}=0$.\\
\REPEAT
\STATE
\begin{enumerate}
\item Set $\ell_\text{r}=\ell_\text{r}+1$.
\item Solve problem \eqref{equ:P7} to obtain $\tilde{p}_{\text{r},n}^{(\ell_\text{r})}$.
\end{enumerate}
\UNTIL convergence.
\RETURN $p_{\text{r},n}=\tilde{p}_{\text{r},n}^{(\ell_\text{r})}$.
\end{algorithmic}
\end{algorithm}
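A corresponding Python sketch of $\textbf{Algorithm~\ref{alg:Unilateral}}$, under the same assumptions as the sketch of $\textbf{Algorithm~\ref{alg:Joint}}$ (CVXPY, with illustrative initialization and stopping rule), is:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def unilateral_design(g_rr, g_cc, e_rr, e_rc, e_cr, pc_fixed,
                      Pr, xi_r, kappa, eps=1e-4, max_iter=30):
    N, ln2 = len(g_rr), np.log(2.0)
    # constant numerators/denominator offsets of the objective ratios
    # (assumes e_rr > 0 so the denominators stay positive)
    a = g_rr * (e_cr * pc_fixed + 1.0)
    c = e_rr * (e_cr * pc_fixed + 1.0)
    pr_hat = np.full(N, min(Pr / N, xi_r))
    for _ in range(max_iter):        # SCP iterations
        p_r = cp.Variable(N, nonneg=True)
        denom = cp.multiply(e_rr ** 2, p_r) + c
        obj = cp.sum(cp.multiply(a, cp.inv_pos(denom)))
        # concave surrogate f_tilde = F_1 - F_2 (F_2 is the Taylor bound)
        f1 = cp.log(cp.multiply(e_rc, p_r) + 1.0
                    + g_cc * pc_fixed) / ln2
        f2 = (np.log2(e_rc * pr_hat + 1.0)
              + cp.multiply(e_rc / (ln2 * (e_rc * pr_hat + 1.0)),
                            p_r - pr_hat))
        prob = cp.Problem(cp.Minimize(obj),
                          [cp.sum(p_r) <= Pr, p_r <= xi_r,
                           cp.sum(f1 - f2) >= kappa])
        prob.solve()
        if np.max(np.abs(p_r.value - pr_hat)) < eps:
            return p_r.value
        pr_hat = p_r.value
    return pr_hat
\end{verbatim}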
\subsection{Feasibility and Initialization Analysis}
\label{subsec:Feasibility}
For the joint design problem \eqref{equ:P1}, its feasibility depends on whether the maximum achievable throughput (denoted by $C_{\text{max}}$) under the power constraints is no less than the minimum throughput constraint $\kappa$, that is, $C_{\text{max}}\geq\kappa$. Clearly, $C_{\text{max}}$ is achieved when the radar is absent and the communication system uses all subcarriers to maximize its throughput, which is the same as problem \eqref{equ:P_u}. In other words, problem \eqref{equ:P1} is feasible if the following condition is satisfied:
\begin{equation}
\label{equ:feasibility}
\sum_{n=1}^N\log_2\Big(1+\gamma_{\text{cc},n}\tilde{p}_{\text{c},n}\Big)\geq\kappa.
\end{equation}
It is easy to show that \eqref{equ:feasibility} also provides the feasibility condition for the unilateral design.
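Numerically, checking \eqref{equ:feasibility} amounts to solving the single-system throughput maximization in \eqref{equ:P_u}, whose solution is the classical water-filling capped by the peak power. A minimal NumPy sketch, with hypothetical channel values, is:
\begin{verbatim}
import numpy as np

def capped_waterfilling(gamma, P, xi, tol=1e-9):
    # p_n = min(xi, max(0, nu - 1/gamma_n)) with sum_n p_n = P,
    # where the water level nu is found by bisection.
    lo, hi = 0.0, P + xi + 1.0 / gamma.min()
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        p = np.clip(nu - 1.0 / gamma, 0.0, xi)
        lo, hi = (nu, hi) if p.sum() < P else (lo, nu)
    return np.clip(0.5 * (lo + hi) - 1.0 / gamma, 0.0, xi)

# Feasibility test: is C_max >= kappa?
gamma_cc = np.random.default_rng(0).random(16)  # placeholder SNRs
p_c = capped_waterfilling(gamma_cc, P=600.0, xi=100.0)
feasible = np.log2(1.0 + gamma_cc * p_c).sum() >= 2.5
\end{verbatim}
If the per-subcarrier cap is loose ($N\xi_\text{c}\geq P_\text{c}$), the bisection converges to the usual water level; otherwise all subcarriers saturate at $\xi_\text{c}$ and the total power constraint is slack.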
Note that the proposed solutions for the joint and unilateral designs require initial values of $\pbf_\text{c}$ and $\pbf_\text{r}$, respectively. A simple way of initialization is to consider only the power constraints \eqref{equ:sharing_c1} and \eqref{equ:sharing_c2}. A better way, which also takes into account the throughput constraint $\eqref{equ:sharing_c3}$, is a greedy search (GS) method, i.e., the communication system uses its best subcarriers to meet the throughput constraint, while the radar employs the remaining subcarriers to maximize its SINR. The detailed steps of the GS method are summarized in $\textbf{Algorithm~\ref{alg:GS}}$.
\begin{algorithm}[t]
\caption{Greedy Search}
\begin{algorithmic}
\label{alg:GS}
\STATE \textbf{Input:} $\gamma_{\text{rr},n}$, $\gamma_{\text{cc},n}$, $\eta_{\text{rc},n}$, $\eta_{\text{cr},n}$, $\eta_{\text{rr},n}$, $P_\text{r}$, $P_\text{c}$, $\xi_\text{r}$, $\xi_\text{c}$, and $\kappa$.
\STATE \textbf{Output:} Radar and communication powers $\pbf_\text{r}$ and $\pbf_\text{c}$.\\
\begin{enumerate}
\item Define a binary selection vector $\ubf=[u_1,\cdots,u_N]^T$ with $u_n\in\{0,1\}$: $u_n=1$ indicates the communication system uses the $n$-th subcarrier; otherwise, the radar uses it.
\item Sort the normalized communication channel SNRs $\gamma_{\text{cc},n}$ in descending order. The SNRs after sorting are denoted by $\gamma_{\text{cc},n,m}$, where the subscripts $(n,m)$ indicate the indices of a subcarrier before and after sorting.
\item Set $\iota=0$, which denotes the partial sum of the communication throughput, $\ubf=0$, and $m=1$.
\REPEAT
\STATE
\begin{enumerate}
\item $u_n=1$ and $m=m+1$.
\item Solve the following convex problem and denote the solution by $\hat{\pbf}_\text{c}$:
\begin{subequations}
\label{equ:P_gs}
\begin{gather}
\label{equ:sharing_abj_gs}
\max\limits_{\pbf_\text{c}}~\sum_{n=1}^N\log_2\Big(1+u_n\gamma_{\text{cc},n,m}p_{\text{c},n}\Big),
\\
\label{equ:sharing_c1_gs}
\text{s.t.}~\sum_{n=1}^N u_np_{\text{c},n}\leq P_\text{c},\\~0\leq u_np_{\text{c},n}\leq \xi_\text{c},~\forall~n,
\end{gather}
\end{subequations}\\
\item Compute $\iota=\sum_{n=1}^N\log_2\big(1+u_n\gamma_{\text{cc},n,m}\hat{p}_{\text{c},n}\big)$.
\end{enumerate}
\UNTIL $\iota\geq\kappa$.
\item Compute the radar power $\hat{\pbf}_\text{r}$ by solving:
\begin{subequations}
\label{equ:Pgs}
\begin{gather}
\label{equ:opt_sr_pgs}
\min\limits_{\pbf_\text{r}}~\sum_{n=1}^{N}\frac{\gamma_{\text{rr},n}(\eta_{\text{cr},n}u_n\hat{p}_{\text{c},n}+1)}{\eta_{\text{rr},n}^2(1-u_n)p_{\text{r},n}+\eta_{\text{rr},n}(\eta_{\text{cr},n}u_n\hat{p}_{\text{c},n}+1)},
\\
\text{s.t.}~\sum_{n=1}^N (1-u_n)p_{\text{r},n}\leq P_\text{r},\\~0\leq (1-u_n)p_{\text{r},n}\leq \xi_\text{r},~\forall~n,
\end{gather}
\end{subequations}
\end{enumerate}
\RETURN $\pbf_\text{c}=\hat{\pbf}_\text{c}\odot\ubf$ and $\pbf_\text{r}=(\mathbf{1}_{1\times N}-\ubf)\odot\hat{\pbf}_\text{r}$.
\end{algorithmic}
\end{algorithm}
\section{Numerical Simulations}
\label{sec:simulationresults}
In this section, numerical results are presented to illustrate the performance of different methods for spectrum sharing between multicarrier radar and communication systems. Specifically, we compare the proposed \textbf{joint design} in Section \ref{subsec:jointdesign} and \textbf{unilateral design} in Section \ref{subsec:UniDesign} with the heuristic \textbf{greedy search} method. In addition, we include the optimum radar output SINR obtained when the communication system is absent (denoted as \textbf{comm absent}) as an upper bound.
Unless stated otherwise, the number of subcarriers is $N=16$, the convergence tolerance is $0.01$, the noise variances are $\sigma^2_\text{r}=\sigma_\text{c}^2=1$, and the communication throughput constraint is $\kappa=2.5$ bits/s/Hz. The subcarrier channel coefficients
$\alpha_{\text{rr},n}$, $\alpha_{\text{cc},n}$, $\beta_{\text{rc},n}$, $\beta_{\text{rr},n}$, and $\beta_{\text{cr},n}$ are generated from the complex Gaussian distributions
$\mathcal{C}\mathcal{N}(0,\sigma_{\text{rr}}^2)$, $\mathcal{C}\mathcal{N}(0,\sigma_{\text{cc}}^2)$,
$\mathcal{C}\mathcal{N}(0,\sigma_{\text{rc}}^2)$, $\mathcal{C}\mathcal{N}(0,\sigma^2)$, and $\mathcal{C}\mathcal{N}(0,\sigma_{\text{cr}}^2)$,
respectively. The strengths of the desired signals for the two systems,
indicated by $\sigma_{\text{rr}}^2$ and $\sigma_{\text{cc}}^2$, are normalized as
$\sigma_{\text{rr}}^2=\sigma_{\text{cc}}^2=1$, and the clutter strength is $\sigma^2=0.05$. In the sequel, we consider two coexistence scenarios characterized by the strength of the cross interference:
\begin{itemize}
\item Case 1 (weak cross interference): $\sigma_{\text{rc}}^2=\sigma_{\text{cr}}^2=0.01$.
\item Case 2 (strong cross interference): $\sigma_{\text{rc}}^2=\sigma_{\text{cr}}^2=0.1$.
\end{itemize}
In the simulations, 50 channel realizations are used to obtain the average performance.
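For reproducibility, the channel statistics above translate into the following plain NumPy sketch (the random seed is an arbitrary choice):
\begin{verbatim}
import numpy as np

def cgauss(shape, var, rng):
    # Draw CN(0, var) coefficients.
    return np.sqrt(var / 2.0) * (rng.standard_normal(shape)
                                 + 1j * rng.standard_normal(shape))

rng = np.random.default_rng(0)
N, trials = 16, 50
s_rc2 = s_cr2 = 0.01                       # Case 1; use 0.1 for Case 2
alpha_rr = cgauss((trials, N), 1.0, rng)   # radar desired channel
alpha_cc = cgauss((trials, N), 1.0, rng)   # communication desired channel
beta_rc = cgauss((trials, N), s_rc2, rng)  # radar-to-comm interference
beta_cr = cgauss((trials, N), s_cr2, rng)  # comm-to-radar interference
beta_rr = cgauss((trials, N), 0.05, rng)   # clutter
\end{verbatim}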
First, we consider the computational complexity of the conventional alternating optimization approach, which decomposes the original problem into two subproblems in $\pbf_\text{r}$ and $\pbf_\text{c}$, and of the proposed non-alternating method discussed in Section \ref{subsec:jointdesign}. Fig.~\ref{fig:CPUtime} shows the CPU time measured by Matlab versus the total number of subcarriers $N$ for Case 1, where $P_\text{r}=P_\text{c}=600$ and $\kappa=1.5$. It can be seen that the complexity of both methods grows as the number of subcarriers increases. However, the alternating algorithm takes a longer time to converge for all cases considered. In particular, the alternating algorithm is around 8 times slower than the proposed non-alternating method at $N=512$.
\begin{figure}[t]
\centering
\includegraphics[width=3.1in]{Fig2}
\caption{Computer simulation time versus the number of subcarriers for the conventional alternating algorithm and the proposed non-alternating algorithm.}
\label{fig:CPUtime}
\end{figure}
Fig.~\ref{fig:subfigureSINR} shows the output radar SINR versus the total radar transmission power when $P_{\text{c}}=600$ and $\kappa=2.5$ for Case 1 and Case 2, respectively. It can be seen from Fig.~\ref{fig:subfigureSINR} (a) that with weak cross interference (Case 1), the output SINR of the joint design is very close to that of the comm absent scenario since the weak cross interference creates limited impact from one system to the other. On the other hand, there is a notable performance loss for the radar unilateral design due to the fixed communication power allocation. When the cross interference gets stronger (Case 2), as indicated in Fig.~\ref{fig:subfigureSINR} (b), both the joint design and unilateral design degrade, although the unilateral design experiences a more severe performance loss. In both Case 1 and Case 2, the joint design and unilateral design, which involve subcarrier sharing between the radar and communication systems, outperform the GS method, which is a subcarrier-allocation based method. As the total radar transmission power increases, the output SINR of all considered scenarios increases.
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width=3in]{Fig3a}\\
(a)\\
\includegraphics[width=3in]{Fig3b}\\
(b)
\end{tabular}
\caption{Output SINR versus the total radar transmission power. (a) Weak cross interference (Case 1); (b) strong cross interference (Case 2).}
\label{fig:subfigureSINR}
\end{figure}
Next, we evaluate the effects of the communication throughput constraint. Fig.~\ref{fig:subfigureCap} shows the output SINR versus $\kappa$, where $P_\text{r}=P_\text{c}=600$. It can be seen that as the communication throughput constraint increases, the output SINR of all methods except for the comm absent case degrades. This is because the communication system needs to increase its transmission power to meet the increasing throughput constraint, which causes stronger interference to the radar system. Note, however, that the degradation of the joint design and unilateral design is considerably smaller in Case 1 than in Case 2.
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width=3in]{Fig4a}\\
(a)\\
\includegraphics[width=3in]{Fig4b}\\
(b)
\end{tabular}
\caption{Output SINR versus the communication service constraint $\kappa$. (a) Weak cross interference (Case 1); (b) strong cross interference (Case 2).}
\label{fig:subfigureCap}
\end{figure}
Fig.~\ref{fig:contourplot} depicts the contour plot of the output SINR versus the total transmission power $P_\text{r}$ and $P_\text{c}$. Each plot contains the isolines of the output SINR with a stepsize of 60. For the comm absent design, the contour lines are vertical, as its output SINR only depends on the total radar transmission power. The contour plot of the joint design is almost identical to that of the comm absent in Case 1 as indicated by Fig.~\ref{fig:contourplot} (a). This is because when the cross interference is weak, the impact from one system to the other is limited. On the other hand, the greedy search design has the worst performance since it requires the most radar and communication transmission power to achieve the same output SINR.
Fig.~\ref{fig:contourplot} (b) shows that the comm absent and greedy search in Case 2 share the same performance trend as those in Fig.~\ref{fig:contourplot} (a) since they are independent of cross interference. On the other hand, both the joint design and unilateral design are observed to degrade in Case 2, although the latter exhibits a larger performance degradation.
\begin{figure}[!hbt]
\centering
\begin{tabular}{cc}
\includegraphics[width=3in]{Fig5a}\\
(a)\\
\includegraphics[width=3in]{Fig5b}\\
(b)
\end{tabular}
\caption{Contour plot of the output SINR versus the total radar power $P_\text{r}$ and total communication power $P_\text{c}$. (a) Case 1; (b) Case 2.}
\label{fig:contourplot}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=3in]{Fig6}
\caption{The channel strength of $\gamma_{\text{rr},n}$ and $\gamma_{\text{cc},n}$.}
\label{fig:channel}
\end{figure}
To offer further insight, we look into the specific power allocations provided by the different designs. We assume the multicarrier systems employ $N=128$ subcarriers divided into four groups, each consisting of 32 subcarriers. The normalized INRs for the cross interference are fixed as $\eta_{\text{rc},n}=\eta_{\text{cr},n}=0.01$, $n=1,\dots,128$, and the normalized CNR is $\eta_{\text{rr},n}=0.05$. The desired channel strengths $\gamma_{\text{rr},n}$ and $\gamma_{\text{cc},n}$ are depicted in Fig.~\ref{fig:channel}, which shows that the first group of subcarriers ($n=1,\dots,32$) is good for both radar and communication, the second group ($n=33,\dots,64$) is bad for both systems, the third group ($n=65,\dots,96$) is good for radar but bad for the communication system, and the fourth group ($n=97,\dots,128$) is the opposite of the third group. The other parameters are $P_\text{r}=P_\text{c}=600$ and $\kappa=2.5$ bits/s/Hz.
The specific subcarrier power allocation results are shown in Fig.~\ref{fig:subfigurepower}, where the resulting SINRs obtained by the greedy search, unilateral design, and joint design are 28.4 dB, 30.2 dB, and 31.3 dB, respectively. It is seen from Fig.~\ref{fig:subfigurepower} (a) that the greedy search design assigns the communication system its best subcarriers ($n=1,\dots,32$ and $n=99,\dots,128$) until the throughput constraint is satisfied, whereas the radar employs the remaining subcarriers to maximize its SINR as shown in Fig.~\ref{fig:subfigurepower} (b). For the unilateral design, the communication system first utilizes water-filling to allocate its power [cf. \eqref{equ:P_u}], and then the radar maximizes its output SINR based on \eqref{equ:P4}. Interestingly, it is observed that the joint design reduces the communication power on the first and third groups of subcarriers to lower its interference to the radar and, at the same time, increases the communication power on groups 2 and 4. This leads to an improved SINR for the radar system.
\begin{figure}[!hbt]
\centering
\begin{tabular}{cc}
\includegraphics[width=3in]{Fig7a}\\
(a)\\
\includegraphics[width=3in]{Fig7b}\\
(b)
\end{tabular}
\caption{Power allocation for $P_\text{r}=P_\text{c}=600$ and $\kappa=2.5$ bits/s/Hz. (a) Communication power allocation; (b) radar power allocation.}
\label{fig:subfigurepower}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
Power allocation based spectrum sharing between multicarrier radar and communication systems was considered by maximizing the radar output SINR while meeting a communication throughput requirement along with total/peak power constraints. A joint design as well as a unilateral design were proposed to tackle the coexistence problem. Through suitable reformulation, the nonconvex joint design was solved by a computationally efficient non-alternating method, while the unilateral design was solved by a Taylor expansion based iterative convex relaxation procedure. Simulation results validated the effectiveness of the proposed spectrum sharing methods over the subcarrier-allocation based GS scheme.
\bibliographystyle{IEEEtran}
\section{Introduction}
Everyone has their life's precious moments captured in photographs. Photographs may tell stories about old memories such as a wedding or a birthday party. Although modern cameras have many techniques to correct colors and enhance image quality, the natural color style may not express these stories well. Therefore, many powerful photo editing tools (e.g., Lightroom \cite{lr}) have been released to enrich the preciousness of photographs. However, professional tools require professional skills and knowledge in photography. This makes it difficult for end-users to beautify their photos, often creating unexpected color styles. Motivated by that, many photo applications provide fixed filters to beautify photos conveniently. Unfortunately, the filters are limited and sometimes do not meet the users' expectations. Experienced users instead try to mimic the color style of a well-retouched photo, which gives an overall picture of their intended color style. This reveals a correlation between human behavior and the color style transfer task. Inspired by that correlation, we present a supervised approach for color style transfer based on blending and retouching photos. Additionally, we design a specific neural network named Deep Preset to stylize photos and predict the low-level image transformation behind a well-retouched photo.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{graphics/lc_story-retouch2-2.jpg}
\caption{Expert C \cite{bychkovsky2011learning} retouched a raw photo (\textit{left}) to achieve a more natural look (\textit{middle}) using global adjustments. Beyond color correction, the photo can convey a better vibe using both global and local adjustments, as in presets shared over the internet; for example, a vintage style (\textit{right}). Local adjustments can even change the targeted colors (\textit{the whales}) without distortion, enabling a novel supervised approach for color style transfer.}
\label{fig:story-retouch}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{graphics/lc_story-overall_2.png}
\caption{Our overall concept and the problems of the previous works Reinhard et al. \cite{reinhard2001color}, Monge-Kantorovitch Linear (MKL) \cite{pitie2007linear}, Fast Photo Style (FPS) \cite{li2018closed}, WCT$^2$ \cite{yoo2019photorealistic}, PhotoNAS \cite{an2019ultrafast} shown with PSNR$\uparrow$ / LPIPS$\downarrow$ \cite{zhang2018unreasonable}. ($\uparrow$: higher is better, $\downarrow$: lower is better).}
\label{fig:story}
\end{figure*}
\textbf{Color correctness and blending}. Nowadays, most digital cameras have post-processing techniques to address the shortcomings of the sensor. This provides natural colors for the captured pictures before saving. Unfortunately, raw photos may not fully show what human eyes actually observe due to a lack of light exposure, low-cost devices, etc. Experienced users then use professional tools to manually correct the global toner (e.g., lightness, white balance, contrast, etc.). Hence, the users' photo adjustments in correcting colors become valuable to a community wishing computers could handle this task automatically. This motivated Bychkovsky et al. \cite{bychkovsky2011learning} to create the high-quality reference dataset MIT-Adobe FiveK by asking five photography students to retouch raw images. Their work opens a supervised approach for correcting colors and predicting a reasonable photo adjustment. Thanks to the development of deep learning, the problems in automatic color correction are handled surprisingly well. For example, Afifi et al. proposed methods to correct exposure \cite{afifi2020learning} and white balance \cite{afifi2020deepWB}. The Exposure of Hu et al. \cite{hu2018exposure} directly predicts a set of photo adjustment operations, retouching large photos in real time. Additionally, their work provides users a scheme for adjusting the photo afterward. Since the mentioned works perform color correction for raw images, they only adjust the global toner to avoid local changes, retaining the originality and consistency of the captured colors and providing naturalness. In our work, beyond color correction and global adjustments, we also exploit other blending styles and local adjustments, presenting a novel scheme for color style transfer.
In reality, besides the style of naturalness, experienced users blend colors according to their purposes using not only global adjustments (e.g., brightness, contrast, saturation, etc.) but also local adjustments (e.g., red hue shifting, blue saturation adjustment). For example, a memorial photo is retouched in a vintage style telling an old story with a nostalgic vibe; besides, local adjustments can change the blue colors of \textit{the whales} to \textit{green ones}, as shown in Figure \ref{fig:story-retouch}. After the adjustment is done, all edits are stored as a group of settings (a preset) representing low-level color transformation operations. Usually, on the internet, a preset is shared together with a photo retouched by that preset, helping end-users without photography knowledge to understand the color style before using it. This opens a novel scheme for generating photos with a homologous color style for training. The ill-posed problem is how to generalize features representing the color transformation, even when the image content is different, and build an efficient color style transfer. In this work, we consider two approaches: 1) we let the proposed Deep Preset predict the applied preset, i.e., the photo adjustments behind the retouched reference, so that the features extracted from various image contents define the transformation. However, predicting an accurate preset is a difficult issue; we thus 2) add a Positive Pair-wise Loss (PPL) function to minimize distances between same-preset-applied photos in latent space. Consequently, the extracted features are robust, enhancing color style transfer for our Deep Preset.
\textbf{Photorealistic Color/Style Transfer (PCT/PST)}. In prior global color transfer works, Reinhard et al. \cite{reinhard2001color} first proposed a low-level computational color transfer method by matching mean and standard deviation. Afterward, Pitie et al. made many efforts in automated color transfer \cite{pitie2005n, pitie2005towards, pitie2007automated} and introduced a transformation based on Monge-Kantorovitch Linear (MKL) \cite{pitie2007linear}. Nowadays, these traditional works are still useful in other fields, such as creating an image harmonization dataset \cite{DoveNet2020}. Adopting the success of deep learning techniques, Gatys et al. \cite{gatys2016image} present an optimization-based method Neural Style Transfer (NST) transferring an artistic style into a photo using convolutional neural networks. Thanks to the surprising performance of NST, the style transfer field gains huge attention from researchers around the world afterward and is growing rapidly. For example, Johnson et al. \cite{johnson2016perceptual} achieve real-time performance in style transfer using a feed-forward deep neural network. Huang et al. \cite{huang2017arbitrary} create a novel way to transform contextual features based on mean and standard deviation (AdaIN); meanwhile, Li et al. \cite{li2017universal} apply whitening and coloring (WCT) for features transform. However, the mentioned methods are designed for artistic stylization rather than photorealistic stylization, which requires high performance in retaining structural details. Therefore, Luan et al. \cite{luan2017deep} propose a regularization for NST to prevent distortion. However, the optimization-based method costs long computational time, and their result is still distorted. Li et al. \cite{li2018closed} propose an enhanced photo stylization PhotoWCT based on WCT with post-processing such as smoothing and filtering techniques. Based on PhotoWCT, Yoo et al. \cite{yoo2019photorealistic} present a progressive strategy transferring style in a single pass and propose Wavelet Corrected Transfer (WCT$^2$) with wavelet pooling/unpooling. Furthermore, they do not need any post-processing; however, their performance still relies on semantic masks. Recently, An et al. \cite{an2019ultrafast} propose asymmetric auto-encoder PhotoNAS without requiring post-processing and guided masks. Their novelty includes two modules, Bottleneck Feature Aggregation (BFA), Instance Normalized Skip Link (INSL), and Network Architecture Search (NAS) for optimizing network architecture under complexity constraint.
However, in blending and retouching photos, the previous methods transfer exact colors, introducing degradation, rather than learning the color transformation/style representation, that is, "\textit{what beautifies the reference}". Consequently, their results show content-mismatched colors, colors hard-copied from the reference, and distortion, as shown in Figure~\ref{fig:story}. Meanwhile, the end-users desire a homologous color style as in a well-retouched reference, especially in sensitive cases such as portraits. In this work, we define the color style as a preset of low-level image transformation operations converting a photo with natural colors (closest to what humans actually observe) to a retouched one.
Based on that definition, we present a novel training scheme for color style transfer \textit{with ground-truth} by leveraging various user-generated presets. As a result, having ground-truth helps our model converge in the right direction rather than based on extracted features and transform methods. Furthermore, we propose the Deep Preset to 1) learn well-generalized features representing color style transforming the input (natural) to reference (retouched), 2) estimate the preset applied on the reference, 3) synthesize the well-retouched input. Please check our supplemental document for the visual correlation between photos retouched by the same preset.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{graphics/lc_net-2.jpg}
\caption{Deep Preset transferring the color style of \textit{Reference} $Z$ to \textit{Input} $X$.}
\label{fig:dp}
\end{figure*}
Our contributions are as follows:
\begin{itemize}
\item We present a supervised approach for color style transfer by exploiting various user-generated presets in blending and retouching photos. Furthermore, a conducted user study shows that two different images applied with the same preset are recognizable as sharing a color style.
\item We propose a specific deep neural network, named Deep Preset, to transfer the color style of a reference to a photo and predict low-level image transformation behind. As a result, our work outperforms previous works in transferring color style qualitatively and quantitatively.
\item Our Positive Pair-wise Loss (PPL), optimizing the distances between latent features of the photos applied by the same preset, shows the capability to stabilize color transformation and enhance stylized output.
\item Our work can automatically beautify a photo by selecting its suitable reference among well-retouched photos based on the perceptual measurement \cite{zhang2018unreasonable}.
\end{itemize}
\section{Deep Preset}
\subsection{Overview}
Blending and retouching photos helps everyone to enhance the preciousness of their life's moments captured in photographs. Color style is the way of conveying expression. However, it is not easy for end-users to create a plausible color style for their context. They thus search for a well-retouched photo having a similar context as a reference (a). Even if a suitable sample is found, it is difficult for end-users without knowledge in photography to retouch their photos using a powerful photo editing application (b).
In our work, we solve (b) using our proposed Deep Preset, which can synthesize a similar color style from the reference for a specific photo. Additionally, our Deep Preset considers which image transformation operations (preset) have been applied to the reference and learns the features representing the color transformation from natural colors (input) to retouched ones (reference) for different image content.
Regarding problem (a), we also provide a strategy to find a reference among many well-retouched photos by matching contextual information \cite{zhang2018unreasonable}. Consequently, the end-users can retouch their photos in one click. Additionally, we minimize the distance between photos with a similar color transformation in latent space to enhance the generated color style and stabilize the preset estimation. Our performance is thus improved.
Our Deep Preset learns the color transformation from a natural photo $X \in \mathbb{R}^{H \times W \times 3}$ to reference $Z \in \mathbb{R}^{H \times W \times 3}$ and generates the stylized photo $\hat{Y} \in \mathbb{R}^{H \times W \times 3}$. Furthermore, our work also predicts the applied preset $P \in \mathbb{R}^{69}$ representing the hyper-parameters of $69$ low-level image transformation operations retouching $Z$, as shown in Figure \ref{fig:dp}. Besides, we extract embeddings $F_{Z}$ and $F_{Z'}$, $\forall F_{Z}, F_{Z'} \in \mathbb{R}^{1024}$ from $Z$ and $Z'$ while predicting the applied preset, where $Z'$ is the random photo retouched by $P$ same as $Z$. Please check our supplemental document for the illustration of how $Z$ and $Z'$ are selected and processed while training.
Our advantages are as follows:
\begin{itemize}
\item Our models can be converged in the right direction with ground-truth. Meanwhile, previous works are mostly based on feature-based transform techniques.
\item Learning the low-level image transformation, rather than transferring/mapping exact colors, can reduce the sensitiveness of color style transfer caused by mismatched image content.
\item Our Positive Pair-wise Loss (PPL) function makes preset estimation stable and enhances generated color style.
\end{itemize}
\subsection{Network Architecture}
We adopt the U-Net \cite{ronneberger2015u}, which has an encoder-decoder architecture, to design the Deep Preset.
Our network includes four main components: Encoder $T$, Encoder $C$, Linear Layers $L$ and Decoder $G$.
First of all, the encoder $T$ leverages the content $X$ and the reference $Z$ to synthesize feature maps representing the color transformation. Meanwhile, the encoder $C$ extracts contextual information, preparing for blending features between $T$ and $C$. Afterwards, the linear layers $L$ leverage the final feature map of $T$ to extract the transformation embedding $F_{*}$ and estimate the preset $P$, as follows:
\begin{equation}
F_{*}, \hat{P} = L(T(X,Z))
\label{eq_p}
\end{equation}
where $*$ can be $Z$ or $Z'$, $\hat{P}$ is the estimated preset. Finally, the generator $G$ leverages the concatenated features between $T$ and $C$ to synthesize the stylized photo $\hat{Y}$, as:
\begin{equation}
\hat{Y} = G(T(X,Z) \bullet C(X))
\end{equation}
where $\bullet$ represents concatenations of extracted feature maps between $T$ and $C$ corresponding to feeding order, as shown in Figure \ref{fig:dp}. Please check our supplemental document for the network's technical details.
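To make the data flow of Eq.~(\ref{eq_p}) and the generator equation concrete, the following PyTorch sketch mimics the four components with toy convolutional stacks. Only the tensor routing, i.e., the concatenated input of $T$, the concatenation $T(X,Z)\bullet C(X)$, and the split of $L$'s output into $F_{*}\in\mathbb{R}^{1024}$ and $\hat{P}\in\mathbb{R}^{69}$, follows the text; the layer configurations are placeholders, not the actual architecture described in our supplemental document.
\begin{verbatim}
import torch
import torch.nn as nn

class DeepPresetSketch(nn.Module):
    # Toy stand-in for the four components T, C, L, and G.
    def __init__(self, feat=64, emb=1024, n_settings=69):
        super().__init__()
        self.emb = emb
        self.T = nn.Sequential(nn.Conv2d(6, feat, 3, 2, 1), nn.ReLU(),
                               nn.Conv2d(feat, feat, 3, 2, 1), nn.ReLU())
        self.C = nn.Sequential(nn.Conv2d(3, feat, 3, 2, 1), nn.ReLU(),
                               nn.Conv2d(feat, feat, 3, 2, 1), nn.ReLU())
        self.L = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                               nn.Linear(feat, emb + n_settings))
        self.G = nn.Sequential(nn.Conv2d(2 * feat, feat, 3, 1, 1),
                               nn.ReLU(),
                               nn.Upsample(scale_factor=4),
                               nn.Conv2d(feat, 3, 3, 1, 1))

    def forward(self, X, Z):
        t = self.T(torch.cat([X, Z], dim=1))    # transformation features
        out = self.L(t)
        F_star, P_hat = out[:, :self.emb], out[:, self.emb:]  # Eq. (1)
        Y_hat = self.G(torch.cat([t, self.C(X)], dim=1))      # Eq. (2)
        return Y_hat, P_hat, F_star

net = DeepPresetSketch()
Y_hat, P_hat, F_star = net(torch.rand(1, 3, 64, 64),
                           torch.rand(1, 3, 64, 64))
\end{verbatim}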
\subsection{Loss functions}
In this work, we propose a new scheme to train color style transfer with ground-truth; therefore, our loss functions are based on the ground-truth rather than extracted features of content and reference images. Consequently, our models can be converged the right way to be closer to the ground-truth. We apply Mean Square Error (MSE) to directly minimize the distance between our stylized $\hat{Y}$ and the ground-truth $Y$ as:
\begin{equation}
\mathcal{L}_{MSE} = \frac{1}{N} \sum_{i=1}^{N} {|| Y_i - \hat{Y}_i ||}^2_2
\end{equation}
where N is the batch size. Additionally, we adopt the perceptual loss LPIPS \cite{zhang2018unreasonable} to enhance contextual details as:
\begin{equation}
\mathcal{L}_{lpips} = LPIPS(\hat{Y},Y)
\end{equation}
Besides, we also predict the preset applying to the $Z$. The estimated preset $\hat{P}$ is observed as:
\begin{equation}
\mathcal{L}_{p} = \frac{1}{N} \sum_{i=1}^{N} {||P_i - \hat{P}_i||}_{1}
\end{equation}
where $P$ is the ground-truth preset consisting of the hyper-parameters of the low-level image transformation operations such as color shifting. However, predicting an exact preset is difficult due to the variety of possible adjustments. The prediction may be influenced by different image contents, degrading our stability. We expect that a well-trained model can provide similar features representing a color transformation (preset) for all photos retouched by the same preset. Therefore, in the training stage, we randomly select a photo $Z'$ which is also retouched by $P$, extract the embedding $F_{Z'}$ (as described in Equation \ref{eq_p}), and finally minimize the error between $F_{Z}$ and $F_{Z'}$, the so-called positive pair-wise error, as:
\begin{equation}
\mathcal{L}_{pp} = \frac{1}{N} \sum_{i=1}^{N} {||{F_{Z'}}_i - {F_{Z}}_i||}_{1}
\end{equation}
Finally, our total loss function is:
\begin{equation}
\mathcal{L}_{total} = \alpha\mathcal{L}_{MSE} + \beta\mathcal{L}_{lpips} + \gamma\mathcal{L}_{p} + \eta\mathcal{L}_{pp}
\end{equation}
where $\alpha, \beta, \gamma, \eta$ are empirically set as $1, 0.5, 0.01, 1$ respectively. Please check our supplemental document for the illustration of the preset prediction and the positive pair-wise loss function.
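A minimal PyTorch sketch of $\mathcal{L}_{total}$ is given below. We use the public \texttt{lpips} package of Zhang et al. \cite{zhang2018unreasonable} for the perceptual term; the AlexNet backbone is our assumption, and the inputs are expected to be scaled to $[-1,1]$.
\begin{verbatim}
import torch.nn.functional as F
import lpips  # perceptual metric of Zhang et al., "pip install lpips"

lpips_fn = lpips.LPIPS(net='alex')  # backbone choice is an assumption

def total_loss(Y_hat, Y, P_hat, P, F_z, F_zp,
               alpha=1.0, beta=0.5, gamma=0.01, eta=1.0):
    # Weighted sum of the four terms with the weights used in the paper.
    l_mse = F.mse_loss(Y_hat, Y)           # L_MSE
    l_lpips = lpips_fn(Y_hat, Y).mean()    # L_lpips
    l_p = F.l1_loss(P_hat, P)              # L_p (preset estimation)
    l_pp = F.l1_loss(F_zp, F_z)            # L_pp (positive pair-wise)
    return alpha * l_mse + beta * l_lpips + gamma * l_p + eta * l_pp
\end{verbatim}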
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{graphics/lc_comparison-ab2.jpg}
\caption{Ablation study on the Positive Pair-wise Loss (PPL) function for our \textit{Generator G}. Training with the PPL achieves higher performance. Results are measured using PSNR/LPIPS/Chi-squared distances. \textbf{Bold} values mean better.}
\label{fig:ab}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{graphics/val_p.PNG}
\caption{Validation on preset prediction with/without our Positive Pair-wise Loss (PPL) function. Presets predicted with the PPL are more stable.}
\label{fig:pp}
\end{figure}
\subsection{Data Preparation}
\label{sec:data}
In this section, we describe how we collect and pre-process the data for training and evaluation.
\textbf{Lightroom presets}.
Our data processing is mostly based on Adobe Lightroom \cite{lr}, the most powerful photo editing software currently. We collect 510 user-generated presets, 500 presets for training, and 10 for testing. Additionally, we only select 69 settings (low-level color transformation operations). Each setting has a value representing how strongly colors are shifted in a specific way. Therefore, a preset with 69 settings is represented as a 69-dimensional vector. All elements are normalized to $[-1, 1]$ based on the min/max values that the end-users can set.
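The normalization itself is a per-setting linear map; a short sketch follows (the min/max ranges are defined by Lightroom per setting and are placeholders here):
\begin{verbatim}
import numpy as np

def normalize_preset(values, vmin, vmax):
    # Map each of the 69 settings to [-1, 1] by its adjustable range.
    values, vmin, vmax = map(np.asarray, (values, vmin, vmax))
    return 2.0 * (values - vmin) / (vmax - vmin) - 1.0

# Placeholder range; e.g., a "Contrast"-like setting in [-100, 100].
p = normalize_preset([25.0], [-100.0], [100.0])   # -> [0.25]
\end{verbatim}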
\textbf{Training data}.
We script Lightroom \cite{lr} to generate $601,200$ photos using $1,200$ high-definition photos from Flickr2K \cite{Lim_2017_CVPR_Workshops} and 501 pre-processed presets including the base color style. Since our training target is to convert a photo with correct colors (natural) to a retouched version, we only choose photos with a natural look, i.e., colors likely close to those originally taken by a camera and corrected by humans. All training photos are scaled to $720$ pixels keeping the aspect ratio and compressed by JPEG for efficient storage.
\textbf{Testing data}.
We prepare four subsets for evaluation from DIV2K \cite{Timofte_2018_CVPR_Workshops}, MIT-Adobe FiveK \cite{bychkovsky2011learning}, and Cosplay Portraits (CP) representing a sensitive case in color style transfer. For a fair comparison to the previous works, the test data also includes the retouched photos (check our supplemental document for more detail). Regarding the color style, we utilize $10$ user-generated presets to stylize the references. Particularly, from the DIV2K \cite{Timofte_2018_CVPR_Workshops} validation set, we prepare 1 content image to be stylized by 100 reference images (1x100x10) for the concept of different references stylizing the same image content, and 10 content images and 10 reference images (10x10x10). To have various contexts in testing, we randomly select a set of 10x10x10 including all contextual categories such as \textit{indoor}, \textit{outdoor}, \textit{day}, \textit{sun sky}, etc. from MIT-Adobe FiveK \cite{bychkovsky2011learning}. Since our work is color transfer instead of color enhancement, and the raw images from \cite{bychkovsky2011learning} are not always taken and processed to match our expectation of naturalness, we choose the images retouched by \textit{Expert C} only. Regarding the test images of CP, we choose a set of 10x10x10 including both natural colors originally taken by modern DSLR cameras and retouched ones. In summary, we evaluate all methods on $1000$ samples per set and $4000$ samples in total. All photos are stored in JPEG and resized to $512 \times 512$ on-the-fly using bicubic interpolation while testing.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{graphics/lc_comparison-main.jpg}
\caption{Qualitative comparison between our Deep Preset (\textit{Generator}) and the previous works Reinhard et al. \cite{reinhard2001color}, MKL \cite{pitie2007linear}, Deep Priors \cite{zhang2017real}, FPS \cite{li2018closed}, WCT$^2$ \cite{yoo2019photorealistic}, PhotoNAS \cite{an2019ultrafast}. \textit{H, S, B} denotes Hue, Saturation, and Brightness, respectively. \textit{Left-to-right}: DIV2K \cite{Timofte_2018_CVPR_Workshops} 1x100x10, DIV2K \cite{Timofte_2018_CVPR_Workshops} 10x10x10, Adobe-MIT FiveK \cite{bychkovsky2011learning} 10x10x10, and Cosplay Portraits 10x10x10.}
\label{fig:comparison}
\end{figure*}
\begin{table*}[t]
\caption{Quantitative comparison between our models with/without Positive-Pairwise Loss (PPL), the previous works Deep Priors \cite{zhang2017real}, Fast Photo Style Transfer (FPS) \cite{li2018closed}, WCT$^2$ \cite{yoo2019photorealistic}, and PhotoNAS \cite{an2019ultrafast}. \textbf{Bold} values indicate the best performance. $\uparrow$: higher is better, $\downarrow$: lower is better. We choose our \textit{Generator*} for other comparisons throughout the paper.}
\label{tab:comparison}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|c|c|c|c|l|l|l|l|}
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Method}} & \multicolumn{4}{c|}{DIV2K (1x100x10 and 10x10x10)} & \multicolumn{4}{c|}{MIT-Adobe FiveK (10x10x10)} & \multicolumn{4}{c|}{Cosplay Portraits (10x10x10)} & \multicolumn{4}{c|}{Average} \\ \cline{3-18}
\multicolumn{2}{|c|}{} & H-Corr $\uparrow$ & H-CHI $\downarrow$ & PSNR $\uparrow$ & LPIPS $\downarrow$ & H-Corr $\uparrow$ & H-CHI $\downarrow$ & PSNR $\uparrow$ & LPIPS $\downarrow$ & H-Corr $\uparrow$ & H-CHI $\downarrow$ & PSNR $\uparrow$ & LPIPS $\downarrow$ & \multicolumn{1}{c|}{H-Corr $\uparrow$} & \multicolumn{1}{c|}{H-CHI $\downarrow$} & \multicolumn{1}{c|}{PSNR $\uparrow$} & \multicolumn{1}{c|}{LPIPS $\downarrow$} \\ \hline \hline
\multicolumn{2}{|l|}{Reinhard et. al. \cite{reinhard2001color}} & 0.3069 & 915.63 & 15.77 & 0.2620 & 0.2198 & 969.67 & 13.62 & 0.3222 & 0.2341 & 1104.06 & 14.03 & 0.2764 & 0.2536 & 996.45 & 14.47 & 0.2869 \\ \hline
\multicolumn{2}{|l|}{MKL \cite{pitie2007linear}} & 0.3390 & 662.45 & 16.20 & 0.2607 & 0.1785 & 819.37 & 13.59 & 0.3151 & 0.3037 & 566.81 & 14.41 & 0.2545 & 0.2737 & 682.88 & 14.73 & 0.2768 \\ \hline
\multicolumn{2}{|l|}{Deep Priors \cite{zhang2017real}} & 0.5420 & 749.50 & 20.53 & 0.2033 & 0.4049 & 785.87 & 16.84 & 0.2947 & 0.4735 & 656.08 & 17.68 & 0.2896 & 0.4735 & 730.49 & 18.35 & 0.2625 \\ \hline
\multicolumn{2}{|l|}{FPS} & 0.3856 & 1232.97 & 14.71 & 0.3025 & 0.1800 & 1843.86 & 12.05 & 0.3902 & 0.3363 & 1629.30 & 12.93 & 0.3105 & 0.3006 & 1568.71 & 13.23 & 0.3344 \\ \hline
\multicolumn{2}{|l|}{$\text{WCT}^2$ \cite{yoo2019photorealistic}} & 0.3917 & 1269.91 & 16.40 & 0.2726 & 0.2043 & 5916.76 & 13.18 & 0.3633 & 0.3201 & 2950.11 & 13.98 & 0.2775 & 0.3054 & 3378.92 & 14.52 & 0.3045 \\ \hline
\multicolumn{2}{|l|}{PhotoNAS \cite{an2019ultrafast}} & 0.4129 & 824.74 & 17.06 & 0.2559 & 0.1898 & 7924.73 & 13.51 & 0.3849 & 0.3119 & 3853.21 & 14.29 & 0.2975 & 0.3049 & 4200.89 & 14.95 & 0.3128 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Ours w/o PPL}} & Preset Prediction & 0.6416 & 509.71 & 22.62 & 0.1139 & 0.6094 & 673.36 & 21.07 & 0.1179 & 0.6569 & 389.60 & 21.88 & 0.1042 & 0.6360 & 524.23 & 21.86 & 0.1120 \\ \cline{2-18}
\multicolumn{1}{|c|}{} & Generator & 0.6933 & 558.56 & 22.87 & 0.1027 & \textbf{0.6415} & 454.99 & \textbf{21.86} & 0.1105 & 0.6713 & 309.17 & 21.78 & 0.0992 & 0.6687 & 440.91 & 22.17 & 0.1041 \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Ours w PPL}} & Preset Prediction & 0.6194 & \textbf{299.34} & 22.03 & 0.1275 & 0.6276 & \textbf{319.79} & 20.98 & 0.1160 & 0.6222 & \textbf{258.67} & 21.44 & 0.1157 & 0.6231 & \textbf{292.60} & 21.48 & 0.1197 \\ \cline{2-18}
\multicolumn{1}{|c|}{} & Generator* & \textbf{0.7006} & 552.45 & \textbf{23.12} & \textbf{0.0980} & 0.6313 & 968.51 & 21.71 & \textbf{0.1093} & \textbf{0.6927} & 325.89 & \textbf{22.15} & \textbf{0.0960} & \textbf{0.6749} & 615.62 & \textbf{22.33} & \textbf{0.1011} \\ \hline
\end{tabular}%
}
\end{table*}
\section{Experimental results}
\subsection{On our Positive Pair-wise Loss (PPL) function}
The encoder $T$ of Deep Preset learns the color transformation representation with an auxiliary regression task, the preset prediction. However, it is difficult to estimate an accurate preset, leading to instability of the extracted features representing a specific transformation under different image contents.
Therefore, we consolidate the color transformation by optimizing distances between photos having the same color style in latent space. The extracted features are thus robust for transformation making preset prediction stable while training. To prove it, we train two models with/without PPL function in the same condition and compare them qualitatively and quantitatively using histogram correlation (H-Corr), histogram Chi-squared (H-CHI), Peak Signal-to-Noise Ratio (PSNR), and the perceptual metric LPIPS \cite{zhang2018unreasonable}. Regarding evaluating the preset prediction, instead of calculating an error between predicted presets $\hat{P}$ and the actual presets $P$, we apply the $\hat{P}$ back to the content images to achieve the stylized photos so that our preset prediction can be further compared to the previous works.
As a result, in predicting presets, the model trained without PPL (non-PPL model) outperforms the model trained with PPL (PPL model) with a higher H-Corr of \textbf{0.6360}, a higher PSNR of \textbf{21.86} dB, and a lower LPIPS of \textbf{0.1120}. However, when directly generating the output by the Generator \textit{G}, the PPL model quantitatively outperforms the non-PPL model with a higher H-Corr of \textbf{0.6749}, a higher PSNR of \textbf{22.33} dB, and a lower LPIPS of \textbf{0.1011}, as shown in Table \ref{tab:comparison}. Additionally, the PPL model stabilizes the preset prediction under different image contents for a specific color style, as shown in Figure \ref{fig:pp}. Qualitatively, the PPL model gives a color style closer to the reference (the \textit{yellow} tone) with higher PSNR and lower LPIPS, showing better quality; furthermore, it has smaller histogram-related distances, proving the PPL model's superiority in color transformation, as shown in Figure \ref{fig:ab}. Please check our supplemental document for a qualitative comparison of both models' preset prediction stability and further discussion on the trade-offs between preset prediction and PPL.
\subsection{Comparison to recent works}
In this section, we compare our work to Reinhard et al. \cite{reinhard2001color}, Monge-Kantorovitch Linear (MKL) \cite{pitie2007linear}, Fast Photo Style Transfer \cite{li2018closed}, WCT$^2$ \cite{yoo2019photorealistic}, and PhotoNAS \cite{an2019ultrafast}, which present creative techniques in photorealistic color/style transfer. Their works show the capability of transferring textures and colors of a reference into another photo, even changing day to night or summer to winter. However, their transfer is excessive for blending and retouching photos. This work defines the color style as the low-level image transformation (e.g., color-shifting) converting a photo with natural colors to its retouched version. Adopting that idea, our proposed Deep Preset learns the defined color style representation and transforms the base colors with ground-truth supervision instead of transferring exact colors from the reference. Hence, our work does not degrade the image quality but can beautify the input images based on a reference. To prove our proficiency, we compare our work to the mentioned works in quantitative and qualitative ways. Besides, works on reference-guided colorization can be treated as color style transfer. Therefore, we also compare this work to the interactive colorization of Zhang et al. \cite{zhang2017real}. Since our scheme provides the ground-truth, we utilize the previously mentioned similarity metrics such as H-Corr, H-CHI, PSNR, and LPIPS \cite{zhang2018unreasonable} for quantitative comparison on the four subsets described in Section \ref{sec:data}. Furthermore, we qualitatively show a sample from each subset in various contexts to support our quantitative results. Considering this work under the aspects of production, we conduct a user study based on two-alternative forced-choice (2AFC) measuring human perceptual similarity and user preferences.
\textbf{Quantitative comparison}.
We compare our work to the previous works on four subsets consisting of DIV2K \cite{Timofte_2018_CVPR_Workshops} 1x100x10, DIV2K \cite{Timofte_2018_CVPR_Workshops} 10x10x10, MIT-Adobe FiveK \cite{bychkovsky2011learning} 10x10x10, and Cosplay Portraits 10x10x10, as described in Section \ref{sec:data}. As a result, the reference-guided colorization Deep Priors \cite{zhang2017real} outperforms the other previous works on average. Their colorization removes the color channels, then colorizes the black-and-white photo based on a given reference. The generated colors thus have a high correlation with the ground-truth. However, they still suffer from color overflow and mismatch. Meanwhile, this work quantitatively outperforms the previous works in generating a similar color style with a higher H-Corr of \textbf{0.6749} and an H-CHI of \textbf{615.62}. Furthermore, our results also achieve the highest PSNR of \textbf{22.33} dB and the lowest LPIPS of \textbf{0.1011} on average, as shown in Table \ref{tab:comparison}.
\textbf{Qualitative comparison}.
To support our quantitative comparison, we show four samples from the four test subsets described in Section \ref{sec:data}. As a qualitative result, the proposed Deep Preset can beautify the content image using a well-retouched reference without reducing the image quality; meanwhile, the previous works try to transfer exact colors, leading to abnormal colors causing unnaturalness. Particularly, the previous works transfer the \textit{globally bluish tone} of the reference to the whole content image, losing the color of \textit{the girl's hair} in the first two columns. In contrast, our result obtains a plausible color for \textit{the skin} with \textit{blonde hair}, which is closest to the ground-truth. Similarly in the following samples, our Deep Preset provides plausible colors in harmony regardless of the image content; furthermore, our results have the most similar color style to the ground-truth. For example, the \textit{sky} turns \textit{bluer} and the \textit{trees} turn \textit{brightly greener} in the second sample, and the saturation of the text "\textit{Cafe Bastille}" in the third sample is reduced similarly to the ground-truth, even though the context between the content image and the reference is mismatched. Meanwhile, the previous works, one way or another, distort the input images by transferring the exact colors of the references. In the last sample, we directly check the Hue-Saturation-Brightness (HSB) information where the color picker is located. As a result, our work provides the closest (H,S,B) values to the ground-truth, ($\Delta0$, $\Delta-6$, $\Delta5$), compared to \cite{reinhard2001color, pitie2007linear, zhang2017real, li2018closed, yoo2019photorealistic, an2019ultrafast}, as shown in Figure \ref{fig:comparison}. We conclude that our Deep Preset synthesizes the best visual quality, which is essential for production, with the closest color style to the ground-truth compared to the previous works. Please check our supplementary materials for more qualitative comparisons.
\begin{table}[t]
\caption{Three user study scenarios based on two-alternative forced-choice (2AFC) selecting A or B anchored by ground-truth, reference (style), and user preferences. Each method is paired with others twice and evaluated by the probability of being chosen (higher is better). \textbf{Bold} values reveal our outperformance compared to the previous works.}
\label{tab:us}
\resizebox{\linewidth}{!}{%
\begin{tabular}{|l|c|c|c|}
\hline
Method & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}{[}Triplet{]} Anchored\\ by Grouth-truth\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}{[}Triplet{]} Anchored\\ by Reference\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}{[}Pair{]} Anchored\\ by User Preferences\end{tabular}} \\ \hline \hline
Reinhard's \cite{reinhard2001color} & 0.43 & 0.20 & 0.47 \\ \hline
MKL \cite{pitie2007linear} & 0.50 & 0.48 & 0.35 \\ \hline
Deep Priors \cite{zhang2017real} & 0.38 & 0.52 & 0.52 \\ \hline
FPS \cite{li2018closed} & 0.34 & 0.53 & 0.27 \\ \hline
$\text{WCT}^2$ \cite{yoo2019photorealistic} & 0.32 & 0.54 & 0.31 \\ \hline
PhotoNAS \cite{an2019ultrafast} & 0.59 & 0.54 & 0.38 \\ \hline
Ours & \textbf{0.95} & \textbf{0.57} & \textbf{0.85} \\ \hline
Ground-truth & $\infty$ & 0.61 & 0.85 \\ \hline
\end{tabular}%
}
\end{table}
\subsection{User Study}
Our user study includes three scenarios based on the two-alternative forced-choice (2AFC) scheme selecting A or B anchored by (i) the ground-truth having the same image content as A and B, (ii) the reference having different image content from A and B, and (iii) user preferences. Among the mentioned scenarios, (i) represents the human perceptual similarity metric, which has the same meaning as the histogram distances, PSNR, and LPIPS used in Table \ref{tab:comparison}, and (ii) measures whether humans can recognize a similar color style when the content of the input image and the reference is different. Meanwhile, (iii) measures how suitable the compared methods are for production. We first guide the users on how photos retouched by the same preset look, then show examples of the problems in style transfer such as distortion, color overflow, and more. Afterward, we let the user select picture A or B while asking the question "\textit{Which photo is closest to the anchor?}". Regarding judgments based on user preferences, the volunteers are asked "\textit{Which photo do you like most?}", "\textit{Which photo are you willing to share on your social media?}", and related questions. The samples, consisting of triplets for (i) and (ii) and pairs for (iii), are collected randomly from DIV2K \cite{Timofte_2018_CVPR_Workshops} 10x10x10 and Cosplay Portraits 10x10x10 so that each method is paired twice with the other methods; therefore, they are fairly compared. Regarding the users, we invite $23$ volunteers to run the test; among them, \textbf{74\%} have experience in photo adjustment. Eventually, they return $3150$ votes for the three scenarios in total. As shown in Table \ref{tab:us}, our work has the highest probability of being chosen whenever it is compared to the previous works Reinhard's \cite{reinhard2001color}, MKL \cite{pitie2007linear}, Deep Priors \cite{zhang2017real}, FPS \cite{li2018closed}, WCT$^2$ \cite{yoo2019photorealistic}, and PhotoNAS \cite{an2019ultrafast}. Please check our supplemental document for an illustration of our user study.
"\textit{Can humans judge the color style produced by a preset?}" Yes, they can, as shown by the 2AFC experiment anchored by the reference. This scenario exploits human perceptual measurement of the defined color style when the image content is not identical. As a result, the ground-truth retouched by the same preset as the reference has the highest probability of being chosen, \textbf{61\%}. The result also reveals that a preset can represent a color style. Besides, our Deep Preset provides the most similar color style to the reference with \textbf{57\%} of being chosen compared to the previous works \cite{reinhard2001color, pitie2007linear, zhang2017real, li2018closed, yoo2019photorealistic, an2019ultrafast}, as shown in Table \ref{tab:us}. To have an overall observation of the color style, please check our supplemental document for the visual comparison between photos applied with the same/different presets.
"\textit{Does our Deep Preset degrade the image quality?}" No, it does not. Considering the production aspect, we conduct the 2AFC test anchored by user preferences as scenario (iii) shown in Table \ref{tab:us}. As a result, our stylized photos achieve the highest probability of being chosen compared to the previous works \cite{reinhard2001color, pitie2007linear, zhang2017real, li2018closed, yoo2019photorealistic, an2019ultrafast}, the same as the ground-truth. Qualitatively, our work usually provides a satisfying look, as the content images and ground-truth do, as shown in Figure \ref{fig:comparison} and the additional results in our supplemental document.
\section{Conclusion}
We define a novel color style based on low-level image transformations from natural to a retouched domain. Adopting that definition, we present a supervised approach for color style transfer, and propose the Deep Preset designed to not only efficiently transfer the color style but also predict the applied preset behind a well-retouched reference. Besides, we present a Positive Pair-wise Loss (PPL) optimizing distances between the photos applied by the same preset in latent space. An ablation study shows that our PPL can stabilize the preset prediction and enhance stylizing. As a result, the proposed Deep Preset outperforms previous works quantitatively and qualitatively. Furthermore, the conducted user study shows that our results achieve the closest color style to the ground-truth, the closest color style to the reference, and the most satisfaction voted by humans.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Landauer-B\"{u}ttiker formula plays an important role in the study of electronic quantum transport in nanostructures\cite{Landauer1957,Buettiker,TransmissionRMP,Datta}, molecular systems\cite{Modelular1}, and even DNA\cite{DNA1}. It also plays an important role in calculating thermal\cite{Thermal2,Thermal3,Thermal4}, optical\cite{Optical,Optical2} and phonon\cite{PhononTransport} transport in quantum structures. The Landauer-B\"{u}ttiker formula relates the electronic conductance of a two-terminal or multi-terminal device to the quantum transmission\cite{Landauer1957,Buettiker,TransmissionRMP}. The quantum transmission can be expressed in terms of Green's functions, which is a standard numerical tool today\cite{Datta,QuantumTransport,Kwant,MoS2Ribbons}. Since this is a real space method,
it is computationally demanding for systems with a large number of orbital basis states, e.g., large systems\cite{LargeGraphene,LimitQuantumTransport}, biological molecules\cite{DNA1} and (quasi-)incommensurate systems\cite{Amorphous,QuasiCrystal,commensurate1}.
In the recent decade, a powerful numerical tool for treating Hamiltonians on large Hilbert spaces has attracted attention: the kernel polynomial method (KPM), based on, e.g., the Chebyshev expansion\cite{KPMRMP}. In most KPM calculations, the only matrix operations involved are products of sparse matrices (Hamiltonians) with vectors, and matrix traces. For a sparse matrix with dimension $D$, the matrix-vector multiplication is only an $O(D)$ process. Thus the calculation of $N$ moments of Chebyshev terms needs $O(ND)$ operations and time\cite{KPMRMP}. A direct application of KPM is the calculation of the spectral function of an isolated system\cite{KPMRMP,SpectralFunctionKMP1,SpectralFunctionKMP2}. Taking advantage of appropriate analytical continuation, one can arrive at the evaluation of Green's functions\cite{KPMRMP,GreenFunctionKMP}. Expressions of physical quantities in terms of KPM have been developed recently\cite{LocLengthKMP,ResponseFunctionKMP,DynCorr,SJYuan2010,SJYuan2010PRL,SJYuan2011,SJYuan2016,KuboFormula,ZYFan2018}, including applications to superconductors\cite{BdGKPM}, topological materials\cite{RealSpace,LeiLiu2018}, quantum impurity problems\cite{Impurity1,Impurity2,SelfEnergyKMP} and \emph{ab initio} calculations\cite{Abinitio0,Abinitio}. However, these methods are applicable to bulk or isolated systems\cite{KPMRMP,ZYFan2018}, not to scattering processes between leads in open systems, which correspond to realistic experimental setups\cite{Datta}.
In this paper, we will propose some KPM methods to calculate the
Landauer-B\"{u}ttiker transmission in a two-terminal system: a conductor connected to left (L) and right (R) leads, as illustrated in Fig. \ref{FigDevice}. The transmission through the conductor can be written in terms of Green's functions, where the leads manifest themselves as self energies\cite{Datta}. This is the typical context of an open system coupled to a bath\cite{BathKMP,FermionicBath}. We first describe this problem fully as a generalization of the standard bath technique of
KPM\cite{BathKMP}, where we express the needed self energies and the dressed Green's function as Chebyshev polynomials of sparse matrices. To reduce the numerical cost, we then propose two practical improvements. One of them largely simplifies the self-consistent calculation of the self energies, and the second even avoids this self-consistent process, requiring much less time and space than the direct method of matrix evaluation without KPM.
This paper is organized as follows. After the general introduction, in Sections II and III, we briefly introduce the basic knowledge of Chebyshev polynomials and Landauer-B\"{u}ttiker formula, respectively. In Section IV, we describe the algorithm of calculating Landauer-B\"{u}ttiker formula with Chebyshev polynomials, including the calculation of dressed Green's function and lead self energies, following the standard bath method of KPM. To reduce the numerical demanding, we propose two practical improvements in Section V. Some numerical examples are presented in Section VI. In Section VII, we provide a summary and some outlooks for future works.
\section{Chebyshev Expansion and the Kernel Polynomial Method}
In this section, we briefly summarize the definition and basic properties of Chebyshev polynomials that will be used. The Chebyshev polynomials $T_{n}(x)$ with $x\in[-1,1]$ are in the explicit form as\cite{KPMRMP}
\begin{equation}\label{equ:A1}
T_{n}(x)=\cos[n\arccos(x)]\;,
\end{equation}
which satisfy the recursion relations,
\begin{equation}\label{EqRecursion}
\begin{split}
& T_{0}(x)=1\;,T_{-1}(x)=T_{1}(x)=x\;\\
& T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x)\;.
\end{split}
\end{equation}
The scalar product is defined as
\begin{equation}\label{EqScalarProduct}
\langle T_{m}|T_{n}\rangle\equiv \int_{-1}^1 \frac{T_{m}(x)T_{n}(x)}{\pi\sqrt{1-x^{2}}}\,dx,
\end{equation}
with the weight function $(\pi\sqrt{1-x^{2}})^{-1}$. It is thus easy to verify the orthogonality relation between
Chebyshev polynomials,
\begin{equation}\label{EqOrthogonality}
\langle T_{m}|T_{n}\rangle=\frac{1+\delta_{n,0}}{2}\delta_{n,m}.
\end{equation}
In terms of this orthogonality relation (\ref{EqOrthogonality}), a piecewise smooth and continuous function $f(x)$ with $x\in [-1,1]$ can be expanded as
\begin{equation}\label{equ:A5}
f(x)=\frac{1}{\pi\sqrt{1-x^{2}}}\left[\mu_{0}+2\sum_{n=1}^{\infty}\mu_{n}T_{n}(x)\right],
\end{equation}
with expansion coefficients $ \mu_{n}$
\begin{equation}\label{equ:A6}
\mu_{n}=\int^{1}_{-1}f(x)T_{n}(x)\,dx.
\end{equation}
In practice, the function $f(x)$ is numerically reconstructed from a truncated series containing the first $N$ terms of Eq. (\ref{equ:A5}). However, experience shows that the numerical performance of this simple truncation is poor, with slow convergence and pronounced fluctuations (Gibbs oscillations)\cite{KPMRMP}. This can be improved by modifying the expansion coefficients as $\mu_n\rightarrow g_n \mu_n$, where $\{ g_n \}$ is called the kernel. In other words, an appropriate choice of the kernel $\{ g_n \}$ makes the truncated series a numerically better approximation of the function\cite{KPMRMP}
\begin{equation}\label{equ:A7}
f(x)\!\approx\! f_{\mathrm{KPM}}(x)\!=\!\frac{1}{\pi\sqrt{1-x^{2}}}\!\left[g_{0}\mu_{0}\!+\!2\sum_{n=1}^{N-1}g_{n}\mu_{n}T_{n}(x)\right].
\end{equation}
Among the various kernels, here we adopt the Jackson kernel, with the explicit expression\cite{KPMRMP}
\begin{equation}\label{equ:A8}
\mathlarger{g}_{n}^{J}=\frac{(N-n+1)\cos\frac{\pi n}{N+1}+\sin\frac{\pi n}{N+1}\cot\frac{\pi}{N+1}}{N+1},
\end{equation}
which is suitable for applications related to Green's functions.
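
To make the truncation concrete, the following minimal Python sketch (the function names are our own and not part of any established package) evaluates the Jackson factors of Eq. (\ref{equ:A8}), the moments of Eq. (\ref{equ:A6}) by Chebyshev--Gauss quadrature, and the kernel-improved reconstruction of Eq. (\ref{equ:A7}):
\begin{verbatim}
import numpy as np

def jackson_kernel(N):
    # Jackson factors g_n, n = 0..N-1
    n = np.arange(N)
    q = np.pi / (N + 1)
    return ((N - n + 1) * np.cos(q * n)
            + np.sin(q * n) / np.tan(q)) / (N + 1)

def chebyshev_moments(f, N, Q=2000):
    # mu_n = int_{-1}^{1} f(x) T_n(x) dx,
    # via the substitution x = cos(theta)
    theta = (np.arange(Q) + 0.5) * np.pi / Q
    x = np.cos(theta)
    w = (np.pi / Q) * np.sin(theta)
    n = np.arange(N)[:, None]
    return (f(x) * np.cos(n * theta) * w).sum(axis=1)

def f_kpm(x, mu, g):
    # kernel-improved truncated series
    N = len(mu)
    n = np.arange(1, N)[:, None]
    s = g[0] * mu[0] + 2 * ((g[1:] * mu[1:])[:, None]
        * np.cos(n * np.arccos(x))).sum(axis=0)
    return s / (np.pi * np.sqrt(1 - x ** 2))
\end{verbatim}
Reconstructing a smooth test function, e.g. $f(x)=e^{-x^{2}}$, with and without the factors $g_{n}$ makes the suppression of the Gibbs oscillations apparent.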
Besides a numerical function $f(x)$, the Chebyshev expansion can also be used to approximate a function of a Hermitian operator $H$ (or equivalently of its matrix $\bm{H}$ in an appropriate representation), provided the eigenvalue spectrum of $H$ lies within the interval $[-1,1]$\cite{KPMRMP}. For a general Hermitian operator, e.g., a Hamiltonian $H$ with maximum (minimum) eigenvalue $E_{\mathrm{max}}$ ($E_{\mathrm{min}}$), this spectral condition can be satisfied by simply performing an appropriate rescaling of the matrix (and of the energy scale),
\begin{equation}\label{equ:A9}
\tilde{H}=\frac{1}{a}\big(H-b\big),\qquad\tilde{E}=\frac{E-b}{a}
\end{equation}
with
\begin{equation}\label{equ:A10}
a=\frac{E_{\max}-E_{\min}}{2-\zeta},\qquad b=\frac{E_{\max}+E_{\min}}{2},
\end{equation}
so that the spectrum of $\tilde{H}$ lies within $[-1,1]$. Here the parameter $\zeta>0$ is a small cutoff introduced to avoid numerical instabilities at the boundaries $\pm 1$. A proper rescaling, i.e., an appropriately small $\zeta$, reduces the number of terms $N$ necessary for an expansion to reach a given precision. Throughout this work, we fix $\zeta=0.01$. In practice, the lower and upper bounds of $\bm{H}$ can be estimated by using sparse matrix eigenvalue solvers, e.g., the FEAST algorithm of the Intel MKL. After the calculation of physical properties with the help of Chebyshev polynomials,
their correct dependence on the energy $E$ can be restored by the simple inverse transformation of Eq. (\ref{equ:A9}). Therefore, in the following we will always assume that the operator matrices have been rescaled according to Eq. (\ref{equ:A9}) before they enter Chebyshev polynomials, and the tildes on operators and eigenvalues will be omitted.
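
As an illustration, a minimal Python sketch of this rescaling step is given below; here we estimate the spectral bounds with SciPy's Lanczos solver \texttt{eigsh}, whereas our actual calculations use the FEAST algorithm as stated above:
\begin{verbatim}
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def rescale_hamiltonian(H, zeta=0.01):
    # Rescale sparse Hermitian H so that its
    # spectrum fits into [-1, 1].
    e_max = spla.eigsh(H, k=1, which='LA',
                       return_eigenvectors=False)[0]
    e_min = spla.eigsh(H, k=1, which='SA',
                       return_eigenvectors=False)[0]
    a = (e_max - e_min) / (2.0 - zeta)
    b = (e_max + e_min) / 2.0
    I = sp.identity(H.shape[0], format='csr')
    # energies map back as E = a * E_tilde + b
    return ((H - b * I) / a).tocsr(), a, b
\end{verbatim}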
It can be shown\cite{KPMRMP,RealSpace,GreenFunctionKMP} that the retarded (advanced) Green's function
\begin{equation}\label{EqBareGreenFunction}
G^{r(a)}(E,H)=\lim_{\eta \to 0+}\big[(E\pm i\eta)I-H\big]^{-1}
\end{equation}
at energy $E$ can be expanded in terms of Chebyshev kernel polynomials as
\begin{equation}\label{EqChebyshevGreen}
G^{r(a)}(E,H)=\frac{i}{\sqrt{1-E^{2}}}(g_{0}\mu_{0}+2\sum_{n=1}^{N-1}g_{n}\mu_{n}e^{\mp in\arccos(E)})\,
\end{equation}
with coefficient matrices
\begin{equation}\label{EqChebyshevCoeffH}
\mu_{n}=\mp T_{n}(H).
\end{equation}
Now the broadening $\eta$ does not appear explicitly in the matrix elements. Rather, it is associated with $N$, the number of expansion moments: larger $N$ corresponds to smaller $\eta$. Notice that $G$ and $\mu_{n}$ are also operators. In a given representation, these operators can be written explicitly as the corresponding matrices $\bm{G}$ and $\bm{\mu}_n$, of the same size as $\bm{H}$.
Throughout this manuscript, all matrices will be written in the bold form of the corresponding operator.
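
In practice, $T_{n}(H)$ is never constructed as a full matrix. When only a small block of matrix elements $[\bm{\mu}_{n}]_{ij}$ is needed (as below, where $i$ and $j$ run over a few surface or boundary sites), it follows from the recursion (\ref{EqRecursion}) applied to unit vectors, using sparse matrix-vector products only. A minimal Python sketch (helper names are ours) reads:
\begin{verbatim}
import numpy as np

def green_moments(H, sites, N):
    # [T_n(H)]_{ij} with i, j in `sites`;
    # H is sparse and already rescaled.
    D = H.shape[0]
    S = len(sites)
    mu = np.zeros((N, S, S), dtype=complex)
    for col, j in enumerate(sites):
        v0 = np.zeros(D, dtype=complex)
        v0[j] = 1.0              # |j>
        v1 = H @ v0              # T_1(H)|j>
        mu[0, :, col] = v0[sites]
        mu[1, :, col] = v1[sites]
        for n in range(2, N):    # recursion
            v0, v1 = v1, 2.0 * (H @ v1) - v0
            mu[n, :, col] = v1[sites]
    return mu

def green_retarded(E, mu, g):
    # retarded G with mu_n = -T_n(H)
    N = len(g)
    ph = np.exp(-1j * np.arange(N) * np.arccos(E))
    w = g * ph
    w[1:] *= 2.0
    G = np.tensordot(w, mu, axes=(0, 0))
    return -1j / np.sqrt(1 - E ** 2) * G
\end{verbatim}
Here \texttt{green\_retarded} assembles Eq. (\ref{EqChebyshevGreen}) for the retarded case; the advanced function follows from Hermitian conjugation.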
\section{Electronic Transmission in Terms of Green's Functions}
\begin{figure}[htbp]
\centering
\includegraphics*[width=0.5\textwidth]{Fig1.eps}
\centering
\caption{The typical setup of a two-terminal measurement of quantum transport: a conductor (red) connected to left (L) and right (R) leads (blue). Sites $\alpha$ and $\beta$ are on the surfaces of the leads adjacent to the conductor. }
\label{FigDevice}
\end{figure}
In this section, we briefly review the Landauer-B\"{u}ttiker formula expressed in terms of Green's functions. Consider the two-terminal transport device illustrated in Fig. \ref{FigDevice},
with one conductor connected to two semi-infinite leads.
Formally, the Hamiltonian of this combined system can be written as
\begin{equation}\label{EqFullHamiltonian}
H=H_{C}+H_{L}+H_{R}+\big(H_{CL}+H_{CR}+\mathrm{H.c.}\big),
\end{equation}
where $H_{C}$ is the Hamiltonian of the conductor, $H_{L}$ ($H_{R}$) is that of the left (right) lead,
and $H_{CL}$ ($H_{CR}$) is the coupling from the conductor to the left (right) lead.
It is convenient to write these Hamiltonians as matrices in the real-space representation (tight-binding model). For example, the real-space Hamiltonian of the conductor (or of a lead) can be expressed in the generic second-quantized form
\begin{equation}\label{EqHamiltonianMatrix}
H=\sum_{\alpha,\beta} \bm{H}_{\alpha\beta}c_{\alpha}^{\dagger}c_{\beta},
\end{equation}
with $c_{\beta}$ the annihilation operator of spinorbital $\beta$ in the conductor (lead). Here $\bm{H}$ is the matrix with elements $\bm{H}_{\alpha\beta}$.
Due to the coupling to the leads, the (retarded) Green's function of the conductor $G^{r}_{C}$
is no longer the bare one, $\big[E + i\eta - H_{C}\big]^{-1}$. By virtue of the Dyson equation, it can be expressed as the dressed Green's function\cite{Datta,DHLee}
\begin{equation}\label{EqDressedGreenFunction2}
G^{r}_{C}(E)=\big[E - H_C-\Sigma_{L}(E)-\Sigma_{R}(E)\big]^{-1},
\end{equation}
where $\Sigma_{L}$ ($\Sigma_{R}$) is the self energy of the left (right) lead.
The self-energy technique liberates one from inserting the full Hamiltonian (\ref{EqFullHamiltonian})
into Eq. (\ref{EqBareGreenFunction}) to obtain the dressed Green's function of the conductor.
The self energy results from integrating out the degrees of freedom of the lead\cite{Datta,CMFT}, i.e.,
\begin{equation}\label{EqSelfEnergy}
\Sigma_{p}(E)=H^{\dagger}_{pC}G^{r}_{p}(E)H_{pC},\quad p\in\{L,R\}
\end{equation}
where $G^{r}_{p}$ is the Green's function of lead $p$, and $H_{pC}$ is the coupling Hamiltonian between lead $p$ and the conductor.
In the real-space representation, $\bm{G}^{r}_{p}$ is an infinite-dimensional matrix because the lead is semi-infinite.
However, since only a few spinorbitals of the lead are connected to the conductor through $H_{pC}$, in the evaluation of Eq. (\ref{EqSelfEnergy}) we only need the ``surface'' subset of the matrix $\bm{G}^{r}_{p}$, i.e., those matrix elements $\big[\bm{G}^{r}_{p}\big]_{\alpha\beta}$ with $\alpha$ and $\beta$ running over the spinorbitals connected to the conductor.
This subset will be called the surface Green's function.
At zero temperature, the two-terminal conductance $G$ of the setup in Fig.~\ref{FigDevice} is
given by the Landauer-B\"{u}ttiker formula\cite{Landauer1957,Buettiker,TransmissionRMP,Datta},
\begin{equation}
G=\frac{e^2}{h}T,
\end{equation}
where $e$ is the elementary charge, $h$ is the Planck constant, and $T$
is the transmission through the conductor.
The transmission $T$ at Fermi energy $E$ can be expressed in terms of Green's functions as\cite{Datta,QuantumTransport}
\begin{equation}\label{EqnTransmission}
T(E)=\mathrm{Tr}[\Gamma_{R}(E)G_C^{r}(E)\Gamma_{L}(E)G_C^{a}(E)],
\end{equation}
where
\begin{equation}\label{EqDressedGreenFunction}
G^{a}_C(E)=\big[G^{r}_C(E)\big]^{\dagger},\qquad \Gamma_{L(R)}=i\big[\Sigma_{L(R)}-\Sigma_{L(R)}^{\dagger}\big].
\end{equation}
Traditionally, the self energies (\ref{EqSelfEnergy}) of the leads are calculated explicitly by a direct diagonalization method\cite{DHLee} or by an iterative method\cite{InterativeSurfaceGreen}. Afterwards, they are inserted into Eqs. (\ref{EqDressedGreenFunction2}), (\ref{EqDressedGreenFunction}) and finally (\ref{EqnTransmission}) to evaluate the transmission. In this process, the most time-consuming steps are the calculation of the lead self energies and the matrix inversion (which does not preserve the sparseness of the matrix) in Eq. (\ref{EqDressedGreenFunction2}). For a two-terminal device whose conductor lattice can be divided into layers of sites (defined such that hoppings only exist between nearest layers), the simulation can be decomposed into a layer-to-layer recursive scheme based on the Dyson equation for Green's functions\cite{QuantumTransport,Recursive}. This decomposition remarkably reduces the time and space consumption. However, the recursive method becomes technically tedious for a multi-terminal setup, and even infeasible for, say, twisted bilayer graphene\cite{TwistedGraphene1,TwistedGraphene2}. In such cases, one still needs to calculate the full-size, dense matrices associated with the Hamiltonians and Green's functions directly. In the following, we investigate algorithms based on KPM to calculate Eq. (\ref{EqnTransmission}) with somewhat different steps.
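
For orientation, the direct matrix evaluation of Eqs. (\ref{EqDressedGreenFunction2})--(\ref{EqnTransmission}), which serves as the reference method in Sec. VI, amounts to only a few dense-matrix lines. A Python sketch (assuming the self energies have already been embedded into full $M\times M$ matrices) is:
\begin{verbatim}
import numpy as np

def transmission_direct(E, H_C, S_L, S_R, eta=1e-6):
    # T(E) by direct inversion, no KPM
    M = H_C.shape[0]
    Gr = np.linalg.inv((E + 1j * eta) * np.eye(M)
                       - H_C - S_L - S_R)
    Gam_L = 1j * (S_L - S_L.conj().T)
    Gam_R = 1j * (S_R - S_R.conj().T)
    t = np.trace(Gam_R @ Gr @ Gam_L @ Gr.conj().T)
    return t.real
\end{verbatim}
The $O(M^{3})$ inversion at every energy point is precisely the cost that the KPM algorithms below avoid.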
\section{Standard Bath Chebyshev Polynomial Method}
Before evaluating the transmission function (\ref{EqnTransmission}) from the Hamiltonian (\ref{EqFullHamiltonian}), two steps are essential: first, solving for the self energies [Eq. (\ref{EqSelfEnergy})]; second, including them in
the conductor's Green's function as in Eq. (\ref{EqDressedGreenFunction2}).
The numerical treatment of these steps by direct matrix calculations is mature and well known\cite{Datta,QuantumTransport}.
In the context of KPM, however, the realization of these steps is neither easy nor straightforward,
especially if one insists on avoiding calculations involving large dense matrices.
We achieve this goal by a generalization of the bath technique of KPM\cite{BathKMP},
which is described here in detail. In Section V B, another distinct algorithm will be introduced.
The lead connected to the conductor is semi-infinite
and can therefore be viewed as a bath\cite{BathKMP}. The central task in obtaining the lead self energy [Eq. (\ref{EqSelfEnergy})] is to calculate the surface Green's function $\big[\bm{G}^{r}_{p}\big]_{\alpha\beta}$ of lead $p$, with $\alpha$ and $\beta$ running over the surface that will be connected to the conductor. In the context of KPM, we need to calculate the Chebyshev coefficient matrices $\bm{\mu}_n^{\alpha\beta}$ in Eq. (\ref{EqChebyshevGreen}) for the lead. These, of course, cannot be obtained from Eq. (\ref{EqChebyshevCoeffH}) directly, as the lead Hamiltonian matrix $\bm{H}_p$ is infinite dimensional. Instead, we will use a self-consistent method, as described below.
\subsection{Basic Definitions}
First, some useful mathematical structures related to an isolated lead $p$ will be constructed. As suggested in Ref. \cite{BathKMP}, we define the Chebyshev vectors as
\begin{equation}\label{EqChebyshevVector}
|n_{\alpha}\rangle \equiv T_{n}(H_{p})f_{\alpha}^{\dagger}|\mathrm{vac}\rangle, (n\in\mathbb{N})
\end{equation}
with $|\mathrm{vac}\rangle$ denoting the lead vacuum, i.e., $f_{\alpha}|\mathrm{vac}\rangle=0$, and $f_{\alpha}^{\dagger}$ the creation operator of the lead spinorbital state $\alpha$. These Chebyshev vectors are not orthonormal, and the scalar product is
\begin{equation}\label{equ:A18}
\langle 0_{\beta}|n_{\alpha}\rangle =\langle \mathrm{vac}|f_{\beta}T_{n}(H_{p})f_{\alpha}^{\dagger}|\mathrm{vac}\rangle=\bm{\mu}_{n}^{\beta\alpha}.
\end{equation}
By comparison with Eq. (\ref{EqChebyshevCoeffH}), one sees that this matrix $\bm{\mu}_{n}$ is just the $n$-th Chebyshev coefficient matrix of the lead's Green's function. The Chebyshev vectors defined in Eq. (\ref{EqChebyshevVector}) span a Hilbert space $\mathcal{H}_{\alpha}$. As can be seen from the definition, $\mathcal{H}_{\alpha}$ is a subspace of the Fock space of the lead operators $f_{\alpha}^{(\dagger)}$. From the recursion relation, Eq. (\ref{EqRecursion}), it is easy to deduce the action of $H_{p}$ on $\mathcal{H}_{\alpha}$,
\begin{equation}
H_{p}|n_{\alpha}\rangle=
\begin{cases}
|1_{\alpha}\rangle,&n=0\\
(1/2)(|(n-1)_{\alpha}\rangle+|(n+1)_{\alpha}\rangle),&n>0.
\end{cases}
\end{equation}
In other words, in the subspace $\mathcal{H}_{\alpha}$, the action of $H_{p}$ can be expressed in matrix form as
\begin{equation}\label{equ:A20}
(\widehat{\bm{H}}^{\alpha}_{p})_{mn}\equiv \frac{1}{2}\left(
\begin{array}{ccccc}
0&1&0&0&\cdots\\
2&0&1&0& \\
&1&0&1& \\
& &1&0& \\
& &\vdots& &\ddots
\end{array}
\right),
\end{equation}
with
\begin{equation}\label{EqHpmn}
H_{p}|n_{\alpha}\rangle=\sum_{m}(\widehat{\bm{H}}^{\alpha}_{p})_{mn}|m_{\alpha}\rangle.
\end{equation}
Notice that $(\widehat{\bm{H}}^{\alpha}_{p})_{mn}\neq\langle m_{\alpha}|H_{p}|n_{\alpha}\rangle$ owing to the non-orthogonality of the Chebyshev vectors.
For a truncation with $N$ Chebyshev terms, Eq. (\ref{equ:A7}), the size of the matrix $\widehat{\bm{H}}^{\alpha}_{p}$ is $N \times N$.
Following Eq. (\ref{EqChebyshevVector}), another useful relation can be obtained as
\begin{eqnarray}\label{equ:A26}
f_{\beta}|n_{\alpha}\rangle&=f_{\beta}T_{n}(H_{p})f_{\alpha}^{\dagger}|\mathrm{vac}\rangle\\
&=|\mathrm{vac}\rangle\langle\mathrm{vac}|f_{\beta}T_{n}(H_{p})f_{\alpha}^{\dagger}|\mathrm{vac}\rangle=\bm{\mu}_{n}^{\beta\alpha}|\mathrm{vac}\rangle.
\end{eqnarray}
\subsection{Dressed Green's Function}
Suppose the Chebyshev coefficient matrices $\bm{\mu}_n$ of lead $p$ are known. Now we connect a conductor $C$ to the lead. The Hamiltonian $H_C$ of the conductor has the form of Eq. (\ref{EqHamiltonianMatrix}),
and the size of the corresponding matrix $\bm{H}_C$ is $M\times M$.
Without loss of generality, we consider the conductor-lead coupling
to be of the following simple form
\begin{equation}\label{equ:A16}
H_{Cp} + \mathrm{H.c.} =\sum_{\alpha=1}^{W}t_{\alpha}(c_{\alpha}^{\dagger}f_{\alpha}+f_{\alpha}^{\dagger}c_{\alpha}),
\end{equation}
where $W$ is the effective ``width'' of the cross section, $c_{\alpha}$ ($f_{\alpha}$) is the annihilation operator in the conductor (lead), and $t_{\alpha}$ denotes the hopping matrix elements. As in most practical cases, we assume that these $W$ hopping bonds couple conductor and lead sites in a one-to-one way.
Based on the above definitions, we can now approximately express the full Hamiltonian $H_C+H_p+H_{Cp} + \mathrm{H.c.}$ in a finite-dimensional representation with the basis ordered as
\begin{eqnarray}\label{EqCoupledMatrix}
&\Big( |1\rangle, \cdots,|\beta\rangle,\cdots|M\rangle, |0_{1}\rangle,\cdots,|n_{1}\rangle,\cdots,|N-1_{\, 1}\rangle,\nonumber\\
&\cdots,|n_{\alpha}\rangle,\cdots, |0_{W}\rangle,\cdots,|n_{W}\rangle,\cdots,|N-1_{\,W}\rangle \Big),
\end{eqnarray}
where $|\beta\rangle$ ($1\leq\beta\leq M$) are spinorbital basis states in the conductor, and
$|n_{\alpha}\rangle$ ($0\leq n\leq N-1$ and $1\leq \alpha\leq W$) are Chebyshev vectors of lead $p$ defined in Eq. (\ref{EqChebyshevVector}). It can easily be shown that the full Hamiltonian in this representation is an $(M+WN)$-dimensional sparse matrix with nonzero blocks illustrated as follows:
\begin{equation}\label{EqR}
\bm{R}=\left(
\begin{array}{ccc|cccccc}
\quad& & & & & & & &\\
\quad& \text{\Large$ \bm{H}_C $}& & & & & & & \\
\quad& & &\bm{L}^{1}_p&\bm{L}^{2}_p&\cdots&\bm{L}^{\alpha}_p&\cdots&\bm{L}^{W}_p\\ \hline
& & \bm{M}^{1}_p&\bm{\widehat{H}}^{1}_p& \\
& & \bm{M}^{2}_p& &\bm{\widehat{H}}^{2}_p& \\
& & \vdots& & &\ddots& \\
& & \bm{M}^{\alpha}_p&& & &\bm{\widehat{H}}^{\alpha}_p& \\
& & \vdots& & & & &\ddots& \\
& & \bm{M}^{W}_p& & & & & &\bm{\widehat{H}}^{W}_p\\
\end{array}
\right)
\end{equation}
Here, $\bm{H}_C$ is the Hamiltonian matrix of the isolated conductor with size $M \times M$, and the $\bm{\widehat{H}}^{\alpha}_p$ are the $N\times N$ matrices defined in Eq. (\ref{EqHpmn}) associated with lead $p$. As for the conductor-lead coupling sub-matrices, each $\bm{L}^{\alpha}_p$ is a $W\times N$ matrix of the form
\begin{equation}\label{EqLalpha}
\bm{L}^{\alpha}_p=\left(
\begin{array}{cccc}
t_{\alpha}\mu_{0}^{1\alpha} & t_{\alpha}\mu_{1}^{1\alpha} & \cdots & t_{\alpha}\mu_{N-1}^{1\alpha} \\
t_{\alpha}\mu_{0}^{2\alpha} & t_{\alpha}\mu_{1}^{2\alpha} & \cdots & t_{\alpha}\mu_{N-1}^{2\alpha} \\
& & \cdots & \\
t_{\alpha}\mu_{0}^{W\alpha} & t_{\alpha}\mu_{1}^{W\alpha} & \cdots & t_{\alpha}\mu_{N-1}^{W\alpha} \\
\end{array}
\right),
\end{equation}
where $t_{\alpha}$ is the coupling hopping integral defined in Eq. (\ref{equ:A16}),
and the $\bm{\mu}_{n}^{\beta\alpha}$ of the lead are defined in Eq. (\ref{equ:A18}), with $\{ \alpha,\beta \}$ only running over the surface spinorbital states connecting to the conductor. On the other hand, each $\bm{M}^{\alpha}_p$ is an $N\times W$ matrix with only one nonzero element,
\begin{eqnarray}
\big(\bm{M}^{\alpha}_p \big)_{n\gamma}= \begin{cases}
t_{\alpha},&n=1 \quad\mathrm{and}\quad \gamma=\alpha \\
0,& \mathrm{otherwise}.
\end{cases}
\end{eqnarray}
The matrix $\bm{R}$ is fully determined once $\bm{H}_C$, $\bm{H}_{Cp}$ and $\bm{\mu}_{n}^{\beta\alpha}$ are known, and it plays the central role in the Chebyshev approach to quantum systems coupled to a bath\cite{BathKMP}.
It has been shown\cite{BathKMP} that the dressed Green's function $\big[(E+i\eta)I-H_C-\Sigma_p\big]^{-1}$ of the conductor coupled to lead $p$ can be obtained by using Eqs. (\ref{EqChebyshevGreen}) and (\ref{EqChebyshevCoeffH}), with $H$ replaced by $\bm{R}$.
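
A hedged Python sketch of the assembly of $\bm{R}$ for a single lead is given below; the convention that the last $W$ conductor indices are the surface sites coupled to the lead is our own simplifying assumption:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def build_R(H_C, mu, t, N):
    # H_C: (M, M) sparse conductor Hamiltonian
    # mu : (N, W, W) lead surface coefficients
    # t  : (W,) conductor-lead hoppings t_alpha
    M, W = H_C.shape[0], len(t)
    R = sp.lil_matrix((M + W * N, M + W * N),
                      dtype=complex)
    R[:M, :M] = H_C
    # hat-H block: 1/2 on both off-diagonals,
    # except the entry (1, 0), which equals 1
    Hh = sp.diags([0.5 * np.ones(N - 1),
                   0.5 * np.ones(N - 1)],
                  [-1, 1]).tolil()
    Hh[1, 0] = 1.0
    for a in range(W):
        off = M + a * N
        R[off:off + N, off:off + N] = Hh
        # M^alpha: single entry t_alpha at n = 1
        R[off + 1, M - W + a] = t[a]
        # L^alpha: t_alpha * mu_n^{beta alpha}
        for b in range(W):
            R[M - W + b, off:off + N] = \
                t[a] * mu[:, b, a]
    return R.tocsr()
\end{verbatim}
Feeding the output of \texttt{build\_R} into the moment routine of Section II then yields the dressed Green's function on the conductor sites; the rescaling of Eq. (\ref{equ:A9}) is understood to act on the physical Hamiltonians before $\bm{R}$ is assembled (cf. the discussion in Section V B).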
\subsection{Self Energy}
In most applications, the coefficients $\bm{\mu}_{n}^{\beta\alpha}$ associated with the lead surface are not known a priori, and in practice they cannot be obtained from Eq. (\ref{equ:A18}). However, the above bath method also offers a practical algorithm for calculating the lead coefficients $\bm{\mu}_{n}^{\beta\alpha}$ in a self-consistent way, as described in the following.
\begin{figure}[htbp]
\centering
\includegraphics*[width=0.4\textwidth]{Fig2.eps}
\centering
\caption{The self-consistent calculation of the lead Chebyshev coefficients. A natural extension of $J$ unit cells (red) is connected to the terminal of the original semi-infinite lead (blue). }
\label{FigSemiinfinitelead}
\end{figure}
The lead is a semi-infinite crystal whose one-dimensional unit cell can be defined in a natural way. In Fig. \ref{FigSemiinfinitelead}, we illustrate a lead extending infinitely to the right, with the unit cell marked by the green dashed frame. The lead Chebyshev coefficients of the left surface are $\bm{\mu}_{n}^{\beta\alpha}$, with $\alpha$ and $\beta$ only running over the left-surface spinorbitals that will be connected to the conductor. Now we couple $J$ unit cells to the left surface of this lead. The Chebyshev coefficients $\bm{\nu}_{n}^{\beta\alpha}$ of the new left surface can then be calculated by the process introduced above. On the other hand, because these unit cells are just a natural extension of the lead, the new composite system ($J$ unit cells coupled to the original lead) is also semi-infinite and essentially equivalent to the original lead. As a result, the self-consistency condition
\begin{equation}\label{EqSelfConsistent}
\bm{\nu}_{n}^{\beta\alpha}=\bm{\mu}_{n}^{\beta\alpha}
\end{equation}
should hold.
In practice, we can start from a guess for the lead Chebyshev coefficients, e.g., $\bm{\mu}_{n}^{\beta\alpha}=0$, and then repeatedly couple $J$ unit cells to the left surface of the lead and calculate the Chebyshev coefficients $\bm{\nu}_{n}^{\beta\alpha}$
associated with the new surface, until the self-consistency condition (\ref{EqSelfConsistent}) is satisfied within a given error. Although the choice $\bm{\mu}_{n}^{\beta\alpha}=0$ fails to satisfy the rule $\langle0_{\alpha}|0_{\alpha}\rangle=\langle\mathrm{vac}|f_{\alpha}f_{\alpha}^{\dagger}|\mathrm{vac}\rangle=\bm{\mu}_{0}^{\alpha\alpha}=1$, it does not affect the final convergence. A larger number $J$ of unit cells consumes more time per iteration step but reduces the number of iteration steps, so an appropriate $J$ should be chosen carefully for a given model. Once the lead Chebyshev coefficients $\bm{\mu}_{n}^{\beta\alpha}$ are known, the surface Green's function can be obtained through Eq. (\ref{EqChebyshevGreen}), and the self energy through Eq. (\ref{EqSelfEnergy}).
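
The iteration itself is an ordinary fixed-point loop. A generic Python skeleton is sketched below; the function \texttt{update\_mu}, which attaches $J$ unit cells to the current surface and re-extracts the new surface coefficients (e.g. via the $\bm{R}$-matrix construction above), is left abstract:
\begin{verbatim}
import numpy as np

def self_consistent_mu(update_mu, mu0,
                       tol=1e-8, max_iter=500):
    # fixed-point loop for nu_n = mu_n
    mu = mu0.copy()
    for it in range(max_iter):
        nu = update_mu(mu)
        if np.max(np.abs(nu - mu)) < tol:
            return nu, it
        mu = nu
    raise RuntimeError("no convergence")
\end{verbatim}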
\subsection{Including Both Leads}
So far, we have shown that replacing $H$ in Eq. (\ref{EqChebyshevCoeffH}) with the matrix $\bm{R}$ defined in Eq. (\ref{EqR}) yields the dressed Green's function of the conductor coupled to a \emph{single} lead $p$, $\bm{G}_{C}=\big[E\bm{I}-\bm{H}_C-\bm{\Sigma}_p\big]^{-1}$.
For a two-terminal device, the inclusion of left (L) and right (R) leads
\begin{equation}\nonumber
\bm{G}_{C}=\big[E\bm{I}-\bm{H}_C-\bm{\Sigma}_L-\bm{\Sigma}_R\big]^{-1}
\end{equation}
can be achieved similarly by a straightforward generalization of the matrix in Eq. (\ref{EqR}) as
\begin{equation}\label{EqR2}
\bm{R}=\left(
\begin{array}{ccc|ccc|ccc}
\bm{\widehat{H}}^{1}_L & & & \bm{M}_{L}^1 & & & & & \\
& \ddots & & \vdots & & & & & \\
& & \bm{\widehat{H}}^{W}_L & \bm{M}_{L}^{W} & & & & & \\ \hline
\bm{L}_{L}^{1} & \cdots & \bm{L}_{L}^W & & & & & & \\
& & & & \text{\Large$ \bm{H}_C $} & & & & \\
& & & & & & \bm{L}_{R}^1 & \cdots & \bm{L}_{R}^W \\ \hline
& & & & & \bm{M}_{R}^1 & \bm{\widehat{H}}^{1}_R & & \\
& & & & & \vdots & & \ddots & \\
& & & & & \bm{M}_{R}^W & & & \bm{\widehat{H}}^{W}_R \\
\end{array}
\right),
\end{equation}
once the Chebyshev coefficients $\bm{\mu}_n$ associated with both leads are known.
\section{Practical Improvements}
\begin{figure}[htbp]
\centering
\includegraphics*[width=0.45\textwidth]{Fig3.eps}
\centering
\caption{ Two setups to simplify the calculations. (a) The leads (blue) are set to be decoupled 1D atomic chains. (b) The leads are set to have finite lengths $N_x^{L}$ and $N_x^{R}$, instead of being semi-infinitely long. }
\label{FigImprovements}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics*[width=1.0\textwidth]{Fig4.eps}
\centering
\caption{ Results for Example A, square lattice conductor with square lattice leads, obtained with the standard bath KPM. Here we present the transmission $T$ as a function of energy $E$, with leads of the same width as the conductor, for different conductor sizes: (a) $25\times 25$, (b) $60\times60$ and (c) $100\times 100$. Solid lines are results from the standard bath KPM introduced in Section IV, for different numbers of Chebyshev terms $N$. Red dashed lines are the result of a direct matrix evaluation of Eq. (\ref{EqnTransmission}) without KPM, as a reference. Notice the different scales of the longitudinal axes in different panels.}
\label{FigDifference}
\end{figure*}
The central task of KPM is to obtain the corresponding Chebyshev coefficient matrices. One merit of the KPM is that these Chebyshev coefficients are independent of the energy $E$; i.e., the transport properties over the full energy spectrum are known once the corresponding Chebyshev coefficients have been calculated. In particular, when plotting Fig. \ref{FigDifference}, one only needs to calculate the lead Chebyshev coefficients $\bm{\mu}_n^{\beta \alpha}$ \emph{once}; the energy dependence then enters simply through Eq. (\ref{EqChebyshevGreen}), which is numerically cheap. In the traditional matrix-inversion method, on the other hand, the full process of calculating the self energy (\ref{EqSelfEnergy}) and the dressed Green's function (\ref{EqDressedGreenFunction}) must be performed separately for each energy.
The algorithm described in Section IV is a mathematically rigorous realization of the standard bath approach of the KPM\cite{BathKMP}, which we refer to as the ``standard bath KPM''. In practical simulations, however, calculating the Chebyshev coefficient matrices with this standard method can be numerically very demanding on central processing unit (CPU) based computers.
For example, in the calculation of the conductance in terms of KPM, the most time-consuming process is the self-consistent calculation of the Chebyshev coefficients
$\bm{\mu}_n^{\beta \alpha}$ of the leads. As a matter of fact, the requirement of including all details of the leads in the calculation is notoriously expensive in many quantum transport simulations\cite{WideBand2018,JHuangPhD,JTLu2014,Zelovich2015}. We now present two practical improvements of the algorithm.
\subsection{Chain Shaped Leads}
The first convenient simplification is to reduce the leads to independent, semi-infinite 1D chains, as shown in Fig. \ref{FigImprovements}(a). Without transverse coupling in the lead, the coefficient matrices are diagonal, $\bm{\mu}_n^{\beta \alpha}=\delta_{\beta\alpha}\bm{\mu}_n^{\alpha \alpha}$, and identical for different $\alpha$. Now we only need to self-consistently calculate the Chebyshev coefficients of a single 1D chain with width $W=1$, and the dimension of the matrix $\bm{R}$ in Eq. (\ref{EqR}) is reduced from $W \times J + W \times N$ to $M+N=J+N$. However, the mismatch between the leads and the conductor gives rise to additional scattering at their boundaries. Therefore, this crude simplification is mostly suitable for topological materials, where backscattering is prohibited by protections from topology and/or symmetry\cite{Bernevig2006}. See Examples B and C in Section VI for simulation results.
\subsection{Finite Lead Approximation}
Here we propose another simple but efficient approximation to circumvent these difficulties.
In the original setup, both leads are semi-\emph{infinitely} long, as illustrated in Fig. \ref{FigDevice}. We now approximate both leads by two \emph{finite} ones, as shown in Fig. \ref{FigImprovements}(b). It is reasonable to expect that the result approaches the correct one when the lengths $N_x^{L}$ and $N_x^{R}$ are sufficiently large. The conductor and leads are now perfectly matched, so there is no scattering at their boundaries.
If the lead lengths $N_x^{L}$ and $N_x^{R}$ needed to reach a given precision are numerically acceptable,
this algorithm is numerically superior to the standard bath KPM described above.
For instance, the dressed Green's function can be obtained from Eq. (\ref{EqChebyshevGreen}) directly, provided $\bm{H}$ is the coupled Hamiltonian matrix of the whole system, the conductor \emph{and} the two finite leads. Then the sub-matrix $\bm{G}^{r(a)}_{ij}(E,H)$ (with $i,j$ running over the conductor sites) is naturally an approximation to the dressed Green's function (\ref{EqDressedGreenFunction})\cite{Datta}. In fact, owing to the simple algebraic structure of Eq. (\ref{EqnTransmission}), one only needs the matrix indices $(i,j)$ running over the boundary sites connected to the two leads. This procedure avoids the construction and manipulation of complicated non-Hermitian matrices like $\bm{R}$ defined in Eq. (\ref{EqR}). A non-Hermitian matrix has complex eigenvalues, which makes it difficult to rescale via Eq. (\ref{equ:A9}) by its maximum and minimum eigenvalues, yet an appropriate rescaling of the matrix is key in the context of KPM.
Similarly, the surface Green's function of the lead can be approximated by that of the finite lead obtained from Eq. (\ref{EqChebyshevGreen}), and the self energy is then calculated with the help of Eq. (\ref{EqSelfEnergy}). In brief, this method circumvents all the complicated steps of the standard bath KPM described in Section IV, especially the very time-consuming self-consistent calculation of the self energy.
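
With finite leads, Eq. (\ref{EqSelfEnergy}) thus reduces to a one-shot matrix product. A minimal sketch, assuming the finite lead's surface Green's function has been obtained from the moment routines of Section II, is:
\begin{verbatim}
import numpy as np

def self_energy_finite_lead(G_surf, H_pC):
    # Sigma_p = H_pC^dag G_p^r H_pC, with the
    # semi-infinite lead's surface Green's
    # function replaced by the finite lead's.
    # G_surf: (S, S), H_pC: (S, M)
    return H_pC.conj().T @ G_surf @ H_pC
\end{verbatim}
Combining this with the direct transmission step sketched at the end of Section III, or with the boundary-site Green's function of the whole finite system, completes the pipeline.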
\section{Examples }
In this section, we present results obtained with the above KPM algorithms for the two-terminal conductance of several example models.
\subsection{Square Lattice Conductor with Square Lattice Leads}
The first example is the two-dimensional square lattice with nearest-neighbor hopping $t$,
\begin{equation}\label{EqHamiltonianSquareLattice}
H=\sum_{\langle\alpha,\beta\rangle} t c_{\alpha}^{\dagger}c_{\beta},
\end{equation}
where $\alpha$ and $\beta$ are site indices (each site carrying a single spinorbital) in the conductor and leads, and $\langle\alpha,\beta\rangle$ runs over all nearest-neighbor site pairs.
The size of the conductor is $L\times W$, and the widths of both leads are also $W$.
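
For reference, this conductor (and, in the finite-lead geometry of Section V B, the leads as well) can be assembled as a sparse matrix along the following lines (a sketch with one spinorbital per site):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def square_lattice(L, W, t=1.0):
    # L x W square lattice, nearest-neighbor
    # hopping t, open boundaries
    hx = sp.diags(np.ones(L - 1), 1)
    hy = sp.diags(np.ones(W - 1), 1)
    H = t * (sp.kron(hx + hx.T, sp.identity(W))
             + sp.kron(sp.identity(L), hy + hy.T))
    return H.tocsr()
\end{verbatim}
Calling it with total length $N_x^{L}+L+N_x^{R}$ directly realizes the finite-lead setup of Fig. \ref{FigImprovements}(b) for this example.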
The results from the standard bath KPM are shown in Fig. \ref{FigDifference}. Here the transmission $T$ as a function of energy $E$ is plotted as solid lines, for conductor sizes (a) $25\times 25$, (b) $60\times 60$ and (c) $100\times 100$. Line colors corresponding to different numbers of Chebyshev terms $N$ are indicated in each panel. As a comparison, the red dashed line is the result of a direct matrix calculation of Eq. (\ref{EqnTransmission}) without using KPM. Without any disorder in the conductor, the conductance is quantized in plateaus with values $p\frac{e^2}{h}$, with $p$ the number of active channels at energy $E$\cite{Datta}. Smaller $N$ effectively corresponds to stronger dephasing\cite{KPMRMP,LeiLiu2018}, and therefore the conductance is not perfectly quantized. The largest deviation at small $N$ (green lines in Fig. \ref{FigDifference}) occurs around the band center $E=0$, which is a van Hove singularity.
This enhanced scattering is caused by the extremely large density of states around such a singularity\cite{Economou}.
On the other hand, larger conductor sizes need more Chebyshev terms to reach the ideal quantized conductance. This is understandable, since a larger conductor gives the electron a longer path over which to experience the dephasing induced by the finite number of Chebyshev terms $N$. Due to the tedious procedure of the standard bath KPM, especially the self-consistent calculation of the self energies, it is even more time-consuming than the direct matrix calculation.
\subsection{Square Lattice Conductor with Chain Shaped Leads}
\begin{figure}[htbp]
\centering
\includegraphics*[width=0.4\textwidth]{Fig5.eps}
\centering
\caption{ The black line is the result for Example B, square lattice conductor with chain shaped leads as illustrated in Fig. \ref{FigImprovements} (a). The conductor size is $100\times 100$, and the number of Chebyshev polynomial terms is $N=10000$. The red line is identical to that in Fig. \ref{FigDifference} (c), the result of square lattice leads (from direct matrix calculation), as a reference. }
\label{FigChainLead}
\end{figure}
The results are shown as the black solid line in Fig. \ref{FigChainLead}, where the result for square lattice leads (red dashed line) is also presented for comparison. Similar to the popular wide-band approximation\cite{WideBand2018,JHuangPhD,Haug2008}, this simplification greatly reduces the time and space consumption of the self-consistent calculation of the lead self energy. On the other hand, the mismatch between the lead and the conductor gives rise to considerable additional scattering at the interface, leading to a distinct reduction of the conductance compared with the case of perfect leads.
\subsection{Topological Material Conductor with Chain Shaped Leads}
\begin{figure}[htbp]
\centering
\includegraphics*[width=0.35\textwidth]{Fig6.eps}
\centering
\caption{ Black solid lines are results for Example C, topological material conductor with chain shaped leads. The conductor is the quantum anomalous Hall model defined in Eqs. (\ref{EqBHZ1}) and (\ref{EqBHZ2}), and the chain shaped leads are illustrated in Fig. \ref{FigImprovements} (a). The red dashed lines mark the quantized conductance value in units of $\frac{e^2}{h}$.
(a) Conductor size $60 \times 60$, Chebyshev terms $N=17000$.
(b) Conductor size $80 \times 80$, Chebyshev terms $N=9000$. The model parameters are $A=1$, $B=-1$, $C=D=0$, and $M=-2$. The hopping in the leads and the coupling hopping between the conductor and the lead are $t=1$. }
\label{FigBHZ}
\end{figure}
However, such scattering is practically avoided if the conductor is a topological material whose transport is robust against backscattering, or a three-dimensional conductor with a sufficiently large number of transport channels. Therefore, this method is most applicable in those contexts. Here we adopt the typical model of the quantum anomalous Hall effect, the spin-up component of the Bernevig-Hughes-Zhang (BHZ) model\cite{Bernevig2006}, defined on a two-orbital square lattice. The Hamiltonian in $k$ space can be written as\cite{Bernevig2006}
\begin{equation}\label{EqBHZ1}
H=\sum_{\bm{k}}\sum_{\alpha\beta}h_{\alpha\beta}(\bm{k})c^{\dagger}_{\bm{k}\alpha}c_{\bm{k}\beta},
\end{equation}
where $h_{\alpha\beta}(\bm{k})$ is a $2\times2$ matrix defined as
\begin{eqnarray}\label{EqBHZ2}
h(\bm{k}) &=&d_{0}I_{2\times 2}+d_{1}\sigma _{x}+d_{2}\sigma
_{y}+d_{3}\sigma _{z} \\
d_{0}(\bm{k}) &=&-2D\big(2-\cos k_{x}-\cos k_{y}\big) \notag \\
d_{1}(\bm{k}) &=&A\sin k_{x},\quad d_{2}(\bm{k})=-A\sin k_{y} \notag \\
d_{3}(\bm{k}) &=&M-2B\big(2-\cos k_{x}-\cos k_{y}\big), \notag
\end{eqnarray}
with $\sigma_{x,y,z}$ the Pauli matrices acting on the space of two orbitals.
The Chern number of this model is 1 when $B/M>0$, so there is a pair of topological edge states in the bulk gap $(-\frac{M}{2},\frac{M}{2})$. Due to their topological origin, the edge states contribute a quantized conductance $1\times \frac{e^2}{h}$ that is robust against elastic backscattering.
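
For completeness, the Bloch matrix $h(\bm{k})$ of Eq. (\ref{EqBHZ2}) with the parameter values used in Fig. \ref{FigBHZ} can be sketched as follows (the offset $C$ does not enter Eq. (\ref{EqBHZ2}) and is therefore omitted):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bhz_hk(kx, ky, A=1.0, B=-1.0, D=0.0, M=-2.0):
    # 2x2 spin-up BHZ Bloch Hamiltonian
    g = 2 - np.cos(kx) - np.cos(ky)
    d0 = -2 * D * g
    d1 = A * np.sin(kx)
    d2 = -A * np.sin(ky)
    d3 = M - 2 * B * g
    return (d0 * np.eye(2) + d1 * sx
            + d2 * sy + d3 * sz)
\end{verbatim}
The real-space tight-binding matrix used in the transport simulation follows from the standard substitution of $e^{ik_{x,y}}$ by nearest-neighbor translation operators.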
Figure \ref{FigBHZ} shows the numerical results (black lines) for this topological model obtained with our Chebyshev approach, with chain shaped leads as illustrated in Fig. \ref{FigImprovements}(a).
Panels (a) and (b) are for conductor sizes $60 \times 60$ and $80 \times 80$, respectively, and the red lines mark the reference position of the quantized conductance. In most of the bulk gap region, the simulated conductance is perfectly consistent with that predicted by the topological invariant theory. For example, in Fig. \ref{FigBHZ}(a) the numerical match is better than $99.5\%$ near the gap center $E=0$. As in the previous examples, larger conductor sizes need more Chebyshev terms to reach the ideal quantum transport value. Moreover, the transport near the gap edges is more sensitive to scattering, a natural consequence of bulk-edge mixing\cite{BulkEdge}.
\subsection{Square Lattice Conductor with Finite Lead Approximation}
\begin{figure*}[htbp]
\centering
\includegraphics*[width=1.0\textwidth]{Fig7.eps}
\centering
\caption{Similar to Fig. \ref{FigDifference}, but with black solid lines for Example D, with the finite square lattice leads introduced in Section V B. The lengths of the leads are (a) $40\times L$, (b) $30\times L$, and (c) $25\times L$. The numbers of Chebyshev terms are (a) $N=5000$, (b) $N=10000$, and (c) $N=20000$. Red dashed lines are the results of a direct matrix evaluation of Eq. (\ref{EqnTransmission}) without KPM, as a reference. }
\label{FigFiniteDevice}
\end{figure*}
In Fig. \ref{FigFiniteDevice}, we present the results of the finite lead approximation introduced in Section V B.
Typically, the necessary length of the finite lead is less than 50 times the conductor length, $N_x^{L},N_x^{R}\lesssim 50\times L$, to achieve a relative error below $2\%$ throughout most of the energy spectrum. This requires a sparse matrix of dimension $\lesssim 100\times L\times W$ to store the Hamiltonian of the conductor \emph{and} leads. This matrix is structurally simpler than, and usually not larger than, $\bm{R}$ [Eq. (\ref{EqR2})] in the standard bath KPM, whose dimension $(2N+L)\times W$ depends on the number of Chebyshev terms $N$ (typically ${\sim}10^3$--$10^4$). Moreover, the calculation of the self energies is also much simpler and faster than in the
standard bath KPM, since it is now a one-shot process and no self-consistent loops are needed.
We now show that this method is also much faster than the direct matrix evaluation of Eq. (\ref{EqnTransmission}) without KPM. In Fig. \ref{FigTime}, the computation times of these two methods are plotted as functions of the length of the square-shaped conductor. Here, ``computation time'' means the full time needed to obtain one curve like those in Fig. \ref{FigFiniteDevice}, by calculating transmissions at 800 energy points. The KPM with finite leads is numerically far more advantageous than the traditional direct matrix calculation. As discussed above, since the Chebyshev coefficients contain information on the full energy spectrum, this improved KPM method becomes even more advantageous when data at more energy points are needed. Furthermore, for irregularly shaped conductors with an irregular Hamiltonian matrix structure, where the calculation cannot be reduced to a layer-to-layer recursion\cite{QuantumTransport,Recursive}, the method we propose will be particularly applicable.
\begin{figure}[htbp]
\centering
\includegraphics*[width=0.45\textwidth,bb=30 200 495 560]{Fig8.eps}
\centering
\caption{ The computation times for obtaining one $T$ curve (by scanning over 800 energy points) as functions of the length $L$ of a square conductor, using the KPM with finite leads (red line and square dots) and the direct matrix evaluation of Eq. (\ref{EqnTransmission}) without KPM (black line and circle dots). }
\label{FigTime}
\end{figure}
\section{Summary and Outlook}
In summary, we have introduced the Chebyshev polynomial method for the Landauer-B\"{u}ttiker formula of a two-terminal transport device, as a generalization of the standard bath technique of KPM.
In this formulation, the dressed Green's function is expressed in terms of Chebyshev polynomials of
the matrix $\bm{R}$ defined in Eq. (\ref{EqR}) or Eq. (\ref{EqR2}), and the self energies are calculated through the dressed Green's function in a self-consistent way. In this process, the most resource-consuming step is the calculation of the self energies of the leads. A simple remedy is to reduce the topology of the leads to parallel, decoupled atomic chains, at the price of additional scattering at the interfaces. Another remedy is to approximate the leads by finite ones of sufficient length. The latter algorithm avoids the complicated matrix manipulations of the standard bath KPM (especially the self-consistent determination of the self energies) and also avoids the notable boundary scattering of the chain shaped lead method. Our numerical experiments verify that this method has a much lower numerical cost than the traditional direct matrix calculation without KPM.

Since the leads themselves are not the object of study, there is wide freedom in choosing them. One direction for future work is to find appropriate designs of the leads or of the lead-conductor coupling\cite{JHuangPhD,JTLu2014,Zelovich2015,LeadGeometry}, with low resource demands in the Chebyshev polynomial representation and with small backscattering at the interfaces to the conductor. Furthermore, our method can also be generalized to Chebyshev formulations of linear responses of other degrees of freedom in quantum transport structures\cite{Thermal2,Optical,PhononTransport,Mosso2019}.
\begin{acknowledgments}
We thank Prof. S.-J. Yuan (Wuhan University) for beneficial discussions. This work was supported by National Natural Science
Foundation of China under Grant Nos. 11774336 and 61427901. YYZ was also supported by the Starting Research Fund from Guangzhou University under Grant No. RQ2020082.
\end{acknowledgments}
\section{DATA AVAILABILITY}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\nocite{*}
\section{Motivation}
Chromium is considered the archetypal itinerant antiferromagnet~\cite{1988_Fawcett_RevModPhys, 1994_Fawcett_RevModPhys}. Interestingly, it shares its body-centered cubic crystal structure $Im\overline{3}m$ with the archetypal itinerant ferromagnet $\alpha$-iron and, at the melting temperature, with all compositions Fe$_{x}$Cr$_{1-x}$~\cite{2010_Okamoto_Book}. As a result, the Cr--Fe system offers the possibility to study the interplay of two fundamental forms of magnetic order in the same crystallographic environment.
Chromium exhibits transverse spin-density wave order below a N\'{e}el temperature $T_{\mathrm{N}} = 311$~K and longitudinal spin-density wave order below $T_{\mathrm{SF}} = 123$~K~\cite{1988_Fawcett_RevModPhys}. Under substitutional doping with iron, the longitudinal spin-density wave order becomes commensurate at $x = 0.02$. For $0.04 < x$, only commensurate antiferromagnetic order is observed~\cite{1967_Ishikawa_JPhysSocJpn, 1980_Babic_JPhysChemSolids, 1983_Burke_JPhysFMetPhys_I}. The N\'{e}el temperature decreases at first linearly with increasing $x$ and vanishes around $x \approx 0.15$~\cite{1967_Ishikawa_JPhysSocJpn, 1976_Suzuki_JPhysSocJpn, 1978_Burke_JPhysFMetPhys, 1980_Babic_JPhysChemSolids, 1983_Burke_JPhysFMetPhys_I}. Increasing $x$ further, a putative lack of long-range magnetic order~\cite{1978_Burke_JPhysFMetPhys} is followed by the onset of ferromagnetic order at $x \approx 0.18$ with a monotonic increase of the Curie temperature up to $T_{\mathrm{C}} = 1041$~K in pure $\alpha$-iron~\cite{1963_Nevitt_JApplPhys, 1975_Loegel_JPhysFMetPhys, 1980_Fincher_PhysRevLett, 1981_Shapiro_PhysRevB, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}.
The suppression of magnetic order is reminiscent of quantum critical systems under pressure~\cite{2001_Stewart_RevModPhys, 2007_Lohneysen_RevModPhys, 2008_Broun_NatPhys}, as substitutional doping of chromium with iron decreases the unit cell volume. In comparison to stoichiometric systems tuned by hydrostatic pressure, however, disorder and local strain are expected to play a crucial role in Fe$_{x}$Cr$_{1-x}$. This conjecture is consistent with reports on superparamagnetic behavior for $0.20 \leq x \leq 0.29$~\cite{1975_Loegel_JPhysFMetPhys}, mictomagnetic behavior~\footnote{In mictomagnetic materials, the virgin magnetic curves recorded in magnetization measurements as a function of field lie outside of the hysteresis loops recorded when starting from high field~\cite{1976_Shull_SolidStateCommunications}.} gradually evolving towards ferromagnetism for $0.09 \leq x \leq 0.23$~\cite{1975_Shull_AIPConferenceProceedings}, and spin-glass behavior for $0.14 \leq x \leq 0.19$~\cite{1979_Strom-Olsen_JPhysFMetPhys, 1980_Babic_JPhysChemSolids, 1981_Shapiro_PhysRevB, 1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}.
Despite the rather unique combination of properties, notably a metallic spin glass emerging at the border of both itinerant antiferromagnetic and ferromagnetic order, comprehensive studies addressing the magnetic properties of Fe$_{x}$Cr$_{1-x}$ in the concentration range of putative quantum criticality are lacking. In particular, a classification of the spin-glass regime, to the best of our knowledge, has not been addressed before.
Here, we report a study of polycrystalline samples of Fe$_{x}$Cr$_{1-x}$ covering the concentration range $0.05 \leq x \leq 0.30$, i.e., from antiferromagnetic doped chromium well into the ferromagnetically ordered state of doped iron. The compositional phase diagram inferred from magnetization and ac susceptibility measurements is in agreement with previous reports~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. As the perhaps most notable new observation, we identify a precursor phenomenon preceding the onset of spin-glass behavior in the imaginary part of the ac susceptibility. For the spin-glass state, analysis of ac susceptibility data recorded at different excitation frequencies by means of the Mydosh parameter, power-law fits, and a Vogel--Fulcher ansatz establishes a crossover from cluster-glass to superparamagnetic behavior as a function of increasing $x$. Microscopic evidence for this evolution is provided by neutron depolarization, indicating an increase of the size of ferromagnetic clusters with $x$.
Our paper is organized as follows. In Sec.~\ref{sec:methods}, the preparation of the samples and their metallurgical characterization by means of x-ray powder diffraction are reported. In addition, experimental details are briefly described. Providing a first point of reference, the presentation of the experimental results starts in Sec.~\ref{sec:results} with the compositional phase diagram inferred in our study, before turning to a detailed description of the ac susceptibility and magnetization data. Next, neutron depolarization data are presented, which allow the size of ferromagnetically ordered clusters to be extracted from exponential fits. Exemplary data on the specific heat, electrical resistivity, and high-field magnetization for $x = 0.15$ complete this section. In Sec.~\ref{sec:discussion}, information on the nature of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$ and its evolution under increasing $x$ is inferred from an analysis of ac susceptibility data recorded at different excitation frequencies. Finally, in Sec.~\ref{sec:conclusion} the central findings of this study are summarized.
\section{Experimental methods}
\label{sec:methods}
Polycrystalline samples of Fe$_{x}$Cr$_{1-x}$ for $0.05 \leq x \leq 0.30$ ($x = 0.05$, 0.10, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.25, 0.30) were prepared from iron (4N) and chromium (5N) pieces by means of radio-frequency induction melting in a bespoke high-purity furnace~\cite{2016_Bauer_RevSciInstrum}. No losses in weight or signatures of evaporation were observed. In turn, the composition is denoted in terms of the weighed-in amounts of starting material. Prior to the synthesis, the furnace was pumped to ultra-high vacuum and subsequently flooded with 1.4~bar of argon (6N) treated by a point-of-use gas purifier yielding a nominal purity of 9N. For each sample, the starting elements were melted in a water-cooled Hukin crucible and the resulting specimen was kept molten for about 10~min to promote homogenization. Finally, the sample was quenched to room temperature. With this approach, the imminent exsolution of the compound into two phases upon cooling was prevented, as suggested by the binary phase diagram of the Fe--Cr system reported in Ref.~\cite{2010_Okamoto_Book}. From the resulting ingots, samples were cut with a diamond wire saw.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure1}
\caption{\label{fig:1}X-ray powder diffraction data of Fe$_{x}$Cr$_{1-x}$. (a)~Diffraction pattern for $x = 0.15$. The Rietveld refinement (red curve) is in excellent agreement with the experimental data and confirms the $Im\overline{3}m$ structure. (b)~Diffraction pattern around the (011) peak for all concentrations studied. For clarity, the intensities are normalized and curves are offset by 0.1. Inset: Linear decrease of the lattice constant $a$ with increasing $x$. The solid gray line represents a guide to the eye.}
\end{figure}
Powder was prepared from a small piece of each ingot using an agate mortar. X-ray powder diffraction at room temperature was carried out on a Huber G670 diffractometer in Guinier geometry. Figure~\ref{fig:1}(a) shows the diffraction pattern for $x = 0.15$, representing typical data. A Rietveld refinement based on the $Im\overline{3}m$ structure yields a lattice constant $a = 2.883$~\AA.
Refinement and experimental data are in excellent agreement, indicating a high structural quality and homogeneity of the polycrystalline samples. With increasing $x$, the diffraction peaks shift to larger angles, as shown for the (011) peak in Fig.~\ref{fig:1}(b), consistent with a linear decrease of the lattice constant in accordance with Vegard's law.
Measurements of the magnetic properties and neutron depolarization were carried out on thin discs with a thickness of ${\sim}0.5$~mm and a diameter of ${\sim}10$~mm. Specific heat and electrical transport for $x = 0.15$ were measured on a cube of 2~mm edge length and a platelet of dimensions $5\times2\times0.5~\textrm{mm}^{3}$, respectively.
The magnetic properties, the specific heat, and the electrical resistivity were measured in a Quantum Design physical properties measurement system. The magnetization was measured by means of an extraction technique. If not stated otherwise, the ac susceptibility was measured at an excitation amplitude of 0.1~mT and an excitation frequency of 1~kHz. Additional ac susceptibility data for the analysis of the spin-glass behavior were recorded at frequencies ranging from 10~Hz to 10~kHz. The specific heat was measured using a quasi-adiabatic large-pulse technique with heat pulses of about 30\% of the current temperature~\cite{2013_Bauer_PhysRevLett}. For the measurements of the electrical resistivity the samples were contacted in a four-terminal configuration and a bespoke setup was used based on a lock-in technique at an excitation amplitude of 1~mA and an excitation frequency of 22.08~Hz. Magnetic field and current were applied perpendicular to each other, corresponding to the transverse magneto-resistance.
Neutron depolarization measurements were carried out at the instrument ANTARES~\cite{2015_Schulz_JLarge-ScaleResFacilJLSRF} at the Heinz Maier-Leibniz Zentrum~(MLZ). The incoming neutron beam had a wavelength $\lambda = 4.13$~\AA\ and a wavelength spread $\Delta\lambda / \lambda = 10\%$. It was polarized using V-cavity supermirrors. The beam was transmitted through the sample and its polarization analyzed using a second polarizing V-cavity. While nonmagnetic samples do not affect the polarization of the neutron beam, the presence of ferromagnetic domains in general results in a precession of the neutron spins. In turn, the transmitted polarization with respect to the polarization axis of the incoming beam is reduced. This effect is referred to as neutron depolarization. Low temperatures and magnetic fields for this experiment were provided by a closed-cycle refrigerator and water-cooled Helmholtz coils, respectively. A small guide field of 0.5~mT was generated by means of permanent magnets. For further information on the neutron depolarization setup, we refer to Refs.~\cite{2015_Schmakat_PhD, 2017_Seifert_JPhysConfSer, 2019_Jorba_JMagnMagnMater}.
All data shown as a function of temperature in this paper were recorded at a fixed magnetic field under increasing temperature. Depending on how the sample was cooled to 2~K prior to the measurement, three temperature versus field histories are distinguished. The sample was either cooled (i)~in zero magnetic field (zero-field cooling, zfc), (ii)~with the field at the value applied during the measurement (field cooling, fc), or (iii)~in a field of 250~mT (high-field cooling, hfc). For the magnetization data as a function of field, the sample was cooled in zero field. Subsequently, data were recorded during the initial increase of the field to $+250$~mT corresponding to a magnetic virgin curve, followed by a decrease to $-250$~mT, and a final increase back to $+250$~mT.
\section{Experimental results}
\label{sec:results}
\subsection{Phase diagram and bulk magnetic properties}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure2}
\caption{\label{fig:2}Zero-field composition--temperature phase diagram of Fe$_{x}$Cr$_{1-x}$. Data inferred from ac susceptibility, $\chi_{\mathrm{ac}}$, and neutron depolarization are combined with data reported by Burke and coworkers~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}. Paramagnetic~(PM), antiferromagnetic~(AFM), ferromagnetic~(FM), and spin-glass~(SG) regimes are distinguished. A precursor phenomenon is observed above the dome of spin-glass behavior (purple line). (a)~Overview. (b) Close-up view of the regime of spin-glass behavior as marked by the dashed box in panel (a).}
\end{figure}
The presentation of the experimental results starts with the compositional phase diagram of Fe$_{x}$Cr$_{1-x}$, illustrating central results of our study. An overview of the entire concentration range studied, $0.05 \leq x \leq 0.30$, and a close-up view around the dome of spin-glass behavior are shown in Figs.~\ref{fig:2}(a) and \ref{fig:2}(b), respectively. Characteristic temperatures inferred in this study are complemented by values reported by Burke and coworkers~\cite{1983_Burke_JPhysFMetPhys_I, 1983_Burke_JPhysFMetPhys_II, 1983_Burke_JPhysFMetPhys_III}, in good agreement with our results. Comparing the different physical properties in our study, we find that the imaginary part of the ac susceptibility displays the most pronounced signatures at the various phase transitions and crossovers. Therefore, the imaginary part was used to define the characteristic temperatures as discussed in the following. The same values are then marked in the different physical properties to highlight the consistency with alternative definitions of the characteristic temperatures based on these properties.
Four regimes may be distinguished in the phase diagram, namely paramagnetism at high temperatures (PM, no shading), antiferromagnetic order for small values of $x$ (AFM, green shading), ferromagnetic order for larger values of $x$ (FM, blue shading), and spin-glass behavior at low temperatures (SG, orange shading). We note that faint signatures reminiscent of those attributed to the onset of ferromagnetic order are observed in the susceptibility and neutron depolarization for $0.15 \leq x \leq 0.18$ (light blue shading). In addition, a distinct precursor phenomenon preceding the spin-glass behavior is observed at the temperature $T_{\mathrm{X}}$ (purple line) across a wide concentration range. Before elaborating on the underlying experimental data, we briefly summarize the key characteristics of the different regimes.
We identify the onset of antiferromagnetic order below the N\'{e}el temperature $T_{\mathrm{N}}$ for $x = 0.05$ and $x = 0.10$ from a sharp kink in the imaginary part of the ac susceptibility, where the values of $T_{\mathrm{N}}$ are consistent with previous reports~\cite{1978_Burke_JPhysFMetPhys, 1983_Burke_JPhysFMetPhys_I}. As may be expected, the transition is not sensitive to changes of the magnetic field, excitation frequency, or cooling history. The absolute value of the magnetization is small, and it increases essentially linearly as a function of field in the parameter range studied.
We identify the emergence of ferromagnetic order below the Curie temperature $T_{\mathrm{C}}$ for $0.18 \leq x$ from a maximum in the imaginary part of the ac susceptibility that is suppressed in small magnetic fields of a few millitesla. This interpretation is corroborated by the onset of neutron depolarization. The transition is not sensitive to changes of the excitation frequency or cooling history. The magnetic field dependence of the magnetization exhibits a characteristic S-shape with almost vanishing hysteresis, reaching quasi-saturation at small fields. Both characteristics are expected for a soft ferromagnetic material such as iron. For $0.15 \leq x \leq 0.18$, faint signatures reminiscent of those observed for $0.18 \leq x$, such as a small shoulder instead of a maximum in the imaginary part of the ac susceptibility, are interpreted in terms of an incipient onset of ferromagnetic order.
We identify reentrant spin-glass behavior below a freezing temperature $T_{\mathrm{g}}$ for $0.10 \leq x \leq 0.25$ from a pronounced maximum in the imaginary part of the ac susceptibility that is suppressed at intermediate magnetic fields of the order of 50~mT. The transition shifts to lower temperatures with increasing excitation frequency, representing a hallmark of spin glasses. Further key indications for spin-glass behavior below $T_{\mathrm{g}}$ are a branching between different cooling histories in the temperature dependence of the magnetization and neutron depolarization as well as mictomagnetic behavior in the field dependence of the magnetization, i.e., the virgin magnetic curve lies outside the hysteresis loop obtained when starting from high magnetic field.
In addition, we identify a precursor phenomenon preceding the onset of spin-glass behavior at a temperature $T_{\mathrm{X}}$ based on a maximum in the imaginary part of the ac susceptibility that is suppressed in small magnetic fields, reminiscent of the ferromagnetic transition. With increasing excitation frequency the maximum shifts to lower temperatures, albeit at a smaller rate than the freezing temperature $T_{\mathrm{g}}$. Interestingly, the magnetization and neutron depolarization exhibit no signatures at $T_{\mathrm{X}}$.
\subsection{Zero-field ac susceptibility}
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure3}
\caption{\label{fig:3}Zero-field ac susceptibility as a function of temperature for all samples studied. For each concentration, real part (Re\,$\chi_{\mathrm{ac}}$, left column) and imaginary part (Im\,$\chi_{\mathrm{ac}}$, right column) of the susceptibility are shown. Note the logarithmic temperature scale and the increasing scale on the ordinate with increasing $x$. Triangles mark temperatures associated with the onset of antiferromagnetic order at $T_{\mathrm{N}}$ (green), spin-glass behavior at $T_{\mathrm{g}}$ (red), ferromagnetic order at $T_{\mathrm{C}}$ (blue), and the precursor phenomenon at $T_{\mathrm{X}}$ (purple). The corresponding values are inferred from Im\,$\chi_{\mathrm{ac}}$, see text for details.}
\end{figure}
The real and imaginary parts of the zero-field ac susceptibility on a logarithmic temperature scale are shown in Fig.~\ref{fig:3} for each sample studied. Characteristic temperatures are inferred from the imaginary part and marked by colored triangles in both quantities. While the identification of the underlying transitions and crossovers will be justified further in terms of the dependence of the signatures on magnetic field, excitation frequency, and history, as elaborated below, the corresponding temperatures are already referred to as $T_{\mathrm{N}}$, $T_{\mathrm{C}}$, $T_{\mathrm{g}}$, and $T_{\mathrm{X}}$ in the following.
For small iron concentrations, such as $x = 0.05$ shown in Fig.~\ref{fig:3}(a), the real part is small and essentially featureless, with the exception of an increase at low temperatures that may be attributed to the presence of ferromagnetic impurities, i.e., a so-called Curie tail~\cite{1972_DiSalvo_PhysRevB, 2014_Bauer_PhysRevB}. The imaginary part is also small but displays a kink at the N\'{e}el temperature $T_{\mathrm{N}}$. In metallic specimens, such as Fe$_{x}$Cr$_{1-x}$, part of the dissipation detected via the imaginary part of the ac susceptibility arises from the excitation of eddy currents at the surface of the sample. Eddy current losses scale with the resistivity~\cite{1998_Jackson_Book, 1992_Samarappuli_PhysicaCSuperconductivity} and in turn the kink at $T_{\mathrm{N}}$ reflects the distinct change of the electrical resistivity at the onset of long-range antiferromagnetic order.
When increasing the iron concentration to $x = 0.10$, as shown in Fig.~\ref{fig:3}(b), both the real and imaginary parts increase by one order of magnitude. Starting at $x = 0.10$, a broad maximum may be observed in the real part that indicates an onset of magnetic correlations, where the lack of further fine structure renders the extraction of more detailed information impossible. In contrast, the imaginary part exhibits several distinct signatures that, in combination with the data presented below, allow us to infer the phase diagram shown in Fig.~\ref{fig:2}. For $x = 0.10$, in addition to the kink at $T_{\mathrm{N}}$ a maximum may be observed at 3~K, which we attribute to the spin freezing at $T_{\mathrm{g}}$.
Further increasing the iron concentration to $x = 0.15$, as shown in Fig.~\ref{fig:3}(c), results again in an increase of both the real and imaginary parts by one order of magnitude. The broad maximum in the real part shifts to slightly larger temperatures. In the imaginary part, two distinct maxima are resolved, accompanied by a shoulder on their high-temperature side. From low to high temperatures, these signatures may be attributed to $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and a potential onset of ferromagnetism at $T_{\mathrm{C}}$. No signatures related to antiferromagnetism may be discerned. For $x = 0.16$ and 0.17, shown in Figs.~\ref{fig:3}(d) and \ref{fig:3}(e), both the real and imaginary parts remain qualitatively unchanged while their absolute values increase further. The characteristic temperatures shift slightly to larger values.
For $x = 0.18$, 0.19, 0.20, 0.21, and 0.22, shown in Figs.~\ref{fig:3}(f)--\ref{fig:3}(j), the size of the real and imaginary parts of the susceptibility remains essentially unchanged. The real part is best described in terms of a broad maximum that becomes increasingly asymmetric as the low-temperature extrapolation of the susceptibility increases with $x$. In the imaginary part, the signature ascribed to the onset of ferromagnetic order at $T_{\mathrm{C}}$ at larger concentrations develops into a clear maximum, overlapping with the maximum at $T_{\mathrm{X}}$ up to $x = 0.20$. For $x = 0.21$ and $x = 0.22$, three well-separated maxima may be attributed to the characteristic temperatures $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and $T_{\mathrm{C}}$. While both $T_{\mathrm{g}}$ and $T_{\mathrm{X}}$ stay almost constant with increasing $x$, $T_{\mathrm{C}}$ distinctly shifts to higher temperatures.
For $x = 0.25$, shown in Fig.~\ref{fig:3}(k), the signature attributed to $T_{\mathrm{X}}$ has vanished while $T_{\mathrm{g}}$ is suppressed to about 5~K. For $x = 0.30$, shown in Fig.~\ref{fig:3}(l), only the ferromagnetic transition at $T_{\mathrm{C}}$ remains and the susceptibility is essentially constant below $T_{\mathrm{C}}$. Note that the suppression of spin-glass behavior around $x = 0.25$ coincides with the percolation limit of 24.3\% in the crystal structure $Im\overline{3}m$, i.e., the limit above which long-range magnetic order is expected in spin-glass systems~\cite{1978_Mydosh_JournalofMagnetismandMagneticMaterials}. Table~\ref{tab:1} summarizes the characteristic temperatures for all samples studied, including an estimate of the associated errors.
\subsection{Magnetization and ac susceptibility under applied magnetic fields}
\begin{figure*}
\includegraphics[width=1.0\linewidth]{figure4}
\caption{\label{fig:4}Magnetization and ac susceptibility in magnetic fields up to 250~mT for selected concentrations (increasing from top to bottom). Triangles mark the temperatures $T_{\mathrm{N}}$ (green), $T_{\mathrm{g}}$ (red), $T_{\mathrm{C}}$ (blue), and $T_{\mathrm{X}}$ (purple). The values shown in all panels correspond to those inferred from Im\,$\chi_{\mathrm{ac}}$ in zero field. \mbox{(a1)--(f1)}~Real part of the ac susceptibility, Re\,$\chi_{\mathrm{ac}}$, as a function of temperature on a logarithmic scale for different magnetic fields. \mbox{(a2)--(f2)}~Imaginary part of the ac susceptibility, Im\,$\chi_{\mathrm{ac}}$. \mbox{(a3)--(f3)}~Magnetization for three different field histories, namely high-field cooling~(hfc), field cooling (fc), and zero-field cooling (zfc). \mbox{(a4)--(f4)}~Magnetization as a function of field at a temperature of 2~K after initial zero-field cooling. Arrows indicate the sweep directions. The scales of the ordinates for all quantities increase from top to bottom.}
\end{figure*}
To further substantiate the assignment of the signatures in the ac susceptibility to the different phases, their evolution under increasing magnetic field up to 250~mT and their dependence on the cooling history are illustrated in Fig.~\ref{fig:4}. For selected values of $x$, the temperature dependences of the real part of the ac susceptibility, the imaginary part of the ac susceptibility, and the magnetization, shown in the first three columns, are complemented by the magnetic field dependence of the magnetization at low temperature, $T = 2$~K, shown in the fourth column.
For small iron concentrations, such as $x = 0.05$ shown in Figs.~\ref{fig:4}(a1)--\ref{fig:4}(a4), both Re\,$\chi_{\mathrm{ac}}$ and Im\,$\chi_{\mathrm{ac}}$ remain qualitatively unchanged up to the highest fields studied. The associated stability of the transition at $T_{\mathrm{N}}$ under magnetic field represents a key characteristic of itinerant antiferromagnetism, which is also observed in pure chromium. Consistent with this behavior, the magnetization is small and increases essentially linearly in the field range studied. No dependence on the cooling history is observed.
For intermediate iron concentrations, such as $x = 0.15$, $x = 0.17$, and $x = 0.18$ shown in Figs.~\ref{fig:4}(b1) to \ref{fig:4}(d4), the broad maximum in Re\,$\chi_{\mathrm{ac}}$ is suppressed under increasing field. Akin to the situation in zero field, the evolution of the different characteristic temperatures is tracked in Im\,$\chi_{\mathrm{ac}}$. Here, the signatures associated with $T_{\mathrm{X}}$ and $T_{\mathrm{C}}$ prove to be highly sensitive to magnetic fields and are already suppressed in fields of about 2~mT. The maximum associated with the spin freezing at $T_{\mathrm{g}}$ is suppressed only at higher field values.
In the magnetization as a function of temperature, shown in Figs.~\ref{fig:4}(b3) to \ref{fig:4}(d3), a branching between different cooling histories may be observed below $T_{\mathrm{g}}$. Compared to data recorded after field cooling (fc), for which the temperature dependence of the magnetization is essentially featureless at $T_{\mathrm{g}}$, the magnetization at low temperatures is reduced for data recorded after zero-field cooling (zfc) and enhanced for data recorded after high-field cooling (hfc). Such a history dependence is typical for spin glasses~\cite{2015_Mydosh_RepProgPhys}, but is also observed in materials where the orientation and population of domains with a net magnetic moment play a role, such as conventional ferromagnets.
The spin-glass character below $T_{\mathrm{g}}$ is corroborated by the field dependence of the magnetization shown in Figs.~\ref{fig:4}(b4) to \ref{fig:4}(d4), which is perfectly consistent with the temperature dependence. Most notably, in the spin-glass regime at low temperatures, mictomagnetic behavior is observed, i.e., the magnetization of the magnetic virgin state obtained after initial zero-field cooling (red curve) is partly outside the hysteresis loop obtained when starting from the field-polarized state at large fields (blue curves)~\cite{1976_Shull_SolidStateCommunications}. This peculiar behavior is not observed in ferromagnets and represents a hallmark of spin glasses~\cite{1978_Mydosh_JournalofMagnetismandMagneticMaterials}.
For slightly larger iron concentrations, such as $x = 0.22$ shown in Figs.~\ref{fig:4}(e1) to \ref{fig:4}(e4), three maxima at $T_{\mathrm{g}}$, $T_{\mathrm{X}}$, and $T_{\mathrm{C}}$ are clearly separated. With increasing field, first the high-temperature maximum associated with $T_{\mathrm{C}}$ is suppressed, followed by the maxima at $T_{\mathrm{X}}$ and $T_{\mathrm{g}}$. The hysteresis loop at low temperatures is narrower, becoming akin to that of a conventional soft ferromagnet. For large iron concentrations, such as $x = 0.30$ shown in Figs.~\ref{fig:4}(f1) to \ref{fig:4}(f4), the evolution of Re\,$\chi_{\mathrm{ac}}$, Im\,$\chi_{\mathrm{ac}}$, and the magnetization as a function of magnetic field consistently corresponds to that of a conventional soft ferromagnet with a Curie temperature $T_{\mathrm{C}}$ of more than 200~K. For the ferromagnetic state observed here, all domains are aligned in fields exceeding ${\sim}50$~mT.
\begin{table}
\caption{\label{tab:1}Summary of the characteristic temperatures in Fe$_{x}$Cr$_{1-x}$ as inferred from the imaginary part of the ac susceptibility and neutron depolarization data. We distinguish the N\'{e}el temperature $T_{\mathrm{N}}$, the Curie temperature $T_{\mathrm{C}}$, the spin freezing temperature $T_{\mathrm{g}}$, and the precursor phenomenon at $T_{\mathrm{X}}$. Temperatures inferred from neutron depolarization data are denoted with the superscript `D'. For $T_{\mathrm{C}}^{\mathrm{D}}$, the errors were extracted from the fitting procedure (see below), while all other errors correspond to estimates of read-out errors.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
$x$ & $T_{\mathrm{N}}$ (K) & $T_{\mathrm{g}}$ (K) & $T_{\mathrm{X}}$ (K) & $T_{\mathrm{C}}$ (K) & $T_{\mathrm{g}}^{\mathrm{D}}$ (K) & $T_{\mathrm{C}}^{\mathrm{D}}$ (K) \\
\hline
0.05 & $240 \pm 5$ & - & - & - &- & - \\
0.10 & $190 \pm 5$ & $3 \pm 5$ & - & - & - & - \\
0.15 & - & $11 \pm 2$ & $23 \pm 3$ & $30 \pm 10$ & - & - \\
0.16 & - & $15 \pm 2$ & $34 \pm 3$ & $42 \pm 10$ & $18 \pm 5$ & $61 \pm 10$ \\
0.17 & - & $20 \pm 2$ & $36 \pm 3$ & $42 \pm 10$ & $23 \pm 5$ & $47 \pm 2$ \\
0.18 & - & $22 \pm 2$ & $35 \pm 3$ & $42 \pm 10$ & $22 \pm 5$ & $73 \pm 1$ \\
0.19 & - & $19 \pm 2$ & $37 \pm 5$ & $56 \pm 10$ & $25 \pm 5$ & $93 \pm 1$ \\
0.20 & - & $19 \pm 2$ & $35 \pm 5$ & $50 \pm 10$ & $24 \pm 5$ & $84 \pm 1$ \\
0.21 & - & $14 \pm 2$ & $35 \pm 5$ & $108 \pm 5$ & $25 \pm 5$ & $101 \pm 1$ \\
0.22 & - & $13 \pm 2$ & $32 \pm 5$ & $106 \pm 5$ & $21 \pm 5$ & $100 \pm 1$ \\
0.25 & - & $5 \pm 5$ & - & $200 \pm 5$ & - & - \\
0.30 & - & - & - & $290 \pm 5$ & - & - \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Neutron depolarization}
\begin{figure}
\includegraphics{figure5}
\caption{\label{fig:5}Remaining neutron polarization after transmission through 0.5~mm of Fe$_{x}$Cr$_{1-x}$ as a function of temperature for $0.15 \leq x \leq 0.22$ (increasing from top to bottom). Data were measured in zero magnetic field under increasing temperature following initial zero-field cooling (zfc) or high-field cooling (hfc). Colored triangles mark the Curie transition $T_{\mathrm{C}}$ and the freezing temperature $T_{\mathrm{g}}$. Orange solid lines are fits to the experimental data, see text for details.}
\end{figure}
The neutron depolarization of samples in the central composition range $0.15 \leq x \leq 0.22$ was studied to gain further insight into the microscopic nature of the different magnetic states. Figure~\ref{fig:5} shows the polarization, $P$, of the transmitted neutron beam with respect to the polarization axis of the incoming neutron beam as a function of temperature. In the presence of ferromagnetically ordered domains or clusters that are large enough to induce a Larmor precession of the neutron spin during its transit, adjacent neutron trajectories pick up different Larmor phases due to the domain distribution in the sample. When averaged over the pixel size of the detector, this process results in polarization values below 1, also referred to as neutron depolarization. For a pedagogical introduction to the time and space resolution of this technique, we refer to Refs.~\cite{2008_Kardjilov_NatPhys, 2010_Schulz_PhD, 2015_Schmakat_PhD, _Seifert_tobepublished}.
For $x = 0.15$, shown in Fig.~\ref{fig:5}(a), no depolarization is observed. For $x = 0.16$, shown in Fig.~\ref{fig:5}(b), a weak decrease of polarization emerges below a point of inflection at $T_{\mathrm{C}} \approx 60$~K (blue triangle). The value of $T_{\mathrm{C}}$ may be inferred from a fit to the experimental data as described below and is in reasonable agreement with the value inferred from the susceptibility. The partial character of the depolarization, $P \approx 0.96$ in the low-temperature limit, indicates that ferromagnetically ordered domains of sufficient size occupy only a fraction of the sample volume. At lower temperatures, a weak additional change of slope may be attributed to the spin freezing at $T_{\mathrm{g}}$ (red triangle).
For $x = 0.17$, shown in Fig.~\ref{fig:5}(c), both signatures become more pronounced. In particular, data recorded after zero-field cooling (zfc) and high-field cooling (hfc) branch below $T_{\mathrm{g}}$, akin to the branching observed in the magnetization. The underlying dependence of the microscopic magnetic texture on the cooling history is typical for a spin glass. Note that the amount of branching varies from sample to sample. Such pronounced sample dependence is not uncommon in spin-glass systems, though the microscopic origin of these irregularities in Fe$_{x}$Cr$_{1-x}$ remains to be resolved.
When further increasing $x$, shown in Figs.~\ref{fig:5}(c)--\ref{fig:5}(h), the transition temperature $T_{\mathrm{C}}$ shifts to larger values and the depolarization becomes more pronounced until essentially reaching $P = 0$ at low temperatures for $x = 0.22$. No qualitative changes are observed around $x = 0.19$, i.e., the composition for which the onset of long-range ferromagnetic order was reported previously~\cite{1983_Burke_JPhysFMetPhys_II}. Instead, the gradual evolution as a function of $x$ suggests that ferromagnetically ordered domains start to emerge already for $x \approx 0.15$ and continuously increase in size and/or number with $x$. This conjecture is also consistent with the appearance of faint signatures in the susceptibility. Note that there are no signatures related to $T_{\mathrm{X}}$.
In order to infer quantitative information, the neutron depolarization data were fitted using the formalism of Halpern and Holstein~\cite{1941_Halpern_PhysRev}. Here, spin-polarized neutrons are considered as they travel through a sample with randomly oriented ferromagnetic domains. When the rotation of the neutron spin is small for each domain, i.e., when $\omega_{\mathrm{L}}t \ll 2\pi$, where $\omega_{\mathrm{L}}$ is the Larmor frequency and $t$ the time required to transit a domain, the temperature dependence of the polarization of the transmitted neutrons may be approximated as
\begin{equation}\label{equ1}
P(T) = \mathrm{exp}\left[-\frac{1}{3}\gamma^{2}B^{2}_{\mathrm{0}}(T)\frac{d\delta}{v^{2}}\right].
\end{equation}
Here, $\gamma$ is the gyromagnetic ratio of the neutron, $B_{\mathrm{0}}(T)$ is the temperature-dependent average magnetic flux per domain, $d$ is the sample thickness along the flight direction, $\delta$ is the mean magnetic domain size, and $v$ is the speed of the neutrons. In mean-field approximation, the temperature dependence of the magnetic flux per domain is given by
\begin{equation}\label{equ2}
B_{\mathrm{0}}(T) = \mu_{0} M_{0} \left(1 - \frac{T}{T_{\mathrm{C}}}\right)^{\beta}
\end{equation}
where $\mu_{0}$ is the vacuum permeability, $M_{0}$ is the spontaneous magnetization in each domain, and $\beta$ is the critical exponent. In the following, we use the magnetization value measured at 2~K in a magnetic field of 250~mT as an approximation for $M_{0}$ and set $\beta = 0.5$, i.e., the textbook value for a mean-field ferromagnet. Note that $M_{0}$ more than triples when increasing the iron concentration from $x = 0.15$ to $x = 0.22$, as shown in Tab.~\ref{tab:2}, suggesting that correlations become increasingly important.
Fitting the temperature dependence of the polarization for temperatures above $T_{\mathrm{g}}$ according to Eq.~\eqref{equ1} yields mean values for the Curie temperature $T_{\mathrm{C}}$ and the domain size $\delta$, cf.\ solid orange lines in Fig.~\ref{fig:5} tracking the experimental data. The results of the fitting are summarized in Tab.~\ref{tab:2}. The values of $T_{\mathrm{C}}$ inferred this way are typically slightly higher than those inferred from the ac susceptibility, cf.\ Tab.~\ref{tab:1}. This shift could be related to depolarization caused by slow ferromagnetic fluctuations prevailing at temperatures just above the onset of static magnetic order. Yet, both values of $T_{\mathrm{C}}$ are in reasonable agreement. The mean size of ferromagnetically aligned domains or clusters, $\delta$, increases with increasing $x$, reflecting the increased density of iron atoms. As will be shown below, this general trend is also corroborated by an analysis of the Mydosh parameter, indicating that Fe$_{x}$Cr$_{1-x}$ transforms from a cluster glass for small $x$ to a superparamagnet for larger $x$.
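For illustration, the fitting procedure may be sketched in Python as follows; the sample thickness and $M_{0}$ follow the values quoted for $x = 0.18$, while the neutron speed, the noise level, and the temperature grid are placeholder assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

GAMMA = 1.832e8      # neutron gyromagnetic ratio (rad / s / T)
MU0 = 4e-7 * np.pi   # vacuum permeability (T m / A)
D = 0.5e-3           # sample thickness (m)
V = 500.0            # neutron speed (m / s); assumed value
M0 = 1.24e5          # magnetization at 2 K and 250 mT (A / m), x = 0.18
BETA = 0.5           # mean-field critical exponent

def polarization(T, Tc, delta):
    # Halpern--Holstein depolarization, Eqs. (1) and (2)
    B0 = MU0 * M0 * np.clip(1.0 - T / Tc, 0.0, None) ** BETA
    return np.exp(-GAMMA**2 * B0**2 * D * delta / (3.0 * V**2))

# placeholder temperature scan above T_g
T_data = np.linspace(25.0, 120.0, 40)
P_data = polarization(T_data, 73.0, 3.2e-6) + 0.005 * np.random.randn(40)

popt, pcov = curve_fit(polarization, T_data, P_data, p0=(80.0, 1e-6))
Tc_fit, delta_fit = popt
Tc_err, delta_err = np.sqrt(np.diag(pcov))  # 1-sigma fit errors
\end{verbatim}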
\begin{table}
\caption{\label{tab:2}Summary of the Curie temperature, $T_{\mathrm{C}}$, and the mean domain size, $\delta$, in Fe$_{x}$Cr$_{1-x}$ as inferred from neutron depolarization studies. Also shown is the magnetization measured at a temperature of 2~K in a magnetic field of 250~mT, ${M_{0}}$.}
\begin{ruledtabular}
\begin{tabular}{cccc}
$x$ & $T_{\mathrm{C}}^{\mathrm{D}}$ (K) & $\delta$ ($\upmu$m) & $M_{0}$ ($10^{5}$A/m) \\
\hline
0.15 & - & - & 0.70 \\
0.16 & $61 \pm 10$ & $0.61 \pm 0.10$ & 0.84 \\
0.17 & $47 \pm 2$ & $2.12 \pm 0.15$ & 0.96 \\
0.18 & $73 \pm 1$ & $3.17 \pm 0.07$ & 1.24 \\
0.19 & $93 \pm 1$ & $3.47 \pm 0.02$ & 1.64 \\
0.20 & $84 \pm 1$ & $4.67 \pm 0.03$ & 1.67 \\
0.21 & $101 \pm 1$ & $3.52 \pm 0.03$ & 2.18 \\
0.22 & $100 \pm 1$ & $5.76 \pm 0.13$ & 2.27\\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Specific heat, high-field magnetometry, and electrical resistivity}
\begin{figure}
\includegraphics{figure6}
\caption{\label{fig:6}Low-temperature properties of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. (a)~Specific heat as a function of temperature. Zero-field data (black curve) and an estimate for the phonon contribution using the Debye model (gray curve) are shown. Inset: Specific heat at high temperatures approaching the Dulong--Petit limit. (b)~Specific heat divided by temperature. After subtraction of the phonon contribution, magnetic contributions at low temperatures are observed (green curve). (c)~Magnetic contribution to the entropy obtained by numerical integration. (d)~Magnetization as a function of field up to $\pm9$~T for different temperatures. (e)~Electrical resistivity as a function of temperature for different applied field values.}
\end{figure}
To obtain a complete picture of the low-temperature properties of Fe$_{x}$Cr$_{1-x}$, the magnetic properties at low fields presented so far are complemented by measurements of the specific heat, high-field magnetization, and electrical resistivity, using the example of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$.
The specific heat as a function of temperature measured in zero magnetic field is shown in Fig.~\ref{fig:6}(a). At high temperatures, the specific heat approaches the Dulong--Petit limit of $C_{\mathrm{DP}} = 3R = 24.9~\mathrm{J}\,\mathrm{mol}^{-1}\mathrm{K}^{-1}$, as illustrated in the inset. With decreasing temperature, the specific heat monotonically decreases, lacking pronounced anomalies at the different characteristic temperatures.
The specific heat at high temperatures is dominated by the phonon contribution that is described well by a Debye model with a Debye temperature $\mathit{\Theta}_{\mathrm{D}} = 460$~K, which is slightly smaller than the values reported for $\alpha$-iron (477~K) and chromium (606~K)~\cite{2003_Tari_Book}. As shown in terms of the specific heat divided by temperature, $C/T$, in Fig.~\ref{fig:6}(b), the subtraction of this phonon contribution from the measured data highlights the presence of magnetic contributions to the specific heat below ${\sim}$30~K (green curve). As typical for spin-glass systems, no sharp signatures are observed and the total magnetic contribution to the specific heat is rather small~\cite{2015_Mydosh_RepProgPhys}. This finding is substantiated by the entropy $S$ as calculated by means of extrapolating $C/T$ to zero temperature and numerically integrating
\begin{equation}
S(T) = \int_{0}^{T}\frac{C(T)}{T}\,\mathrm{d}T.
\end{equation}
As shown in Fig.~\ref{fig:6}(c), the magnetic contribution to the entropy released up to 30~K amounts to about $0.04~R\ln2$, which corresponds to only a small fraction of the total magnetic moment.
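A minimal numerical sketch of this integration, using an illustrative magnetic specific heat and assuming that $C/T$ extrapolates to zero at $T = 0$, reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

T = np.linspace(2.0, 30.0, 100)                    # temperature grid (K)
C_mag = 0.05 * np.exp(-((T - 10.0) / 8.0) ** 2)    # illustrative C_mag (J/mol/K)

# prepend T = 0, assuming C/T extrapolates to zero there
T_full = np.concatenate(([0.0], T))
C_over_T = np.concatenate(([0.0], C_mag / T))

S = cumulative_trapezoid(C_over_T, T_full, initial=0.0)
R = 8.314
print(S[-1] / (R * np.log(2)))   # entropy at 30 K in units of R ln 2
\end{verbatim}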
Insight into the evolution of the magnetic properties under high magnetic fields may be gained from the magnetization as measured up to $\pm9$~T, shown in Fig.~\ref{fig:6}(d). The magnetization is unsaturated up to the highest fields studied and qualitatively unchanged under increasing temperature, only moderately decreasing in absolute value. The value of 0.22~$\mu_{\mathrm{B}}/\mathrm{f.u.}$ obtained at 2~K and 9~T corresponds to a moment of 1.46~$\mu_{\mathrm{B}}/\mathrm{Fe}$, i.e., the moment per iron atom in Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$ stays below the value of 2.2~$\mu_{\mathrm{B}}/\mathrm{Fe}$ observed in $\alpha$-iron~\cite{2001_Blundell_Book}.
Finally, the electrical resistivity as a function of temperature is shown in Fig.~\ref{fig:6}(e). As typical for a metal, the resistivity is of the order of several tens of $\upmu\Omega\,\mathrm{cm}$ and, starting from room temperature, decreases essentially linearly with temperature. However, around 60~K, i.e., well above the onset of magnetic order, a minimum is observed before the resistivity increases towards low temperatures.
Such an incipient divergence of the resistivity with decreasing temperature due to magnetic impurities is reminiscent of single-ion Kondo systems~\cite{1934_deHaas_Physica, 1964_Kondo_ProgTheorPhys, 1987_Lin_PhysRevLett, 2012_Pikul_PhysRevLett}. When a magnetic field is applied perpendicular to the current direction, this low-temperature increase is suppressed and a point of inflection emerges around 100~K. This sensitivity with respect to magnetic fields clearly indicates that the additional scattering at low temperatures is of magnetic origin. Qualitatively, the present transport data are in agreement with earlier reports on Fe$_{x}$Cr$_{1-x}$ for $0 \leq x \leq 0.112$~\cite{1966_Arajs_JApplPhys}.
\section{Characterization of the spin-glass behavior}
\label{sec:discussion}
In spin glasses, random site occupancy of magnetic moments, competing interactions, and geometric frustration lead to a collective freezing of the magnetic moments below a freezing temperature $T_{\mathrm{g}}$. The resulting irreversible metastable magnetic state shares many analogies with structural glasses. Depending on the density of magnetic moments, different types of spin glasses may be distinguished. For small densities, the magnetic properties may be described in terms of single magnetic impurities diluted in a nonmagnetic host, referred to as canonical spin-glass behavior. These systems are characterized by strong interactions, and the cooperative spin freezing represents a phase transition. For larger densities, clusters with local magnetic order and frustration between neighboring clusters form, referred to as cluster glass behavior, which develops superparamagnetic characteristics as the cluster size increases. In these systems, the inter-cluster interactions are rather weak and the spin freezing takes place in the form of a gradual blocking. When the density of magnetic moments surpasses the percolation limit, long-range magnetic order may be expected.
For compositions close to the percolation limit, so-called reentrant spin-glass behavior may be observed. In such cases, as a function of decreasing temperature first a transition from a paramagnetic to a magnetically ordered state occurs before a spin-glass state emerges at lower temperatures. As both the paramagnetic and the spin-glass state lack long-range magnetic order, the expression `reentrant' alludes to the disappearance of long-range magnetic order after a finite temperature interval and consequently the re-emergence of a state without long-range order~\cite{1993_Mydosh_Book}.
The metastable nature of spin glasses manifests itself in terms of a pronounced history dependence of both microscopic spin arrangement and macroscopic magnetic properties, translating into four key experimental observations: (i) a frequency-dependent shift of the maximum at $T_{\mathrm{g}}$ in the ac susceptibility, (ii) a broad maximum in the specific heat located 20\% to 40\% above $T_{\mathrm{g}}$, (iii) a splitting of the magnetization for different cooling histories, and (iv) a time-dependent creep of the magnetization~\cite{2015_Mydosh_RepProgPhys}. The splitting of the magnetization and the broad signature in the specific heat were addressed in Figs.~\ref{fig:5} and \ref{fig:6}.
In the following, the frequency dependence of the ac susceptibility is analyzed in three different ways, namely by means of the Mydosh parameter, power law fits, and the Vogel--Fulcher law, permitting a classification of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$ and of its change as a function of composition.
\begin{figure}
\includegraphics[width=0.97\linewidth]{figure7}
\caption{\label{fig:7}Imaginary part of the zero-field ac susceptibility as a function of temperature for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$ measured at different excitation frequencies $f$. Analysis of the frequency-dependent shift of the spin freezing temperature $T_{\mathrm{g}}$ provides insight into the microscopic nature of the spin-glass state.}
\end{figure}
In the present study, the freezing temperature $T_{\mathrm{g}}$ was inferred from a maximum in the imaginary part of the ac susceptibility as measured at an excitation frequency of 1~kHz. However, in a spin glass the temperature below which spin freezing is observed depends on the excitation frequency $f$, as illustrated in Fig.~\ref{fig:7} for the example of Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. Under increasing frequency, the imaginary part remains qualitatively unchanged but increases in absolute size and the maximum indicating $T_{\mathrm{g}}$ shifts to higher temperatures. Analyzing this shift in turn provides information on the microscopic nature of the spin-glass behavior.
The first and perhaps most straightforward approach utilizes the empirical Mydosh parameter $\phi$, defined as
\begin{equation}
\phi = \left[\frac{T_{\mathrm{g}}(f_{\mathrm{high}})}{T_{\mathrm{g}}(f_{\mathrm{low}})} - 1\right] \left[\ln\left(\frac{f_{\mathrm{high}}}{f_{\mathrm{low}}}\right)\right]^{-1}
\end{equation}
where $T_{\mathrm{g}}(f_{\mathrm{high}})$ and $T_{\mathrm{g}}(f_{\mathrm{low}})$ are the freezing temperatures as experimentally observed at high and low excitation frequencies, $f_{\mathrm{high}}$ and $f_{\mathrm{low}}$, respectively~\cite{1993_Mydosh_Book, 2015_Mydosh_RepProgPhys}. Small shifts associated with Mydosh parameters below 0.01 are typical for canonical spin glasses such as Mn$_{x}$Cu$_{1-x}$, while cluster glasses exhibit intermediate values up to 0.1. Values exceeding 0.1 suggest superparamagnetic behavior~\cite{1993_Mydosh_Book, 2015_Mydosh_RepProgPhys, 1980_Tholence_SolidStateCommun, 1986_Binder_RevModPhys}.
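As a minimal sketch, $\phi$ may be computed from two frequency points as follows (the numerical values are illustrative only):
\begin{verbatim}
import numpy as np

def mydosh(Tg_high, Tg_low, f_high, f_low):
    # empirical Mydosh parameter from two excitation frequencies
    return (Tg_high / Tg_low - 1.0) / np.log(f_high / f_low)

# illustrative shift: T_g = 9.5 K at 10 Hz and 11.0 K at 10 kHz
phi = mydosh(11.0, 9.5, 1e4, 1e1)   # ~0.023, cluster-glass regime
\end{verbatim}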
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure8}
\caption{\label{fig:8}Evolution of the Mydosh parameter in Fe$_{x}$Cr$_{1-x}$. (a)~Schematic depiction of the five different sequences of magnetic regimes observed as a function of temperature for different $x$. The following regimes are distinguished: paramagnetic~(PM), antiferromagnetic~(AFM), ferromagnetic~(FM), spin-glass~(SG). A precursor phenomenon~(PC) may be observed between FM and SG. (b)~Mydosh parameter $\phi$ as a function of the iron concentration $x$, allowing the spin-glass behavior to be classified as canonical ($\phi \leq 0.01$, gray shading), cluster-glass ($0.01 \leq \phi \leq 0.1$, yellow shading), or superparamagnetic ($\phi \geq 0.1$, brown shading). }
\end{figure}
\begin{table*}
\caption{\label{tab:3}Parameters inferred from the analysis of the spin-glass behavior in Fe$_{x}$Cr$_{1-x}$, namely the Mydosh parameter $\phi$, the zero-frequency extrapolation of the spin freezing temperature $T_\mathrm{g}(0)$, the characteristic relaxation time $\tau_{0}$, the critical exponent $z\nu$, the Vogel--Fulcher temperature $T_{0}$, and the cluster activation energy $E_{a}$. The errors were determined by means of Gaussian error propagation ($\phi$), the distance of neighboring data points ($T_\mathrm{g}(0)$), and statistical deviations of the linear fits ($\tau_{0}$, $z\nu$, $T_{0}$, and $E_{a}$).}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
$x$ & $\phi$ & $T_\mathrm{g}(0)$ (K) & $\tau_{0}$ ($10^{-6}$~s) & $z\nu$ & $T_{0}$ (K) & $E_{a}$ (K) \\
\hline
0.05 & - & - & - & - & - & - \\
0.10 & $0.064 \pm 0.011$ & - & - & - & - & - \\
0.15 & $0.080 \pm 0.020$ & $9.1 \pm 0.1$ & $0.16 \pm 0.03$ & $5.0 \pm 0.1$ & $8.5 \pm 0.1$ & $19.9 \pm 0.8$ \\
0.16 & $0.100 \pm 0.034$ & $13.4 \pm 0.1$ & $1.73 \pm 0.15$ & $2.2 \pm 0.0$ & $11.9 \pm 0.1$ & $14.4 \pm 0.3$ \\
0.17 & $0.107 \pm 0.068$ & $18.3 \pm 0.1$ & $6.13 \pm 1.52$ & $1.5 \pm 0.1$ & $16.3 \pm 0.3$ & $12.8 \pm 0.9$ \\
0.18 & $0.108 \pm 0.081$ & $14.5 \pm 0.1$ & $1.18 \pm 0.46$ & $7.0 \pm 0.5$ & $16.9 \pm 0.5$ & $24.2 \pm 2.3$ \\
0.19 & $0.120 \pm 0.042$ & $14.2 \pm 0.1$ & $0.47 \pm 0.15$ & $4.5 \pm 0.2$ & $14.6 \pm 0.4$ & $16.3 \pm 1.4$ \\
0.20 & $0.125 \pm 0.043$ & $13.5 \pm 0.1$ & $1.29 \pm 0.34$ & $4.1 \pm 0.2$ & $13.6 \pm 0.3$ & $18.8 \pm 1.3$ \\
0.21 & $0.138 \pm 0.048$ & $9.5 \pm 0.1$ & $1.67 \pm 0.21$ & $4.7 \pm 0.1$ & $10.3 \pm 0.4$ & $12.0 \pm 1.3$ \\
0.22 & $0.204 \pm 0.071$ & $11.7 \pm 0.1$ & $2.95 \pm 0.80$ & $2.6 \pm 0.1$ & $11.3 \pm 0.4$ & $11.3 \pm 1.2$ \\
0.25 & $0.517 \pm 0.180$ & $2.8 \pm 0.1$ & $75.3 \pm 5.34$ & $1.8 \pm 0.1$ & - & - \\
0.30 & - & - & - & - & - & - \\
\end{tabular}
\end{ruledtabular}
\end{table*}
As summarized in Tab.~\ref{tab:3} and illustrated in Fig.~\ref{fig:8}, the Mydosh parameter in Fe$_{x}$Cr$_{1-x}$ monotonically increases as a function of increasing iron concentration. For small $x$, the values are characteristic of cluster-glass behavior, while for large $x$ they lie well within the regime of superparamagnetic behavior. This evolution reflects the increase of the mean size of ferromagnetic clusters as inferred from the analysis of the neutron depolarization data.
\begin{figure}
\includegraphics[width=1.0\linewidth]{figure9}
\caption{\label{fig:9}Analysis of spin-glass behavior using power law fits and the Vogel--Fulcher law for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. (a)~Logarithm of the relaxation time as a function of the logarithm of the normalized shift of the freezing temperature. The red solid line is a power law fit from which the characteristic relaxation time $\tau_{0}$ and the critical exponent $z\nu$ are inferred. Inset: Goodness of fit for different estimated zero-frequency extrapolations of the freezing temperature, $T_{\mathrm{g}}^{\mathrm{est}}(0)$. The value $T_{\mathrm{g}}(0)$ used in the main panel is defined as the temperature of highest $R^{2}$. (b)~Spin freezing temperature as a function of the inverse of the logarithm of the ratio of characteristic frequency and excitation frequency. The red solid line is a fit according to the Vogel--Fulcher law from which the cluster activation energy $E_{a}$ and the Vogel--Fulcher temperature $T_{0}$ are inferred.}
\end{figure}
The second approach applies the standard theory of dynamical scaling near phase transitions~\cite{1977_Hohenberg_RevModPhys, 1993_Mydosh_Book} to the spin freezing at $T_{\mathrm{g}}$. The relaxation time $\tau = \frac{1}{2\pi f}$ is expressed in terms of the power law
\begin{equation}
\tau = \tau_{0} \left[\frac{T_{\mathrm{g}}(f)}{T_{\mathrm{g}}(0)} - 1\right]^{-z\nu}
\end{equation}
where $\tau_{0}$ is the characteristic relaxation time of a single moment or cluster, $T_{\mathrm{g}}(0)$ is the zero-frequency limit of the spin freezing temperature, and $z\nu$ is the critical exponent. In the archetypical canonical spin glass Mn$_{x}$Cu$_{1-x}$, one obtains values such as $\tau_{0} = 10^{-13}~\mathrm{s}$, $T_{\mathrm{g}}(0) = 27.5~\mathrm{K}$, and $z\nu = 5$~\cite{1985_Souletie_PhysRevB}.
The corresponding analysis is illustrated in Fig.~\ref{fig:9}(a) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. First the logarithm of the ratio of relaxation time and characteristic relaxation time, $\ln(\frac{\tau}{\tau_{0}})$, is plotted as a function of the logarithm of the normalized shift of the freezing temperature, $\ln\left[\frac{T_{\mathrm{g}}(f)}{T_{\mathrm{g}}(0)} - 1\right]$, for a series of estimated values of the zero-frequency extrapolation $T_{\mathrm{g}}^{\mathrm{est}}(0)$. For each value of $T_{\mathrm{g}}^{\mathrm{est}}(0)$ the data are fitted linearly and the goodness of fit is compared by means of the $R^{2}$ coefficient, cf.\ inset of Fig.~\ref{fig:9}(a). The best approximation for the zero-frequency freezing temperature, $T_{\mathrm{g}}(0)$, is defined as the temperature of highest $R^{2}$. Finally, the characteristic relaxation time $\tau_{0}$ and the critical exponent $z\nu$ are inferred from a linear fit to the experimental data using this value $T_{\mathrm{g}}(0)$, as shown in Fig.~\ref{fig:9}(a) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$.
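A compact sketch of this scan, using placeholder frequency and freezing-temperature data, reads:
\begin{verbatim}
import numpy as np
from scipy.stats import linregress

f = np.array([10.0, 100.0, 1e3, 1e4])     # excitation frequencies (Hz)
Tg = np.array([10.2, 10.9, 11.5, 12.3])   # placeholder T_g(f) values (K)
tau = 1.0 / (2.0 * np.pi * f)

best = None
for Tg0_est in np.linspace(8.0, Tg.min() - 0.05, 200):
    fit = linregress(np.log(Tg / Tg0_est - 1.0), np.log(tau))
    if best is None or fit.rvalue**2 > best[0]:
        best = (fit.rvalue**2, Tg0_est, fit)

r2, Tg0, fit = best
znu = -fit.slope               # tau = tau0 * [Tg(f)/Tg(0) - 1]^(-z nu)
tau0 = np.exp(fit.intercept)   # characteristic relaxation time (s)
\end{verbatim}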
The same analysis was carried out for all compositions Fe$_{x}$Cr$_{1-x}$ featuring spin-glass behavior, yielding the parameters summarized in Tab.~\ref{tab:3}. Characteristic relaxation times of the order of $10^{-6}~\mathrm{s}$ are inferred, i.e., several orders of magnitude larger than those observed in canonical spin glasses and consistent with the presence of comparably large magnetic clusters, as may be expected for the large values of $x$. Note that these characteristic times are also distinctly larger than the $10^{-12}~\mathrm{s}$ to $10^{-8}~\mathrm{s}$ that neutrons require to traverse the magnetic clusters in the depolarization experiments. Consequently, the clusters appear quasi-static to the neutrons, which in turn is a prerequisite for the observation of net depolarization across a macroscopic sample. The critical exponents range from 1.5 to 7.0, i.e., within the range expected for glassy systems~\cite{1980_Tholence_SolidStateCommun, 1985_Souletie_PhysRevB}. The lack of a systematic evolution of both $\tau_{0}$ and $z\nu$ as a function of the iron concentration $x$ suggests that these parameters may in fact be rather sensitive to details of the microscopic structure, potentially varying substantially between individual samples.
The third approach uses the Vogel--Fulcher law, developed to describe the viscosity of supercooled liquids and glasses, to interpret the properties around the spin freezing temperature $T_{\mathrm{g}}$~\cite{1993_Mydosh_Book, 1925_Fulcher_JAmCeramSoc, 1980_Tholence_SolidStateCommun, 2013_Svanidze_PhysRevB}. Calculating the characteristic frequency $f_{0} = \frac{1}{2\pi\tau_{0}}$ from the characteristic relaxation time $\tau_{0}$ as determined above, the Vogel--Fulcher law for the excitation frequency $f$ reads
\begin{equation}
f = f_{0} \exp\left\lbrace-\frac{E_{a}}{k_{\mathrm{B}}[T_{\mathrm{g}}(f)-T_{0}]}\right\rbrace
\end{equation}
where $k_{\mathrm{B}}$ is the Boltzmann constant, $E_{a}$ is the activation energy for aligning a magnetic cluster by the applied field, and $T_{0}$ is the Vogel--Fulcher temperature providing a measure of the strength of the cluster interactions. As a point of reference, it is interesting to note that values such as $E_{a}/k_{\mathrm{B}} = 11.8~\mathrm{K}$ and $T_{0} = 26.9~\mathrm{K}$ are observed in the archetypical canonical spin glass Mn$_{x}$Cu$_{1-x}$~\cite{1985_Souletie_PhysRevB}.
For each composition Fe$_{x}$Cr$_{1-x}$, the spin freezing temperature $T_{\mathrm{g}}(f)$ is plotted as a function of the inverse of the logarithm of the ratio of characteristic frequency and excitation frequency, $\frac{1}{\ln(f_{0}/f)}$, as shown in Fig.~\ref{fig:9}(b) for Fe$_{x}$Cr$_{1-x}$ with $x = 0.15$. A linear fit to the experimental data allows $E_{a}$ and $T_{0}$ to be inferred from the slope and the intercept, respectively. The corresponding values for all compositions Fe$_{x}$Cr$_{1-x}$ featuring spin-glass behavior are summarized in Tab.~\ref{tab:3}. All values of $T_{0}$ and $E_{a}$ are of the order of 10~K and positive, indicating the presence of strongly correlated clusters~\cite{2012_Anand_PhysRevB, 2011_Li_ChinesePhysB, 2013_Svanidze_PhysRevB}. Both $T_{0}$ and $E_{a}$ roughly follow the evolution of the spin freezing temperature $T_{\mathrm{g}}$, reaching their maximum values around $x = 0.17$ or $x = 0.18$.
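The corresponding linearized fit may be sketched as follows, reusing the placeholder data and the value of $\tau_{0}$ obtained from the power-law analysis:
\begin{verbatim}
import numpy as np
from scipy.stats import linregress

# Vogel--Fulcher law linearized: Tg(f) = T0 + (Ea / kB) / ln(f0 / f)
f = np.array([10.0, 100.0, 1e3, 1e4])     # excitation frequencies (Hz)
Tg = np.array([10.2, 10.9, 11.5, 12.3])   # placeholder T_g(f) values (K)
tau0 = 1.6e-7                             # from the power-law fit (s)
f0 = 1.0 / (2.0 * np.pi * tau0)           # characteristic frequency (Hz)

fit = linregress(1.0 / np.log(f0 / f), Tg)
Ea_over_kB = fit.slope    # cluster activation energy (K)
T0 = fit.intercept        # Vogel--Fulcher temperature (K)
\end{verbatim}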
\section{Conclusions}
\label{sec:conclusion}
In summary, a comprehensive study of the magnetic properties of polycrystalline Fe$_{x}$Cr$_{1-x}$ in the composition range $0.05 \leq x \leq 0.30$ was carried out by means of x-ray powder diffraction as well as measurements of the magnetization, ac susceptibility, and neutron depolarization, complemented by specific heat and electrical resistivity data for $x = 0.15$. As our central result, we present a detailed composition--temperature phase diagram based on the combination of a large number of quantities. Under increasing iron concentration $x$, antiferromagnetic order akin to pure Cr is suppressed above $x = 0.15$, followed by the emergence of weak magnetic order developing distinct ferromagnetic character above $x = 0.18$. At low temperatures, a wide dome of reentrant spin-glass behavior is observed for $0.10 \leq x \leq 0.25$, preceded by a precursor phenomenon. Analysis of the neutron depolarization data and of the frequency-dependent shift in the ac susceptibility indicates that with increasing $x$ the size of ferromagnetically ordered clusters increases and that the character of the spin-glass behavior changes from a cluster glass to a superparamagnet.
\acknowledgments
We wish to thank P.~B\"{o}ni and S.~Mayr for fruitful discussions and assistance with the experiments. This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under TRR80 (From Electronic Correlations to Functionality, Project No.\ 107745057, Project E1) and the excellence cluster MCQST under Germany's Excellence Strategy EXC-2111 (Project No.\ 390814868). Financial support by the Bundesministerium f\"{u}r Bildung und Forschung (BMBF) through Project No.\ 05K16WO6 as well as by the European Research Council (ERC) through Advanced Grants No.\ 291079 (TOPFIT) and No.\ 788031 (ExQuiSid) is gratefully acknowledged. G.B., P.S., S.S., M.S., and P.J.\ acknowledge financial support through the TUM Graduate School.
\section{Meta-Learning for Optimal Agent}
The trade-off described above raises the following question: How can it be managed in an environment with unknown properties? That is, how does an agent decide whether to pursue single-task or multitask training in a new environment so that it can learn the tasks most efficiently while maximizing the rewards it receives?
Suppose an agent must learn how to optimize its performance on a given set of tasks in an environment over $\tau$ trials. At trial $t$, for each task, the agent receives a set of inputs $\mathbf{X} = \{\mathbf{x}_k\}_{k=1}^{K}$ and is expected to produce the correct labels $\mathbf{Y} = \{\mathbf{y}_k\}_{k=1}^{K}$ corresponding to the task. Assuming that each task is a classification task, the agent's reward is its accuracy on the inputs, i.e. $R_t = \frac{1}{K} \sum_{k=1}^{K} \mathbbm{1}_{\hat{\mathbf{y}}_k = \mathbf{y}_k}$ where $\hat{\mathbf{y}}_k$ is the label predicted by the agent for input $\mathbf{x}_k$. On each trial, the agent must perform all the tasks, and it can choose to do so either serially (i.e. by single-tasking) or simultaneously (i.e. by multitasking). After completion of the tasks and observation of the rewards, the agent also receives the correct labels $\mathbf{Y}$ for the tasks in order to train itself to improve task performance. Finally, assume that the agent's performance is measured across these trials through the entire course of learning and that its goal is to maximize the sum of these rewards across all tasks.
To encode the time cost of single-tasking execution, we assume the environment has some unknown \emph{serialization} cost $c$ that determines the cost of performing tasks serially, i.e. one at a time. We assume that the reward for each task when done serially is $\frac{R_t}{1 + c}$ where $R_t$ is the reward as defined before. The serialization cost therefore discounts the reward in a multiplicative fashion for single-tasking. We assume that $0 \le c \le 1$ so that $c=0$ indicates there is no cost enforced for serial performance whereas $c=1$ indicates that the agent receives half the reward for all the tasks by performing them in sequence. Note that the training strategy the agent picks not only affects the immediate rewards it receives but also the future rewards, as it influences how effectively the agent learns the tasks to improve its performance in the future. Thus, depending on the serialization cost, the agent may receive lower reward for picking single-tasking but gains a benefit in learning speed that may or may not make up for it over the course of the entire learning episode. This question is at the heart of the trade-off the agent must navigate to make the optimal decision. We note that this is one simple but intuitive way to encode the cost of doing tasks serially but other mechanisms are possible.
\subsection{Approximate Bayesian Agent}
We assume that, on each trial, the agent has the choice between two training strategies to execute and learn the given tasks - by single-tasking or multitasking. The method we describe involves, on each trial, the agent modeling the reward dynamics under each training strategy for each task and picking the strategy that is predicted to give the highest discounted total future reward across all tasks. To model the reward progress under each strategy, we first define the reward function for each strategy, $f_{A,i}(t)$, which gives the reward for a task $i$ under strategy $A$ assuming strategy $A$ has been selected $t$ times. The reward function captures the effects of both the strategy's learning dynamics and unknown serialization cost (if it exists for the strategy). Here, $A \in \{S, M\}$ where $S$ represents the single-tasking strategy and $M$ represents the multitasking strategy.
We can use the reward function to get the reward for a task $i$ at trial $t'$ when selecting strategy $A$. Let $a_1, a_2, \ldots, a_{t' - 1}$ be the strategies picked at each trial until trial $t'$. Then,
\[
R^{(A,i)}_{t'} = f_{A,i}\left( \sum_{t=1}^{t'-1} \mathbbm{1}_{a_t = A} \right)
\]
is the reward for task $i$ at trial $t'$ assuming we pick strategy $A$.
Given the reward for each task, the agent can compute the total discounted future reward for a strategy $A$ from trial $t'$ onward, assuming strategy $A$ is selected on every remaining trial:
\[
R^{(A)}_{\ge t'} = \sum_{t=t'}^{\tau} \mu(t) \left( \sum_{i=1}^{N} R^{(A,i)}_{t} \right),
\]
where $\mu(t)$ is the temporal discounting function, $N$ is the total number of tasks, and $\tau$ is the total number of trials the agent has to maximize its reward on the tasks.
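For concreteness, these two bookkeeping quantities may be sketched in Python as follows; the function and variable names are placeholders, and the per-task reward functions and the discounting function are assumed to be supplied by the caller.
\begin{verbatim}
def strategy_reward(f_A, history, A, t_prime):
    # R^(A,i)_{t'}: evaluate the reward function at the number of
    # times strategy A was selected in trials 1 .. t'-1
    n_A = sum(1 for a in history[:t_prime - 1] if a == A)
    return f_A(n_A)

def future_reward(task_reward_fns, history, A, t_prime, tau, mu):
    # R^(A)_{>= t'}: discounted total future reward when strategy A
    # is selected on every remaining trial
    total, hist = 0.0, list(history)
    for t in range(t_prime, tau + 1):
        total += mu(t) * sum(strategy_reward(f, hist, A, t)
                             for f in task_reward_fns)
        hist.append(A)   # assume A is picked again on this trial
    return total
\end{verbatim}
For example, $\mu(t) = \gamma^{t}$ with $0 < \gamma < 1$ recovers standard exponential discounting.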
We now discuss how the agent maintains its estimate of each strategy's reward function for each task. The reward function is modeled as a sigmoidal function, the parameters of which are updated on each trial. Specifically, for a strategy $A$ and task $i$, using parameters $\theta_{A,i} = \{w_1, b_1, w_2, b_2 \}$, we model the reward function as $f_{A,i}(t) = \sigma(w_2 \cdot \sigma(w_1 \cdot t + b_1) + b_2)$.
We place a prior over the parameters $p(\theta_{A,i})$ and compute the posterior at each trial $t'$ over the parameters $p(\theta_{A,i} | D_{t'})$, where $D_{t'}$ is the set of rewards observed until trial $t'$ using strategy $A$ on task $i$. Because the exact posterior is difficult to compute, we calculate the approximate posterior $q(\theta_{A,i} | D_{t'})$ using variational inference \cite{wainwright2008graphical}. Specifically, we use Stein variational gradient descent (SVGD) \cite{liu2016stein}, which is a deterministic variational inference method that approximates the posterior using a set of particles that represent samples from the approximate posterior. The benefit of using SVGD is that it allows the number of particles used to be selected so as to increase the complexity of the approximate posterior, while ensuring that the time it takes to compute this approximation is practical. Because the posterior needs to be calculated repeatedly during training of the network, we found SVGD to offer the best properties: the approximate posterior is much quicker to compute than using MCMC techniques, while allowing for a complex approximation compared to using a simple Gaussian variational approximation to the posterior.
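A minimal SVGD update for the particles representing $q(\theta_{A,i} | D_{t'})$ may be sketched as follows; the Gaussian likelihood with fixed noise variance, the learning rate, and the median-heuristic kernel bandwidth are assumptions, while the prior means and widths follow the hyper-parameters listed in the supplementary material (their second arguments are taken here as standard deviations).
\begin{verbatim}
import torch

def log_posterior(theta, t, r, noise_var=0.01):
    # theta: (P, 4) particles [w1, b1, w2, b2]; t, r: 1-D tensors of
    # observed strategy-selection counts and rewards
    w1, b1 = theta[:, 0:1], theta[:, 1:2]
    w2, b2 = theta[:, 2:3], theta[:, 3:4]
    f = torch.sigmoid(w2 * torch.sigmoid(w1 * t + b1) + b2)
    loglik = -0.5 * ((r - f) ** 2).sum(dim=1) / noise_var
    mean = torch.tensor([0.001, -2.0, 10.0, -5.0])   # priors, cf. supplement
    std = torch.tensor([0.2, 1.0, 1.0, 1.0])
    logprior = (-0.5 * ((theta - mean) / std) ** 2).sum(dim=1)
    return loglik + logprior

def svgd_step(theta, t, r, lr=1e-2):
    theta = theta.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_posterior(theta, t, r).sum(), theta)[0]
    d2 = torch.cdist(theta, theta) ** 2                 # pairwise distances
    n = float(len(theta))
    h = d2.median() / (2.0 * torch.log(torch.tensor(n + 1.0))) + 1e-8
    K = torch.exp(-d2 / (2.0 * h))                      # RBF kernel
    grad_K = -(K.unsqueeze(2) *
               (theta.unsqueeze(1) - theta.unsqueeze(0))).sum(0) / h
    phi = (K @ grad + grad_K) / n                       # SVGD direction
    return (theta + lr * phi).detach()
\end{verbatim}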
At each trial $t'$, the agent uses its estimate of the total discounted future reward for single-tasking and multitasking ($R^{(S)}_{\ge t'}$ and $R^{(M)}_{\ge t'}$, respectively) to decide which strategy to use. This can be thought of as a two-armed bandit problem, in which the agent needs to adequately explore and exploit to decide which arm, or strategy, is better. Choosing the single-tasking training regimen may give high initial reward (because of the learning speed benefit) but choosing multitasking may be the better long-term strategy because it does not suffer from any serialization cost. Thompson sampling \cite{thompson1933likelihood,chapelle2011empirical,gershman2018deconstructing} is an elegant solution to the explore-exploit problem, involving sampling from the posterior over the parameters and taking decisions greedily with respect to the sample. It provides initial exploration, as the posterior variance is large at the start because of a lack of data, and then turns to exploitation when the posterior is more confident after seeing enough data. On each trial, we use Thompson sampling to pick between the training strategies by sampling from the approximate posterior over parameters for each strategy, calculating the total discounted future reward for each strategy according to the sampled parameters, and picking the strategy corresponding to the higher reward. Note that in practice we do not re-estimate the posterior on each trial (as one new reward will not change the posterior much) but instead do so periodically when enough new rewards have been observed.
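The resulting decision rule may be sketched as follows, treating the SVGD particles as approximate posterior samples; reward_S and reward_M are placeholders for the total discounted future reward evaluated at a sampled parameter set, e.g. via the future_reward sketch above.
\begin{verbatim}
import numpy as np

def thompson_choice(particles_S, particles_M, reward_S, reward_M,
                    rng=np.random.default_rng()):
    # draw one particle per strategy (a posterior sample) and act
    # greedily on the implied discounted total future rewards
    theta_S = particles_S[rng.integers(len(particles_S))]
    theta_M = particles_M[rng.integers(len(particles_M))]
    return 'S' if reward_S(theta_S) > reward_M(theta_M) else 'M'
\end{verbatim}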
\section{Supplementary Material}
\subsection{Experimental Details}
Our data consists of $29,250$ data-points generated in AirSim. Each data-point consists of examples for each input (GPS and image) and labels for all four tasks. The GPS-input is a two-dimensional input whereas the image-input is $84 \times 84$.
The specific network architecture we use involves processing the GPS-input using a single-layer neural network with $50$ hidden units and image-input using a $4$-layer convolutional network with $32$ feature maps in each layer. The final hidden-layer representations for each type of network are then mapped to the different outputs using a fully-connected layer. All networks are trained using SGD with learning rate of $0.1$ that is decayed across the trials. At each trial, the network receives $160$ items for which it is trained on all $4$ tasks either via single-tasking or multitasking. For single-tasking, the network is trained on each task one after another, where the data for each task is treated as one mini-batch. For multitasking, the network is trained on performing Tasks $1$ and $4$ concurrently and then on performing Tasks $2$ and $3$ concurrently, where again data for each multitasking execution is treated as one mini-batch. We measure the learning speed for each task by measuring the accuracy on the data for a task before the network is updated to be trained on that data. For the results shown in Figures $4$ and $5$ in the main text, the networks are trained for $20,000$ trials. The error bars represent $95 \%$ confidence intervals computed using $10$ different network initializations, where each initialization involves using a different random seed when sampling weights according to the Xavier initialization scheme for both convolutional and fully-connected layers. Lastly, all models were trained on a Nvidia Titan X GPU.
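A sketch of this architecture in PyTorch is given below; the layer counts and widths follow the text, whereas the kernel sizes, strides, single input channel, activation functions, and the omission of the task-input mechanism are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class TwoPathwayNet(nn.Module):
    def __init__(self, n_loc, n_cls):
        super().__init__()
        self.gps = nn.Sequential(nn.Linear(2, 50), nn.ReLU())
        layers, ch = [], 1
        for _ in range(4):          # 4 conv layers, 32 feature maps each
            layers += [nn.Conv2d(ch, 32, 3, stride=2, padding=1), nn.ReLU()]
            ch = 32
        self.img = nn.Sequential(*layers, nn.Flatten())
        hidden = 50 + 32 * 6 * 6    # 84 -> 42 -> 21 -> 11 -> 6 with stride 2
        self.loc_out = nn.Linear(hidden, n_loc)   # location output layer
        self.cls_out = nn.Linear(hidden, n_cls)   # class output layer

    def forward(self, gps, img):
        h = torch.cat([self.gps(gps), self.img(img)], dim=1)
        return self.loc_out(h), self.cls_out(h)
\end{verbatim}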
The multitasking error for a network is computed by measuring how much worse the average performance for all the data for a task is when the task is executed in multitasking fashion vs when it is executed as a single task. Thus, for example, to get the multitasking error for Task $1$, we would measure the error in average performance when executing the task in multitasking fashion (where we execute Tasks $1$ and $4$ concurrently) vs executing the task just by itself.
The amount of sharing of representations between two tasks is computed by taking the average representations for the two tasks (when executed in single-tasking fashion) across all the data and measuring the correlation between these two average representations. We can compute this layer-wise by only considering the average representation at a certain layer.
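This measure may be sketched as follows, assuming the layer representations have been collected during single-task execution:
\begin{verbatim}
import numpy as np

def sharing(reps_a, reps_b):
    # reps_a, reps_b: (n_items, n_units) representations of one layer
    # for two tasks; correlate the average representations
    return np.corrcoef(reps_a.mean(axis=0), reps_b.mean(axis=0))[0, 1]
\end{verbatim}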
\subsection{Meta-Learning Experimental Details}
Information about the hyper-parameters used for meta-learning is shown in Table~\ref{table:hyperparam}. The hyper-parameters for the number of particles and the posterior re-computation were primarily picked to yield a feasible running time, whereas the prior parameters were picked to be representative of a reward function that is non-decreasing over time and converges to perfect performance by the end of the total number of trials. The error bars in Figures $6$a and $6$b represent $95 \%$ confidence intervals computed using $15$ different runs of the meta-learner.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|}
\hline
Hyper-parameter Description & Value \\
\hline
\hline
Number of Particles for SVGD & $5$ \\ \hline
\shortstack{Amount of new trial data \\ to re-compute posterior} & $50$ \\ \hline
Prior distribution for $w_1$ & $\mathcal{N}(0.001, 0.2)$ \\ \hline
Prior distribution for $w_2$ & $\mathcal{N}(10, 1)$ \\ \hline
Prior distribution for $b_1$ & $\mathcal{N}(-2, 1)$ \\ \hline
Prior distribution for $b_2$ & $\mathcal{N}(-5, 1)$ \\ \hline
\end{tabular}
\caption{Hyper-parameters for meta-learning.}
\label{table:hyperparam}
\end{table}
\subsection{Visualization of Multitasking Error}
Figure~\ref{fig:multierrorvis} visualizes the outputs of a single-tasking trained and a multitasking trained network asked to perform Task $1$ (GPS-localization) and Task $4$ (Image-classification) concurrently. The examples show mis-classifications by the single-tasking trained network on Task $1$ when multitasking, as it seems to err towards the output of Task $3$ (Image-localization). Executing Tasks $1$ and $4$ requires activation of representations for both tasks in the hidden layers by the task-input layer. This leads to an implicit engagement of Task $3$, which shares a representation with Task $4$, leading to cross-talk with Task $1$ at the location output layer. In these examples, we see that the prediction for the GPS location is biased toward the location of the object, which would correspond to the correct label for the implicitly activated Task $3$. The multitasking trained network, on the other hand, does not suffer from this cross-talk and is able to execute Tasks $1$ and $4$ concurrently with no error.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{figures/multi_error_vis.png}
\caption{Visualization of predictions from concurrent execution of Tasks 1 and 4 in a single-tasking trained (left) and multitasking trained (right) network. For a correct output for Task 1 (GPS-localization), the predicted output (green box) should contain the GPS-input (green point).}
\label{fig:multierrorvis}
\end{figure}
\subsection{Effect of Sharing Representations on Learning Speed and Multitasking Ability with Different Initialization}
In Figure~\ref{fig:multitask_diff_init}, we show results comparing single-task vs multitask training when the network is not as biased towards using shared representations because of initialization with smaller task-associated weights. We draw the same conclusions as in the related experiment in the main text; however, the learning speed benefit of the single-task trained network appears even larger in this case.
\begingroup
\makeatletter
\renewcommand{\p@subfigure}{}
\makeatother
\begin{figure}
\centering
\begin{subfigure}[b]{.38\textwidth}
\centering
\includegraphics[width=1.15\linewidth]{plots/multitask_learning_speed_eb_init.png}
\caption{}
\label{fig:multitask_learning_speed}
\end{subfigure}
\begin{subfigure}[b]{0.34\textwidth}
\centering
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/multitask_error_eb_init.png}
\caption{}
\label{fig:multitask_error}
\end{minipage}
\begin{minipage}[b]{0.55\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/multitask_layer_corr_eb_init.png}
\caption{}
\label{fig:multitask_corr}
\end{minipage}
\end{subfigure}
\vspace{-5pt}
\caption{Effect of single-task vs multitask training. (\subref{fig:multitask_learning_speed}) Comparison of learning speed of the networks. (\subref{fig:multitask_error}) Comparison of the error in average task performance over all data when multitasking compared to single-tasking (the lack of a bar indicates no error). (\subref{fig:multitask_corr}) Correlation of convolutional layer representations between Tasks $3$ and Tasks $4$ computed using the average representation for each layer across all the data. We again show results for the tasks involving the convolutional network.}
\label{fig:multitask_diff_init}
\vspace{-5pt}
\end{figure}
\endgroup
\subsection{Visualization of Meta-Learner}
In Figure~\ref{fig:posterior}, we visualize the predictive distribution of rewards at various trials, as varying amounts of data have been observed. We see that the predictive distribution is initially uncertain when only a small number of rewards has been observed for each strategy (which is useful for exploration) and becomes more certain as more data is observed (which is exploited to act greedily).
\begin{figure}
\begin{subfigure}[t]{\textwidth}
\captionsetup{justification=raggedright,singlelinecheck=false,margin=4.33cm}
\includegraphics[width=0.5\linewidth]{plots/posterior1.png}
\caption{}
\label{fig:posterior1}
\end{subfigure}
\begin{subfigure}[t]{\textwidth}
\captionsetup{justification=raggedright,singlelinecheck=false,margin=4.33cm}
\includegraphics[width=0.5\linewidth]{plots/posterior2.png}
\caption{}
\label{fig:posterior2}
\end{subfigure}
\begin{subfigure}[t]{\textwidth}
\captionsetup{justification=raggedright,singlelinecheck=false,margin=4.33cm}
\includegraphics[width=0.5\linewidth]{plots/posterior3.png}
\caption{}
\label{fig:posterior3}
\end{subfigure}
\caption{Visualization of actual rewards and predictive distribution of rewards for a specific task. Shaded areas correspond to $\pm 3$ standard deviations around mean. For each of (\subref{fig:posterior1}), (\subref{fig:posterior2}), and (\subref{fig:posterior3}), we show the actual rewards accumulated over trials for each strategy (on top) and the predictive distribution over reward data computed using samples from the posterior distribution over parameters for each strategy given the reward data (on bottom).}
\label{fig:posterior}
\end{figure}
\section{Background}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/example_network_s.eps}
\caption{Neural network architecture from \citet{musslick2016controlled}.}
\label{fig:example_NN}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/basis_vs_tensor_s.eps}
\caption{Network structure for minimal basis set (left) and tensor product (right) representations and the effects of multitasking in each. Red cross indicates error in execution of task because of interference whereas green check-mark indicates successful execution.}
\label{fig:network_structure}
\end{figure}
\subsection{Definition of Tasks and Multitasking}
Consider an environment in which there are multiple stimulus input dimensions (e.g. corresponding to different sensory modalities) and multiple output dimensions (corresponding to different response modalities). Given an input dimension $I$ (e.g. an image) and an output dimension $O$ (e.g. object category) of responses, a task $T: I \to O$ represents a mapping between the two (e.g. mapping a set of images to a set of object categories), such that the mapping is independent of any other.
Thus, given $N$ different input dimensions and $K$ possible output dimensions, there is a total of $NK$ possible independent tasks that the network can learn to perform. Finally, multitasking refers to the simultaneous execution of multiple tasks, i.e. within one forward-pass from the inputs to the outputs of a network. Note that such multitasking differs from multi-task learning in that multitasking requires tasks to map different input dimensions to different output dimensions \cite{pashler1994dual} in a way that each is independent of the other, whereas typically in multi-task learning all tasks map the same input dimension to different output dimensions \cite{caruana1997multitask}.
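For example, in the environment used in our experiments below there are $N=2$ stimulus dimensions (a GPS input and an image input) and $K=2$ response dimensions (a location output and an object output), giving $NK=4$ possible tasks.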
\subsection{Processing Single and Multiple Tasks Based on Task Projections}
\label{task-projection}
We focus on a network architecture that has been used extensively in previous work \cite{cohen1990control,botvinick2001conflict,musslick2017multitasking} (shown in Figure~\ref{fig:example_NN}). Here, in addition to the set of stimulus inputs, there is also an input dimension to indicate which task the network should perform. This task vector is projected to the hidden units and output units using learned weights. The hidden unit task projection biases the hidden layer to calculate a specific representation required for each task, whereas the output unit projection biases the outputs to only allow the output that is relevant for the task. The functional role of the task layer is inspired by the notion of cognitive control and attention in psychology and neuroscience, that is, the ability to flexibly guide information processing according to current task goals \cite{shiffrin1977controlled,posnerr,cohen1990control}. Assuming that the task representations used to specify different tasks are orthogonal to one another (e.g., using a one hot code for each), then multitasking can be specified by a superposition (sum) of the representations for the desired tasks in the task input layer. The weights learned for the projections from the task input units to units in the hidden layers, together with those learned within the rest of the network, co-determine what type of representation (shared or separate) the network uses.
\subsection{Minimal Basis Set vs Tensor Product Representations}
Previous work \cite{feng2014multitasking,musslick2016controlled,musslick2017multitasking} has established that, in the extreme, there are two ways that different tasks can be represented in the hidden layer of a two-layer network. The first representational scheme is the \emph{minimal basis set} (shown on the left in Figure~\ref{fig:network_structure}), in which all tasks that rely on the same input encode the input in the same set of hidden representations. The second scheme is the \emph{tensor product} (shown on the right in Figure~\ref{fig:network_structure}), in which the input for each task is separately encoded in its own set of hidden representations. Thus, the minimal basis set maximally shares representations across tasks whereas the tensor product uses separate representations for each task.
These two representational schemes pose a fundamental trade-off. The minimal basis set provides a more efficient encoding of the inputs, and allows for faster learning of the tasks because of the sharing of information across tasks. However, it prohibits executing more than one task at a time (i.e. any multitasking). This is because, with the minimal basis set, attempting to execute two tasks concurrently causes the implicit execution of other tasks due to the representational sharing between tasks. In contrast, while the tensor product network scheme is less compact, multitasking is possible since each task is encoded separately in the network, so that cross-talk does not arise among them (see Figure \ref{fig:network_structure} for an example of multitasking and its effects in both types of networks). However, learning the tensor product representation takes longer since it cannot exploit the sharing of representations across tasks.
The type of representation learned by the network can be determined by the type of task-processing on which it is trained. Single-task training (referred to in the literature as multi-task training) involves training on tasks one at a time and generally induces shared representations. In contrast, multitask training involves training on multiple tasks concurrently and produces separate representations. This occurs because using shared representations when multitasking causes interference and thus error in task execution. In order to minimize this error and the cross-talk that is responsible for it, the network learns task projection weights that lead to separate representations for the tasks. In single-task training, there is no such pressure, as there is no potential for interference when executing one task at a time, and so the network can use shared representations.
These effects have been established both theoretically and experimentally for shallow networks with one hidden layer trained to perform simple synthetic tasks \cite{musslick2016controlled,musslick2017multitasking}. Below, we report results suggesting that they generalize to deep neural networks trained on more complex tasks.
\section{Discussion}
In this work we study the trade-off between using shared vs separated representations in deep neural networks. We experimentally show that using shared representations leads to faster learning but at the cost of degraded multitasking performance\footnote{Note that limitations in multitasking due to shared representations may be bypassed by executing different single tasks across multiple copies of the trained network. However, this strategy appears inefficient as it requires a higher amount of memory and computation that scales with the number of tasks to be executed.}. We additionally propose and evaluate a meta-learning algorithm to decide which training strategy is best to use in an environment with unknown serialization cost.
We believe simultaneous task execution as considered here could be important for real-world applications as it minimizes the number of forward passes needed to execute a set of tasks. The cost of a forward pass is an important factor in embedded devices (in terms of both time and energy required) and scales badly as we consider larger task spaces and more complex networks. Thus, optimally managing the trade-off between learning speed vs multitasking could be crucial for maximizing efficiency in such situations.
A promising direction for future studies involves application of this meta-learner to more complex tasks. As we add more tasks, the potential for interference increases across tasks; however, as tasks become more difficult, the minimal basis set becomes more desirable, as there is an even bigger benefit to sharing representations. Furthermore, in this more complicated setting, we would also like to expand our meta-learning algorithm to decide explicitly which set of tasks should be learned so that they can be executed in multitasking fashion and which set of tasks should only be executed one at a time. This requires a more complicated model, as we have to keep track of many possible strategies in order to see which will give the most reward in the future.
\section{Related Work}
The most relevant work from the multi-task learning literature focuses on maximizing positive transfer across tasks while minimizing negative transfer. This includes work on minimizing learning interference when doing multi-task training \cite{teh2017distral,rosenbaum2017routing} and reducing catastrophic interference when learning tasks one after another \cite{rusu2016progressive,kirkpatrick2017overcoming}. However, to our knowledge, none of these works explicitly deals with the issue of how the type of representations a network uses affects whether it can execute tasks serially or in parallel.
Additionally, as mentioned previously, we build on previous work studying the trade-off of learning speed vs multitasking ability in artificial neural networks \cite{feng2014multitasking,musslick2016controlled,musslick2017multitasking}. Moreover, our meta-learning algorithm is similar to the one proposed by \citet{sagivefficiency}. However, we explicitly use the model's estimate of future rewards under each strategy to also decide how to train the network, whereas the meta-learner in \cite{sagivefficiency} was not applied to a neural network's learning dynamics. Instead, the actual learning curve for each strategy $A$ was defined according to a pre-defined synthetic function. Our algorithm is thus applied in a much more complex setting in which estimation of each strategy's future rewards directly affects how the network chooses to be trained. Furthermore, our method is fully Bayesian in the sense that we utilize uncertainty in the parameter posterior distribution to control exploration vs exploitation via Thompson sampling. In \cite{sagivefficiency} logistic regression was combined with the $\epsilon$-greedy method to perform this trade-off, which requires hyper-parameters to control the degree of exploration. Lastly, we assume that the serialization cost is unknown and model its effects on the future reward of each strategy, whereas \cite{sagivefficiency} makes the simplifying assumption that the cost is known. Modeling the effects of an unknown serialization cost on the reward makes the problem more difficult but is a necessary assumption when deploying agents that need to make such decisions in a new environment with unknown properties.
Lastly, previous work on bounded optimality \cite{russell1994provably,lieder2017strategy} is also relevant, as it is closely related to the idea of optimizing a series of computations given a processing cost, as our proposed meta-learner does.
\section{Introduction}
Many recent advances in machine learning can be attributed to the ability of neural networks to learn and to process complex representations by simultaneously taking into account a large number of interrelated and interacting constraints, a property often referred to as parallel distributed processing \cite{mcclelland1986appeal}. Here, we refer to this sort of parallel processing as interactive parallelism. This type of parallelism stands in contrast to the ability of a network architecture to carry out multiple processes independently at the same time. We refer to this as independent parallelism; it is heavily used, for example, in computing clusters to distribute independent units of computation in order to minimize compute time. Most applications of neural networks have exploited the benefits of interactive parallelism \cite{bengio2013representation}. For instance, in the multi-task learning paradigm, learning of a task is facilitated by training a network on various related tasks \cite{caruana1997multitask,collobert2008unified,kaiser2017one,kendall2018multi}. This learning benefit has been hypothesized to arise due to the development of shared representations between tasks \cite{baxter1995learning,caruana1997multitask}. However, the capacity of such networks to execute multiple tasks simultaneously\footnote{Here we refer to the simultaneous execution of multiple tasks in a single feed-forward pass.} (what we call multitasking) has been less explored.
Recent work \cite{musslick2016controlled,musslick2017multitasking,PetriInPrep} has hypothesized that the trade-off between these two types of computation is critical to certain aspects of human cognition. Specifically, though interactive parallelism allows for quicker learning and greater generalization via the use of shared representations, it poses the risk of cross-talk, thus limiting the number of tasks that can be executed at the same time (i.e. multitasking). Navigation of this trade-off by the human brain may explain why we are able to multitask some tasks in daily life (such as talking while walking) but not others (for example, doing two mental arithmetic problems at the same time). \citet{musslick2017multitasking} have shown that this trade-off is also faced by artificial neural networks when trained to perform simple synthetic tasks. This previous work demonstrates both computationally and analytically that the improvement in learning speed through the use of shared representation comes at the cost of limitations in concurrent multitasking \cite{musslick2020rationalizing}.
While these studies were informative, they were limited to shallow networks and simple tasks.\footnote{See \citet{Alon2017,musslick2020rational} for a graph-theoretic analysis of multitasking capability as a function of network depth.} Moreover, this work raises an important, but as yet unanswered question: how can an agent optimally trade-off the efficiency of multi-task learning against multitasking capability? In this work, we: (a) show that this trade-off also arises in deep convolutional networks used to learn more complex tasks; (b) demonstrate that this trade-off can be managed by using single-task vs multitask training to control whether or not representations are shared; (c) propose and evaluate a meta-learning algorithm that can be used by a network to regulate its training and optimally manage the trade-off between multi-task learning and multitasking in an environment with unknown serialization costs.
\section{Experiments}
In this section, we experimentally evaluate the aforementioned trade-off and the proposed meta-learner model for resolving it. We first describe the task environment we use and the set of tasks we consider. We then describe the neural network architecture used, including the specific form of the task projection layer mentioned in section \ref{task-projection} that we use and how training occurs for single-tasking and multitasking. In sections \ref{tradeoff1} and \ref{tradeoff2}, we show through experiments explicitly how the trade-off arises in the task environment. Lastly, in section \ref{meta-learning}, we evaluate our proposed meta-learner's ability to navigate this trade-off in the environment given that there is an unknown serialization cost.
\subsection{Experimental Setup}
\begin{figure}
\centering
\includegraphics[width=3.8cm]{figures/conv_net_example.eps}
\caption{Neural network architecture used.}
\label{fig:example_convNN}
\end{figure}
We create a synthetic task environment using AirSim \cite{shah2018airsim}, an open-source simulator for autonomous vehicles built on Unreal Engine\footnote{Code and data will be released in the final version of the paper.}.
We assume a drone-agent that has two stimulus-inputs: (1) a GPS-input through which it can be given location-relevant information; (2) an image-input providing it visual information (e.g. from a camera). The agent also has two outputs: (1) a location-output designating a location in the input image; (2) an object-output designating the object the agent believes is present in the input. Based on the definition of a task as a mapping from one input to one output, this gives us the following four tasks that the agent can perform:
\begin{enumerate}[itemsep=0mm, leftmargin=3\parindent]
\item[Task 1] (GPS-localization): given a GPS location, output the position in the image of that location.
\item[Task 2] (GPS-classification): given a GPS location, output the type of object the agent expects to be in that area based on its experience.
\item[Task 3] (Image-localization): given a visual image,
output the location of the object in the image.
\item[Task 4] (Image-classification): given a visual image, output the type of object in the image.
\end{enumerate}
Using AirSim, we simulate an ocean-based environment with a set of different possible objects (such as whales, dolphins, orcas, and boats). We create training examples for the agent by randomizing the location of the agent within the environment, the type of object present in the visual input, the location and rotation of the object, and the GPS location provided to the agent. Thus, each training instance contains a set of randomized inputs and a label for each of the tasks with respect to the specific inputs. The agent can execute each task using either single-tasking (one after another) or multitasking (in which it can execute Tasks $1$ and $4$ together or Tasks $2$ and $3$ together). Note that in this setup at most $2$ tasks can be performed simultaneously, as we would have conflicting outputs if we attempted to multitask more than $2$ tasks.
\subsection{Neural Network Architecture}
The GPS-input is processed using a single-layer neural network, whereas the image-input is processed using a multi-layer convolutional neural network. The encoded inputs are then mapped via fully-connected layers to each output. We allow the task input to modify each hidden, or convolutional, layer using a learned projection of the task input specific to each layer. This is related to the idea of cognitive control in psychology \cite{cohen1990control} but also to attention mechanisms used in machine learning \cite{hochreiter1997long}.
More formally, the task-specific projection for the $i^{\text{th}}$ layer $\mathbf{c}_i$ is computed using a matrix multiplication with learned task projection matrix $\mathbf{W}_{t,i}$ and task-input $\mathbf{x}_t$, followed by a sigmoid:
\[
\mathbf{c}_i = \sigma(\mathbf{W}_{t,i} \mathbf{x}_t - \beta),
\]
where $\beta$ is a positive constant. The subtraction of $\beta > 0$ means that task projections are by default ``off'', i.e. close to $0$. For a fully-connected layer, the task projection $\mathbf{c}_i$ modifies the hidden units for the $i^{th}$ layer $\mathbf{h}_i$ through multiplicative gating to compute the hidden units $\mathbf{h}_{i+1}$:
\[
\mathbf{h}_{i+1} = g\left(
\left( \mathbf{W}_{h,i} \mathbf{h}_{i} + \mathbf{b}_i \right) \odot \mathbf{c}_i \right),
\]
where $\mathbf{W}_{h,i}$ and $\mathbf{b}_i$ are the typical weight matrix and bias for the fully-connected layer, and $g$ is the non-linearity. For the hidden units, we let $g$ be the rectified linear activation function (ReLU) whereas for output units it is the identity function. Similarly, for a convolutional layer the feature maps $\mathbf{h}_{i+1}$ are computed from $\mathbf{h}_{i}$ as:
\[
\mathbf{h}_{i+1} = g\left(
\left(\mathbf{h}_{i} * \mathbf{W}_{h,i} + \mathbf{b}_i \right) \odot \mathbf{c}_i \right),
\]
where $\mathbf{W}_{h,i}$ is now the convolutional kernel. Note that we use multiplicative biasing via the task projection whereas previous work \cite{musslick2016controlled,musslick2017multitasking} used additive biasing. We found multiplicative biasing to work better for settings in which the task projection matrix needs to be learned. A visual example of the network architecture is shown in Figure~\ref{fig:example_convNN}.
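To make the multiplicative gating concrete, the following is a minimal PyTorch-style sketch of a task-gated fully-connected layer; all names, sizes, and the value of $\beta$ are illustrative assumptions rather than a description of our exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class TaskGatedLinear(nn.Module):
    # Fully-connected layer whose units are gated multiplicatively
    # by a learned projection of the task input.
    def __init__(self, in_dim, out_dim, num_tasks, beta=2.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)          # W_{h,i} and b_i
        self.task_proj = nn.Linear(num_tasks, out_dim,
                                   bias=False)        # W_{t,i}
        self.beta = beta  # positive offset: gates are "off" by default

    def forward(self, h, task_input):
        # task_input is a one-hot task vector, or a sum of one-hot
        # vectors when multitasking.
        c = torch.sigmoid(self.task_proj(task_input) - self.beta)
        return torch.relu(self.fc(h) * c)   # g((W h + b) .* c)
\end{verbatim}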
Training in this network occurs in the typical supervised way with some modifications. To train for a specific task, we feed in the stimulus-input and associated task-input, and train the network to produce the correct label at the output associated with the task. For outputs not associated with the task, we train the network to output some default value. In this work, we focus on classification-based tasks for simplicity, and so the network is trained via cross-entropy loss computed using the softmax over the network output logits and the true class label. To train the network on multitasking, we feed in the stimulus-input and the associated task-input (indicating which set of tasks to perform concurrently) and train the network on the sum of losses computed at the outputs associated with the set of tasks. Note that we consider the localization-based tasks as classification tasks by outputting a distribution over a set of pre-determined bounding boxes that partition the image space.
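A corresponding loss computation, again as a hedged sketch (outputs and labels are assumed to be dictionaries keyed by task; training the inactive outputs towards their default value is omitted for brevity):
\begin{verbatim}
import torch.nn.functional as F

def multitask_loss(outputs, labels, active_tasks):
    # Sum of the cross-entropy losses at the outputs associated with
    # the concurrently executed tasks; pass a single task in
    # active_tasks to recover single-task training.
    return sum(F.cross_entropy(outputs[t], labels[t])
               for t in active_tasks)
\end{verbatim}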
\begingroup
\makeatletter
\renewcommand{\p@subfigure}{}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width=0.82\linewidth]{plots/overlap_learning_speed_eb.eps}
\caption{}
\label{fig:overlap_learning_speed}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{minipage}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/overlap_error_eb.eps}
\caption{}
\label{fig:overlap_error}
\end{minipage}
\begin{minipage}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/overlap_layer_corr_eb.eps}
\caption{}
\label{fig:overlap_corr}
\end{minipage}
\end{subfigure}
\caption{Effect of varying representational overlap. (\subref{fig:overlap_learning_speed}) Comparison of learning speed of the networks. (\subref{fig:overlap_error}) Comparison of the error in average task performance over all data when multitasking compared to single-tasking. (\subref{fig:overlap_corr}) Correlation of convolutional layer representations between Tasks $3$ and Tasks $4$ computed using the average representation for each layer across all the data. We show results for the tasks involving the convolutional network, as those are the more complex tasks we are interested in.}
\label{fig:overlap}
\end{figure}
\endgroup
\subsection{Effect of Sharing Representations on Learning Speed and Multitasking Ability}
\label{tradeoff1}
First, we consider the effect of the degree of shared representations on learning speed and multitasking ability. We control the level of sharing in the representations used by the network by manipulating the task-associated weights $\mathbf{W}_{t,i}$, which implement, in effect, the task projection for each task. The more similar the task projections are for two tasks, the higher the level of sharing because more of the same hidden units are used for the two tasks. We vary $\mathbf{W}_{t,i}$ to manipulate what percent of hidden units overlap for the tasks. Thus, $100 \%$ overlap indicates that all hidden units are used by all tasks; $50 \%$ overlap indicates that $50 \%$ of the hidden units are shared between the tasks whereas the remaining $50 \%$ are split to be used independently for each task; and $0 \%$ overlap indicates that the tasks do not share any hidden units in a layer. Note that in this experiment, during training the task-associated weights are frozen based on the initialization that results in the specific overlap percentage, but the weights in the remainder of the network are free to be learned. Based on previous work \cite{musslick2016controlled,musslick2017multitasking}, we measure the degree of sharing at a certain layer between two task representations by computing the correlation between the mean representation for the tasks, where the mean is computed by averaging the activity at the layer across all training examples for a given task.
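As a rough sketch of this sharing measure (assuming Pearson correlation between the per-task mean activations; variable and function names are ours):
\begin{verbatim}
import numpy as np

def sharing_correlation(acts_a, acts_b):
    # acts_a, acts_b: arrays of shape (num_examples, num_units) holding
    # the activations collected at one layer while performing task A
    # and task B, respectively.
    mean_a = acts_a.mean(axis=0)
    mean_b = acts_b.mean(axis=0)
    return np.corrcoef(mean_a, mean_b)[0, 1]
\end{verbatim}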
The results of the experiment manipulating the level of overlap are shown in Figure~\ref{fig:overlap}. These show that as overlap is increased, sharing of representations across tasks increases (as evidenced by the increase in correlations), which is associated with an increase in the learning speed. However, this is associated with a degradation in the multitasking ability of the network, as a result of the increased interference caused by increased sharing of the representations. Note that the network with $0 \%$ overlap does not achieve error-free multitasking performance. This suggests that there is a residual amount of interference in the network induced by single-task training that cannot be attributed to the chosen manipulation, i.e. the overlap between task representations.
\subsection{Effect of Single-task vs Multitask Training}
\label{tradeoff2}
Having established that there is a trade-off in using shared representations in the deep neural network architecture described, we now focus on how different training regimens (single-tasking vs multitasking) impact the representations used by the network and the network's learning speed. Previous work indicated that single-task training promotes shared representations and learning efficiency \cite{caruana1997multitask,musslick2017multitasking} whereas training a network to execute multiple tasks in parallel yields separated representations between tasks and improvements in multitasking \cite{MusslickCohen2019}.
We compare different networks that vary on how much multitasking they are trained to do, from $0 \%$, in which the network is given only single-task training, to $90 \%$, in which the network is trained most of the time to do multitasking. Here, the task-associated weights $\mathbf{W}_{t,i}$ are initialized to be uniformly high across the tasks, meaning that the network is initially biased towards using shared representations, and all the weights (including task weights) are then learned based on the training regimen encountered by the network. We also conduct an experiment in which the network is not as biased towards using shared representations, by initializing smaller task-associated weights (see supplementary material). We note that the number of examples and the sequence of examples for each task are the same for both types of conditions (single-tasking or multitasking). The only difference is that in the case of single-task learning each task is learned independently using different forward and backward passes, whereas in multitasking, multiple tasks can be processed together and thus learned together.
The results of this experiment (Figure~\ref{fig:multitask}) show that as the network is trained to do more multitasking, the learning speed of the network decreases and the correlation of the task representations also decreases. Because the network is initialized to use highly shared representations, we see that a multitasking training regimen clearly forces the network to move away from this initial starting point. The effect is stronger in the later layers, possibly because these layers may contribute more directly to the interference caused when multitasking.
\begingroup
\makeatletter
\renewcommand{\p@subfigure}{}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[width=0.82\linewidth]{plots/multitask_learning_speed_eb.eps}
\caption{}
\label{fig:multitask_learning_speed}
\end{subfigure}
\begin{subfigure}[b]{0.5\textwidth}
\centering
\begin{minipage}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/multitask_error_eb.eps}
\caption{}
\label{fig:multitask_error}
\end{minipage}
\begin{minipage}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/multitask_layer_corr_eb.eps}
\caption{}
\label{fig:multitask_corr}
\end{minipage}
\end{subfigure}
\caption{Effect of single-task vs multitask training. (\subref{fig:multitask_learning_speed}) Comparison of learning speed of the networks. (\subref{fig:multitask_error}) Comparison of the error in average task performance over all data when multitasking compared to single-tasking (the lack of a bar indicates no error). (\subref{fig:multitask_corr}) Correlation of convolutional layer representations between Tasks $3$ and Tasks $4$ computed using the average representation for each layer across all the data. We again show results for the tasks involving the convolutional network.}
\label{fig:multitask}
\end{figure}
\endgroup
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/meta_learner_results1.eps}
\caption{}
\label{fig:meta_learner_eval1}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/meta_learner_results2.eps}
\caption{}
\label{fig:meta_learner_eval2}
\end{subfigure}
\begin{subfigure}[b]{.23\textwidth}
\centering
\includegraphics[width=\linewidth]{plots/meta_learner_single_percent.eps}
\caption{}
\label{fig:single_percent}
\end{subfigure}
\caption{Evaluation of meta-learning algorithm. (\subref{fig:meta_learner_eval1}) Comparison of all methods on trade-off induced in original environment. (\subref{fig:meta_learner_eval2}) Comparison of all methods on trade-off induced in environment where noise is added to inputs. (\subref{fig:single_percent}) Percent of trials for which meta-learner picks to do single-tasking in both environments.}
\label{fig:meta_learner_eval}
\end{figure}
\subsection{Meta-Learning}
\label{meta-learning}
Finally, having established the trade-off between single-task and multitask training, we evaluate the meta-learning algorithm to test its effectiveness in optimizing this trade-off. In order to test this in an environment with unknown serialization cost, we compare it with the extremes of always picking single-task or multitask training. We fix the total number of trials to be $\tau = 5000$ and evaluate each of the methods on varying serialization costs. For the meta-learner, we average the performances over $15$ different runs in order to account for the randomness involved in its sampling choices and measure its confidence interval. We fix the order in which data is presented for the tasks for all options when comparing them. Note that the meta-learner does not know the serialization cost and so has to model its effects as part of the received reward. We create two different environments to induce different trade-offs for rewards between single-tasking and multitasking. The first is a deterministic environment, whereas in the second we add noise to the inputs. Adding noise to the inputs makes the tasks harder and seems to give a bigger benefit to the minimal basis set (and single-task training). We hypothesize that this is the case because sharing information across tasks becomes more valuable when noisy information is provided for each task.
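Schematically, each trial of the meta-learner reduces to a Thompson-sampling step of the following form; the exact reward model, including how the unknown serialization cost enters it, is abstracted behind the posterior samplers, and all names here are illustrative:
\begin{verbatim}
def choose_strategy(posterior_samplers):
    # posterior_samplers: dict mapping each training strategy to a
    # function that returns one sample of its predicted future reward,
    # drawn from the current parameter posterior for that strategy.
    sampled = {s: draw() for s, draw in posterior_samplers.items()}
    return max(sampled, key=sampled.get)
\end{verbatim}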
Figures \ref{fig:meta_learner_eval1} and \ref{fig:meta_learner_eval2} show that the meta-learning algorithm achieves a reward rate that closely approximates the one achieved by the strategy that yields the greatest reward for a given serialization cost. Additionally, note that at the extremes of the serialization cost, the meta-learner seems better at converging to the correct training strategy, while it achieves a lower reward when the optimal strategy is harder to assess. This difference is even clearer when we study the average percent of trials for which the meta-learner picks single-task training as a function of the serialization cost in Figure \ref{fig:single_percent}.
We see that the meta-learning algorithm is well-behaved, in that as the serialization cost increases, the percent of trials in which it selects to do single-tasking smoothly decreases.
Additionally, at the points at which the optimal strategy is harder to determine, the meta-learner achieves reward closer to the worst strategy because it needs more time to sample each strategy before settling on one.
\section{Introduction}
The non-abelian tensor square $G \otimes G$ of a group $G$ was introduced by Brown and Loday \cite{BL} following works of Miller \cite{Miller} and Dennis \cite{Dennis}. It is defined to be the group generated by the symbols $g\otimes h$, for $g,h\in G$, subject to the relations
\[
gg_1 \otimes h = ( g^{g_1}\otimes h^{g_1}) (g_1\otimes h) \quad
\mbox{and} \quad g\otimes hh_1 = (g\otimes h_1)( g^{h_1} \otimes
h^{h_1})
\]
for all $g,g_1, h,h_1 \in G$, where we write $x^y$ for the conjugate $y^{-1} x y$ of $x$ by $y$, for any elements $x, y \in G$. In \cite{BL}, Brown and Loday showed that the third homotopy group of the suspension of an Eilenberg-MacLane space $K(G,1)$ satisfies $\pi_3(SK(G,1)) \cong \mu(G),$ where $\mu(G)$ denotes the kernel of the derived map $\rho': G \otimes G \to G'$, given by $g \otimes h \mapsto [g,h]$. According to \cite[Proposition 2.8]{NR2}, the sequence
$$
1 \rightarrow \Delta(G)\rightarrow \mu(G) \rightarrow H_2(G)\rightarrow 1,
$$
is exact, where $\Delta(G) = \langle g \otimes g \mid g \in G\rangle$ and $H_2(G)$ is the second homology group of the group $G$. When $G$ is finite, the Schur multiplier of $G$, denoted by $M(G)$, is defined to be $M(G)=H_2(G)$. Here $\rho'$ corresponds to the derived map $\kappa$ of \cite{BL}.
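As a simple illustration, which can be checked directly from the defining relations: if $G$ is cyclic of order $n$, then conjugation is trivial, the relations reduce to bilinearity, and
$$
G \otimes G \cong \mathbb{Z}_n, \qquad H_2(G) = 1, \qquad \Delta(G) = \mu(G) \cong \mathbb{Z}_n,
$$
in agreement with the exact sequence above.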
We observe that the defining relations of the non-abelian tensor square can be viewed as abstractions of commutator relations; thus in \cite{NR1}, Rocco considered the following construction (see also Ellis and Leonard \cite{EL}). Let $G$ be a group and let $\varphi : G \rightarrow G^{\varphi}$ be an isomorphism ($G^{\varphi}$ is a copy of $G$, where $g \mapsto g^{\varphi}$, for all $g \in G$). Define the group $\nu(G)$ to be \[ \nu (G):= \langle
G \cup G^{\varphi} \ \vert \ [g_1,{g_2}^{\varphi}]^{g_3}=[{g_1}^{g_3},({g_2}^{g_3})^{\varphi}]=[g_1,{g_2}^{\varphi}]^{{g_3}^{\varphi}},
\; \ g_i \in G \rangle .\]
The group $\nu(G)$ can be viewed as a special semi-direct product $\nu(G) \cong ((G \otimes G)\rtimes G) \rtimes G$ (see \cite[Section 2]{EL} for more details). The motivation for studying $\nu(G)$ is the commutator connection: indeed, the map $\Phi: G \otimes G \rightarrow [G, G^{\varphi}]$,
defined by $g \otimes h \mapsto [g , h^{\varphi}]$, for all $g, h \in G$, is an isomorphism \cite[Proposition 2.6]{NR1}. From now on we identify the non-abelian tensor square $G \otimes G$ with the subgroup $[G,G^{\varphi}]$ of $\nu(G)$. The group $\nu(G)$ provides an interesting computational tool in the study of the non-abelian tensor square of groups (see for instance \cite{BdMGN,BFM,BN08, EL,M09,NR2}).
Our purpose is to obtain bounds for the exponent of the non-abelian tensor square and related constructions of finite $p$-groups. It is worth mentioning that the bounds obtained for the exponent of the group $\nu(G)$ can be read as bounds for the exponents of its sections, and vice versa. Therefore, for the sake of completeness we summarize the relationship between the exponent of $\nu(G)$ and its sections in Remark \ref{rem:nu(G)}.
Let $p$ be a prime. A finite $p$-group $G$ is said to be {\em powerful} if $p>2$ and $G' \leq G^p$, or $p=2$ and $G' \leq G^4$. A more general class of $p$-groups is the following. We say that a finite $p$-group is {\em potent} if $p>2$ and $\gamma_{p-1}(G) \leq G^p$, or $p=2$ and $G' \leq G^4$. Note that the family of potent $p$-groups contains all powerful $p$-groups. Recall that a subgroup $N$ of $G$ is potently embedded in $G$ if $[N,_{p-2}G]\leq N^p$ for $p$ odd, or $[N, G]\leq N^4$ for $p=2$ ($N$ is powerfully embedded in $G$ if $[N,G]\leq N^p$ for $p$ odd, or $[N, G]\leq N^4$ for $p=2$). More information about finite powerful and potent $p$-groups can be found in \cite{D} and in \cite{JJ}, respectively.
Let $p$ be a prime and $r$ a positive integer. We define the integer $m=m(p,r)$ by $m(p,r)=(p-1)p^{r-1}$ for $p$ odd and $m(2,r)=2^{r+2}$. Recall that the coclass of a finite $p$-group $G$ of order $p^n$ and nilpotency class $c$ is defined to be $r(G)=n-c$. Let $G$ be a $p$-group of coclass $r=r(G)$ and nilpotency class $c$, where $c\geq 2p^r$ if $p$ is odd or $c\geq 2^{r+3}$ if $p=2$. It is well-known that in this case $\gamma_{i+s}(G)=\gamma_i(G)^p$ for all $i\geq m(p,r)$ and $s=(p-1)p^{d}$ with $0\leq d \leq r-1$ if $p$ is odd or $s=2^d$ with $0\leq d \leq r+1$ if $p=2$ (cf. \cite[Section 6.3]{LM}). In particular, note that $\gamma_m(G)$ is powerful. It is worth mentioning that powerful $p$-groups satisfy an analogous power-commutator condition. Indeed, if $G$ is a powerful $p$-group, then $\gamma_i(G)$ is powerfully embedded in $G$, that is, $\gamma_{i+1}(G) \leq \gamma_i(G)^{{\bf p}}$, for every $i\geq 1$ (cf. \cite[Corollary 11.6]{Khukhro}). Here and in the sequel, ${\bf p}$ denotes the prime $p$ if $p$ is odd and $4$ if $p=2$.
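To fix ideas with a numerical illustration: for $p=3$ and $r=2$ one has $m=(3-1)\cdot 3^{1}=6$ and $s\in\{2,6\}$, while for $p=2$ and $r=1$ one has $m=2^{1+2}=8$ and $s\in\{1,2,4\}$.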
In \cite{M09}, Moravec proved that $[G,G^{\varphi}]$ and $\nu(G)'$ are powerfully embedded in $\nu(G)$; moreover, the exponent $\exp(\nu(G)')$ divides $\exp(G)$. Later, in \cite{BdMGN} it was shown that if $G$ is a finite potent $p$-group, then the subgroups $\gamma_k(\nu(G))$ and $[G,G^{\varphi}]$ are potently embedded in $\nu(G)$, and $\exp(\nu(G))$ divides ${\bf p} \cdot \exp(G)$. Furthermore, if $p \geq 5$ and $G$ is a powerful $p$-group, then $\exp(\nu(G)) = \exp(G)$. In some sense the next result can be viewed as an extension of these results and also as an application to $p$-groups of coclass $r$.
\begin{thm}\label{thmA}
Let $p$ be a prime and $G$ a $p$-group. Let $m$ and $s$ be positive integers such that $m\geq s$ and suppose that $\gamma_{i+s}(G)=\gamma_i(G)^p$ for every $i \geq m$. Then
\begin{itemize}
\item[(a)] $\gamma_{i+s+1}(\nu(G))=\gamma_{i+1}(\nu(G))^p$ for $i> m$;
\item[(b)] if $p$ is odd, then $\exp(\gamma_{m+1}(\nu(G)))$ divides $\exp(\gamma_{m}(G))$;
\item[(c)] if $p=2$ and $\gamma_m(G)$ is powerful, then $\exp(\gamma_{m+1}(\nu(G)))$ divides $\exp(\gamma_{m}(G)).$
\end{itemize}
\end{thm}
Despite the fact that the coclass of $\nu(G)$ grows faster than the nilpotency class of the involved group $G$ (see Remark \ref{prop.coclass}, below), Theorem \ref{thmA} (a) shows that the group $\nu(G)$ still satisfies a power-commutator condition close to the one satisfied by $G$. At the same time, according to Theorem \ref{thmA} (b) and (c), we deduce that the behaviour of the exponent $\exp(\gamma_{m+1}(\nu(G)))$ depends only on $\exp(\gamma_m(G))$.
Later, we obtain bounds for the exponent of $\nu(G)$ in terms of some specific normal subgroups of $G$.
\begin{thm}\label{thm.potent}
Let $p$ be a prime and $N$ a normal subgroup of a $p$-group $G$.
\begin{itemize}
\item[(a)] If $N$ is potent or $\gamma_{p}(N)=1$, then $\exp(\nu(G))$ divides ${\bf p}\cdot \exp(\nu(G/N))\cdot\exp(N)$.
\item[(b)] If $\gamma_{p-2}(N) \leq N^p$, then $\exp(\nu(G))$ divides $\exp(\nu(G/N))\cdot\exp(N)$.
\end{itemize}
\end{thm}
In \cite{BJR}, Brown, Johnson and Robertson described the non-abelian tensor square of $2$-groups of maximal class (i.e., groups of coclass $1$). In particular, if $G$ is a $2$-group of maximal class, then $\exp([G,G^{\varphi}])$ divides $\exp(G)$ (cf. \cite[Propositions 13--15]{BJR}). Consequently, $\exp(\nu(G))$ divides $\exp(G)^2$. In \cite{Moravec.cc}, Moravec proved that if $G$ is a $p$-group of maximal class, then $\exp(M(G))$ divides $\exp(G)$.
\begin{cor} \label{cor.maximal}
Let $p$ be a prime and $G$ a $p$-group of maximal class. Then $\exp(\nu(G))$ divides ${\bf p}^2\cdot \exp(G)$.
\end{cor}
In the literature, the exponent of several sections of the group $\nu(G)$, like $G \otimes G$, $\mu(G)$ and $M(G)$, has been investigated (see \cite{Sambonet} and the references given there). In \cite{Ellis}, Ellis proved that if $G$ is a $p$-group of class $c\geq 2$, then $\exp([G,G^{\varphi}])$ divides $\exp(G)^{c-1}$. In \cite{Moravec.Schur}, Moravec showed that if $G$ is a $p$-group of class $c\geq 2$, then $\exp(M(G))$ divides $\exp(G)^{2\lfloor \log_2(c)\rfloor}$. Later, in \cite{Sambonet17}, Sambonet proved that if $G$ is a $p$-group of class $c\geq 2$, then $\exp(M(G))$ divides $\exp(G)^{\lfloor \log_{p-1}(c)\rfloor+1}$ if $p>2$ and $\exp(M(G))$ divides $2^{\lfloor \log_2(c)\rfloor} \cdot \exp(G)^{\lfloor \log_2(c)\rfloor+1}$ if $p=2$. In \cite{APT}, Antony et al. demonstrated that $\exp(M(G))$ divides $\exp(G)^{1 + \lceil \log_{p-1}(\frac{c+1}{p+1})\rceil}$ if $p \leq c$, improving all the previous bounds.
Our contribution is a bound for $\exp([G,G^{\varphi}])$ which, in view of Remark \ref{rem:nu(G)}, improves the bound obtained in \cite{APT}.
\begin{thm}\label{corlog}
Let $p$ be a prime and $G$ a $p$-group of nilpotency class $c$. Let $n=\lceil \log_{p}(c+1)\rceil$. Then $\exp([G,G^{\varphi}])$ divides $\exp{(G)}^n$.
\end{thm}
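For instance, if $p=2$ and $c=7$, then $n=\lceil \log_{2}(8)\rceil = 3$, so Theorem~\ref{corlog} gives that $\exp([G,G^{\varphi}])$ divides $\exp(G)^{3}$.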
Furthermore, in \cite{Moravec.cc} Moravec proved that if $G$ is a $p$-group of coclass $r$, then $\exp(M(G))$ and $\exp(G \wedge G)$ divide $\exp(G)^{r+1+2 \left \lfloor{\log_2(m-1)}\right \rfloor}$, where $m=m(p,r)$ is as defined before. Finally, we obtain the following bounds for the exponent of $[G,G^{\varphi}]$.
\begin{thm} \label{thm.cc}
Let $p$ be a prime and $G$ a $p$-group of coclass $r$.
\begin{itemize}
\item[(a)] If $p$ is odd, then $\exp([G,G^{\varphi}])$ divides $(\exp(G))^{r}\cdot \exp(\gamma_m(G))$, where $m=(p-1)p^{r-1}$.
\item[(b)] If $p=2$, then $\exp([G,G^{\varphi}])$ divides $(\exp(G))^{r+3}\cdot \exp(\gamma_m(G))$, where $m=2^{r+2}$.
\end{itemize}
\end{thm}
We point out that both of the previous results imply a bound for $\exp(\nu(G))$ (cf. Remark \ref{rem:nu(G)}).
As a consequence of Theorem~\ref{thm.cc} we obtain the following.
\begin{cor}\label{cor.explicit.cc}
Let $p$ be a prime and $G$ a $p$-group of coclass $r$.
\begin{itemize}
\item[(a)] If $p\geq 3$ then $\exp(M(G))$ and $\exp(\mu(G))$ divide $ \exp(G)^{r+1}$;
\item[(b)] If $p=2$ then $\exp(M(G))$ and $\exp(\mu(G))$ divide $ \exp(G)^{r+3}$.
\end{itemize}
\end{cor}
It is worth mentioning that for every prime $p$ the bounds obtained in Corollary~\ref{cor.explicit.cc} improve the ones obtained in \cite{Moravec.cc} when the coclass $r$ is at least $2$ (cf. \cite[Corollary 4.8]{Moravec.cc}). Furthermore, in the context of Sambonet's theorem \cite[Theorem 3.3]{Sambonet17}, the improvement occurs for $e\leq r$ if $p>2$ and $e\leq r+2$ if $p=2$, where $\exp(G)=p^e$. \\[2mm]
The paper is organized as follows. In Section 2 we collect results of a general nature that are later used in the proofs of our main theorems. The third section is devoted to the proof of Theorem \ref{thmA}. The proofs of Theorems \ref{thm.potent} and \ref{corlog} are given in Section 4. We also obtain bounds for the exponent $\exp([G,G^{\varphi}])$ in terms of some potent normal subgroups (see Corollary \ref{cor.potent}, below). In Section 5 we prove Corollary \ref{cor.maximal} and Theorem \ref{thm.cc}.
\section{Preliminaries}
\subsection{Finite $p$-groups}
In this subsection we summarize without proofs the relevant material on finite $p$-groups.
\begin{lem}(\cite[Lemma 2.2]{JJJ}) \label{normalinc}
Let $G$ be a finite $p$-group and $N$, $M$ normal subgroups of $G$. If $N\leq M[N,G]N^p$ then $N \leq M$.
\end{lem}
The following theorem is known as P. Hall's collection formula.
\begin{thm}(\cite[Appendix A]{D}) \label{thm.Hall}
Let $G$ be a $p$-group and $x, y$ elements of $G$. Then for any $k\geq 0$ we have
\[
(xy)^{p^k}\equiv x^{p^k}y^{p^k} \pmod{\gamma_{2}(L)^{p^k}\gamma_{p}(L)^{p^{k-1}}\gamma_{p^2}(L)^{p^{k-2}}\gamma_{p^3}(L)^{p^{k-3}}\cdots \gamma_{p^k}(L)},
\]
where $L=\langle x,y\rangle$. We also have
\[
[x,y]^{p^k}\equiv [x^{p^k}, y] \pmod {\gamma_{2}(L)^{p^k}\gamma_{p}(L)^{p^{k-1}}\gamma_{p^2}(L)^{p^{k-2}}\ldots \gamma_{p^k}(L)},
\]
where $L=\langle x,[x,y]\rangle$.
\end{thm}
\begin{cor}(\cite[Theorem 2.3]{JJJ}) \label{cor.Hall}
Let $G$ be a $p$-group and $x_1,\ldots, x_r$ elements of $G$. Then for any $k\geq 0$ we have
\[
(x_1\ldots x_r)^{p^k}\equiv x_1^{p^k}\ldots x_r^{p^k} \pmod{\gamma_{2}(L)^{p^k}\gamma_{p}(L)^{p^{k-1}}\gamma_{p^2}(L)^{p^{k-2}}\gamma_{p^3}(L)^{p^{k-3}}\cdots \gamma_{p^k}(L)},
\]
where $L=\langle x_1,\ldots, x_r\rangle$.
\end{cor}
A consequence of P. Hall's collection formula is given by the following lemma.
\begin{lem}(\cite[Theorem 2.4]{JJJ}) \label{lem.hallformula}
Let $G$ be a finite $p$-group and $N$, $M$ normal subgroups of $G$. Then $$[N^{p^k},M]\equiv [N,M]^{p^{k}}(\text{mod}\ [M,_pN]^{p^{k-1}} [M,_{p^2}N]^{p^{k-2}} \ldots [M,_{p^k}N]).$$
\end{lem}
\begin{lem}[\cite{JJ}]\label{lem.lower.potent}
Let $G$ be a potent $p$-group and $k\geq 1$. If $p=2$, then $\gamma_{k+1}(G)\leq\gamma_k(G)^4$, and if $p\geq 3$ then $\gamma_{p-1+k}(G)\leq\gamma_{k+1}(G)^p$.
\end{lem}
The next lemma will be useful to determine the exponent of the group $\nu(G)$ in terms of $\exp(G)$ for some $p$-groups.
\begin{lem}[\cite{JJJ}] \label{lem.exponent}
Let $G$ be a finite $p$-group and $k\geq 1$. Assume that $\gamma_{k(p-1)}(G) \leq \gamma_r(G)^{p^s}$ for some $r$ and $s$ such that $k(p-1) < r + s(p-1)$. Then the exponent $\exp(\Omega_i(G))$ is at most $p^{i+k-1}$ for all $i$.
\end{lem}
Let $G$ be a $p$-group. We define $\Pi_i(G)$ inductively by: $\Pi_0(G)=G$ and $\Pi_i(G)=(\Pi_{i-1}(G))^p$ for $i>0$. The next result will be needed in the proof of Theorem \ref{thmA}.
\begin{lem}[\cite{D}]\label{power}
Let $G$ be a powerful $p$-group and $i\geq 1$. Then $\Pi_i(G) = G^{p^{i}}$ for every $i\geq 1$.
\end{lem}
\subsection{The group $\nu(G)$}
This subsection will be devoted to describe some properties of the group $\nu(G)$.
The following basic properties are consequences of
the defining relations of $\nu(G)$ and the commutator rules (see \cite[Section 2]{NR1} and \cite[Lemma 1.1]{BFM} for more details).
\begin{lem}
\label{basic.nu}
The following relations hold in $\nu(G)$, for all
$g, h, x, y \in G$.
\begin{itemize}
\item[$(i)$] $[g, h^{\varphi}]^{[x, y^{\varphi}]} = [g, h^{\varphi}]^{[x,
y]}$;
\item[$(ii)$] $[g, h^{\varphi}, x^{\varphi}] = [g, h, x^{\varphi}] = [g,
h^{\varphi}, x] = [g^{\varphi}, h, x^{\varphi}] = [g^{\varphi}, h^{\varphi}, x] =
[g^{\varphi},
h, x]$;
\item[$(iii)$] $[[g,h^{\varphi}],[x,y^{\varphi}]] = [[g,h],[x,y]^{\varphi}]$.
\end{itemize}
\end{lem}
Let $N$ be a normal subgroup of a finite group $G$. We denote by $K$ the subgroup $[N,G^{\varphi}] [G,N^{\varphi}] \cdot \langle N,N^{\varphi}\rangle$ in $\nu(G)$, where the dot means internal semidirect product. We set $\overline{G}$ for the quotient group $G/N$ and the canonical epimorphism $\pi: G \to \overline{G}$ gives rise to an epimorphism $\widetilde{\pi}: \nu(G) \to \nu(\overline{G})$ such that $g \mapsto \overline{g}$, $g^{\varphi} \mapsto \overline{g^{\varphi}}$, where $\overline{G^{\varphi}} = G^{\varphi}/N^{\varphi}$ is identified with $\overline{G}^{\varphi}$.
\begin{lem}(Rocco, \cite[Proposition 2.5 and Remark 3]{NR1})\label{lem.general} With the above notation we have
\begin{itemize}
\item[$(a)$] $[N,G^{\varphi}] \unlhd \nu(G)$, $[G,N^{\varphi}] \unlhd \nu(G)$;
\item[$(b)$] $\ker(\widetilde{\pi}) = [N,G^{\varphi}] [G,N^{\varphi}] \cdot \langle N,N^{\varphi}\rangle = ([N,G^{\varphi}] [G,N^{\varphi}]\cdot N) \cdot N^{\varphi}$.
\item[$(c)$] There is an exact sequence
\[
1 \rightarrow [N,G^{\varphi}][G,N^{\varphi}] \rightarrow{} [G,G^{\varphi}] \rightarrow \left[ G/N,\left( G/N\right)^{\varphi}\right] \rightarrow 1.
\]
\end{itemize}
\end{lem}
We need the following description of the lower central series of $\nu(G)$.
\begin{prop}\cite[Proposition 2.7]{BuenoRocco}\label{gammanu}
Let $k$ be a positive integer and $G$ a group. Then
$\gamma_{k+1}(\nu(G)) = \gamma_{k+1}(G)\gamma_{k+1}(G^{\varphi})[\gamma_{k}(G), G^{\varphi}]$.
\end{prop}
The above result shows that if $G$ is nilpotent of class $c$, then the group $\nu(G)$ is nilpotent of class at most $c+1$. On the other hand, the coclass of the group $\nu(G)$ has a different behaviour.
\begin{rem} \label{prop.coclass}
Let $G$ be a finite $p$-group. Assume that $G$ has coclass $r$ and order $|G| = p^n$. Then the coclass $r(\nu(G))$ is at least $r+2n-1$.
\end{rem}
\begin{proof}
First we prove that $|G| \leq |[G,G^{\varphi}]|$. Since $[G,G^{\varphi}]/\mu(G)$ is isomorphic to $G'$, it suffices to show that the order of the abelianization $|G^{ab}|$ divides $|\mu(G)|$. Indeed, by \cite[Remark 5]{NR1}, $|G^{ab}| \leq |\Delta(G)|$, where $\Delta(G) = \langle [g,g^{\varphi}] \mid g \in G\rangle \leq \mu(G)$. From this we deduce that $|\nu(G)| = p^{\alpha}\geq p^{3n}$. By Proposition \ref{gammanu}, the nilpotency class $c(\nu(G))$ of $\nu(G)$ is at most $c+1$, where $c=n-r$. Consequently, $$r(\nu(G)) = \alpha - c(\nu(G)) \ \geq \ 3n - (c+1) \ = \ r + 2n-1, $$
which establishes the formula.
\end{proof}
As $\nu(G)$ is an extension of $[G,G^{\varphi}]$ by $G \times G$, we have $\exp(\nu(G))$ divides $\exp(G) \cdot \exp([G,G^{\varphi}])$.
Combining \cite{BFM} and \cite{NR2}, we deduce the following bounds for $\exp(\nu(G))$ in terms of $\exp(\mu(G))$, $\exp(M(G))$.
\begin{rem} \label{rem:nu(G)}
Consider the following exact sequences (Rocco, \cite{NR2}),
$$
1 \rightarrow [G,G^{\varphi}] \rightarrow \nu(G) \rightarrow G \times G \rightarrow 1,
$$
$$
1 \rightarrow \Theta(G) \rightarrow \nu(G) \rightarrow G \rightarrow 1
$$
and
$$
1 \rightarrow \Delta(G) \rightarrow \mu(G) \rightarrow M(G) \rightarrow 1,
$$
where $\Theta(G)$ is the kernel of the epimorphism $\rho: \nu(G) \to G$, given by $g \mapsto g$ and $g^{\varphi} \mapsto g$. By \cite[Section 2]{NR2}, $\mu(G) = \Theta(G) \cap [G,G^{\varphi}]$.
Let $G$ be a finite group. By the first and second exact sequence, we deduce that $\nu(G)/\mu(G)$ is isomorphic to a subgroup of $G \times G \times G$ and so, $\exp(\nu(G))$ divides $\exp(G) \cdot \exp(\mu(G))$. Moreover, by the third exact sequence, we conclude that $\exp(\mu(G))$ divides $\exp(M(G)) \cdot \exp(\Delta(G))$. Furthermore, as $[g^j,g^{\varphi}] = [g,g^{\varphi}]^j$ for any $g \in G$, we have $\exp(\Delta(G))$ divides $\exp(G)$. Consequently, $$\exp(\nu(G)) \ \text{divides} \ \exp(G)^2 \cdot \exp(M(G)).$$
Assume that $2$ does not divide $|G^{ab}|$, where $G^{ab}=G/G'$. According to \cite[Corollary 1.4]{BFM}, we deduce that $\mu(G) \cong M(G) \times \Delta(G)$ and so, $$\exp(\nu(G)) \ \text{divides} \ \exp(G) \cdot \max \{\exp(G), \exp(M(G))\}.$$
\end{rem}
\section{Power-commutator conditions and the exponent of the lower central terms of $\nu(G)$}
Under the hypothesis of Theorem~\ref{thmA} we will prove the following proposition.
\begin{prop}\label{prop.conditions}
\begin{itemize}
\item[(1)] If $p$ is odd, or if $i>m$, then $\gamma_{i+s+1}(\nu(G)) \leq \gamma_{i+1}(\nu(G))^p$.
\item[(2)] For every prime $p$ and $i\geq m$ we have $\gamma_{i+1}(\nu(G))^p \leq \gamma_{i+s+1}(\nu(G))$.
\end{itemize}
\end{prop}
\begin{proof}
(1) We start proving that $\gamma_{i+s+1}(\nu(G)) \leq \gamma_{i+1}(\nu(G))^p.$ From Proposition~\ref{gammanu} and by hypothesis we have
\begin{align*}
\gamma_{i+s+1}(\nu(G)) &= \gamma_{i+s+1}(G)\ \gamma_{i+s+1}(G^{\varphi})\ [\gamma_{i+s}(G), G^{\varphi}]\\
&= \gamma_{i+1}(G)^p \ \gamma_{i+1}(G^{\varphi})^p \ [\gamma_i(G)^p, G^{\varphi}].
\end{align*}
Since both $\gamma_{i+1}(G)^p$ and $\gamma_{i+1}(G^{\varphi})^p$ are contained in $\gamma_{i+1}(\nu(G))^p$, it suffices to prove that $[\gamma_{i}(G)^p, G^{\varphi}] \leq \gamma_{i+1}(\nu(G))^p$. For, let $x \in \gamma_i(G)$ and $y^{\varphi} \in G^{\varphi}$. Then, applying Theorem~\ref{thm.Hall} we have
\[
[x^p, y^{\varphi}] \equiv [x,y^{\varphi}]^p \pmod{\gamma_2(L)^p \ \gamma_p(L)},
\]
where $L= \langle x, [x,y^{\varphi}]\rangle$. On the one hand,
\[
\gamma_2(L)^p \leq [\gamma_i(G), G^{\varphi}, \gamma_i(G)]^p \leq \gamma_{2i+1}(\nu(G))^p.
\]
For every prime $p$, if $i>m$, we have
\begin{align*}
\gamma_2(L) &\leq [\gamma_i(G), G^{\varphi}, \gamma_i(G)] \leq \gamma_{i+s+2}(\nu(G))\\
&=\gamma_{i+s+2}(G)\gamma_{i+s+2}(G^{\varphi})[\gamma_{i+s+1}(G),G^{\varphi}]\\
&\leq \gamma_{i+1}(\nu(G))^p[\gamma_{i}(G)^p,G,G^{\varphi}] .
\end{align*}
On the other hand, $p \geq 3$ implies $2i+p-3 \geq i+s$ and we have
\begin{align*}
\gamma_p(L) & \leq [\gamma_i(G), G^{\varphi}, \gamma_i(G),_{p-2} \nu(G)] = [\gamma_{i+1}(G), \gamma_i(G^{\varphi}),_{p-2} \nu(G)]\\[2mm]
& \leq [\gamma_{2i+1}(G),_{p-3} \nu(G), G^{\varphi}] \leq [\gamma_{2i+p-3}(G), G^{\varphi}, G^{\varphi}] \\[2mm]
& \leq [\gamma_{2i+p-3}(G), G^{\varphi}, \nu(G)] \leq [\gamma_{s+i}(G), G^{\varphi}, \nu(G)] = [\gamma_i(G)^p, G^{\varphi}, \nu(G)].
\end{align*}
Therefore, it follows that \[ [x^p, y^{\varphi}] \in \gamma_{i+1}(\nu(G))^p [\gamma_i(G)^p, G^{\varphi}, \nu(G)] \] which yields
\[
[\gamma_{i}(G)^p, G^{\varphi}] \leq \gamma_{i+1}(\nu(G))^p [\gamma_i(G)^p, G^{\varphi}, \nu(G)].
\]
Applying Lemma~\ref{normalinc} with $N=[\gamma_{i}(G)^p, G^{\varphi}]$ and $M=\gamma_{i+1}(\nu(G))^p$, we can conclude that $[\gamma_{i}(G)^p, G^{\varphi}] \leq \gamma_{i+1}(\nu(G))^p$.\\
(2) In order to prove that $\gamma_{i+1}(\nu(G))^p \leq \gamma_{i+s+1}(\nu(G))$, consider the subgroup $W=\gamma_{i+1}(G)^p \ \gamma_{i+1}(G^{\varphi})^p \ [\gamma_i(G), G^{\varphi}]^p$. Firstly, we show that
\[
W \equiv \gamma_{i+1}(\nu(G))^p \pmod{\gamma_{i+s+1}(\nu(G))}.
\]
By definition, $W \leq \gamma_{i+1}(\nu(G))^p\leq \gamma_{i+1}(\nu(G))^p\gamma_{i+s+1}(\nu(G))$, so we only need to prove that $\gamma_{i+1}(\nu(G))^p \leq W \gamma_{i+s+1}(\nu(G))$. For, let $\alpha \in \gamma_{i+1}(G)$, $\beta \in \gamma_{i+1}(G^{\varphi})$ and $\delta \in [\gamma_i(G), G^{\varphi}]$. Then, applying Corollary~\ref{cor.Hall} we have
\[
(\alpha \beta \delta)^p \equiv \alpha^p \beta^p \delta^p \pmod{ \gamma_2(J)^p \gamma_p(J)},
\]
where $J=\langle \alpha, \beta, \delta \rangle$. It is straightforward to see that $\alpha^p \beta^p \delta^p \in W$. Moreover, all the generators of $\gamma_2(J)$ belong to $\gamma_{i+s+1}(\nu(G))$. Indeed,
\begin{align*}
&[\alpha, \beta] \in [\gamma_{i+1}(G), \gamma_{i+1}(G^{\varphi})] \leq [\gamma_{i+1}(\nu(G)), \gamma_{i+1}(\nu(G))] \leq \gamma_{i+s+1}(\nu(G));\\[2mm]
&[\delta, \alpha] \in [\gamma_{i}(G), G^{\varphi}, \gamma_{i+1}(G)] \leq [\gamma_{i+1}(\nu(G)), \gamma_{i+1}(\nu(G))] \leq \gamma_{i+s+1}(\nu(G));\\[2mm]
&[\delta, \beta] \in [\gamma_{i}(G), G^{\varphi}, \gamma_{i+1}(G^{\varphi})] \leq [\gamma_{i+1}(\nu(G)), \gamma_{i+1}(\nu(G))] \leq \gamma_{i+s+1}(\nu(G)).
\end{align*}
Therefore, $\gamma_2(J)^p \leq \gamma_2(J) \leq \gamma_{i+s+1}(\nu(G))$. Furthermore, $\gamma_p(J) \leq \gamma_2(J) \leq \gamma_{i+s+1}(\nu(G)).$
It follows that $(\alpha \beta \delta)^p \in W \gamma_{i+s+1}(\nu(G))$, and so
\[
\gamma_{i+1}(\nu(G))^p \leq W \gamma_{i+s+1}(\nu(G)).
\]
To conclude, we prove that $W \leq \gamma_{i+s+1}(\nu(G))$, that is, $[\gamma_i(G), G^{\varphi}]^p \leq \gamma_{i+s+1}(\nu(G))$. Let $\alpha=\alpha_1^p\ldots\alpha_n^p\in [\gamma_i(G), G^{\varphi}]^p$, where each $\alpha_j\in [\gamma_i(G), G^{\varphi}]$. We can write $\alpha_j=[x_{j1}, y_{j1}^{\varphi}]\ldots [x_{jl}, y_{jl}^{\varphi}]$, with $x_{jk}\in \gamma_i(G)$ and $y_{jk}^\varphi\in G^\varphi$, for all $k\in\{1, \ldots, l\}$, where $l$ depends on $j$. Applying Corollary~\ref{cor.Hall}
\[
([x_{j1}, y_{j1}^{\varphi}]\ldots [x_{jl}, y_{jl}^{\varphi}])^p \equiv [x_{j1}, y_{j1}^{\varphi}]^p\ldots [x_{jl}, y_{jl}^{\varphi}]^p \pmod{ \gamma_2(S)^p \gamma_p(S)},
\]
where $S=\langle [x_{j1}, y_{j1}^{\varphi}], \ldots, [x_{jl}, y_{jl}^{\varphi}] \rangle$. Observe that
\begin{align*}
&\gamma_2(S)^p \gamma_p(S) \leq \gamma_2(S) \leq [\gamma_i(G), G^{\varphi}, [\gamma_i(G), G^{\varphi}]] \leq \gamma_{2i+1}(\nu(G)) \leq \gamma_{i+s+1}(\nu(G)).
\end{align*}
Furthermore, each element $[x_{jk}, y_{jk}^{\varphi}]^p$ belongs to $\gamma_{i+s+1}(\nu(G))$. Indeed, by Theorem~\ref{thm.Hall} we have
\[
[x_{jk},y_{jk}^{\varphi}]^p \equiv [x_{jk}^p,y_{jk}^{\varphi}] \pmod{\gamma_2(K)^p \gamma_p(K)},
\]
where $K=\langle x_{jk}, [x_{jk},y_{jk}^{\varphi}]\rangle$. Observe that
\begin{align*}
&[x_{jk}^p, y_{jk}^{\varphi}] \in [\gamma_i(G)^p, G^{\varphi}] \leq \gamma_{i+s+1}(\nu(G));\\[2mm]
&\gamma_2(K)^p \gamma_p(K) \leq \gamma_2(K) \leq [\gamma_i(G), G^{\varphi}, \gamma_i(G)] \leq \gamma_{2i+1}(\nu(G)) \leq \gamma_{i+s+1}(\nu(G)).
\end{align*}
This means that each $\alpha_j^p\in \gamma_{i+s+1}(\nu(G))$, so $\alpha \in \gamma_{i+s+1}(\nu(G))$. Therefore $[\gamma_i(G), G^{\varphi}]^p \leq \gamma_{i+s+1}(\nu(G))$. This concludes the proof.
\end{proof}
We are now in a position to complete the proof of Theorem \ref{thmA}.
\begin{proof}[Proof of Theorem \ref{thmA}]
\noindent (a) follows directly from Proposition~\ref{prop.conditions}, so it remains to prove (b) and (c).
First notice that if $p$ is odd, then $\gamma_{m}(G)$ is a powerful $p$-group. Indeed, by hypothesis we have $$[\gamma_{m}(G),\gamma_{m}(G)]\leq \gamma_{2m}(G)\leq \gamma_{m+s}(G)=\gamma_{m}(G)^p.$$
Therefore, we can assume that $\gamma_{m}(G)$ is a powerful $p$-group for every $p$, and we prove (b) and (c) simultaneously. Now, by Lemma \ref{power} we have $\Pi_j(\gamma_{m}(G))=\gamma_{m}(G)^{p^{j}}$ for all $j\geq 1$ and for every $p$. Let $p^t$ be the exponent of $\gamma_m(G)$. Thus
\[
\gamma_{m+ts}(G)=\gamma_m(G)^{p^t}=1.
\]
Therefore, from Proposition \ref{gammanu} we obtain that $\gamma_{m+ts+1}(\nu(G))=1$. On the other hand, by item (a), we have $\gamma_{i+1}(\nu(G))^p\leq \gamma_{i+s+1}(\nu(G))$ for $i\geq m$. Therefore $$\gamma_{m+1}(\nu(G))^{p^t}\leq\Pi_t(\gamma_{m+1}(\nu(G))) \leq \gamma_{m+ts+1}(\nu(G))=1,$$ and the proof is concluded.
\end{proof}
\section{The exponent of $\nu(G)$}
Throughout the sequel $N$ denotes a normal subgroup of a finite $p$-group $G$. For the sake of brevity, we write $K = NN^{\varphi}[N,G^{\varphi}][G,N^{\varphi}]$ instead of $\ker(\widetilde{\pi})$ (see Lemma \ref{lem.general}, above).
\begin{prop}\label{propN}
\ \
\begin{itemize}
\item[(a)] $\gamma_s(K)= \gamma_s(N)\gamma_s(N^{\varphi})[\gamma_{s-1}(N),N^{\varphi}][N,\gamma_{s-1}(N^{\varphi})]$ for $s\geq 2$.
\item[(b)] If $p\geq 3$, $n\in \mathbb{N}$ such that $1<n<p$ and $\gamma_n(N)\leqslant N^p$, then $\gamma_{n+1}(K)\leq \gamma_2(N)^p\gamma_2(N^{\varphi})^p[N,N^{\varphi}]^p$;
\item[(c)] If $p=2$ and $N$ is powerful, then $\gamma_3(K)\leq \gamma_2(N)^4\gamma_2(N^{\varphi})^4[N,N^{\varphi}]^4$.
\end{itemize}
\end{prop}
\begin{proof}
(a) Clearly $$ \gamma_s(N)\gamma_s(N^{\varphi})[\gamma_{s-1}(N),N^{\varphi}][N,\gamma_{s-1}(N^{\varphi})] \leq \gamma_s(K),$$ for every $s\geq 2$. To prove the other inclusion we argue by induction on $s$.
Let $X=\{n_1,n_2^{\varphi},[n_3,g_1^{\varphi}],[g_2,n_4^{\varphi}] \ | \ n_i\in N, g_j\in G\}$ be a set of generators of $K$. Assume that $s=2$. It suffices to show that each commutator of weight 2 in the generators belongs to $\gamma_2(N)\gamma_2(N^{\varphi})[N,N^{\varphi}]$ since it is a normal subgroup of $\nu(G)$.
Let $n_1, n_2\in N$, $g_1, g_2 \in G$ and $n = [n_1,g_1], n'=[g_2,n_2] \in N$. Then
\begin{align*}
&[n_1, n_2^{\varphi}] \in [N,N^{\varphi}];\\[2mm]
&[n_1, g_1^{\varphi},n_2] =[n_1, g_1,n_2^{\varphi}]=[n^{-1},n_2^{\varphi}]\in [N, N^{\varphi }];\\[2mm]
&[g_2,n_2^{\varphi},n_1]=[g_2,n_2,n_1^{\varphi}]=[n',n_2^{\varphi}] \in [N, N^{\varphi }];\\[2mm]
&[[g_2,n_2^{\varphi}],[n_1, g_1^{\varphi}]]=[n',n^{\varphi}]\in [N, N^{\varphi }].
\end{align*}
Of course, $[n_1,n_2]\in \gamma_2(N)$ and $[n_1^{\varphi},n_2^{\varphi}]\in \gamma_2(N^{\varphi})$. Then for $s=2$ the inclusion holds.
Now assume $s \geq 2$ and suppose by the induction hypothesis that $\gamma_s(K)= \gamma_s(N)\gamma_s(N^{\varphi})[\gamma_{s-1}(N),N^{\varphi}][N,\gamma_{s-1}(N^{\varphi})]$. In particular $Y=\{x, x^{\varphi}, [y_1, n_1^{\varphi}], [n_2, y_2^{\varphi}] \ | \ x \in \gamma_s(N), y_i \in \gamma_{s-1}(N), n_i \in N\}$ is a set of generators for $\gamma_s(K)$. Therefore we need to show that $[\alpha, \beta] \in \gamma_{s+1}(N)\gamma_{s+1}(N^{\varphi})[\gamma_{s}(N),N^{\varphi}][N,\gamma_{s}(N^{\varphi})]$ for every $\alpha \in X$ and $\beta \in Y$.
Let $x\in \gamma_s(N)$, $y\in \gamma_{s-1}(N)$, $m, n_1, n_2\in N$ and $g_1, g_2 \in G$, and set $n = [n_1,g_1], n'=[g_2,n_2] \in N$. Then
\begin{align*}
&[x, m] \in \gamma_{s+1}(N); \hspace{0.5cm}[x,m^{\varphi}]\in [\gamma_{s}(N),N^{\varphi}];\\[2mm]
&[x^{\varphi}, m] \in [\gamma_s(N^\varphi),N];\hspace{0.5cm}[x^{\varphi},m^{\varphi}]\in \gamma_{s+1}(N^{\varphi});\\[2mm]
&[n_1, g_1^{\varphi},x] =[n_1, g_1, x^\varphi]=[n,x^{\varphi}]\in [N, \gamma_s(N^{\varphi })];\\[2mm]
&[g_2,n_2^{\varphi},x]=[g_2, n_2, x^\varphi]=[n',x^{\varphi}] \in [N, \gamma_s(N^{\varphi })];
\\[2mm]
&[[y, m^{\varphi}],n_1]=[[y, m], n_1^\varphi] \in [\gamma_s(N), N^\varphi];
\\[2mm]
&[[y, m^{\varphi}],[n_1,g_1^{\varphi}]]=[[y, m], n^{\varphi}]\in [\gamma_s(N), N^{\varphi }];\\[2mm]
&[[y, m^{\varphi}],[g_2, n_2^{\varphi}]]=[[y,m],(n')^{\varphi}]\in [\gamma_s(N), N^{\varphi }];\\[2mm]
&[[m, y^{\varphi}],n_1]=[[m, y], n_1^\varphi] \in [\gamma_s(N), N^\varphi];\\[2mm]
&[[m, y^{\varphi}],[n_1,g_1^{\varphi}]]=[[m,y],n^{\varphi}] \in [\gamma_s(N), N^\varphi];\\[2mm]
&[[m, y^{\varphi}],[g_2, n_2^{\varphi}]]=[[m,y],(n')^{\varphi}] \in [\gamma_s(N), N^\varphi].
\end{align*}
This suffices to conclude that $\gamma_{s+1}(K)\leq \gamma_{s+1}(N)\gamma_{s+1}(N^{\varphi})[\gamma_{s}(N),N^{\varphi}][N,\gamma_{s}(N^{\varphi})]$, and we are done.\\
\noindent (b) Consider $p\geq3$ and $1<n<p$, with $n\in \mathbb{N}$. By the previous item we have $\gamma_{n+1}(K) = \gamma_{n+1}(N) \gamma_{n+1}(N^\varphi) [\gamma_n(N), N^{\varphi}] [N, \gamma_n(N^\varphi)]$. Since $\gamma_n(N)\leq N^p$, it follows that $$\gamma_{n+1}(K)\leq [N^p, N] [(N^{\varphi})^p, N^{\varphi}] [N^p, N^{\varphi}] [N, (N^{\varphi})^p].$$
First we will prove that $[N^p, N]\leq \gamma_2(N)^p$. Since $n<p$, by Lemma \ref{lem.hallformula} we have
$$\begin{array}{ccl}
[N^p, N] & \leq & [N, N]^p [N,\ _p\ N] = \gamma_2(N)^p[\gamma_{p-1}(N), N, N] \\
& \leq & \gamma_2(N)^p[\gamma_n(N), N, N] \leq \gamma_2(N)^p[N^p, N, N].
\end{array}$$
Applying Lemma~\ref{normalinc} to $[N^p, N]$ and $[N, N]^p$, we deduce that $[N^p, N] \leq \gamma_2(N)^p$. Clearly, in the same way we have $[(N^{\varphi})^p, N^{\varphi}] \leq \gamma_{2}(N^{\varphi})^p$.
Now, it remains to prove that $[N^p, N^{\varphi}] \leq [N, N^{\varphi}]^p$. To this end, let $x,y\in N$. By Theorem~\ref{thm.Hall},
$$[x^p, y^\varphi] \equiv [x, y^\varphi]^p \pmod{\gamma_2(L)^p\gamma_p(L)},$$
where $L=\langle x, [x, y^\varphi]\rangle$. Note that $\gamma_2(L)^p\leq [N, N^{\varphi}, N]^p=[N, N, N^{\varphi}]^p\leq[N, N^{\varphi}]^p$ and
$$\begin{array}{ccl}
\gamma_p(L) & \leq & [N, N^{\varphi}, N,\ _{p-2}\ N]=[\gamma_{p-1}(N), N, N^{\varphi}]\\
& \leq & [N^p, N, N^{\varphi}]=[N^p, N^{\varphi}, N^{\varphi}]\leq [N^p, N^{\varphi}, \nu(G)]
\end{array}$$
Considering all the elements $x, y\in N$, we deduce that
$$[N^p, N^{\varphi}]\leq [N, N^{\varphi}]^p[N^p, N^{\varphi}, \nu(G)].$$
Note that $[N^p, N^{\varphi}]$ and $[N, N^{\varphi}]^p$ are normal subgroups of $\nu(G)$. So, applying Lemma \ref{normalinc} to these normal subgroups we get $[N^p, N^{\varphi}] \leq [N, N^{\varphi}]^p$. Similarly we obtain $[N, (N^{\varphi})^p]\leq [N, N^{\varphi}]^p$.
Therefore $$\gamma_{n+1}(N) \gamma_{n+1}(N^{\varphi}) [N^p, N^{\varphi}] [N,(N^{\varphi})^p] \leq \gamma_2(N)^p\gamma_2(N^{\varphi})^p[N,N^{\varphi}]^p$$ and the proof is complete for $p\geq3$. \\
\noindent (c) Now consider $p=2$. Since $N$ is powerful, by Lemma~\ref{lem.lower.potent} we have $\gamma_3(N) \leq \gamma_2(N)^4$, so $$\gamma_3(K)=\gamma_{3}(N)\gamma_{3}(N^{\varphi})[\gamma_2(N),N^{\varphi}][N,\gamma_2(N^{\varphi})]\leq \gamma_{2}(N)^4\gamma_{2}(N^{\varphi})^4[N^4,N^{\varphi}][N,(N^{\varphi})^4].$$
We need to prove that $[N^4, N^{\varphi}]\leq [N, N^{\varphi}]^4$. Let $n, m\in N$. By Theorem~\ref{thm.Hall},
$$[n^4, m^\varphi] \equiv [n, m^\varphi]^4 \pmod{\gamma_2(L)^4\gamma_2(L)^2\gamma_4(L)},$$
where $L=\langle n, [n, m^\varphi]\rangle$. Note that $\gamma_2(L)\leq [N, N^{\varphi}, N]=[N, N, N^{\varphi}]$, which implies that
\begin{align*}
&\gamma_2(L)^{4} \leq \gamma_2(L)^2\leq[N, N, N^{\varphi}]^2\leq[N^4, N^{\varphi}]^2\\[2mm]
&\gamma_4(L) \leq [N, N, N^{\varphi}, L, L] \leq [N, N, N^{\varphi}, \nu(G), \nu(G)] \leq [N^4, N^{\varphi}, \nu(G)].
\end{align*}
Therefore $[n^4, m^{\varphi}] \in [N, N^{\varphi}]^4[N^4, N^{\varphi}, \nu(G)][N^4, N^{\varphi}]^2$.
By commutator relations we conclude that for each element $\alpha\in N^4$ and each $m\in N$ we have $[\alpha, m^{\varphi}] \in [N, N^{\varphi}]^4[N^4, N^{\varphi}, \nu(G)][N^4, N^{\varphi}]^2$, that is,
$$[N^4, N^{\varphi}]\leq [N, N^{\varphi}]^4[N^4, N^{\varphi}, \nu(G)][N^4, N^{\varphi}]^2.$$
Since the subgroups $[N^4, N^{\varphi}]$ and $[N, N^{\varphi}]^4$ are normal in $\nu(G)$, we can apply Lemma~\ref{normalinc}, obtaining $[N^4, N^{\varphi}] \leq [N, N^{\varphi}]^4$. In the same way $[N, (N^{\varphi})^4]\leq [N, N^{\varphi}]^4$.
Therefore $$\gamma_{3}(N)\gamma_{3}(N^{\varphi})[N^4,N^{\varphi}][N,(N^{\varphi})^4]\leq \gamma_2(N)^4\gamma_2(N^{\varphi})^4[N,N^{\varphi}]^4$$ and the proof is complete.
\end{proof}
\begin{cor}\label{s-lowercentral}
If $N$ is potent and $s \geq 2$, then the $s$-th term of the lower central series $\gamma_s(K)$ is potently embedded in $K$.
\end{cor}
\begin{proof}
The proof is by induction on $s$. If $s=2$, by Proposition~\ref{propN} and by definition we have
$$\begin{array}{lcl}
[\gamma_2(K), _{p-2}K]=\gamma_p(K) & \leq & \gamma_2(N)^p\gamma_2(N^{\varphi})^p[N,N^{\varphi}]^p\\
\ & \leq & (\gamma_2(N)\gamma_2(N^{\varphi})[N,N^{\varphi}])^p = \gamma_2(K)^p,\ \text{for}\ p\geq 3,
\end{array}$$
$$\begin{array}{lcl}
[\gamma_2(K), K]=\gamma_3(K) & \leq & \gamma_2(N)^4\gamma_2(N^{\varphi})^4[N,N^{\varphi}]^4\\
\ & \leq & (\gamma_2(N)\gamma_2(N^{\varphi})[N,N^{\varphi}])^4 = \gamma_2(K)^4, \ \text{for}\ p=2.
\end{array}$$
This means that $\gamma_2(K)$ is potently embedded in $K$.
Suppose by the induction hypothesis that $[\gamma_{s}(K), _{p-2}K] \leq \gamma_{s}(K)^p$, if $p\geq 3$, and $[\gamma_{s}(K), K] \leq \gamma_{s}(K)^4$, if $p=2$. Now Lemma \ref{lem.hallformula} yields
$$\begin{array}{lcl}
[\gamma_{s+1}(K), _{p-2}K]& \leq & [\gamma_s(K)^p, K]\\
\ & \leq & [\gamma_s(K), K]^p[K,\ _p\ \gamma_s(K)]\\
\ & \leq & \gamma_{s+1}(K)^p[\gamma_{s+1}(K),_{p-2} K, K], \ \text{for} \ p\geq 3,
\end{array}$$
$$\begin{array}{lcl}
[\gamma_{s+1}(K), K]& \leq & [\gamma_s(K)^4, K]\\
\ & \leq & [\gamma_s(K), K]^4[K,\ _2\ \gamma_s(K)]^2[K, _4\gamma_s(K)]\\
\ & \leq & \gamma_{s+1}(K)^4[\gamma_{s+1}(K), K]^2 [\gamma_{s+1}(K), K, K, K], \ \text{for} \ p=2.
\end{array}$$
Therefore, by Lemma \ref{normalinc}, $[\gamma_{s+1}(K), _{p-2}K]\leq \gamma_{s+1}(K)^p$ if $p\geq 3$, and $[\gamma_{s+1}(K),K]\leq\gamma_{s+1}(K)^4$ if $p=2$, as desired.
\end{proof}
\begin{cor}\label{corN}
If $N$ is potent or $\gamma_p(N)=1$, then $\exp(K)$ divides ${\bf p} \cdot \exp(N)$.
\end{cor}
\begin{proof}
Assume that $\exp(N)=p^e$. As $K$ is generated by $N^{G^{\varphi}}$ and $(N^{\varphi})^G$, it follows that $K=\Omega_e(K)$.
Now let $p\geq 3$. On the one hand, if $N$ is potent, according to Corollary~\ref{s-lowercentral}, we conclude that $$\gamma_{2(p-1)}(K) = [\gamma_p(K),_{p-2}K] \leq \gamma_{p}(K)^p.$$
On the other hand, if $\gamma_p(N)=1$, by Proposition~\ref{propN} (a) we have $\gamma_{p+1}(K)=1$. So $$\gamma_{2(p-1)}(K) \leq \gamma_{p+1}(K)=1 \leq \gamma_p(K)^p.$$
In both cases we can apply Lemma~\ref{lem.exponent} with $k=2, r=p$ and $s=1$, obtaining that $\exp(K) = \exp(\Omega_e(K))$ divides $p^{e+1}$.
Analogously, for $p=2$ we have $\gamma_{3}(K) \leq \gamma_{2}(K)^4$, if $N$ is potent, and $\gamma_3(K)=1 \leq \gamma_2(K)^4$, if $\gamma_2(N)=1$. Then, applying Lemma~\ref{lem.exponent} with $k=3, r=2$ and $s=2$, we obtain that $\exp(K)=\exp(\Omega_e(K))$ divides $2^{e+2}$.
\end{proof}
The proof of Theorem \ref{thm.potent} is now easy.
\begin{proof}[Proof of Theorem~\ref{thm.potent}]
(a) According to Lemma \ref{lem.general}~(b), we deduce that $\exp(\nu(G))$ divides $\exp(\nu(G/N)) \cdot \exp(K)$.
By Corollary~\ref{corN}, $\exp(K)$
divides ${\bf p}\cdot \exp(N)$, which completes the proof. \\
\noindent (b) Arguing as in the previous paragraph, it is sufficient to prove that $\exp(K)$ divides $\exp(N)$. So, let $p\geq 5$ and suppose that $\gamma_{p-2}(N) \leq N^p$. Applying Proposition~\ref{propN} (b), for $n=p-2$ we have that $\gamma_{p-1}(K) \leq \gamma_2(K)^p$. In particular, $K$ is potent. By Lemma~\ref{lem.exponent} with $k=1$, $r=2$ and $s=1$ we obtain that $\exp(K)=\exp(\Omega_e(K))$ divides $\exp(N)$, since $K=\ker(\tilde{\pi}) = \Omega_e(K)$.
\end{proof}
Arguing as in the proof of the above result and using Lemma~\ref{lem.general} (c) and the fact that $\exp([N, G^{\varphi}][G, N^{\varphi}])$ divides $\exp(K)$, which in turn divides $\exp(N)$, we obtain the following:
\begin{cor}\label{cor.potent}
Let $N$ be a normal subgroup of a $p$-group $G$.
\begin{itemize}
\item[(a)] If $N$ is potent or $\gamma_{p}(N)=1$, then $\exp([G,G^{\varphi}])$ divides ${\bf p}\cdot \exp([G/N,(G/N)^{\varphi}])\cdot\exp(N)$.
\item[(b)] If $\gamma_{p-2}(N) \leq N^p$, then $\exp([G,G^{\varphi}])$ divides $\exp([G/N,(G/N)^{\varphi}])\cdot\exp(N)$.
\end{itemize}
\end{cor}
A finite $p$-group $G$ is called regular if $x^py^p \equiv (xy)^p \pmod{H^p}$, for every $x,y \in G$ and $H = H(x,y) = \langle x,y\rangle'$. It is well-known that if $G$ is a regular $p$-group and $G$ is generated by a set $X$, then $\exp(G) = \max\{|x| \mid x \in X\}$, where $|x|$ denotes the order of the element $x$ in the group $G$ (see \cite[1.2.13~(i)]{LM}).
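To illustrate how regularity typically arises in this setting, note that Corollary~\ref{cor.Hall}, applied to two elements $x,y$ of a finite $p$-group $G$ with $L=\langle x,y\rangle$, yields
\[
(xy)^p\equiv x^p y^p \pmod{\gamma_2(L)^p\,\gamma_p(L)}.
\]
Since $\gamma_2(L)=H(x,y)$, it follows that any finite $p$-group of nilpotency class less than $p$ (so that $\gamma_p(L)=1$) is regular. This is precisely the situation of the subgroup $H$ in the proof of Theorem~\ref{corlog} below, where $\gamma_p(H)=1$.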
\begin{proof}[Proof of Theorem \ref{corlog}]
Recall that $G$ is a $p$-group of nilpotency class $c$ and $n=\lceil \log_{p}(c+1)\rceil$. We prove by induction on $c$ that $\exp([G,G^{\varphi}])$ divides $\exp(G)^n$.
For any $c$ let $N=\gamma_{j}(G)$ and $H=[N,G^{\varphi}]\cdot [G,N^{\varphi}]$ where $j=\lceil \frac{c+1}{p}\rceil$. Then $H\leq \gamma_{j+1}(\nu(G))$ and hence $\gamma_p(H)\leq \gamma_{c+p+1}(\nu(G))\leq \gamma_{c+2}(\nu(G))=1$. In particular, $H$ is a regular $p$-group. Let $x\in N$ and $y^{\varphi}\in G^{\varphi}$. We will prove that if $\exp(N)=p^e$, then $H^{p^e}=1$. Applying Theorem \ref{thm.Hall} we have
\[
[x^{p^e}, y^{\varphi}] \equiv [x,y^{\varphi}]^{p^e} \pmod {\gamma_{2}(L)^{p^e}\gamma_{p}(L)^{p^{e-1}}\gamma_{p^2}(L)^{p^{e-2}}\ldots \gamma_{p^e}(L)},
\]
where $L= \langle x, [x,y^{\varphi}]\rangle$. On the one hand, $\gamma_p(L) \leq \gamma_{pj+1}(\nu(G)) \leq \gamma_{c+2}(\nu(G))=1$. If $p=2$, then $\gamma_2(L)=1$ and we obtain that $[x,y^{\varphi}]^{p^e}=1$. In particular, $H^{p^e}=1$.
If $p$ is odd, then $[x,y^{\varphi}]^{p^e} \in \gamma_{2}(L)^{p^e}$. Since $[x,y^{\varphi},x]=[x,y,x^{\varphi}]\in [N,N^{\varphi}]$ we conclude that $[x,y^{\varphi}]^{p^e} \in \gamma_{2}(L)^{p^e} \leq [N,N^{\varphi}]^{p^e}$. Therefore, it is sufficient to prove that $[N,N^{\varphi}]^{p^e}=1$.
Repeating the process with $a, b\in N$ we obtain that $[a,b^{\varphi}]^{p^e}\in [N',N^{\varphi}]^{p^e}$, so $H^{p^e}\leq [N,N^{\varphi}]^{p^e}\leq [N',N^{\varphi}]^{p^e}$. Iterating this process, after at most $p$ steps we obtain that $[N,N^{\varphi}]^{p^e}=1$, since $\gamma_p(N)=1$. Note that if $c \leq p-1$, then $N=G$ since $j=1$. In particular, $H=[G,G^{\varphi}]$ and $H^{\exp(G)}=1$. Thus, it remains to prove the case when $c\geq p$.
Now, Lemma \ref{lem.general} (c) implies that $\exp([G,G^{\varphi}])$ divides $$\exp([G/N,G^{\varphi}/N^{\varphi}]) \cdot \exp(H).$$
As $G/N$ has nilpotency class $\left\lceil\frac{c+1}{p}\right\rceil - 1$, by induction we obtain that $\exp([G/N,G^{\varphi}/N^{\varphi}])$ divides $\exp(G)^m$ where $m=\left\lceil \log_{p} \left\lceil\frac{c+1}{p}\right\rceil\right\rceil=\left\lceil \log_{p}(\frac{c+1}{p})\right\rceil=\left\lceil\log_{p}(c+1)\right\rceil-1$. Therefore, $\exp([G,G^{\varphi}])$ divides $\exp(G)^m\cdot \exp(G)$ and
the result follows.
\end{proof}
The previous result improves the bounds obtained in \cite{APT,Moravec.Schur,Sambonet17} (cf. \cite[Theorem 6.5]{APT}, \cite[Section 2]{Moravec.Schur} and \cite[Theorem 1.1]{Sambonet17}).
\section{Applications: finite $p$-groups of fixed coclass}
Recall that the coclass of a $p$-group of order $p^n$ and nilpotency class $c$ is defined to be $r=n-c$. Finite $p$-groups of coclass 1 are also known as $p$-groups of maximal class. For $p=2$ these are known to be either dihedral, semidihedral or quaternion groups \cite[Corollary 3.3.4 (iii)]{LM}.
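As a concrete instance, consider the dihedral group $D_{2^n}=\langle a,b \mid a^{2^{n-1}}=b^2=1,\ bab=a^{-1}\rangle$ of order $2^n$, $n\geq3$. Since $[a,b]=a^{-2}$, a direct computation gives
\[
\gamma_i(D_{2^n})=\langle a^{2^{i-1}}\rangle\quad\text{for } i\geq 2,
\]
so the nilpotency class is $n-1$ and the coclass is $n-(n-1)=1$.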
Let $G$ be a $p$-group of maximal class of order greater than or equal to $p^4$. We define $G_1=C_G(\gamma_2(G)/\gamma_4(G))$. In other words, $G_1$ consists of the elements $x\in G$ such that $[x,\gamma_2(G)]\leq \gamma_4(G)$.
It is well-known that the subgroup $G_1$ is a characteristic maximal subgroup of $G$. Another structural property of the subgroup $G_1$ is given in the next result.
\begin{thm}\cite[Corollary 3.3.6]{LM}
If $G$ is a $p$-group of maximal class of order greater than or equal to $p^{p+2}$, then $\gamma_p(G)=G_1^p$.
\end{thm}
More information on $p$-groups of maximal class can be found in \cite[Chapter 3]{LM}. We are now in a position to prove Corollary~\ref{cor.maximal}.
\begin{proof}[Proof of Corollary \ref{cor.maximal}]
First of all, we prove that $G$ has a potent maximal subgroup or a maximal subgroup of class at most $p-1$. If $p=2$, then $G$ has a cyclic maximal subgroup. Thus we can assume that $p$ is odd.
On the one hand, if $|G|\leq p^{p+1}$, then its maximal subgroups have order at most $p^p$ and hence nilpotency class at most $p-1$.
On the other hand, assume that $|G|\geq p^{p+2}$. In this case $[G_1,G_1]=[G_1,\gamma_2(G)]$, since $|G_1:\gamma_2(G)|=p$. Thus, as $[G_1,G_1]\leq \gamma_4(G)$, it follows that $$\gamma_{p-1}(G_1)=[[G_1,G_1],\ _{p-3}\ G_1]\leq [\gamma_4(G),\ _{p-3}\ G_1]\leq \gamma_p(G)=G_1^p,$$ and $G_1$ is a potent maximal subgroup.
Now, by Theorem~\ref{thm.potent} and Corollary~\ref{corN} we obtain that $\exp(\nu(G))$ divides ${\bf p}\cdot\exp(\nu(C_p))\cdot \exp(G)={\bf p}^2\cdot \exp(G)$, since $\exp(\nu(C_p))$ equals $4$ if $p=2$ and $p$ if $p$ is odd.
\end{proof}
The following result is an immediate consequence of Corollary \ref{cor.maximal}.
\begin{cor}
Let $p$ be a prime and $G$ a $p$-group of maximal class. Then $\exp(\mu(G))$ and $\exp([G,G^{\varphi}])$ divide ${\bf p}^2 \cdot \exp(G)$.
\end{cor}
Let $p$ be a prime and $r$ a positive integer; we define the integer $m(p,r)$ by $m(p,r)=(p-1)p^{r-1}$ for $p$ odd and $m(2,r)=2^{r+2}$. It is well-known that if $G$ is a $p$-group of coclass $r$, then $\gamma_{m(p,r)}(G)$ is powerful, see for instance \cite[Theorems 6.3.1 and 6.3.2]{LM}. Recall that $d(\gamma_m(G))$ is the minimal cardinality of a generating set of $\gamma_m(G)$. Moreover, we have the following two results.
\begin{thm}\cite[Theorem 6.3.8]{LM}\label{coclass2}
Let $G$ be a finite $2$-group of coclass $r$ and nilpotency class $c$ and let $m=m(2,r)$ and $s=d(\gamma_m(G))$. If $c\geq 2^{r+3}$, then the following hold:
\begin{itemize}
\item[(a)] $\gamma_i(G)^2=\gamma_{i+s}(G)$ for all $i\geq m$;
\item[(b)] $s=2^d$ with $0\leq d\leq r+1$.
\end{itemize}
\end{thm}
\begin{thm}\cite[Theorem 6.3.9]{LM}\label{coclassp}
Let $G$ be a finite $p$-group of coclass $r$ and nilpotency class $c$ for $p$ odd, and let $m=m(p,r)$ and $s=d(\gamma_m(G))$. If $c\geq 2p^{r}$, then the following hold:
\begin{itemize}
\item[(a)] $\gamma_i(G)^p=\gamma_{i+s}(G)$ for all $i\geq m$;
\item[(b)] $s=(p-1)p^d$ with $0\leq d\leq r-1$.
\end{itemize}
\end{thm}
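For instance, for an odd prime we have $m(3,2)=(3-1)\cdot 3^{2-1}=6$, so for a finite $3$-group of coclass $2$ (of large enough nilpotency class) Theorem~\ref{coclassp} applies to the terms $\gamma_i(G)$ with $i\geq 6$, whereas for $p=2$ one gets $m(2,2)=2^{2+2}=16$ in Theorem~\ref{coclass2}.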
We are now in a position to prove Theorem \ref{thm.cc}.
\begin{proof}[Proof of Theorem~\ref{thm.cc}]
Let $m=m(p,r)$ and consider the quotient group $\bar{G}=G/\gamma_{m+1}(G)$. By Theorem \ref{corlog} we have that $\exp ([\bar{G},\bar{G}^{\varphi}])$ divides $\exp(\bar{G})^n$ where $n=\lceil \log_{p}(m+1)\rceil$. If $p$ is odd, then $n\leq \lceil \log_p(p^r)\rceil = r$ as $m+1 \leq p^r$. If $p=2$, then $n=r+3$. By Lemma \ref{lem.general} the kernel of the canonical epimorphism $\widetilde{\pi}: \nu(G) \to \nu(\overline{G})$ is the subgroup $\gamma_{m+1}(G)\gamma_{m+1}(G)^{\varphi}[\gamma_{m+1}(G),G^{\varphi}][G,\gamma_{m+1}(G)^{\varphi}]$ which is contained in $\gamma_{m+1}(\nu(G))$ by Proposition~\ref{gammanu}. Now, applying Theorem \ref{thmA} we have $\exp(\gamma_{m+1}(\nu(G)))\leq \exp(\gamma_{m}(G))$. Combining these two estimates by means of Lemma \ref{lem.general} yields the desired bound.
\end{proof}
\section*{Acknowledgements}
The work of the first and the second authors was supported by DPI/UnB and FAPDF-Brazil. The third author was supported by CNPq-Brazil. The last author was supported by the ``National Group for Algebraic and Geometric Structures, and their Applications'' (GNSAGA - INdAM). The last author is also grateful to the Department of Mathematics of the University of Brasilia for its hospitality and support while this investigation was carried out.
Finally, the authors are very grateful to the referees, who have carefully read the manuscript and pointed out several mistakes and typographical errors. Moreover, their insightful comments were valuable for the improvement of this new version.
\bibliographystyle{plain}
\section{Introduction}
When studying dynamical systems with continuous time (i.e. systems of differential equations) or discrete time (i.e. diffeomorphisms), special solutions, such as fixed points, also called equilibrium points, attract a lot of attention. In particular, one needs to understand the behavior of nearby solutions. This usually requires some deep analysis involving {\it normal forms} \cite{Arn-geom}: models to which the initial dynamical system is conjugate and which are supposed to capture the very nature of the dynamics.
When considering analytic or smooth dynamical systems, one needs extra assumptions in order to really obtain dynamical and geometrical information on the initial dynamical system via its normal form. These assumptions can sometimes be understood as the presence of many symmetries. This led to the concept of {\it integrability}.
In the framework of differential equations or vector fields, a first attempt to define such a notion for Hamiltonian systems is due to Liouville \cite{Liouville1855}. This led, much later, to the now classic Liouville-Mineur-Arnold theorem \cite{Arn1} which provides action-angle coordinates by a canonical transformation. For a general concept of action-angle coordinates we refer to \cite{Zung2018}.
In 1978, J. Vey studied, in the groundbreaking work \cite{Vey1978}, a family of $n$ Poisson commuting analytic Hamiltonian functions
in a neighborhood of a common critical point. Under a generic condition on their Hessians, he proved that the family can be simultaneously transformed into a (Birkhoff) normal form. Such a family of Hamiltonians has to be understood as a ``completely integrable system''. Later, H. Eliasson, H. Ito, L. Stolovitch, N.T. Zung,
to name a few, generalized or improved J. Vey's theorem in different aspects, including the non-Hamiltonian setting \cite{Eliasson1984, Eliasson1990, Ito1989, Ito1992, Stolovitch2000, Stolovitch2005, Zung2005}. This has been recently developed in the context of PDEs as infinite-dimensional dynamical systems \cite{kappeler-poschel-book,kuksin-perelman, BS20}. In a different context of global dynamics, a notion of ``integrable maps'' has been devised relative to the long-time behaviour of their orbits and their {\it complexity} \cite{veselov-intg-Russian,veselov-CMP}.
In \cite{Bogoyavlenskij1998}, a new integrability condition for non-Hamiltonian vector fields was established, which involves commuting vector fields and common first integrals. Concretely, such an integrable system on an $n$-dimensional manifold consists of $p$ independent commuting vector fields and $n-p$ functionally independent common first integrals. For a local integrable system near a common equilibrium point of the vector fields, the $p$ vector fields (resp. $n-p$ first integrals) may not always be independent (resp. functionally independent), so they are required to be independent (resp. functionally independent) almost everywhere.
Then one can seek a simultaneous Poincar\'e-Dulac normal form (named ``normal form'' for short) of the vector fields. Such a transformation can be obtained under certain non-degeneracy conditions \cite{Stolovitch2000,Zung2015,Jiang2016}.
We aim at considering, in the same spirit, discrete dynamical systems given by a family of germs of commuting diffeomorphisms at a fixed point. On the one hand, the simultaneous linearization of such a holomorphic family under an appropriate ``small divisors'' condition has been treated by the second author \cite{stolo-bsmf}. On the other hand, to the best of our knowledge, the only known result in this spirit, related to ``integrability of diffeomorphisms'', is due to X. Zhang \cite{Zhang2013}, who considered a single diffeomorphism near a fixed point. In this article, we propose an analogous notion of integrability for a family of commuting diffeomorphisms near a common fixed point; we then explore their local behavior and study their normal forms.
In this paper, we consider local diffeomorphisms on $(\mathbb{K}^n,0)$ having the form
\begin{equation}
\label{eq:11}
\Phi(x)=Ax+\textit{higher order terms}
\end{equation}
such that the coefficient matrix $A$ of the linear part at the origin has a (real or complex) logarithm, i.e., there exists a matrix $B$ such that $A=e^B$. It is known that a complex matrix has a logarithm if and only if it is invertible \cite{Gantmakher1959}; a real matrix has a real logarithm if and only if it is invertible and each Jordan block belonging to a negative eigenvalue occurs an even number of times \cite{Culver1966}.
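As an elementary illustration of the real case, the matrix $-I_2$, in which the Jordan block $(-1)$ occurs twice, is the rotation by angle $\pi$ and hence admits the real logarithm
\[
-I_2=\exp\begin{pmatrix}0&-\pi\\ \pi&0\end{pmatrix},
\]
whereas the $1\times1$ matrix $(-1)$, a single Jordan block belonging to a negative eigenvalue, has no real logarithm; over $\mathbb{C}$ it of course has the logarithms $(2k+1)\sqrt{-1}\pi$, $k\in\mathbb{Z}$.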
Let $\Phi$ be a germ of a diffeomorphism near a fixed point, say the origin. Then, for any integer $k\geqslant 1$, $\Phi^{(k)}$ denotes the homogeneous polynomial of degree $k$ of the Taylor expansion at the origin of $\Phi$.
\begin{definition}[Integrability, local version]
Let $\Phi$ be a local diffeomorphism on $\mathbb{K}^n (\mathbb{K}=\mathbb{C} \text{~or~} \mathbb{R})$ having the origin $0$ as its isolated fixed point. If there exist $p\geqslant1$ pairwise commuting (germs of) diffeomorphisms $\Phi_1=\Phi,\Phi_2,\ldots,\Phi_p$ of the form \eqref{eq:11} with $D\Phi_i(0)=A_i$ and $q=n-p$ common first integrals $F_1,\ldots,F_q$ of the diffeomorphisms such that
\begin{itemize}
\item the diffeomorphisms are independent in the following sense: the matrices $\{\ln A_i\}_{i=1,\ldots, p}$ are linearly independent over $\mathbb{K}$; if $\mathbb{K}=\mathbb{C}$, the logarithms $\ln A_i$ are not unique, and we require the independence of every possible family of logarithms;
\item the first integrals are functionally independent almost everywhere, i.e., the wedge of differentials of the first integrals satisfies $dF_1\wedge\cdots\wedge dF_q\neq0$ almost everywhere,
\end{itemize}
then $\Phi$ is called a \textbf{completely integrable} diffeomorphism and we say $(\Phi_1=\Phi,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ is a (discrete) completely integrable system of type $(p,q)$.
\end{definition}
We remark that analytic integrable systems in $n$-dimensional Euclidean spaces containing a single diffeomorphism and $n-1$ functionally independent first integrals were studied in \cite{Zhang2008,Zhang2013}, and a local normal form was obtained under a mild generic condition.
We now introduce the notion of non-degeneracy of integrable diffeomorphisms.
\begin{definition}[non-degeneracy]
We say that a local discrete integrable system $(\Phi_1,\ldots,\Phi_p,F_1,\ldots,F_q)$ of type $(p,q)$ is \textbf{non-degenerate}, if for all $i=1,\ldots,p$,
\begin{itemize}
\item the linear part $\Phi_i^{(1)}$ of the diffeomorphism $\Phi_i$ at the origin is semi-simple, i.e., writing $\Phi_i^{(1)}(x)=A_i x$, the coefficient matrix $A_i$ is diagonalizable over $\mathbb{C}$;
\item there exist $q$ functionally independent homogeneous polynomials $P_1,\ldots,P_q$ such that $(\Phi_1^{(1)},\ldots,\Phi_p^{(1)},P_1,\ldots,P_q)$ is a linear (discrete) completely integrable system of type $(p,q)$;
\end{itemize}
\end{definition}
The notion of the non-degeneracy of commuting diffeomorphisms follows that of non-degeneracy of commuting vector fields defined in \cite{Zung2015}. The first condition is generic in the sense that almost all matrices are diagonalizable over $\mathbb{C}$; and the second condition is automatically satisfied for formal or analytic integrable systems by Ziglin's lemma \cite{Ziglin1982}.
If there exist logarithms $\ln A_i$ of $A_i$ such that any common first integral of $\Phi_1^{(1)},\ldots,\Phi_p^{(1)}$ is also a common first integral of the linear vector fields $X_i$ defined by $\ln A_i$, then $X_1,\ldots,X_p$ together with $P_1,\ldots,P_q$ form a linear non-degenerate integrable system of type $(p,q)$. In such a case, the family of (linear) integrable diffeomorphisms is said to be {\it infinitesimally integrable} and $X_1,\ldots,X_p$ are called {\it infinitesimal (linear) generators} in the sense $\Phi_i^{(1)}=e^{X_i}$, and we pick and fix one such family of vector fields if the logarithms are not unique.
The following example shows that not every linear integrable diffeomorphism is infinitesimally integrable.
\begin{example}
\label{ex:13}
Consider the integrable system $\Phi(x,y)=(-2x,\frac{1}{2}y)$, $F=x^2y^2$ on $\mathbb{C}^2$ of type $(1,1)$. The corresponding vector field $X=(\ln2+(2K_1+1)\sqrt{-1}\pi)x\frac{\partial}{\partial x}-(\ln2+2 K_2\sqrt{-1}\pi) y\frac{\partial}{\partial y}$ does not admit any homogeneous first integral, whatever the integers $K_1, K_2$. Indeed, if $X(x^py^q)=0$ for some natural integers $p,q$, then we would have
$(\ln 2)(p-q)+2\sqrt{-1}\pi[(K_1+\frac{1}{2})p-K_2q]=0$. The vanishing of the real part leads to $p=q$, so that the vanishing of the imaginary part reads $(K_1-K_2+\frac{1}{2})p=0$; this is not possible unless $(p,q)=(0,0)$.
\end{example}
\section{Preliminaries and formal normal forms}
In this section, we introduce some notions and lemmas in order to organize the proof of the main theorem.
The first lemma is the analogue of the Poincar\'e-Dulac normal form for commuting vector fields. It requires neither integrability nor non-degeneracy.
\begin{lemma}[Th\'eor\`eme 4.3.2 in \cite{Chaperon1986Asterisque}]
\label{lem:PD-NF}
Let $\Phi_1,\Phi_2,\ldots,\Phi_p$ be $p$ commuting diffeomorphisms in $\mathbb{K}^n$ around $0$. Let $\Phi_{j}^{ss}$ be the semi-simple part of the Jordan decomposition of the linear part of $\Phi_j$ at the origin. There exists a formal transformation $\hat\Psi$ such that $\hat \Phi_{i}\circ\Phi_{j}^{ss}=\Phi_{j}^{ss}\circ\hat\Phi_{i}$ for all $i,j=1,2,\ldots,p$, where $\hat \Phi_{i}:=\hat\Psi^{-1}\circ\Phi_i\circ\hat\Psi$. We say the diffeomorphisms are in Poincar\'e-Dulac normal form. Moreover, when $\mathbb{K}=\mathbb{C}$, let $\rho$ be an anti-holomorphic involution. Assume that $\Phi_i\rho=\rho\Phi_i$ for all $i$; then $\hat\Psi$ can be chosen such that $\hat\Psi\rho=\rho\hat\Psi$ as well, and we call it a $\rho$-equivariant normalization.
\end{lemma}
Though the result can be obtained by direct computation, it is easier to understand \cite{Chaperon1986Asterisque} via the Jordan decomposition theorem. For completeness, we provide a proof (in particular, of the $\rho$-equivariant case used in section 5) here.
\begin{proof}[Idea of a proof]
For each positive integer $\ell$, let $\mathcal{E}^{(\ell)}$ denote the $\mathbb{K}$-algebra of $\ell$-th order jets (Taylor expansions) $j_0^\ell f$ at $0$ of smooth $\mathbb{K}$-valued functions $f$ on $\mathbb{K}^n$ and let $\mathcal D^{(\ell)}$ be the group of $\ell$-th order jets $j_0^\ell\Phi$ at $0$ of smooth diffeomorphisms $\Phi$ vanishing at $0\in\mathbb{K}^n$; then, the map which sends $j_0^\ell\Phi\in\mathcal D^{(\ell)}$ to $(j_0^\ell\Phi)^*:j_0^\ell f\mapsto j_0^\ell(f\circ\Phi)$ is an isomorphism of $\mathcal D^{(\ell)}$ onto the group $\mathrm{aut}(\mathcal E^{(\ell)})$ of automorphisms of $\mathcal E^{(\ell)}$. We may sometimes abuse notation and identify a jet with one of its representatives, for simplicity.
Thus,
$(j_0^\ell\Phi_1)^*,\ldots,(j_0^\ell\Phi_p)^*$ are commuting elements of $\mathrm{aut}(\mathcal E^{(\ell)})$; it follows that their semi-simple parts, as endomorphisms of the $\mathbb{K}$-vector space $\mathcal E^{(\ell)}$, commute pairwise. Now, it is easy to see that the semi-simple part of an element of $\mathrm{aut}(\mathcal E^{(\ell)})$ lies in $\mathrm{aut}(\mathcal E^{(\ell)})$ by the Jordan-Chevalley theorem; therefore, there exist pairwise commuting elements $j_0^\ell S_i$ of $\mathcal D^{(\ell)}$ such that
$(j_0^\ell S_i)^*$ is the semi-simple part of $(j_0^\ell\Phi_i)^*$ for $1\leq i\leq p$.
Then, the following two facts are not difficult to establish:
\begin{itemize}
\item one has $j_0^1S_i=\Phi_i^{ss}$ for $1\leq i\leq p$;
\item as the $(j_0^\ell S_i)^*$'s are commuting elements of $\mathrm{aut}(\mathcal E^{(\ell)})$, their semi-simplicity implies, essentially by definition, that the $j_0^\ell S_i$'s can be simultaneously linearized by a formal diffeomorphism $j_0^\ell\Psi$ of order $\ell$.
\end{itemize}
Indeed, the diffeomorphism $j_0^\ell\Psi$ can be defined through the transformation $(j_0^\ell\Psi)^*$ that normalizes the commuting family $\{(j_0^\ell S_1)^*,\ldots,(j_0^\ell S_p)^*\}$: (after complexification if necessary,) let us take $n$ common eigenvectors $j_0^\ell f_1,\ldots,j_0^\ell f_n$ of $(j_0^\ell S_1)^*,\ldots,(j_0^\ell S_p)^*$ such that $j_0^1 f_1,\ldots,j_0^1 f_n$ form a basis of $\mathcal{E}^{(1)}$ and let us set $(j_0^\ell S_i)^* (j_0^\ell f_m)=\lambda_{im}j_0^\ell f_m $. Let us define $(j_0^\ell\Psi)^*$ by sending $(j_0^\ell\Psi)^*(j_0^\ell(f_m^{(1)}))=j_0^\ell f_m$ for $m=1,\ldots,n$ where $f_m^{(1)}$ denotes the linear part of $f_m$. It follows from the equations $(j_0^\ell S_i)^* (j_0^\ell\Psi)^*\left(j_0^\ell(f_m^{(1)})\right)=\lambda_{im}(j_0^\ell\Psi)^*\left(j_0^\ell(f_m^{(1)})\right)$ that $(j_0^\ell\Psi)^*$ normalizes (that is, diagonalizes or block-diagonalizes) $(j_0^\ell S_i)^*$'s and therefore $j_0^\ell\Psi$ linearizes $j_0^\ell S_i$'s.
This change of coordinates simultaneously transforms the diffeomorphisms $\Phi_i$'s into a Poincar\'e-Dulac normal form to order $\ell$, that is, for all $i,j\in\{1,\ldots,p\}$,
\[
\Phi_i^{ss}\circ j_0^\ell\Phi_j=j_0^\ell\Phi_j\circ\Phi_i^{ss}.
\]
Taking the inverse limit as $\ell$ tends to $\infty$, we get a formal transformation $\hat\Psi:=j_0^\infty\Psi=\displaystyle\lim_{\substack{\longleftarrow\\ \ell\rightarrow\infty}} j_0^\ell\Psi$ after which the above equations hold for every natural number $\ell$, i.e., $\Phi_{i}\circ\Phi_{j}^{ss}=\Phi_{j}^{ss}\circ\Phi_{i}$ in the formal sense.
Now assume that $\mathbb{K}=\mathbb{C}$ and that the anti-holomorphic involution $\rho$ commutes with all $\Phi_i$'s. As $\rho \tilde{\Phi}_i\rho= \tilde{\Phi}_i$, we have $\rho (j_0^\ell\tilde{\Phi}_i)\rho:=j_0^\ell(\rho \tilde{\Phi}_i\rho)= j_0^\ell\tilde{\Phi}_i$ and then $\left(\rho (j_0^\ell\tilde{\Phi}_i)\rho\right)^*=(j_0^\ell\tilde{\Phi}_i)^*$. It follows from the uniqueness of the Jordan-Chevalley decomposition that $(\rho \tilde{S}_i\rho)^*=(\tilde{S}_i)^*$ where $\tilde{S}_i\in\mathcal{D}^{(\ell)}$ and $\tilde{S}_i^*$ is the semi-simple part of $(j_0^{\ell} \tilde{\Phi}_i)^*$ in $Aut(\mathcal{E}^{(\ell)})$. Therefore, for any common eigenvector $j_0^\ell f$ of the $\tilde{S}_i^*$'s, we set $\tilde{S}_i^* j_0^\ell f=\lambda_i j_0^\ell f$. Let $c$ denote complex conjugation of vectors. Hence,
$j_0^\ell (c f \rho)=:c (j_0^\ell f) \rho$ is also a common eigenvector of the $\tilde{S}_i^*$'s with respect to the eigenvalue $\bar\lambda_i$. Indeed, on the one hand, we have
\[
\tilde{S}_i^* \left(j_0^\ell (c f\rho)\right)
=(\rho \tilde{S}_i\rho)^* j_0^\ell(c f\rho )
=j_0^\ell(c f\rho\rho\tilde{S}_i\rho)
=j_0^\ell(c f\tilde{S}_i\rho),
\]
on the other hand, we have
\[
\bar\lambda_i j_0^\ell(c f\rho)
=j_0^\ell(c \lambda_i f\rho)
=c \left(\lambda_i (j_0^\ell f)\right)\rho
=c (\tilde{S}_i^*j_0^\ell f)\rho
=c j_0^\ell( f\tilde{S}_i)\rho
=j_0^\ell(c f\tilde{S}_i\rho).
\]
Recall that $(j_0^\ell \Psi)^*$ is defined with the help of eigenvectors $j_0^\ell f_1,\ldots,j_0^\ell f_n$ such that $j_0^1 f_1,\ldots,j_0^1 f_n$ are independent.
Then one can verify $j_0^\ell(\rho\Psi\rho)=j_0^\ell \Psi$ directly since $(j_0^\ell(\rho\Psi\rho))^*$ also sends $j_0^\ell f_m^{(1)}$ to $j_0^\ell f_m$ for $m=1,\ldots,n$ as $(j_0^\ell \Psi)^*$ does: as
\[
j_0^\ell(c f_m^{(1)}\rho\Psi)
=(j_0^\ell \Psi)^* j_0^\ell(c f_m^{(1)}\rho)
=j_0^\ell(cf_m\rho)
=c j_0^\ell f_m \rho,
\]
we have
\[
(j_0^\ell(\rho\Psi\rho))^* j_0^\ell f_m^{(1)}
=j_0^\ell(f_m^{(1)}\rho\Psi\rho)
=j_0^\ell(ccf_m^{(1)}\rho\Psi\rho)
=c j_0^\ell(c f_m^{(1)}\rho\Psi)\rho
=c (c j_0^\ell f_m \rho) \rho
=j_0^\ell f_m.
\]
Hence we have $\rho\Psi\rho=\Psi$ by the inverse limit.
\end{proof}
Assuming that the semi-simple linear part $\Phi_i^{ss}$ of $\Phi_i$ is diagonal, we set
\[
\Phi_i^{ss}(x_1,\ldots,x_n)=(\mu_{i1}x_1,\ldots,\mu_{in}x_n).
\]
Let us write the homogeneous part of order $\ell$ of $\Phi_j$ as $\Phi_{j}^{(\ell)}=(\phi_{j1}^{(\ell)},\ldots,\phi_{jn}^{(\ell)})$; then we can express $\Phi_{j}^{(\ell)}\circ\Phi_{i}^{ss}=\Phi_{i}^{ss}\circ\Phi_{j}^{(\ell)}$ in local coordinates, that is, for any $m\in\{1,\ldots,n\}$, we have
\[
\phi_{jm}^{(\ell)}(\mu_{i1}x_1,\ldots,\mu_{in}x_n)=\mu_{im}\phi_{jm}^{(\ell)}(x_1,\ldots,x_n).
\]
It follows that for any $j$, the indices $(\gamma_1,\ldots,\gamma_n)$ of every monomial term $x_1^{\gamma_1}\cdots x_n^{\gamma_n}$ in the $m$-th component $\phi_{jm}$ of $\Phi_j$ satisfy the following \textbf{resonant equations} with respect to the $m$-th component
\begin{equation}
\label{eq:21}
\mu_{im}=\prod_{k=1}^n \mu_{ik}^{\gamma_k},\quad i=1,\ldots,p.
\end{equation}
We denote by $\mathcal R_m$ the set of solutions $(\gamma_1,\ldots,\gamma_n)$ with $\gamma_k$ natural numbers and $\sum_{k=1}^n\gamma_k\geqslant2$.
Conversely, it is easy to see that the commuting diffeomorphisms $\Phi_i=(\phi_{i1},\ldots,\phi_{in})$ are formally in the Poincar\'e-Dulac normal form if the Taylor expansion of $\phi_{im}$ contains only resonant terms with respect to the $m$-th component, i.e., the indices of every monomial term lie in $\mathcal R_m$.
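To illustrate the resonant equations, take the linear part $\Phi^{(1)}(x,y)=(-2x,\tfrac{1}{2}y)$ of example~\ref{ex:13} (so $p=1$). Equation \eqref{eq:21} for the first component reads
\[
-2=(-2)^{\gamma_1}\left(\tfrac{1}{2}\right)^{\gamma_2}=(-1)^{\gamma_1}\,2^{\gamma_1-\gamma_2},
\]
which forces $\gamma_1$ to be odd and $\gamma_1-\gamma_2=1$; hence $\mathcal R_1=\{(2k+1,2k):k\geq1\}$, and the resonant monomials of the first component are exactly $x(x^2y^2)^k$, that is, $x$ times powers of the first integral $F=x^2y^2$.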
We now turn to first integrals of a diffeomorphism already in the Poincar\'e-Dulac normal form. The second lemma is also a parallel version of the corresponding result for vector fields: a first integral of a vector field in the Poincar\'e-Dulac normal form is also a formal first integral of the semi-simple linear part of the vector field \cite{Walcher1991}.
Let us recall first integral relations for linear diffeomorphisms before stating our lemma. Given a semi-simple linear diffeomorphism $\Phi(x_1,\ldots,x_n)=(\mu_{1}x_1,\ldots,\mu_{n}x_n)$, the equation
\begin{equation}
\label{eq:22}
\mu_1^{\ell_1}\cdots\mu_n^{\ell_n}=1
\end{equation}
with respect to the non-negative integers $\ell_1,\ldots,\ell_n$ is called the \textbf{first integral equation} for $\Phi$. We denote by $\Omega$ the set of solutions of the first integral equation:
\begin{equation}\label{Omega}
\Omega:=\left\{(\ell_1,\ldots,\ell_n)\in\mathbb{N}^n:\; \mu_1^{\ell_1}\cdots\mu_n^{\ell_n}=1\right\}.
\end{equation}
Hence, $\{x_1^{\ell_1}\cdots x_n^{\ell_n}:(\ell_1,\ldots,\ell_n)\in\Omega\}$ is the set of all monomial first integrals of $\Phi$ up to multiplication by constant coefficients.
For $p$ commuting linear diffeomorphisms, we will consider $p$ first integral equations simultaneously and the set of their common solutions is still denoted by $\Omega$.
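Continuing with the linear part $\Phi^{(1)}(x,y)=(-2x,\tfrac{1}{2}y)$ of example~\ref{ex:13}, the first integral equation \eqref{eq:22} reads $(-1)^{\ell_1}2^{\ell_1-\ell_2}=1$, which forces $\ell_1$ to be even and $\ell_1=\ell_2$; hence
\[
\Omega=\left\{(2k,2k):k\in\mathbb{N}\right\},
\]
and the monomial first integrals are exactly the powers $(xy)^{2k}=(x^2y^2)^k$ of $F=x^2y^2$.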
\begin{lemma}\label{Lem:NFofFI}
Assume $\Phi_1,\Phi_2,\ldots,\Phi_p$ are in the Poincar\'e-Dulac normal form. Then formal first integrals of the diffeomorphisms are formal first integrals of the semi-simple parts of the diffeomorphisms.
\end{lemma}
\begin{proof}
Write the semi-simple part of $\Phi_i$ as
\[
\Phi_i^{ss}(x_1,\ldots,x_n)=(\mu_{i1}x_1,\ldots,\mu_{in}x_n),
\]
then the lemma amounts to the following statement: provided that the diffeomorphisms are in the Poincar\'e-Dulac normal form, the indices of every monomial term of the Taylor expansion of a common first integral lie in $\Omega$.
Assume $F$ is a common first integral of the diffeomorphisms and let $F^{(low)}$ be the homogeneous part of lowest degree of $F$. Considering the homogeneous parts of lowest degree on both sides of the equations $F\circ\Phi_j=F$, we obviously have $F^{(low)}\circ\Phi_j^{(1)}=F^{(low)}$. Viewing $\Phi_j^{(1)}$ as a linear operator on the space of homogeneous polynomials of degree \textit{low} mapping $f$ to $f\circ\Phi_j^{(1)}$, we see that $F^{(low)}$ is in the eigenspace belonging to the eigenvalue $1$, and therefore it is in the eigenspace belonging to the eigenvalue $1$ of the semi-simple part $\Phi_j^{ss}$, i.e., $F^{(low)}\circ\Phi_j^{ss}=F^{(low)}$.
We claim that the homogeneous part of any degree of $F$ is also a common first integral of $\Phi_j^{ss}$. Now assume the claim is true for homogeneous parts of degree less than $\ell$, which means that any monomial term $cx_1^{\ell_{1}}\cdots x_n^{\ell_{n}}$ in $F$ with $\ell_1+\cdots+\ell_n<\ell$ has indices $(\ell_1,\ldots,\ell_n)$ in $\Omega$. Considering the homogeneous parts of degree $\ell$ on both sides of $F\circ\Phi_j=F$, we have
\begin{equation}
\label{eq:23}
F^{(\ell)}\circ\Phi_j^{(1)}+(F^{(<\ell)}\circ\Phi_j)^{(\ell)}=F^{(\ell)},
\end{equation}
where $F^{(<\ell)}$ denotes the part of $F$ with degree less than $\ell$.
By our inductive hypothesis, $(F^{(<\ell)}\circ\Phi_j)^{(\ell)}$ is a common first integral of $\Phi_i^{ss}$ since
$$
(F^{(<\ell)}\circ\Phi_j)^{(\ell)}\circ \Phi_i^{ss}=(F^{(<\ell)}\circ\Phi_j\circ \Phi_i^{ss})^{(\ell)}= (F^{(<\ell)}\circ\Phi_i^{ss}\circ\Phi_j)^{(\ell)}=(F^{(<\ell)}\circ\Phi_j)^{(\ell)}.
$$
Therefore, by \eqref{eq:23} we have
\[
(F^{(\ell)}-F^{(\ell)}\circ\Phi_j^{(1)})\circ\Phi_i^{ss}=F^{(\ell)}-F^{(\ell)}\circ\Phi_j^{(1)},
\]
equivalently,
\[
(F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss})\circ\Phi_j^{(1)}=F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss}.
\]
Considering composition by $\Phi_j^{(1)}$ on the right as a linear operator on the space of homogeneous polynomials of degree $\ell$, we have that $F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss}$ lies in the eigenspace belonging to the eigenvalue $1$ of $\Phi_j^{(1)}$ and then of $\Phi_j^{ss}$. For any $i,j\in\{1,\ldots,p\}$, we have
\begin{equation}
\label{eq:24}
(F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss})\circ\Phi_j^{ss}=F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss},
\end{equation}
and it implies that $F^{(\ell)}$ is also a first integral of $\Phi_i^{ss}$ for $i=1,\ldots,p.$
Indeed, assume on the contrary that a monomial term $cx_1^{\ell_1}\cdots x_n^{\ell_n}$ in $F^{(\ell)}$ is not a first integral of $\Phi_i^{ss}$. Then $F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss}$ contains a non-vanishing term $c(1-\prod_{k=1}^n \mu_{ik}^{\ell_k})\prod_{k=1}^nx_k^{\ell_k}$, and then $(F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss})-(F^{(\ell)}-F^{(\ell)}\circ\Phi_i^{ss})\circ\Phi_i^{ss}$ contains a non-vanishing term
$$
c(1-\prod_{k=1}^n \mu_{ik}^{\ell_k})^2\prod_{k=1}^nx_k^{\ell_k},
$$
which contradicts equation \eqref{eq:24}.
\end{proof}
\begin{definition}
Given a family of $p$ commuting linear vector fields $X_1,\ldots,X_p$ on $(\mathbb{K}^n,0)$ whose semi-simple linear parts read $X_i^{ss}=\sum_{m=1}^n\lambda_{im} x_m\frac{\partial}{\partial x_m}$, we say it is \textbf{weakly resonant} with respect to first integrals if there exist integers $k_1,\ldots,k_n$ such that
\[
(\sum_{m=1}^n k_m\lambda_{1m},\ldots,\sum_{m=1}^n k_m\lambda_{pm})\in 2\sqrt{-1}\pi\mathbb{Z}^p-\{0\}.
\]
We say the family of commuting linear vector fields to be \textbf{weakly non-resonant} if there do not exist such integers $k_1,\ldots,k_n$.
Given a family of $p$ commuting diffeomorphisms on $(\mathbb{K}^n,0)$, we say it is weakly resonant (resp. weakly non-resonant) if the family of infinitesimal generators of their semi-simple linear parts is (resp. is not).
\end{definition}
We emphasize that the family $X_1,\ldots,X_p$ can be weakly non-resonant and nevertheless resonant, since we could have $(\sum_{m=1}^n k_m\lambda_{1m},\ldots,\sum_{m=1}^n k_m\lambda_{pm})=0$.
We also remark that our notion of weak resonance with respect to first integrals is slightly different from that with respect to vector fields: the latter requires the existence of integers $k_1,\ldots,k_n$ such that they are no less than $-1$ and there is at most one integer equal to $-1$, see \cite{Li-Llibre-Zhang2002} for example.
\begin{definition}
Let $\Phi_1,\ldots,\Phi_p$ be $p$ commuting linear diffeomorphisms on $(\mathbb{K}^n,0)$ and assume the eigenvalues of the semi-simple linear part $\Phi_i^{ss}$ of $\Phi_i$ are $\mu_{i1},\ldots,\mu_{in}$. We say the family of diffeomorphisms is \textbf{projectively hyperbolic} if the $p$ real vectors $(\ln|\mu_{i1}|,\ldots,\ln|\mu_{in}|)$ are $\mathbb{R}$-linearly independent.
\end{definition}
We recall that the family $\{\Phi_i\}$ is said to be {\it hyperbolic} if any $p$ of the $n$ covectors $(\ln|\mu_{1j}|,\cdots, \ln|\mu_{pj}|)$ are linearly independent (which coincides with the usual meaning if $p = 1$).
By definition, the projection of a projectively hyperbolic family of $p$ linear diffeomorphisms onto some $p$-dimensional subspace is hyperbolic: the $p$ by $n$ real matrix $(\ln|\mu_{im}|)$ has full rank and therefore there exist $p$ columns, say the $(m_1,\ldots,m_p)$-th columns, which are linearly independent; then the projection of the diffeomorphisms onto the subspace of $(x_{m_1},\ldots,x_{m_p})$ forms a hyperbolic family of diffeomorphisms in the sense of definition \ref{def:44}. In particular, a single (linear) diffeomorphism is projectively hyperbolic if and only if at least one eigenvalue does not lie on the unit circle.
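For instance, the linear part $\Phi^{(1)}(x,y)=(-2x,\tfrac{1}{2}y)$ of example~\ref{ex:13} yields the nonzero vector $(\ln|{-2}|,\ln|\tfrac{1}{2}|)=(\ln 2,-\ln 2)$, so this family (with $p=1$) is projectively hyperbolic; in the following example, on the contrary, both eigenvalues have modulus $1$, the vector is $(0,0)$, and projective hyperbolicity fails.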
\begin{example}
\label{ex:25}
The diffeomorphism $\Phi(x,y)=(e^{\sqrt{-1}}x,e^{-\sqrt{-1}}y)$ is not projectively hyperbolic.
\end{example}
With the notions above, we can now state our theorem.
\begin{theorem}
\label{thm:FormalNormalForm}
Let $(\Phi_1=\Phi,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a formal non-degenerate discrete integrable system of type $(p,q)$ on $\mathbb{K}^n$ at a common fixed point, say the origin $0$. Assume that the linear part of $\Phi_i$ at the origin reads $\Phi_i^{(1)}(x_1,\ldots,x_n)=(\mu_{i1}x_1,\ldots,\mu_{in}x_n)$, for all $i=1,\ldots,p$. If the family $\{\Phi_i^{(1)}\}$ is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators, then there is a formal diffeomorphism, tangent to the identity, which conjugates each diffeomorphism $\Phi_i$, $i=1,\ldots,p$, to
\begin{equation}
\label{good-nf}
\hat\Phi_i=(\mu_{i1}x_1(1+\hat\varphi_{i1}),\ldots,\mu_{in}x_n(1+\hat\varphi_{in})).
\end{equation}
Here, the $\hat\varphi_{ik}$'s are not only common first integrals of $\Phi_i^{ss}$ (this turns $\hat\Phi_i$ into a Poincar\'e-Dulac normal form) but they also satisfy
\begin{equation}\label{intgnf}
\prod_{k=1}^n (1+\hat\varphi_{ik})^{\gamma_k}=1
\end{equation}
for all $(\gamma_1,\ldots,\gamma_n)$ in the set $\Omega$ (defined by \eqref{Omega}).
\end{theorem}
We remark that the diffeomorphism in example \ref{ex:13} is projectively hyperbolic but has no infinitesimally integrable generator; example \ref{ex:25} provides an example of a diffeomorphism which is not projectively hyperbolic but which is infinitesimally integrable with a weakly non-resonant generator $X=\sqrt{-1}x\frac{\partial}{\partial x}-\sqrt{-1}y\frac{\partial}{\partial y}$ (weak non-resonance holds because $k_1\sqrt{-1}-k_2\sqrt{-1}\in 2\sqrt{-1}\pi\mathbb{Z}-\{0\}$ would force $k_1-k_2=2\pi K$ with $0\neq K\in\mathbb{Z}$, which is impossible for integers $k_1,k_2$);
and the system in example \ref{ex:34} in the next section satisfies none of the two conditions.
\section{Proof of the theorem}
\begin{lemma}
\label{lem:VectOmega}
Let $(\Phi_1,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a non-degenerate integrable system in which the diffeomorphisms are in the Poincar\'e-Dulac normal form and their linear parts read $\Phi_i^{(1)}(x_1,\ldots,x_n)=(\mu_{i1}x_1,\ldots,\mu_{in}x_n)$ for $i=1,\ldots,p$. Let $\mathit{Vect}_\mathbb{K}\Omega$ be the vector space spanned by $\Omega$ over $\mathbb{K}$. Then the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ is at least $q$; if the family $\{\Phi_i^{(1)}\}$ is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators, then the dimension is equal to $q$.
\end{lemma}
\begin{proof}
By the non-degeneracy condition, we have homogeneous polynomials $P_1,\ldots,P_q$ which are common first integrals of $\Phi_i^{ss}$ and the corresponding vector fields. Then every monomial term is a common first integral with indices in $\Omega$.
As $P_1,\ldots,P_q$ are functionally independent almost everywhere, i.e., $dP_1\wedge\cdots\wedge dP_q\neq0$, there exist monomial terms $G_j=x_1^{\ell_{j1}}\cdots x_n^{\ell_{jn}}$ (ignoring coefficients) of $P_j$ such that $dG_1\wedge\cdots\wedge dG_q\neq0$, equivalently,
\[
\sum_{1\leqslant k_1<\ldots<k_q\leqslant n}\det
\begin{pmatrix}
\frac{\partial G_1}{\partial x_{k_1}}&\cdots&\frac{\partial G_1}{\partial x_{k_q}}\\
\vdots&&\vdots\\
\frac{\partial G_q}{\partial x_{k_1}}&\cdots&\frac{\partial G_q}{\partial x_{k_q}}
\end{pmatrix}
dx_{k_1}\wedge\cdots\wedge dx_{k_q}\neq0.
\]
It implies at least one determinant (as coefficient) in the above inequality is nonzero, that is, there exist $k_1<\ldots<k_q$ such that
$
\dfrac{G_1\cdots G_q}{x_{k_1}\cdots x_{k_q}}
\det
\begin{pmatrix}
\ell_{1k_1}&\cdots&\ell_{1k_q}\\
\vdots&&\vdots\\
\ell_{qk_1}&\cdots&\ell_{qk_q}
\end{pmatrix}
\neq0.
$
It follows directly that the $q$ elements $(\ell_{j1},\ldots,\ell_{jn})\in\Omega$ are linearly independent, and therefore the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ is no less than $q$.
Suppose $(\ell_1,\ldots,\ell_n)\in\Omega$; then it satisfies the first integral equations \eqref{eq:22}.
We have integers $K_1,\ldots,K_p$ such that
\begin{equation}
\label{eq:31}
\sum_{m=1}^n\ell_m\ln\mu_{im}=2K_i\sqrt{-1}\pi,\quad i=1,\ldots,p.
\end{equation}
If the system is infinitesimally integrable and the family of the infinitesimal generators $X_i=\sum_{m=1}^n\ln\mu_{im}x_m\dfrac{\partial}{\partial x_m}$ is weakly non-resonant, then all $K_i$ in equation \eqref{eq:31} vanish and we get linear equations
\begin{equation}
\label{eq:32}
\sum_{m=1}^n\ell_m\ln\mu_{im}=0,\quad i=1,\ldots,p.
\end{equation}
It means that $\Omega$ is contained in the space of solutions of equations \eqref{eq:32}. By the definition of integrability, the $p$ vectors $(\ln\mu_{i1},\ldots,\ln\mu_{in})$ are linearly independent; therefore the space of solutions of \eqref{eq:32} is of dimension $n-p=q$, and the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ is no more than $q$. Hence, under the assumption of weak non-resonance, the vector space $\mathit{Vect}_\mathbb{K}\Omega$ is exactly the space of solutions of \eqref{eq:32} over $\mathbb{K}$.
If the system is projectively hyperbolic, we consider the real parts on both sides of equation \eqref{eq:31} and get
\begin{equation}
\label{eq:33}
\sum_{m=1}^n\ell_m\ln|\mu_{im}|=0,\quad i=1,\ldots,p.
\end{equation}
It means that $\Omega$ is contained in the space of solutions of equations \eqref{eq:33}. By the very definition of projective hyperbolicity, the dimension of the space of solutions of \eqref{eq:33} is $n-p=q$ and therefore the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ is no more than $q$. Hence, under the assumption of projective hyperbolicity, the vector space $\mathit{Vect}_\mathbb{K}\Omega$ is exactly the space of solutions of \eqref{eq:33} over $\mathbb{K}$.
\end{proof}
We remark that the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ could be bigger than $q$ without weak non-resonance: for example, the linear diffeomorphism $\Phi(x,y)=(\sqrt{-1}x,-\sqrt{-1}y)$ on $\mathbb{C}^2$ has monomial first integrals $x^4,\,y^4,\,xy$ and therefore the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ is $2$. In this case, the diffeomorphism is obviously weakly resonant.
\begin{proposition}
\label{prop:MonomialAreFI}
Let $(\Phi_1,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a formal non-degenerate integrable system in which the diffeomorphisms are in the Poincar\'e-Dulac normal form. If the family of the linear parts of the diffeomorphisms is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators, then the common first integrals of the semi-simple parts of the diffeomorphisms are also first integrals of the (nonlinear) diffeomorphisms.
\end{proposition}
\begin{proof}
According to Lemma \ref{Lem:NFofFI}, formal or analytic first integrals of $\Phi_1,\Phi_2,\ldots,\Phi_p$ are formal or analytic first integrals of the semi-simple parts of the diffeomorphisms provided that the diffeomorphisms are in the Poincar\'e-Dulac normal form. Then any first integral is a series of finitely many monomial generators $G_1,\ldots,G_r$ which have exponents in $\Omega$. Lemma \ref{lem:VectOmega} shows that $\Omega$ lies in the $q$-dimensional vector space $\mathit{Vect}_\mathbb{K}\Omega$ if the family of the linear parts of the diffeomorphisms is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators.
Now we turn to the formal integrable system $(\hat\Phi_1,\hat\Phi_2,\ldots,\hat\Phi_p,\hat F_1,\ldots,\hat F_q)$. We assume by Ziglin's lemma \cite{Ziglin1982} that the homogeneous parts $F_1^{(low_1)},\ldots,F_q^{(low_q)}$ of lowest degree of $\hat F_1,\ldots,\hat F_q$ are functionally independent almost everywhere. For convenience, we use the new first integrals $\hat F_1^{\frac{LCM}{low_1}},\ldots,\hat F_q^{\frac{LCM}{low_q}}$, where $LCM$ denotes the least common multiple of $low_1,\ldots,low_q$, so that their lowest degrees are all the same. In the following, the new first integrals are still denoted by $\hat F_1,\ldots,\hat F_q$ and their homogeneous parts of lowest degree are denoted by $F_1^{(low)},\ldots,F_q^{(low)}$.
Let $H_k=x_1^{\ell_{k1}}\cdots x_n^{\ell_{kn}}$, $k=1,\ldots,\tau$, be all the monomial first integrals of the $\Phi_i^{(1)}$'s with coefficient $1$ and $\ell_{k1}+\cdots+\ell_{kn}=low$.
Now write
\[
F_j^{(low)}=c_{j1}H_1+\cdots+c_{j\tau}H_\tau,
\]
where $c_{jk}$ are constants and the rank of the $q$ by $\tau$ matrix $C=(c_{jk})$ is $q\leqslant\tau$ by the functional independence.
As $\Phi_i=(\phi_{i1},\ldots,\phi_{in})$ is in the normal form, for any monomial first integral $G=x_1^{\ell_1}\cdots x_n^{\ell_n}$ of $\Phi_i^{(1)}(x)=(\mu_{i1}x_1,\ldots,\mu_{in}x_n)$, we have $G=(\mu_{i1}x_1)^{\ell_1}\cdots(\mu_{in}x_n)^{\ell_n}$ and then
\begin{equation}
\label{eq:34}
G\circ\Phi_i=\phi_{i1}^{\ell_1}\cdots\phi_{in}^{\ell_n}
=G\left(1+\frac{\phi_{i1}^{(\geqslant2)}}{\mu_{i1}x_1}\right)^{\ell_1}\cdots\left(1+\frac{\phi_{in}^{(\geqslant2)}}{\mu_{in}x_n}\right)^{\ell_n},
\end{equation}
where $\phi_{im}^{(\geqslant2)}$ denotes the nonlinear part of $\phi_{im}$.
It is easy to see that the homogeneous part of degree $\ell_1+\cdots+\ell_n$ of $G\circ\Phi_i$ is $G$ and the homogeneous part of degree $\ell_1+\cdots+\ell_n+1$ is
\begin{equation}
\label{eq:35}
(G\circ\Phi_i)^{(\ell_1+\cdots+\ell_n+1)}=G\left(\ell_1\dfrac{\phi_{i1}^{(2)}}{\mu_{i1}x_1}+\cdots+\ell_n\dfrac{\phi_{in}^{(2)}}{\mu_{in}x_n}\right).
\end{equation}
As $\hat F_j\circ\Phi_i=\hat F_j$, their homogeneous parts of degree $low+1$ must be the same, i.e., $(\hat F_j\circ\Phi_i)^{(low+1)}=F_j^{(low+1)}$. On the other hand, we have
\[
\begin{aligned}
(\hat F_j\circ\Phi_i)^{(low+1)}&=(F_j^{(low)}\circ\Phi_i)^{(low+1)}+(F_j^{(low+1)}\circ\Phi_i)^{(low+1)}\\
&=(F_j^{(low)}\circ\Phi_i)^{(low+1)}+(F_j^{(low+1)}\circ\Phi_i^{(1)})=(F_j^{(low)}\circ\Phi_i)^{(low+1)}+F_j^{(low+1)}.
\end{aligned}
\]
Then we get
\[
(F_j^{(low)}\circ\Phi_i)^{(low+1)}=0.
\]
Substituting $c_{j1}H_1+\cdots+c_{j\tau}H_\tau$ for $F_j^{(low)}$ and using equation \eqref{eq:35}, we get
\[
\sum_{m=1}^n \left(\sum_{k=1}^{\tau}c_{jk}\ell_{km}H_k\right)\dfrac{\phi_{im}^{(2)}}{{\mu_{im}x_m}}=0,
\quad j=1,\ldots,q.
\]
In matrix form, the equations above are equivalent to
\begin{equation}
\label{eq:36}
\begin{pmatrix}
c_{11}H_1&\cdots& c_{1\tau}H_\tau\\
\vdots&&\vdots\\
c_{q1}H_1&\cdots& c_{q\tau}H_\tau
\end{pmatrix}_{q\times\tau}
\begin{pmatrix}
\ell_{11}&\cdots&\ell_{1n}\\
\vdots&&\vdots\\
\ell_{\tau 1}&\cdots&\ell_{\tau n}
\end{pmatrix}_{\tau\times n}
\begin{pmatrix}
\frac{\phi_{i1}^{(2)}}{\mu_{i1}x_1}\\
\vdots\\
\frac{\phi_{in}^{(2)}}{\mu_{in}x_n}
\end{pmatrix}_{n\times1}
=0.
\end{equation}
Assume that $H_1,\ldots,H_q$ are functionally independent almost everywhere; then for any $k$ in $\{1,\ldots,\tau\}$ we can write $H_k=H_1^{\alpha_{k1}}\cdots H_q^{\alpha_{kq}}$. Equivalently, the $q$ vectors $(\ell_{11},\ldots,\ell_{1n}),\ldots,(\ell_{q1},\ldots,\ell_{qn})$ are linearly independent and for any $k$ in $\{1,\ldots,\tau\}$ we have $(\ell_{k1},\ldots,\ell_{kn})=\sum_{j=1}^q\alpha_{kj}(\ell_{j1},\ldots,\ell_{jn})$. Write the $\tau$ by $q$ matrix $(\alpha_{kj})=
\begin{pmatrix}
Id_q\\
B
\end{pmatrix}
$
with $B$ the submatrix consisting of the last $\tau-q$ rows,
then we have
\[
\begin{aligned}
&\quad
\begin{pmatrix}
c_{11}H_1&\cdots& c_{1\tau}H_\tau\\
\vdots&&\vdots\\
c_{q1}H_1&\cdots& c_{q\tau}H_\tau
\end{pmatrix}_{q\times\tau}
\begin{pmatrix}
Id_q&0\\
B&Id_{\tau-q}
\end{pmatrix}
\begin{pmatrix}
Id_q&0\\
-B&Id_{\tau-q}
\end{pmatrix}
\begin{pmatrix}
\ell_{11}&\cdots&\ell_{1n}\\
\vdots&&\vdots\\
\ell_{\tau 1}&\cdots&\ell_{\tau n}
\end{pmatrix}_{\tau\times n}\\
&=
\begin{pmatrix}
\displaystyle\sum_{k=1}^\tau\alpha_{k1}c_{1k}H_k&\mkern-5mu\cdots\mkern-5mu&\displaystyle\sum_{k=1}^\tau\alpha_{kq}c_{1k}H_k&c_{1\,q+1}H_{q+1}&\mkern-5mu\cdots\mkern-5mu&c_{1\tau}H_\tau\\
\vdots&&\vdots&\vdots&&\vdots\\
\displaystyle\sum_{k=1}^\tau\alpha_{k1}c_{qk}H_k&\mkern-5mu\cdots\mkern-5mu&\displaystyle\sum_{k=1}^\tau\alpha_{kq}c_{qk}H_k&c_{q\,q+1}H_{q+1}&\mkern-5mu\cdots\mkern-5mu&c_{q\tau}H_\tau
\end{pmatrix}_{q\times\tau}
\begin{pmatrix}
\ell_{11}&\cdots&\ell_{1n}\\
\vdots&&\vdots\\
\ell_{q1}&\cdots&\ell_{qn}\\
0&\cdots&0\\
\vdots&&\vdots\\
0&\cdots&0
\end{pmatrix}_{\tau\times n}
\end{aligned}
\]
and we get from equation \eqref{eq:36} that
\begin{equation}
\label{eq:37}
\begin{pmatrix}
\displaystyle\sum_{k=1}^\tau\alpha_{k1}c_{1k}H_k&\mkern-5mu\cdots\mkern-5mu&\displaystyle\sum_{k=1}^\tau\alpha_{kq}c_{1k}H_k\\
\vdots&&\vdots\\
\displaystyle\sum_{k=1}^\tau\alpha_{k1}c_{qk}H_k&\mkern-5mu\cdots\mkern-5mu&\displaystyle\sum_{k=1}^\tau\alpha_{kq}c_{qk}H_k
\end{pmatrix}_{q\times q}
\mkern-25mu
\begin{pmatrix}
\ell_{11}&\cdots&\ell_{1n}\\
\vdots&&\vdots\\
\ell_{q1}&\cdots&\ell_{qn}
\end{pmatrix}_{q\times n}
\begin{pmatrix}
\dfrac{\phi_{i1}^{(2)}}{\mu_{i1}x_1}\\
\vdots\\
\dfrac{\phi_{in}^{(2)}}{\mu_{in}x_n}
\end{pmatrix}_{n\times1}
\mkern-25mu
=0.
\end{equation}
Now let us compute the explicit expression of $dF_1^{(low)}\wedge\cdots\wedge dF_q^{(low)}$.
\[
\begin{aligned}
&\qquad dF_1^{(low)}\wedge\cdots\wedge dF_q^{(low)}\\
&=\sum_{1\leqslant k_1<\cdots<k_q\leqslant\tau}\det
\begin{pmatrix}
c_{1\,k_1}&\cdots&c_{1\,k_q}\\
\vdots&&\vdots\\
c_{q\,k_1}&\cdots&c_{q\,k_q}
\end{pmatrix} dH_{k_1}\wedge\cdots\wedge dH_{k_q}\\
&=\sum_{1\leqslant k_1<\cdots<k_q\leqslant\tau}\mkern-20mu\det\mkern-5mu
\begin{pmatrix}
c_{1\,k_1}&\cdots&c_{1\,k_q}\\
\vdots&&\vdots\\
c_{q\,k_1}&\cdots&c_{q\,k_q}
\end{pmatrix}
\sum_{1\leqslant m_1<\cdots<m_q\leqslant n}\mkern-20mu\det\mkern-5mu
\begin{pmatrix}
\ell_{k_1 m_1}&\cdots&\ell_{k_1 m_q}\\
\vdots&&\vdots\\
\ell_{k_q m_1}&\cdots&\ell_{k_q m_q}
\end{pmatrix}\dfrac{H_{k_1}\cdots H_{k_q}}{x_{m_1}\cdots x_{m_q}}dx_{m_1}\wedge\cdots\wedge dx_{m_q}\\
&=\sum_{1\leqslant m_1<\cdots<m_q\leqslant n}
\sum_{1\leqslant k_1<\cdots<k_q\leqslant\tau}\mkern-20mu\det
\{
\begin{pmatrix}
c_{1\,k_1}H_{k_1}&\cdots&c_{1\,k_q}H_{k_q}\\
\vdots&&\vdots\\
c_{q\,k_1}H_{k_1}&\cdots&c_{q\,k_q}H_{k_q}
\end{pmatrix}
\begin{pmatrix}
\ell_{k_1 m_1}&\cdots&\ell_{k_1 m_q}\\
\vdots&&\vdots\\
\ell_{k_q m_1}&\cdots&\ell_{k_q m_q}
\end{pmatrix}
\}\dfrac{dx_{m_1}\wedge\cdots\wedge dx_{m_q}}{x_{m_1}\cdots x_{m_q}}
\end{aligned}
\]
Recalling that $(\ell_{k\,m_1},\ldots,\ell_{k\,m_q})=\sum_{j=1}^{q}\alpha_{kj}(\ell_{j\,m_1},\ldots,\ell_{j\,m_q})$, we have
\[
\begin{pmatrix}
\ell_{k_1 m_1}&\cdots&\ell_{k_1 m_q}\\
\vdots&&\vdots\\
\ell_{k_q m_1}&\cdots&\ell_{k_q m_q}
\end{pmatrix}
=
\begin{pmatrix}
\alpha_{k_1\,1}&\cdots&\alpha_{k_1\,q}\\
\vdots&&\vdots\\
\alpha_{k_q\,1}&\cdots&\alpha_{k_q\,q}
\end{pmatrix}
\begin{pmatrix}
\ell_{1\,m_1}&\cdots&\ell_{1\,m_q}\\
\vdots&&\vdots\\
\ell_{q\,m_1}&\cdots&\ell_{q\,m_q}
\end{pmatrix},
\]
and therefore we can split the two summations over $1\leqslant m_1<\cdots<m_q\leqslant n$ and $1\leqslant k_1<\cdots<k_q\leqslant\tau$ in the expression of $dF_1^{(low)}\wedge\cdots\wedge dF_q^{(low)}$. Concretely, $dF_1^{(low)}\wedge\cdots\wedge dF_q^{(low)}$ is the product of
\[
\sum_{1\leqslant m_1<\cdots<m_q\leqslant n}
\dfrac{1}{x_{m_1}\cdots x_{m_q}}
\det\begin{pmatrix}
\ell_{1\,m_1}&\cdots&\ell_{1\,m_q}\\
\vdots&&\vdots\\
\ell_{q\,m_1}&\cdots&\ell_{q\,m_q}
\end{pmatrix}
dx_{m_1}\wedge\cdots\wedge dx_{m_q}
\]
and the homogeneous polynomial function
\[
\sum_{1\leqslant k_1<\cdots<k_q\leqslant\tau}
\det
\{
\begin{pmatrix}
c_{1\,k_1}H_{k_1}&\cdots&c_{1\,k_q}H_{k_q}\\
\vdots&&\vdots\\
c_{q\,k_1}H_{k_1}&\cdots&c_{q\,k_q}H_{k_q}
\end{pmatrix}
\begin{pmatrix}
\alpha_{k_1\,1}&\cdots&\alpha_{k_1\,q}\\
\vdots&&\vdots\\
\alpha_{k_q\,1}&\cdots&\alpha_{k_q\,q}
\end{pmatrix}
\}.
\]
This polynomial function cannot be zero since $dF_1^{(low)}\wedge\cdots\wedge dF_q^{(low)}\neq0$, and it equals the determinant of the leftmost matrix $M(H_1,\ldots,H_\tau)$ in equation \eqref{eq:37}.
In fact, as the determinant is a linear function of each column, we write $\det M(H_1,\ldots,H_\tau)$ as a sum over all $k_1,\ldots,k_q$ from $1$ to $\tau$ of $\tau^q$ determinants
\[
\det
\begin{pmatrix}
\alpha_{k_11}c_{1\,k_1}H_{k_1}&\cdots&\alpha_{k_qq}c_{1\,k_q}H_{k_q}\\
\vdots&&\vdots\\
\alpha_{k_11}c_{q\,k_1}H_{k_1}&\cdots&\alpha_{k_qq}c_{q\,k_q}H_{k_q}
\end{pmatrix}
=\alpha_{k_11}\cdots\alpha_{k_qq}\det
\begin{pmatrix}
c_{1\,k_1}&\cdots&c_{1\,k_q}\\
\vdots&&\vdots\\
c_{q\,k_1}&\cdots&c_{q\,k_q}
\end{pmatrix}
H_{k_1}\cdots H_{k_q}
\]
which must vanish if two indices $k_j$ and $k_{j'}$ happen to be equal; fix $q$ pairwise distinct indices $\{k_1,\ldots,k_q\}$ in $\{1,\ldots,\tau\}$ with $k_1<\cdots<k_q$; then there are $q!$ terms involving $H_{k_1}\cdots H_{k_q}$ and their sum is just
\[
\begin{aligned}
P(H_{k_1},\ldots,H_{k_q})
&:=
\sum_{\substack{\{k'_1,\ldots,k'_q\}\\ =\{k_1,\ldots,k_q\}}}
\alpha_{k'_11}\cdots\alpha_{k'_qq}\det
\begin{pmatrix}
c_{1\,k'_1}&\cdots&c_{1\,k'_q}\\
\vdots&&\vdots\\
c_{q\,k'_1}&\cdots&c_{q\,k'_q}
\end{pmatrix}
H_{k_1}\cdots H_{k_q}\\
&=
\sum_{\substack{\{k'_1,\ldots,k'_q\}\\ =\{k_1,\ldots,k_q\}}}
\alpha_{k'_11}\cdots\alpha_{k'_qq}
\,\,\epsilon(k'_1,\ldots,k'_q)\det
\begin{pmatrix}
c_{1\,k_1}&\cdots&c_{1\,k_q}\\
\vdots&&\vdots\\
c_{q\,k_1}&\cdots&c_{q\,k_q}
\end{pmatrix}
H_{k_1}\cdots H_{k_q}\\
&=\det
\begin{pmatrix}
\alpha_{k_1\,1}&\cdots&\alpha_{k_1\,q}\\
\vdots&&\vdots\\
\alpha_{k_q\,1}&\cdots&\alpha_{k_q\,q}
\end{pmatrix}
\det
\begin{pmatrix}
c_{1\,k_1}&\cdots&c_{1\,k_q}\\
\vdots&&\vdots\\
c_{q\,k_1}&\cdots&c_{q\,k_q}
\end{pmatrix}
H_{k_1}\cdots H_{k_q},
\end{aligned}
\]
in which $\epsilon(k'_1,\ldots,k'_q)=\pm1$ is the sign of the permutation $(k'_1,\ldots,k'_q)\mapsto(k_1,\ldots,k_q)$. Hence $\det M(H_1,\ldots,H_\tau)=\sum_{1\leqslant k_1<\cdots<k_q\leqslant\tau}P(H_{k_1},\ldots,H_{k_q})$.
Returning to equation \eqref{eq:37}: as the matrix $M(H_1,\ldots,H_\tau)$ is invertible almost everywhere, it follows that, almost everywhere, we have
\[
\begin{pmatrix}
\ell_{11}&\cdots&\ell_{1n}\\
\vdots&&\vdots\\
\ell_{q1}&\cdots&\ell_{qn}
\end{pmatrix}_{q\times n}
\begin{pmatrix}
\frac{\phi_{i1}^{(2)}}{\mu_{i1}x_1}\\
\vdots\\
\frac{\phi_{in}^{(2)}}{\mu_{in}x_n}
\end{pmatrix}_{n\times1}
=0.
\]
It follows by Lemma \ref{lem:VectOmega} that, as polynomial functions,
\begin{equation}
\label{eq:38}
G\left(\ell_1 \frac{\phi_{i1}^{(2)}}{\mu_{i1}x_1}+\cdots+\ell_n \frac{\phi_{in}^{(2)}}{\mu_{in}x_n}\right)=0, \quad \forall G=x_1^{\ell_1}\cdots x_n^{\ell_n} \mbox{~with~} (\ell_1,\ldots,\ell_n)\in\Omega.
\end{equation}
In other words, the homogeneous part of degree $\ell_1+\cdots+\ell_n+1$ of $G\circ\Phi_i$ vanishes for any common monomial first integral $G=x_1^{\ell_1}\cdots x_n^{\ell_n}$ of $\Phi_i^{(1)}$ by equation \eqref{eq:35}.
We will show by induction that the homogeneous parts of $G\circ\Phi_i$ of degree larger than $\ell_1+\cdots+\ell_n$ also vanish for $G=x_1^{\ell_1}\cdots x_n^{\ell_n}$, so that any monomial first integral of $\Phi_i^{(1)}$ is a first integral of $\Phi_i$. Assume the statement is true up to degree $\ell_1+\cdots+\ell_n+\sigma$, $\sigma>0$; it follows naturally that for any homogeneous polynomial first integral $F^{(\ell)}$, the homogeneous parts of $F^{(\ell)}\circ\Phi_i$ of degree between $\ell+1$ and $\ell+\sigma$ all vanish.
Let $\xi_m=\ln(1+\dfrac{\phi_{im}^{(\geqslant2)}}{\mu_{im}x_m})$ and $\eta=\ln\left((1+\frac{\phi_{i1}^{(\geqslant2)}}{\mu_{i1}x_1})^{\ell_1}\cdots(1+\frac{\phi_{in}^{(\geqslant2)}}{\mu_{in}x_n})^{\ell_n}\right)
=\ell_1\xi_1+\cdots+\ell_n\xi_n$. Use the convention that the degree with respect to $x$ of $\dfrac{\phi_{im}^{(s)}}{\mu_{im}x_m}$ is $s-1$ and rewrite $\eta=\eta^{(1)}+\eta^{(2)}+\cdots$, where $\eta^{(s)}$ denotes the homogeneous part of degree $s$ with respect to $x$. Rewrite equation \eqref{eq:34} as $G\circ\Phi_i=Ge^\eta=G(1+\eta+\frac{1}{2}\eta^2+\cdots)$ for those $\eta$ with $|\eta|<\infty$, that is, when $x$ does not belong to the union of the coordinate hyperplanes.
By our assumption, we get that every homogeneous part of degree no more than $\sigma$ in $(\eta+\frac{1}{2}\eta^2+\cdots)$ must vanish. Then we have $\eta^{(1)}=\eta^{(2)}=\cdots=\eta^{(\sigma)}=0$ because for any $s$, we have
\[
(\eta+\frac{1}{2}\eta^2+\cdots)^{(s)}=\sum_{t=1}^s\sum_{s_1+\cdots+s_t=s}c_{s_1\cdots s_t}\eta^{(s_1)}\cdots \eta^{(s_t)},
\]
in which the $c_{s_1\cdots s_t}$ are constants; therefore the degree of the first possibly nonvanishing homogeneous part of $(\eta+\frac{1}{2}\eta^2+\cdots)$ must be larger than $\sigma$.
For degree $\sigma+1$, we have
\[
(\eta+\frac{1}{2}\eta^2+\cdots)^{(\sigma+1)}=\sum_{t=1}^{\sigma+1}\sum_{s_1+\cdots+s_t=\sigma+1}c_{s_1\cdots s_t}\eta^{(s_1)}\cdots \eta^{(s_t)}=\eta^{(\sigma+1)}.
\]
We get that the homogeneous part of degree $\ell_1+\cdots+\ell_n+\sigma+1$ of $G\circ\Phi_i$ is just $G\eta^{(\sigma+1)}$, which reads
\begin{equation}
\label{eq:39}
\begin{aligned}
(G\circ\Phi_i)^{(\ell_1+\cdots+\ell_n+\sigma+1)}&=G(\ell_1\left(\ln(1+\frac{\phi_{i1}^{(\geqslant2)}}{\mu_{i1}x_1})\right)^{(\sigma+1)}+\cdots+\ell_n\left(\ln(1+\frac{\phi_{in}^{(\geqslant2)}}{\mu_{in}x_n})\right)^{(\sigma+1)})\\
&=G(\ell_1\xi_1^{(\sigma+1)}+\cdots+\ell_n\xi_n^{(\sigma+1)}).
\end{aligned}
\end{equation}
Now consider the homogeneous part $(\hat F_j\circ\Phi_i)^{(low+\sigma+1)}$ of degree $low+\sigma+1$ of $\hat F_j\circ\Phi_i$, which is
\begin{equation}
\label{eq:310}
(F_j^{(low)}\circ\Phi_i)^{(low+\sigma+1)}+\sum_{s=1}^{\sigma}(F_j^{(low+s)}\circ\Phi_i)^{(low+\sigma+1)}+(F_j^{(low+\sigma+1)}\circ\Phi_i)^{(low+\sigma+1)}.
\end{equation}
We recall that, by Lemma \ref{Lem:NFofFI}, $F_j^{(low+s)}$ is a common homogeneous polynomial first integral of the $\Phi^{(1)}_i$'s. By our inductive hypothesis, the $\sigma$ terms $(F_j^{(low+s)}\circ\Phi_i)^{(low+\sigma+1)}$ in the middle of equation \eqref{eq:310} vanish; the last term $(F_j^{(low+\sigma+1)}\circ\Phi_i)^{(low+\sigma+1)}$ is just $(F_j^{(low+\sigma+1)}\circ\Phi_i^{(1)})=F_j^{(low+\sigma+1)}$. Hence we get from $(\hat F_j\circ\Phi_i)^{(low+\sigma+1)}=F_j^{(low+\sigma+1)}$ that the first term in equation \eqref{eq:310} vanishes, i.e.,
\[
(F_j^{(low)}\circ\Phi_i)^{(low+\sigma+1)}=0.
\]
Substituting $c_{j1}H_1+\cdots+c_{j\tau}H_\tau$ for $F_j^{(low)}$ and using equation \eqref{eq:39}, we have
\[
\begin{aligned}
&c_{j1}(H_1\circ\Phi_i)^{(low+\sigma+1)}+\cdots+c_{j\tau}(H_\tau\circ\Phi_i)^{(low+\sigma+1)}\\
=&\,c_{j1}H_1(\ell_{11}\xi_1^{(\sigma+1)}+\cdots+\ell_{1n}\xi_n^{(\sigma+1)})+\cdots+c_{j\tau}H_\tau(\ell_{\tau 1}\xi_1^{(\sigma+1)}+\cdots+\ell_{\tau n}\xi_n^{(\sigma+1)})=0,
\end{aligned}
\]
that is,
\[
\sum_{m=1}^n \left(\sum_{k=1}^{\tau}c_{jk}\ell_{km}H_k\right)\xi_m^{(\sigma+1)}=0,
\quad j=1,\ldots,q.
\]
In matrix form, similarly to equation \eqref{eq:36}, the above equations read
\begin{equation}
\label{eq:311}
\begin{pmatrix}
c_{11}H_1&\cdots& c_{1\tau}H_\tau\\
\vdots&&\vdots\\
c_{q1}H_1&\cdots& c_{q\tau}H_\tau
\end{pmatrix}_{q\times\tau}
\begin{pmatrix}
\ell_{11}&\cdots&\ell_{1n}\\
\vdots&&\vdots\\
\ell_{\tau 1}&\cdots&\ell_{\tau n}
\end{pmatrix}_{\tau\times n}
\begin{pmatrix}
\xi_1^{(\sigma+1)}\\
\vdots\\
\xi_n^{(\sigma+1)}
\end{pmatrix}_{n\times1}
=0.
\end{equation}
Applying the same argument that led from equation \eqref{eq:36} to equation \eqref{eq:38}, we get from equation \eqref{eq:311} that
\begin{equation}
\label{eq:312}
\ell_1\xi_1^{(\sigma+1)}+\cdots+\ell_n\xi_n^{(\sigma+1)}=0, \quad \mbox{~for all~} (\ell_1,\ldots,\ell_n)\in\Omega.
\end{equation}
Substituting equation \eqref{eq:312} back into equation \eqref{eq:39}, we get that the homogeneous part of degree $\ell_1+\cdots+\ell_n+\sigma+1$ of $G\circ\Phi_i$ vanishes. This finishes the inductive step.
\end{proof}
\begin{lemma}[Division Lemma]
\label{lem:Division}
Let $(\Phi_1,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a non-degenerate integrable system of type $(p,q)$ such that the diffeomorphisms are in the Poincar\'e-Dulac normal form. Write $\Phi_i=(\phi_{i1},\ldots,\phi_{in})$ for $i=1,\ldots,p$. If the family $\{\Phi_i^{(1)}\}$ is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators, then $\phi_{im}$ is divisible by $x_m$ for $m=1,\ldots,n$.
\end{lemma}
\begin{proof}
There are two cases, according to the position of the vector space $\mathit{Vect}_\mathbb{K}\Omega$.
Case 1: the vector space $\mathit{Vect}_\mathbb{K}\Omega$ is not contained in any hyperplane. In this case, for any $m$, there exists an element $(\ell_1,\ldots,\ell_n)\in\Omega$ such that $\ell_m\neq0$. The equation $\prod_{k=1}^n \phi_{ik}^{\ell_k}=x_1^{\ell_1}\cdots x_n^{\ell_n}$ implies that $\prod_{k=1}^n \phi_{ik}^{\ell_k}$ is divisible by $x_m^{\ell_m}$. On the other hand, as the linear part of $\phi_{ik}$ is $\mu_{ik}x_k$, the product $\prod_{k\neq m}\phi_{ik}$ is not divisible by $x_m$, since its homogeneous part of lowest degree is $\prod_{k\neq m}\mu_{ik}x_k$; hence $\prod_{k\neq m} \phi_{ik}^{\ell_k}$ is not divisible by $x_m$ either. Therefore $\phi_{im}$ is divisible by $x_m$.
Case 2: the vector space $\mathit{Vect}_\mathbb{K}\Omega$ is contained in a hyperplane. Assume that $\ell_m=0$ for every $(\ell_1,\ldots,\ell_n)\in\Omega$, and that $x_1^{\gamma_1}\cdots x_n^{\gamma_n}$ is a term of $\phi_{im}$ with $\gamma_m=0$. Then the exponent $(\gamma_1,\ldots,\gamma_n)$ lies in $\mathcal R_m$, i.e., it satisfies the resonance equations \eqref{eq:21}. Hence there are integers $K_1,\ldots,K_p$ such that
\begin{equation}
\label{eq:313}
\ln\mu_{im}=\sum_{k=1}^n\gamma_k\ln\mu_{ik}+2K_i\pi\sqrt{-1},\quad i=1,\ldots,p.
\end{equation}
If the system is infinitesimally integrable and the family of the infinitesimal generators $X_i=\sum_{m=1}^n\ln\mu_{im}x_m\frac{\partial}{\partial x_m}$ is weakly non-resonant, then we have $K_i=0$ for all $i$ and $$(\gamma_1,\ldots,\gamma_{m-1},-1,\gamma_{m+1},\ldots,\gamma_n)$$ is an integer solution of the equations
\begin{equation}
\label{eq:314}
\sum_{k=1}^n\gamma_k\ln\mu_{ik}=0,\quad i=1,\ldots,p.
\end{equation}
As its $m$-th component is nonzero, it cannot be expressed as a linear combination of elements of $\Omega$; we thus obtain $q+1$ independent solutions of \eqref{eq:314}, which contradicts the fact that the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ equals $q$.
If the system is projectively hyperbolic, then we take the real parts of both sides of equation \eqref{eq:313}:
\[
\ln|\mu_{im}|-\sum_{k=1}^n \gamma_k\ln|\mu_{ik}|=0,\quad i=1,\ldots,p.
\]
We can see that $(\gamma_1,\ldots,\gamma_{m-1},-1,\gamma_{m+1},\ldots,\gamma_n)$ is an integer solution of the equations
\begin{equation}
\label{eq:315}
\sum_{k=1}^n \gamma_k\ln|\mu_{ik}|=0,\quad i=1,\ldots,p.
\end{equation}
Then the dimension of the solution space of \eqref{eq:315} is larger than $q$, and so is that of $\mathit{Vect}_\mathbb{K}\Omega$, which contradicts the fact that the dimension of $\mathit{Vect}_\mathbb{K}\Omega$ equals $q$.
Hence, under the assumption of weak non-resonance or projective hyperbolicity, every term $x_1^{\gamma_1}\cdots x_n^{\gamma_n}$ of $\phi_{im}$ has $m$-th exponent $\gamma_m>0$, i.e., $\phi_{im}$ is divisible by $x_m$.
\end{proof}
We point out that our hypothesis is necessary.
\begin{example}
\label{ex:34}
Consider two commuting diffeomorphisms on $(\mathbb{C}^2,0)$
\[
\Phi_1(x,y)=(2x,4y+x^2)\quad\text{and}\quad\Phi_2(x,y)=(-3x,9y).
\]
The commuting diffeomorphisms are in the Poincar\'e-Dulac normal form but they cannot be put into the normal form stated in Theorem \ref{thm:FormalNormalForm}. In this case, the integrable system (of type $(2,0)$, with no common first integrals) is neither weakly non-resonant nor projectively hyperbolic.
\end{example}
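A direct computation confirms the commutativity:
\[
\Phi_1\circ\Phi_2(x,y)=\Phi_1(-3x,9y)=(-6x,\,36y+9x^2)=\Phi_2(2x,4y+x^2)=\Phi_2\circ\Phi_1(x,y).
\]
The term $x^2$ is resonant since $4=2^2$ and $9=(-3)^2$, yet $\phi_{12}=4y+x^2$ is not divisible by $y$, so the conclusion of the division lemma fails here; note also that the vectors $(\ln2,\ln4)$ and $(\ln3,\ln9)$ are both proportional to $(1,2)$.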
We also note that if every element of $\Omega$ has, say, only its first $p'$ entries nonzero, which means the last $n-p'$ components $\ell_{p'+1},\ldots,\ell_n$ must be zero, then the first integrals are independent of $x_{p'+1},\ldots,x_n$ by Lemma \ref{Lem:NFofFI}. Moreover, all $\phi_{im}$ with $m\leqslant p'$ are independent of $x_{p'+1},\ldots,x_n$. In fact, we just proved that such a $\phi_{im}$ is divisible by $x_m$; then by equation \eqref{eq:21}, the exponents of the quotients by $x_m$ of the monomial terms of $\phi_{im}$ also lie in $\Omega$, and hence the last $n-p'$ exponents of every monomial term in $\phi_{im}$ must be zero. Hence, considering the projections of $\Phi_1,\ldots,\Phi_p$ to the plane of the first $p'$ coordinates, any $p'$ independent ones among them, as diffeomorphisms of this coordinate plane, together with the $q$ first integrals viewed as functions on this coordinate plane, form an integrable system of type $(p',q)$.
\noindent\textbf{End of the proof of Theorem \ref{thm:FormalNormalForm}.}
By the Division Lemma \ref{lem:Division}, there exist formal series $\hat\varphi_{im}$ such that $\phi_{im}=\mu_{im}x_m(1+\hat\varphi_{im})$ for all $i$ and all $m$. By Proposition \ref{prop:MonomialAreFI}, we have $(x_1^{\gamma_1}\cdots x_n^{\gamma_n})\circ\Phi_i= x_1^{\gamma_1}\cdots x_n^{\gamma_n}$ for every $(\gamma_1,\ldots,\gamma_n)$ in $\Omega$, and after substituting the expressions of the $\phi_{im}$'s and simplifying, we get $\prod_{k=1}^n (1+\hat\varphi_{ik})^{\gamma_k}=1$.
By the relation between $\mathcal R_m$ and $\Omega$ given by \eqref{eq:21} and \eqref{eq:22} respectively, every term of $\phi_{im}$ whose exponent lies in $\mathcal R_m$ is the product of $x_m$ and a term of $\hat\varphi_{im}$ whose exponent lies in $\Omega$, so the $\hat\varphi_{im}$'s are first integrals of the $\Phi_j^{ss}$'s.
\section{Cases in analytic and smooth category}
\subsection*{\textit{Analytic case}}
For analytic integrable diffeomorphisms, we pay attention to the systems of the Poincar\'e type.
\begin{definition}(\cite[Definition 4.11]{Gong-Stolovitch2016})
Let $\Phi_1,\ldots,\Phi_p$ be $p$ commuting diffeomorphisms and let $(\mu_{i1},\ldots,\mu_{in})$ be the eigenvalues of the linear part of $\Phi_i$. We say that the family of the diffeomorphisms (or of their linear parts) is of \textbf{the Poincar\'e type} if there exist $d>1$ and $c>0$ such that, for each
$(s_1,\ldots,s_n)\not\in\mathcal{R}_m$,
there exists $\left(i', (s'_1,\ldots,s'_n)\right) \in\{1, \ldots, p\} \times \mathbb{N}^n$ with $(s'_1-s_1,\ldots,s'_n-s_n) \in \mathbb{N}^{n} \cup\left(-\mathbb{N}^{n}\right)$ such that $\mu_{i1}^{s'_1}\cdots\mu_{in}^{s'_n}=\mu_{i1}^{s_1}\cdots\mu_{in}^{s_n}$ for all $1 \leq i \leq p$, $\mu_{i'1}^{s'_1}\cdots\mu_{i'n}^{s'_n}-\mu_{i' m} \neq 0$, and
$$
\max \left(\left|\mu_{i'1}^{s'_1}\cdots\mu_{i'n}^{s'_n}\right|,\left|\mu_{i'1}^{s'_1}\cdots\mu_{i'n}^{s'_n}\right|^{-1}\right)>c^{-1} d^{s'_1+\cdots+s'_n}.
$$
\end{definition}
By a theorem of X. Gong and L. Stolovitch \cite[Theorem 4.13]{Gong-Stolovitch2016}, which says that if a commutative family of finitely many germs of biholomorphisms of the Poincar\'e type is formally conjugate to the normal form \eqref{good-nf} satisfying \eqref{intgnf}, then it is holomorphically conjugate to the normal form, we get the following theorem.
\begin{theorem}
Let $(\Phi_1,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a non-degenerate analytic integrable system of type $(p,q)$ on $\mathbb{K}^n$ around $0$ satisfying the condition of Theorem \ref{thm:FormalNormalForm}. If the family of diffeomorphisms is of the Poincar\'e type, then the system is analytically conjugate to the normal form \eqref{good-nf} together with \eqref{intgnf} as in Theorem \ref{thm:FormalNormalForm}, i.e., the normalization is convergent.
\end{theorem}
We remark that for integrable systems of type $(1,n-1)$, any diffeomorphism satisfying the assumption of \cite{Zhang2013} that at least one eigenvalue does not lie on the unit circle is projectively hyperbolic and of the Poincar\'e type:
\begin{proposition}
Let $\Phi$ be an integrable diffeomorphism on $\mathbb{K}^n$ which is formally in the Poincar\'e-Dulac normal form. Suppose its linear part is diagonal, written as $\Phi^{(1)}(x)=(\mu_1x_1,\ldots,\mu_nx_n)$, and that at least one of its eigenvalues does not lie on the unit circle; then $\Phi$ is of the Poincar\'e type.
\end{proposition}
\begin{proof}
Suppose $x_1^{\ell_{j1}}\cdots x_n^{\ell_{jn}}$, $j=1,\ldots,n-1$, are $n-1$ independent first integrals of $\Phi^{(1)}$. Then equation \eqref{eq:33} in this particular case becomes
\begin{equation}
\label{eq:40}
L
\begin{pmatrix}
\ln|\mu_1|\\
\vdots\\
\ln|\mu_n|
\end{pmatrix}
:=
\begin{pmatrix}
\ell_{11}&\cdots&\ell_{1\,n}\\
\vdots&&\vdots\\
\ell_{n-1\,1}&\cdots&\ell_{n-1\,n}
\end{pmatrix}
\begin{pmatrix}
\ln|\mu_1|\\
\vdots\\
\ln|\mu_n|
\end{pmatrix}
=0.
\end{equation}
As the $(n-1)$ by $n$ matrix $L$ has rank $n-1$ by independence, the dimension of the space of its solutions is one. The hypothesis that at least one of the eigenvalues does not lie on the unit circle implies that $(\ln|\mu_1|,\ldots,\ln|\mu_n|)$ is a nonzero solution of equation \eqref{eq:40}; on the other hand, as $L:=(\ell_{ji})_{(n-1)\times n}$ is an integer matrix, equation \eqref{eq:40} has integer solutions. Thus there exist an integer solution $(k_1,\ldots,k_n)$ and a real number $c>0$ such that $(\ln|\mu_1|,\ldots,\ln|\mu_n|)=c(k_1,\ldots,k_n)$. Then we get $\ln|\mu_i|=ck_i$ and hence $|\mu_i|=e^{ck_i}$.
Now write $\mu_i=e^{ck_i}e^{\sqrt{-1}\mathop{\rm Arg} \mu_i}$ where $0\leqslant\mathop{\rm Arg} \mu_i<2\pi$ denotes the principal value of the argument of $\mu_i$, then by the property $\mu_1^{\ell_{j1}}\cdots\mu_n^{\ell_{jn}}=1$, there exist integers $K_1,\ldots,K_{n-1}$ such that
\begin{equation}
\label{eq:41}
\ell_{j1}\mathop{\rm Arg} \mu_1+\cdots+\ell_{jn}\mathop{\rm Arg} \mu_n=2K_j\pi,\quad j=1,\ldots,n-1.
\end{equation}
This is a (non-homogeneous if some $K_j\neq 0$) linear system and its real solutions form a one-dimensional affine space: the difference of any two solutions is a solution of \eqref{eq:40}. Then, by the same argument as above, we can take a special solution $2\pi(\theta_1,\ldots,\theta_n)$ such that the $\theta_i$'s are rational numbers, and therefore there exists a real number $c'$ such that $(\mathop{\rm Arg} \mu_1,\ldots,\mathop{\rm Arg} \mu_n)=2\pi(\theta_1,\ldots,\theta_n)+c'(k_1,\ldots,k_n)$. Hence
\[
\mu_i=e^{ck_i}e^{\sqrt{-1}\,2\pi\theta_i}e^{\sqrt{-1}\,c'k_i}
=e^{(c+\sqrt{-1}c')k_i}e^{\sqrt{-1}\,2\pi\theta_i}=d^{k_i}e^{\sqrt{-1}\,2\pi\theta_i},
\]
in which $d=e^{c+\sqrt{-1}c'}$ with $|d|=e^c>1$.
For any $\mu_i$ with $|\mu_i|=1$ or equivalently $k_i=0$, $\mu_i=e^{\sqrt{-1}\,2\pi\theta_i}$. Then there exists a natural number $\alpha_i$ such that $\mu_i^{\alpha_i}=1$ since $\theta_i$ is rational and therefore $x_i^{\alpha_i}$ is a first integral of $\Phi^{ss}$.
For any pair $\mu_i$ and $\mu_j$ with $|\mu_i|<1$ and $|\mu_j|>1$, we have $k_i<0<k_j$ and therefore there exist a pair of natural numbers $\beta_i$ and $\beta_j$ such that $\beta_ik_i+\beta_jk_j=0$ and $\beta_i\theta_i+\beta_j\theta_j\in\mathbb{Z}$. Then $\mu_i^{\beta_i}\mu_j^{\beta_j}=1$ and therefore $x_i^{\beta_i}x_j^{\beta_j}$ is a first integral of $\Phi^{ss}$.
We now claim that for any $(s_1,\ldots,s_n)\in\mathbb{N}^n$, there exists $(s'_1,\ldots,s'_n)\in\mathbb{N}^n$ such that
\begin{itemize}
\item $\mu_{1}^{s'_1}\cdots\mu_{n}^{s'_n}=\mu_{1}^{s_1}\cdots\mu_{n}^{s_n}$;
\item either $\{s'_i: i \mbox{~satisfies~}|\mu_i|\leqslant1\}$ or $\{s'_j: j \mbox{~satisfies~}|\mu_j|\geqslant1\}$ is bounded.
\end{itemize}
In fact, let $M$ be a natural number bigger than all possible $\alpha_i,\beta_i,\beta_j$. For $s_i$ with $i$ such that $|\mu_i|=1$, set $s'_i$ to be the remainder of $s_i$ divided by $\alpha_i$. In the same spirit, for $s_i>M$ and $s_j>M$ with $i\in I:=\{i:|\mu_i|<1\}$ and $j\in J:=\{j:|\mu_j|>1\}$, we take the maximal integer $m$ such that $(s_i,s_j)-m(\beta_i,\beta_j)$ is nonnegative and replace $(s_i,s_j)$ by this vector; then the new $s_i$ and $s_j$ satisfy $s_i<\beta_i<M$ or $s_j<\beta_j<M$. Continue the operation for the other pairs $(s_i,s_j)$ with $i\in I$, $j\in J$ and $s_i>M$, $s_j>M$; obviously the procedure stops after finitely many steps. Set $s'_i$ with $i\in I\cup J$ to be the final $s_i$ after these reductions. Then $(s'_1,\ldots,s'_n)$ satisfies the second requirement; it satisfies the first one since each operation preserves the product $\mu_1^{s_1}\cdots\mu_n^{s_n}$.
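To illustrate the reduction (with made-up values): suppose $n=2$, $|\mu_1|<1<|\mu_2|$, and $(\beta_1,\beta_2)=(2,1)$, so that $\mu_1^{2}\mu_2=1$. Starting from $(s_1,s_2)=(7,5)$, the maximal $m$ with $(7,5)-m(2,1)$ nonnegative is $m=3$, giving $(s'_1,s'_2)=(1,2)$ with $s'_1=1<\beta_1$; and indeed
\[
\mu_1^{7}\mu_2^{5}=\mu_1^{7}\mu_2^{5}\,(\mu_1^{2}\mu_2)^{-3}=\mu_1\,\mu_2^{2}.
\]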
Now assume $s'_i<M$ for all $i\in\{i:|\mu_i|\leqslant1\}$. Remembering that $|\mu_j|\geqslant |d|>1$ for $j\in J$, we have
\[
\begin{aligned}
\left|\mu_1^{s'_1}\cdots\mu_n^{s'_n}\right|
&=\prod_{i\in\{i:|\mu_i|\leqslant1\}}|\mu_i|^{s'_i}\prod_{j\in J}|\mu_j|^{s'_j}\\
&\geqslant\prod_{i\in\{i:|\mu_i|\leqslant1\}}|\mu_i|^{s'_i}\,|d|^{\sum_{j\in J}s'_j}\\
&=\prod_{i\in\{i:|\mu_i|\leqslant1\}}\left({\frac{|\mu_i|}{|d|}}\right)^{s'_i}|d|^{s'_1+\cdots+s'_n}
\geqslant\prod_{i\in\{i:|\mu_i|\leqslant1\}}\left({\frac{|\mu_i|}{|d|}}\right)^{M}|d|^{s'_1+\cdots+s'_n}.
\end{aligned}
\]
Hence $\Phi$ is of the Poincar\'e type. One can get the same conclusion by a similar estimate on $\left|\mu_1^{s'_1}\cdots\mu_n^{s'_n}\right|^{-1}$ if $s'_j<M$ for all $j\in\{j:|\mu_j|\geqslant1\}$.
\end{proof}
With the help of a lemma (Lemma 2.5 in \cite{Zhang2013}) which claims that the linear part of an integrable diffeomorphism of type $(1,n-1)$ on $(\mathbb{C}^n,0)$ is diagonalizable, it follows that
\begin{corollary}\cite{Zhang2013}
An analytic integrable diffeomorphism of type $(1,n-1)$ on $(\mathbb{C}^n,0)$ such that at least one of its eigenvalues does not lie on the unit circle is analytically conjugate to the normal form \eqref{good-nf} together with \eqref{intgnf} as in Theorem \ref{thm:FormalNormalForm}.
\end{corollary}
\subsection*{\textit{Smooth case}}
In the smooth category, we only consider weakly hyperbolic systems, which were first introduced and studied by M. Chaperon.
\begin{definition}(\cite[Section 1.2]{Chaperon2013})
\label{def:44}
Let $\Phi_1,\ldots,\Phi_p$ be $p$ commuting diffeomorphisms on $(\mathbb{K}^n,0)$. Suppose the eigenvalues of the semi-simple part $\Phi_i^{ss}$ of the linear part of $\Phi_i$ are $\mu_{i1},\ldots,\mu_{in}$. For any $k\in\{1,\ldots,n\}$, we can get a linear form $c_k$ in $(\mathbb{R}^p)^*$ defined by mapping $(t_1,\ldots,t_p)\in\mathbb{R}^p$ to $\sum_{i=1}^p \ln|\mu_{ik}| t_i$.
The $\mathbb{Z}^p$-action generated by the diffeomorphisms is called
\begin{itemize}
\item\textbf{hyperbolic} if any $p$ linear forms in $\{c_1,\ldots,c_n\}$ are linearly independent in $(\mathbb{R}^p)^*$;
\item\textbf{weakly hyperbolic} if the convex hull of any $p$ linear forms in $\{c_1,\ldots,c_n\}$ does not contain the origin of $(\mathbb{R}^p)^*$.
\end{itemize}
Obviously, hyperbolicity implies weak hyperbolicity.
\end{definition}
We remark that if $\mathbb{K}=\mathbb{C}$ and the diffeomorphisms are viewed as real diffeomorphisms from $(\mathbb{R}^2)^n$ to itself, then the eigenvalues of $\Phi_i^{ss}$ are $\mu_{i1},\bar\mu_{i1},\ldots,\mu_{in},\bar\mu_{in}$. We then get $2n$ linear forms $c_k$, $k=1,2,\ldots,2n$, with $c_{2k}=c_{2k-1}$ for $k=1,\ldots,n$, and therefore the property that the convex hull of any $p$ linear forms in $\{c_1,\ldots,c_{2n}\}$ does not contain the origin of $(\mathbb{R}^p)^*$ coincides with the previous one.
\begin{theorem}
Let $(\Phi_1=\Phi,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a non-degenerate smooth integrable system of type $(p,q)$ on $\mathbb{K}^n$ around $0$ satisfying the condition of Theorem \ref{thm:FormalNormalForm}. If the system is weakly hyperbolic, then the diffeomorphisms are smoothly conjugate to a smooth normal form of the form \eqref{good-nf} together with \eqref{intgnf} as in Theorem \ref{thm:FormalNormalForm}.
\end{theorem}
\begin{proof}
The idea of the proof is to construct another smooth integrable system which is formally conjugate to the original system and then we can apply Chaperon's theorem \cite{Chaperon2013}, which asserts that two weakly hyperbolic smooth $\mathbb{Z}^k\times\mathbb{R}^m$-action germs are smoothly conjugate if and only if they are formally conjugate.
By theorem \ref{thm:FormalNormalForm}, the system is formally conjugate to
\[
\hat\Phi_i=(\mu_{i1}x_1(1+\hat\varphi_{i1}),\ldots,\mu_{in}x_n(1+\hat\varphi_{in})),\,i=1,\ldots,p,
\]
where $\hat\varphi_{ik}$'s are formal series of finitely many generators, say $G_1,G_2,\ldots,G_r$, which are monomial first integrals of $\Phi_i^{ss}$'s. Moreover, these formal series satisfy the first integral relations $\prod_{k=1}^n (1+\hat\varphi_{ik})^{\gamma_k}=1$ in the formal sense for all $(\gamma_1,\ldots,\gamma_n)$ in the set $\Omega$ of common solutions of resonance equations \eqref{eq:22}.
By Borel's theorem, there exist smooth functions $\tilde\varphi_{ik}$, which are in fact smooth functions of $G_1,G_2,\ldots,G_r$, whose formal Taylor series at the origin are the $\hat\varphi_{ik}$'s respectively.
Define
\[
\tilde\Phi_i=(\mu_{i1}x_1(1+\tilde\varphi_{i1}),\ldots,\mu_{in}x_n(1+\tilde\varphi_{in})),\,i=1,\ldots,p.
\]
A priori, this new family of smooth diffeomorphisms need not commute any more. In order to retrieve the commutativity property, it is sufficient to replace the functions $\tilde\varphi_{ik}$ by smooth functions $\varphi_{ik}$ satisfying $\prod_{k=1}^n (1+\varphi_{ik})^{\gamma_k}=1$ for all $(\gamma_1,\ldots,\gamma_n)\in\Omega$. This replacement can be realized by adjusting only the flat parts of the $\tilde\varphi_{ik}$'s, as follows.
Take $q=n-p$ $\mathbb{Q}$-linearly independent elements in $\Omega$, denoted by $\omega_j:=(\omega_{j1},\ldots,\omega_{jn})$, $j=1,2,\ldots,q$. Fix $i$ and assume $\prod_{k=1}^n (1+\tilde\varphi_{ik})^{\omega_{jk}}=1+flat_{ij}$ for $j=1,\ldots,q$, in which the $flat_{ij}$'s are flat functions, i.e., their infinite jets at $0$ vanish. Taking logarithms of these equations, we get
\begin{equation}
\label{eq:42}
\omega_{j1}\ln(1+\tilde\varphi_{i1})+\cdots+\omega_{jn}\ln(1+\tilde\varphi_{in})=\ln(1+flat_{ij}),\quad j=1,2,\ldots,q.
\end{equation}
Recall that the functions $\varphi_{ik}$ we are searching for must satisfy
\begin{equation}
\label{eq:43}
\omega_{j1}\ln(1+\varphi_{i1})+\cdots+\omega_{jn}\ln(1+\varphi_{in})=0,\quad j=1,2,\ldots,q.
\end{equation}
Assume without loss of generality that the first $q$ columns of the matrix $(\omega_{jk})$ are independent, and let $\varphi_{ik}=\tilde\varphi_{ik}$ for $k=q+1,\ldots,n$. For $k=1,\ldots,q$, let the $\varphi_{ik}$'s be the unique solution of the linear equations obtained by subtracting \eqref{eq:43} from \eqref{eq:42}:
\[
\sum_{m=1}^q \omega_{jm}\left(\ln(1+\tilde\varphi_{im})-\ln(1+\varphi_{im})\right)=\ln(1+flat_{ij}),\quad j=1,2,\ldots,q.
\]
We get immediately that $\prod_{k=1}^n (1+\varphi_{ik})^{\omega_{jk}}=1$ for $j=1,\ldots,q$, and it is also easy to verify $\varphi_{i\ell}-\tilde\varphi_{i\ell}$ is flat since $\ln{\dfrac{1+\tilde\varphi_{i\ell}}{1+\varphi_{i\ell}}}$ is flat.
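For instance, in the simplest case $q=1$ (with $\omega_{11}\neq0$), the correcting equation reads $\omega_{11}\left(\ln(1+\tilde\varphi_{i1})-\ln(1+\varphi_{i1})\right)=\ln(1+flat_{i1})$, whose solution is
\[
1+\varphi_{i1}=(1+\tilde\varphi_{i1})(1+flat_{i1})^{-1/\omega_{11}};
\]
since $flat_{i1}$ is flat, the factor $(1+flat_{i1})^{-1/\omega_{11}}$ equals $1$ plus a flat function, so $\varphi_{i1}-\tilde\varphi_{i1}$ is indeed flat.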
By Lemma \ref{lem:VectOmega}, any element $(\gamma_1,\ldots,\gamma_n)$ in $\Omega$ is a $\mathbb{Q}$-linear combination of $\omega_1,\ldots,\omega_q$. Hence, we get $\prod_{k=1}^n (1+\varphi_{ik})^{\gamma_k}=1$ for all $(\gamma_1,\ldots,\gamma_n)$ in $\Omega$.
Let us define the family of diffeomorphisms
\[
\Psi_i(x_1,\ldots,x_n):=(\mu_{i1}x_1(1+\varphi_{i1}),\ldots,\mu_{in}x_n(1+\varphi_{in})),\quad i=1,\ldots,p.
\]
Due to the property $\prod_{k=1}^n (1+\varphi_{ik})^{\omega_{jk}}=1$, this family is commutative. As the infinite jets at $0$ of $\tilde\varphi_{ik}$ and $\varphi_{ik}$ are the same, the original family of diffeomorphisms $\Phi_1,\ldots,\Phi_p$ is still formally conjugate to the family $\Psi_1,\ldots,\Psi_p$, and it follows by Chaperon's theorem that they are smoothly conjugate.
\end{proof}
Observing that hyperbolic systems are both projectively hyperbolic and weakly hyperbolic, it follows naturally that
\begin{corollary}
Let $(\Phi_1=\Phi,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a non-degenerate smooth integrable system of type $(p,q)$ on $\mathbb{K}^n$ around $0$. If the system is hyperbolic, then the diffeomorphisms are smoothly conjugate to a smooth normal form of the form \eqref{good-nf} together with \eqref{intgnf} as in theorem \ref{thm:FormalNormalForm}.
\end{corollary}
\section{Real case}
In this section, we consider families of real commuting diffeomorphisms $\Phi_1,\ldots,\Phi_p$ on $(\mathbb{R}^n,0)$; the coefficients of the Taylor expansions of the $\Phi_i(x)$'s at the origin are real numbers. If the linear parts $\Phi^{(1)}_i$ are diagonalizable over $\mathbb{R}$, then all the preceding results hold true with the same proofs.
Here we are concerned with the case where the $\Phi^{(1)}_i=A_i$ are not diagonalizable over $\mathbb{R}$ but only over $\mathbb{C}$. By the commutativity, one can decompose $\mathbb{R}^n= \oplus_{j=1}^l V_j\oplus \mathbb{R}^{n-2l}$
where each $V_j$ is a real plane left invariant by all $A_i$'s and such that at least one of the $A_i|_{V_j}$'s is diagonalizable over $\mathbb{C}$ but not over $\mathbb{R}$.
In a basis of vectors from the eigenspaces, each $A_i$ becomes a block diagonal matrix consisting of $l$ two-by-two blocks and $n-2l$ real numbers. Suppose the eigenvalues of $A_i|_{V_j}$ are $\mu_{ij}=u_{ij}+\sqrt{-1}v_{ij}$ and $\bar\mu_{ij}=u_{ij}-\sqrt{-1}v_{ij}$; then the $j$-th block of $A_i$ is of the form $\begin{pmatrix}u_{ij}&-v_{ij}\\v_{ij}&u_{ij}\end{pmatrix}$ if the basis is well chosen.
Denote by $\mathcal{E}_j:=V_j\oplus\sqrt{-1}V_j$ the complexification of $V_j$; each $A_i$ extends canonically to a linear map $A_i|_{\mathcal{E}_j}$. The complex vector $e=(\frac{1}{2},-\frac{1}{2}\sqrt{-1})$ in ${\mathcal{E}_j}$ is a common eigenvector of the $A_i|_{\mathcal{E}_j}$'s with eigenvalue $\mu_{ij}$, i.e., $A_i|_{\mathcal{E}_j}e=\mu_{ij}e$ for $i=1,\ldots,p$. Then $\bar e=(\frac{1}{2},\frac{1}{2}\sqrt{-1})$ is a common eigenvector with eigenvalue $\bar \mu_{ij}$, and $\mathcal{E}_j$ is isomorphic to the $\mathbb{C}$-vector space generated by $e, \bar e$. Define
$D_{ij}:= \begin{pmatrix}\mu_{ij}& 0\\ 0&\bar \mu_{ij}\end{pmatrix}$ and
$P_j:=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\\ -\frac{1}{2}\sqrt{\scriptstyle{-}1}&\frac{1}{2}\sqrt{\scriptstyle{-}1}\end{pmatrix}$.
Then for $i=1,\ldots,p$ and $j=1,\ldots,l$, we have
\[
A_{i}|_{\mathcal{E}_j} \, P_j = P_j D_{ij}.
\]
Let $P$ be the linear transformation on $\mathbb{C}^n$ given by the block diagonal matrix consisting of $l$ copies of $\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\\ -\frac{1}{2}\sqrt{\scriptstyle{-}1}&\frac{1}{2}\sqrt{\scriptstyle{-}1}\end{pmatrix}$ and the identity of size $n-2l$, and let $D_i$ be the linear transformation on $\mathbb{C}^n$ given by the block diagonal matrix consisting of the blocks $D_{i1},\ldots,D_{il}$ and the identity of size $n-2l$. Define $\rho$ to be the following involution $$
\rho(z_1,z_2,\ldots,z_{2l-1},z_{2l},z_{2l+1},\ldots,z_n):= (\bar z_2,\bar z_1,\ldots,\bar z_{2l},\bar z_{2l-1},\bar z_{2l+1},\ldots,\bar z_n),
$$ and denote by $c$ the complex conjugate $c(z_1,\ldots,z_n):=(\bar z_1,\ldots,\bar z_n)$. We easily have
$D_i\circ\rho=\rho\circ D_i \mbox{~for~} i=1,\ldots,p $ and
\begin{equation}
\label{eq:51}
P \rho=c P.
\end{equation}
Now let us consider the family $\{\tilde \Phi_i(z):=P^{-1}\Phi_i(P(z))\}_i$ of transformations of $\mathbb{C}^n$. Obviously, the $\tilde \Phi_i(z)$'s commute pairwise. If the family of the $\Phi_i$'s is non-degenerate, weakly non-resonant, or (projectively, weakly) hyperbolic, then the family of the $\tilde\Phi_i$'s inherits these properties, since they are defined in terms of the eigenvalues, which are the same for the $\tilde \Phi_i$'s and the $\Phi_i$'s.
If $\Phi_i$'s have $q=n-p$ first integrals $F_1,\ldots,F_q$ functionally independent almost everywhere, then $\tilde F_j(z):=F_{j}(Pz)$'s are first integrals of the $\tilde \Phi_i$'s since
$$
\tilde F_{j}(\tilde \Phi_i(z))= F_{j}(PP^{-1}\Phi_i(Pz))=F_{j}(Pz)=\tilde F_{j}(z).
$$
The first integrals $\tilde F_1,\ldots,\tilde F_q$ are also functionally independent almost everywhere since $P$ is invertible.
Hence, we get an integrable system $(\tilde \Phi_1,\ldots,\tilde \Phi_p,\tilde F_1,\ldots,\tilde F_q)$ on $\mathbb{C}^n$ of type $(p,q)$.
Since the coefficients of the Taylor series at the origin of the $\Phi_i$'s are real, we have $c\circ\Phi_i\circ c=\Phi_i$ formally. With the help of equation \eqref{eq:51} and its equivalent form $P^{-1}c=\rho^{-1}P^{-1}=\rho P^{-1}$, we have formally
\[
\begin{aligned}
\tilde{\Phi}_i\circ\rho= P^{-1}\Phi_i(P\rho)
&= P^{-1}\circ c\circ c\circ\Phi_i(cP)\\
&= P^{-1}\circ (c\circ\Phi_i)\circ P
= \rho\circ P^{-1}\circ \Phi_i\circ P
= \rho\circ \tilde{\Phi}_i.
\end{aligned}
\]
We now apply the formal $\rho$-equivariant normal form theory (see Lemma \ref{lem:PD-NF}): there exists a formal transformation $\Psi(z)$, tangent to the identity at the origin, such that
\begin{enumerate}
\item $\Psi\circ \rho= \rho\circ \Psi$
\item $\hat \Phi_i:=\Psi^{-1}\circ \tilde \Phi_i\circ \Psi$ is in the Poincar\'e-Dulac normal form, i.e.,
$\hat \Phi_i\circ D_j=D_j\circ\hat \Phi_i$.
\end{enumerate}
Now the proof of Theorem \ref{thm:FormalNormalForm} applies, and we get that the complexified integrable diffeomorphisms $\tilde \Phi_i$ deduced from a real integrable system $(\Phi_1,\ldots, \Phi_p,F_1,\ldots,F_q)$ are formally conjugated by $\Psi$ to the $\hat \Phi_i$'s, which are of the form \eqref{good-nf} together with \eqref{intgnf} as in Theorem \ref{thm:FormalNormalForm}, provided that the family is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators.
\begin{lemma}
The formal transformation $P\Psi P^{-1}$ is real in the sense that its coefficients are all real.
\end{lemma}
\begin{proof}
The equation $cP\Psi P^{-1}c=P\rho\Psi\rho P^{-1}=P\Psi\rho^2 P^{-1}=P\Psi P^{-1}$ holds.
\end{proof}
Observing that $P\hat \Phi_i P^{-1} =P\Psi^{-1}\circ\tilde\Phi_i\circ\Psi P^{-1} =(P\Psi P^{-1})^{-1}\circ\Phi_i\circ(P\Psi P^{-1})$, we obtain a version of Theorem \ref{thm:FormalNormalForm} for real diffeomorphisms having a linear part which is diagonalizable over $\mathbb{C}$ but not necessarily over $\mathbb{R}$.
\begin{theorem}
Let $(\Phi_1=\Phi,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a formal non-degenerate discrete integrable system of type $(p,q)$ on $\mathbb{R}^n$ at a common fixed point, say the origin $0$. If the family $\{\Phi_i^{(1)}\}$ is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators, then the family of real diffeomorphisms $\{\Phi_i\}$ is formally conjugated, by the real formal transformation $P\Psi P^{-1}$ tangent to the identity, to a real normal form $\{P\hat \Phi_i P^{-1}\}$ which is of the form
\begin{equation}\label{real-nf}
(\frac{\hat\Phi_{i1}+\hat\Phi_{i2}}{2},\frac{\hat\Phi_{i1}-\hat\Phi_{i2}}{2\sqrt{-1}},\ldots,\frac{\hat\Phi_{i(2l-1)}+\hat\Phi_{i\,2l}}{2},\frac{\hat\Phi_{i(2l-1)}-\hat\Phi_{i\,2l}}{2\sqrt{-1}} ,\hat\Phi_{i(2l+1)},\ldots,\hat\Phi_{in})(z),
\end{equation}
where $\hat\Phi_{im}$ denotes the $m$-th component of $\hat\Phi_i$, the complex normal form of $\Phi_i$ as in Theorem \ref{thm:FormalNormalForm}, and $z=(z_1,\ldots,z_n)$ is defined by $z_{2j-1}=x_{2j-1}+x_{2j}\sqrt{-1}$, $z_{2j}=x_{2j-1}-x_{2j}\sqrt{-1}$ for $j=1,\ldots,l$ and $z_j=x_j$ for $j>2l$.
\end{theorem}
\begin{proof}
From the expression above, we can see that $\hat\Phi_{i(2j-1)}(z)$ and $\hat\Phi_{i(2j)}(z)$ take conjugate values. Indeed,
according to the properties of $\Psi$ and $\tilde \Phi_i$ above, we have
\begin{equation}\label{eq:53}
\rho\hat \Phi_i\rho= \rho\Psi\tilde \Phi_i\Psi^{-1}\rho=\Psi\rho\tilde \Phi_i\rho\Psi^{-1}=\Psi\tilde \Phi_i\Psi^{-1}=\hat\Phi_i.
\end{equation}
Therefore, composing with $P$ on the left and with $P^{-1}$ on the right of the previous equation and using \re{eq:51}, we obtain
$$
P\rho\hat \Phi_i\rho P^{-1}=cP\hat \Phi_i P^{-1}c=P\hat \Phi_i P^{-1},
$$
so that $P\hat \Phi_i P^{-1}$ is real.
Let $\sigma$ be the permutation exchanging $2j-1$ and $2j$ for $j\leqslant l$ and fixing all integers from $2l+1$ to $n$. As $\mu_{im}$ and $\mu_{i\sigma(m)}$ for $m\leqslant2l$ form a pair of conjugate eigenvalues and $\mu_{im}$ is real for $m>2l$, for any element $\gamma:=(\gamma_1,\ldots,\gamma_n)\in\mathcal R_m$ (cf. \re{eq:21}) we have $\gamma^\sigma:=(\gamma_{\sigma(1)},\ldots,\gamma_{\sigma(n)})\in\mathcal R_{\sigma(m)}$.
It follows that if $\hat\Phi_{im,\gamma}w^\gamma$ is a resonant term in $\hat\Phi_{im}(w)$, then $\overline{\hat\Phi_{im,\gamma^{\sigma}}}w^{\gamma^{\sigma}}$ is a term in $\hat\Phi_{i\sigma(m)}$ by \eqref{eq:53}. Hence, for $m\leqslant2l$, $\hat\Phi_{im}$ and $\hat\Phi_{i\sigma(m)}$ are a pair of conjugate functions of variables $(z_1,z_2=\bar z_1,\ldots,z_{2l-1},z_{2l}=\bar z_{2l-1},z_{2l+1},\ldots,z_n)$ and for $m>2l$, the values (not the functions) $\hat\Phi_{im}(z_1,\bar z_1,\ldots,z_{2l-1},\bar z_{2l-1},z_{2l+1},\ldots,z_n)$ are real since
\begin{eqnarray*}
\overline{\hat\Phi_{im}(\bar z_2,\bar z_1,\ldots,\bar z_{2l},\bar z_{2l-1},\bar z_{2l+1},\ldots,\bar z_n)}&=& \overline{\hat\Phi_{im}( z_1,z_2,\ldots, z_{2l-1},z_{2l},z_{2l+1},\ldots, z_n)}\\
&=& \hat\Phi_{im}( z_1,z_2,\ldots, z_{2l-1},z_{2l},z_{2l+1},\ldots, z_n).
\end{eqnarray*}
\end{proof}
Let $(\Phi_1=\Phi,\Phi_2,\ldots,\Phi_p,F_1,\ldots,F_q)$ be a formal non-degenerate discrete integrable system of type $(p,q)$ on $\mathbb{R}^n$ at a common fixed point, say the origin $0$. We assume that the family of its linear parts $\{A_jx\}$ is either projectively hyperbolic or infinitesimally integrable with a weakly non-resonant family of generators.
Assume furthermore that the commuting family of real diffeomorphisms $\{\Phi_i\}$ satisfies $A_j\Phi_i=\Phi_i A_j$ for all $i,j$. Here we assume that the matrices $A_j=PD_jP^{-1}$ are simultaneously diagonalizable over $\mathbb{C}$ but not necessarily over $\mathbb{R}$. Then we have $(P^{-1}A_j P)(P^{-1}\Phi_i P)=(P^{-1}\Phi_i P)(P^{-1}A_j P)$. Hence, the family $\{P^{-1}\Phi_i P\}$ is in the Poincar\'e-Dulac normal form as it commutes with the family of its linear parts $\{D_j\}$.
Since the family $(P^{-1}\Phi_1P,\ldots,P^{-1}\Phi_pP,F_1\circ P,\ldots,F_q\circ P)$ satisfies the assumptions of \rt{thm:FormalNormalForm}, $P^{-1}\Phi_iP$ is of the form \re{good-nf} with \re{intgnf} for all $i$. Therefore, $\Phi_i$ is of the form \re{real-nf} in which the $\hat \Phi_i$'s have to be replaced by $P^{-1}\Phi_iP$.
\section{Introduction}
For a finite group $G$, and a (complex) character $\alpha$ of $G$, the {\it McKay graph} $\mathcal {M}(G,\alpha)$ is defined to be the directed graph with vertex set ${\rm Irr}(G)$, there being an edge from $\chi_1$ to $\chi_2$ if and only if $\chi_2$ is a constituent of $\alpha\chi_1$.
A classical result of Burnside and Brauer \cite{Br} shows that $\mathcal {M}(G,\alpha)$ is connected if and only if $\alpha$ is faithful.
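As a toy illustration, take $G = \mathsf{S}_3$ and let $\chi$ be its $2$-dimensional irreducible character. Since $\chi\cdot 1 = \chi$, $\chi\cdot\mathrm{sgn} = \chi$ and $\chi^2 = 1 + \mathrm{sgn} + \chi$, the graph $\mathcal {M}(G,\chi)$ has edges $1 \to \chi$, $\mathrm{sgn} \to \chi$ and $\chi \to 1, \mathrm{sgn}, \chi$; it is connected, as $\chi$ is faithful, and has diameter $2$.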
The study of McKay graphs for finite simple groups $G$ was initiated in \cite{LST}, with a particular focus on the diameters of these graphs. Theorem 2 of \cite{LST} establishes a quadratic upper bound $\hbox{diam}\,{\mathcal M}(G,\alpha) \le Cr^2$ for any simple group $G$ of Lie type of rank $r$ and any nontrivial $\alpha \in {\rm Irr}(G)$. Notice that the smallest (resp. largest) nontrivial irreducible character degrees of $G$ are at most $q^{cr}$ (resp. at least $q^{c'r^2}$), where $c,c'$ are constants, and hence the maximal diameter of a McKay graph ${\mathcal M}(G,\alpha)$ is at least a linear function of $r$. Theorem 3 of \cite{LST} implies a linear upper bound on these diameters for the classical groups $G=\mathrm {PSL}_n^\epsilon(q)$, provided $q$ is large compared to $n$. Our first main result establishes a linear upper bound for the remaining classical groups.
\begin{theorem}\label{main1}
Let $G$ be a quasisimple classical group $\mathrm {Sp}_n(q)$ or $\Omega_n^\epsilon(q)$, and let $\alpha$ be a nontrivial irreducible character of $G$. Then $\hbox{diam}\,{\mathcal M}(G,\alpha) \le Cn$, where $C=16$ or $32$, respectively.
\end{theorem}
An obvious lower bound for $\hbox{diam}\,{\mathcal M}(G,\alpha)$ (when $\alpha(1)>1$) is given by
$\frac{\log \mathsf{b}(G)}{\log \alpha(1)}$, where $\mathsf{b}(G)$ is the largest degree of an irreducible character of $G$; indeed, any irreducible constituent of $\alpha^d$ has degree at most $\alpha(1)^d$. In \cite[Conjecture 1]{LST} we conjectured that for simple groups $G$, this bound is tight up to a multiplicative constant. This conjecture was proved in \cite[Theorem 3]{LST} for the simple groups $\mathrm {PSL}_n^\epsilon(q)$, provided $q$ is large compared to $n$. Recently it has also been established for the symmetric groups in \cite{S}. Deducing it for the alternating groups is not entirely trivial, and this is the content of our next result.
\begin{theorem}\label{main2}
There is an effective absolute constant $C$ such that, for all $n \geq 5$ and for all nontrivial irreducible characters $\alpha$ of $G:=\mathsf{A}_n$,
$$\hbox{diam}\,{\mathcal M}(G,\alpha) \le C\frac{\log |G|}{\log \alpha(1)}.$$
\end{theorem}
In our final result, we consider covering ${\rm Irr}(G)$ by products of arbitrary irreducible characters, instead of powers of a fixed character. This idea was suggested by Gill \cite{G}, inspired by an analogous result of Rodgers and Saxl \cite{RS} for conjugacy classes in $G=\mathrm {SL}_n(q)$: this states that if a collection of conjugacy classes of $G$ satisfies the condition that the product of the class sizes is at least $|G|^{12}$, then the product of the classes is equal to $G$.
As a piece of notation, for characters $\chi_1,\ldots,\chi_l$ of $G$, we write $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$ to mean that every irreducible character of $G$ appears as a constituent of $\chi_1\chi_2\cdots \chi_l$. Also, let $g: \mathbb N\to \mathbb N$ be the function appearing in \cite[Theorem 3]{LST}.
\begin{theorem}\label{rodsax}
\begin{itemize}
\item[{\rm (i)}] Let $G$ be a simple group of Lie type of rank $r$, let $l \ge 489r^2$, and let $\chi_1,\ldots,\chi_l \in \mathrm{Irr}(G) \setminus 1_G$. Then $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$.
\vspace{2mm}
\item[{\rm (ii)}] Let $G = \mathrm {PSL}_n^\epsilon(q)$ with $q>g(n)$, let $l \in \mathbb N$, and let $\chi_1,\ldots,\chi_l \in \mathrm{Irr}(G)$ satisfy $\prod_1^l \chi_i(1) > |G|^{10}$. Then $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$.
\end{itemize}
\end{theorem}
Gill \cite{G} has conjectured that part (ii) of the theorem holds for all simple groups (with the constant 10 possibly replaced by a different constant). As a stepping stone in the spirit of the linear bound given by Theorem \ref{main1}, let us pose the following more modest conjecture.
\begin{conj}\label{rsax} There is an absolute constant $C>0$ such that the following holds. Let $G={\rm Cl}_n(q)$, a classical simple group of dimension $n$, or $\mathsf{A}_n$, an alternating group of degree $n\ge 5$. Let $l \ge Cn$, and let $\chi_1,\ldots,\chi_l \in \mathrm{Irr}(G) \setminus 1_G$. Then $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$.
\end{conj}
See Proposition \ref{rs2-an} for a partial result on Conjecture \ref{rsax} in the case of $\mathsf{A}_n$.
The layout of the paper is as follows. Section \ref{prel1} contains a substantial amount of character theory for symplectic and orthogonal groups that is required for the proof of Theorem \ref{main1}, which is completed in Section \ref{pfth1}. The remaining sections \ref{pfth2} and \ref{pfth3} contain the proofs of Theorems \ref{main2} and \ref{rodsax}, respectively.
\section{Some character theory for symplectic and orthogonal groups}\label{prel1}
Let $V = \mathbb F_q^d$ be endowed with a non-degenerate alternating form, or a non-degenerate quadratic form of type $\epsilon = \pm$, and
let $G$ denote the derived subgroup of the full isometry group of the form. Assume that $G$ is quasisimple, so that $G = \mathrm {Sp}(V) = \mathrm {Sp}_d(q)$ or $\Omega(V) = \Omega^\epsilon_d(q)$.
This section contains a detailed study of some specific irreducible characters $\chi$ of $G$ -- namely, the constituents of the permutation character $\mathrm{Ind}^G_{[P,P]}(1_{[P,P]})$, where $P$ is the maximal parabolic subgroup of $G$ stabilizing a singular 1-space. Two of the main results of the section are Propositions \ref{rat-so21} and \ref{rat-sp-so22}, which give upper bounds for the character ratios $|\chi(g)/\chi(1)|$ for $g\in G$. These will be used in Section \ref{pfth1} to prove Theorem \ref{main1}.
\subsection{Reduction lemmas}\label{red}
It is well known that the permutation action of $G$ on the set of singular $1$-spaces of
$V$ is primitive of rank $3$, and thus its character is $\rho = 1_G + \alpha + \beta$, with $\alpha, \beta \in \mathrm{Irr}(G)$. Let (the parabolic subgroup) $P=QL$ denote a point stabilizer in this action, with $Q$ the unipotent radical and $L$ a Levi subgroup.
Aside from $\alpha,\beta$, we also need to consider the remaining non-principal irreducible
constituents $\bar{g}_i$ of $\mathrm{Ind}^G_{[P,P]}(1_{[P,P]})$. Let $\mathsf{St}$ denote the
Steinberg character of $G$.
\begin{lem}\label{mc-r1}
The following statements hold.
\begin{enumerate}[\rm(i)]
\item Suppose that every semisimple element $s \in G$ is real. Then for any $\chi \in \mathrm{Irr}(G)$ and $k \in \mathbb N$, $\chi^{2k}$ contains
$\mathsf{St}$ if and only if $(\chi\overline\chi)^k$ contains $\mathsf{St}$.
\item All semisimple elements in $G$ are real, if $G = \mathrm {Sp}_{2n}(q)$, $\Omega_{2n+1}(q)$, or $\Omega^\epsilon_{4n}(q)$.
\end{enumerate}
\end{lem}
\begin{proof}
(i) Recall that $\mathsf{St}(g) = 0$ if $g \in G$ is not semisimple. Furthermore, $\chi(g) = \overline\chi(g)$ if $g \in G$ is semisimple, by hypothesis. Hence
$$\begin{aligned}
~[\chi^{2k},\mathsf{St}]_G & = \frac{1}{|G|}\sum_{g \in G}\chi(g)^{2k}\overline\mathsf{St}(g)\\
& = \frac{1}{|G|}\sum_{g \in G,~g\mbox{ {\tiny semisimple}}}\chi(g)^{2k}\overline\mathsf{St}(g)\\
& = \frac{1}{|G|}\sum_{g \in G,~g\mbox{ {\tiny semisimple}}}\chi(g)^{k}\overline\chi(g)^k\overline\mathsf{St}(g)\\
& = \frac{1}{|G|}\sum_{g \in G}\chi(g)^k\overline\chi(g)^{k}\overline\mathsf{St}(g) = [(\chi\overline\chi)^k,\mathsf{St}]_G,
\end{aligned}$$
and the claim follows.
\smallskip
(ii) This is well known, see e.g. \cite[Proposition 3.1]{TZ2}.
\end{proof}
\begin{lem}\label{mc-r2}
Let $G = \mathrm {Sp}(V) = \mathrm {Sp}_{2n}(q)$ with $n \geq 3$. Suppose $C \in \mathbb N$ is such that both $\alpha^C$ and $\beta^C$ contain $\mathsf{St}$.
Then for any $1_G \neq \chi \in \mathrm{Irr}(G)$, $\chi^{2C}$ contains $\mathsf{St}$.
\end{lem}
\begin{proof}
In the aforementioned rank $3$ permutation action of $G$ with character $\rho = 1_G+\alpha+\beta$, a point stabilizer
$P$ is the normalizer $\mathbf{N}_G(Z)$ of some long-root subgroup $Z$. Since $n \geq 3$, $Z$ has a nonzero fixed point on any
$\mathbb C G$-module affording $\chi$ by \cite[Theorem 1.6]{T}. It follows that $\chi|_P$ is reducible, and so
\begin{equation}\label{eq:mc1}
2 \leq [\chi|_P,\chi|_P]_P = [\chi\overline\chi,\mathrm{Ind}^G_P(1_P)]_G = [\chi\overline\chi,\rho]_G.
\end{equation}
As $[\chi\overline\chi,1_G]_G = 1$, $\chi\overline\chi$ contains either $\alpha$ or $\beta$, whence $(\chi\overline\chi)^C$ contains $\mathsf{St}$.
Applying Lemma \ref{mc-r1}, we conclude that $\chi^{2C}$ contains $\mathsf{St}$.
\end{proof}
\begin{lem}\label{mc-r3}
Let $G = \Omega(V) = \Omega^\epsilon_{n}(q)$ with $n \geq 5$. Suppose $C \in \mathbb N$ is such that both $\alpha^C$ and $\beta^C$ contain $\mathsf{St}$.
Consider any $1_G \neq \chi \in \mathrm{Irr}(G)$, and suppose in addition that either $n \not\equiv 2 (\bmod\ 4)$, or $\chi = \overline\chi$.
Then $\chi^{4C}$ contains $\mathsf{St}$.
\end{lem}
\begin{proof}
Again we consider a point stabilizer $P=QL$ in the aforementioned rank $3$ permutation action of $G$ with character
$\rho = 1_G+\alpha+\beta$.
Note that $Q$ is elementary abelian, $[L,L] \cong \Omega^\epsilon_{n-2}(q)$, and we can identify $\mathrm{Irr}(Q)$ with the natural module
$\mathbb F_q^{n-2}$ for $[L,L]$. In particular, any $[L,L]$-orbit on $\mathrm{Irr}(Q) \smallsetminus \{1_Q\}$ has length at least $2$. It is also
clear that some irreducible constituent of $\chi|_Q$ is non-principal, since $\mathrm{Ker}(\chi) \leq \mathbf{Z}(G)$ and $Q \not\leq \mathbf{Z}(G)$. It follows
that $\chi|_Q$ is reducible, and so
$$2 \leq [\chi|_Q,\chi|_Q]_Q = [(\chi\overline\chi)|_Q,1_Q]_Q.$$
Since $[\chi\overline\chi,1_G]_G = 1$, at least one non-principal irreducible constituent $\theta$ of $\chi\overline\chi$ contains $1_Q$ on restriction to $Q$. But $P$ normalizes $Q$, so the latter implies that $\theta|_P$ is reducible. Thus
\eqref{eq:mc1} holds for $\theta$ instead of $\chi$. Arguing as in the proof of Lemma \ref{mc-r2}, we obtain that
$\theta\overline\theta$ contains either $\alpha$ or $\beta$, whence $(\chi\overline\chi)^2$ contains either $\alpha$ or $\beta$.
It follows that $(\chi\overline\chi)^{2C}$ contains $\mathsf{St}$, and we are done if $\chi = \overline\chi$.
Applying Lemma \ref{mc-r1}, we also have that $\chi^{4C}$ contains $\mathsf{St}$ in the case $n \not\equiv 2 (\bmod\ 4)$.
\end{proof}
\begin{lem}\label{mc-r4}
Let $G = \Omega(V) = \Omega^\epsilon_{n}(q)$ with $n \geq 10$ and $n \equiv 2 (\bmod\ 4)$. Suppose $C \in \mathbb N$ is such that each of $\alpha^C$,
$\beta^C$, and $\bar{g}_i^C$ contains $\mathsf{St}$. Then for any $\chi \in \mathrm{Irr}(G)$ with $\chi \neq \overline\chi$, $\chi^{4C}$ contains $\mathsf{St}$.
\end{lem}
\begin{proof}
(i) As noted in the proof of Lemma \ref{mc-r3}, $Q$ is elementary abelian, $[L,L] \cong \Omega^\epsilon_{n-2}(q)$, and we can identify
$\mathrm{Irr}(Q)$ with the natural module $\mathbb F_q^{n-2}$ for $[L,L]$. Since $n-2 \geq 8$, it is straightforward to check that any
$[L,L]$-orbit on nonzero vectors of $\mathbb F_q^{n-2}$ contains a vector $v$ and also $-v$.
Thus, any $[L,L]$-orbit on $\mathrm{Irr}(Q) \smallsetminus \{1_Q\}$ contains a character $\lambda$ and also its complex conjugate
$\overline\lambda$. As noted in the proof of Lemma \ref{mc-r3}, $Q \not\leq \mathrm{Ker}(\chi)$. Thus we may assume that $\chi|_Q$ contains
$\lambda$ and also $\overline\lambda$. It follows that
$1 \leq [\chi^2|_Q,1_Q]_Q$. Since $[\chi^2,1_G]_G = [\chi,\overline\chi]_G = 0$, at least one non-principal irreducible constituent
$\theta$ of $\chi^2$ contains $1_Q$ on restriction to $Q$.
In particular, $\theta|_P$ is reducible, since $P$ normalizes
$Q$, and \eqref{eq:mc1} holds for $\theta$ instead of $\chi$, and so the
arguments in the proof of Lemma \ref{mc-r2} show that $\theta\overline\theta$ contains $\alpha$ or
$\beta$. If, moreover, $\theta = \overline\theta$, then we conclude that $\theta^2$ contains $\alpha$ or $\beta$.
\smallskip
(ii) Now consider the case $\theta \neq \overline\theta$, and
let $\theta$ be afforded by a $\mathbb C G$-module $U$. As shown in (i), the $Q$-fixed point subspace $U^Q$ on $U$ is
nonzero, and $L$ acts on $U^Q$. Recall that $4|(n-2)$ and $n-2 \geq 8$. Now, unless $\epsilon = +$ and $q \equiv 3(\bmod\ 4)$, all irreducible characters of $[L,L] \cong \Omega^\epsilon_{n-2}(q)$ are real-valued, and so the $[L,L]$-module $U^Q$ contains an irreducible submodule $W \cong W^*$.
Consider the case $(\epsilon,q) = (+,\equiv 3(\bmod\ 4))$ and let $P = \mathrm{Stab}_G(\langle u \rangle_{\mathbb F_q})$ for
a singular vector $0 \neq u \in V$. We can consider $P$ inside $\tilde P:=\mathrm{Stab}_{\mathrm {SO}(V)}(\langle u \rangle_{\mathbb F_q})=Q\tilde L$,
and find another singular vector $u' \in V$ such that $V = V_1 \oplus V_2$, with $V_1 = \langle u,u' \rangle_{\mathbb F_q}$,
$V_2 = V_1^{\perp}$, and $[L,L] = \Omega(V_2)$. Since $q \equiv 3 (\bmod\ 4)$, $t:=-1_{V_1} \in \mathrm {SO}(V_1) \smallsetminus \Omega(V_1)$.
Choosing some $t' \in \mathrm {SO}(V_2) \smallsetminus \Omega(V_2)$, we see that $tt' \in \tilde L \cap \Omega(V) = L$, and
$L_1 := \langle [L,L],tt' \rangle \cong \mathrm {SO}^+_{n-2}(q)$. By \cite{Gow}, all irreducible characters of $L_1$ are real-valued, and so the $L_1$-module $U^Q$ contains an irreducible submodule $W \cong W^*$.
We have shown that the $[L,L]$-module $U^Q$ contains a nonzero submodule $W \cong W^*$. We can also inflate
$W$ to a nonzero self-dual module over $[P,P] = Q[L,L]$. It follows that $(U \otimes_{\mathbb C} U)|_{[P,P]}$ contains
$W \otimes_{\mathbb C} W^*$, which certainly contains the trivial submodule. Thus, $\theta^2|_{[P,P]}$ contains the principal
character $1_{[P,P]}$, and so
\begin{equation}\label{eq:mc2}
1 \leq [\theta^2,\mathrm{Ind}^G_{[P,P]}(1_{[P,P]})]_G.
\end{equation}
Recall we are assuming that $0 = [\theta,\overline\theta]_G = [\theta^2,1_G]_G$. Hence \eqref{eq:mc2} implies that
$\theta^2$ contains at least one of $\alpha$, $\beta$, or $\bar{g}_i$.
\smallskip
(iii) We have shown that, in all cases, $\theta^2$ contains at least one of $\alpha$, $\beta$, or $\bar{g}_i$. As $\chi^2$ contains $\theta$,
we see that $\chi^4$ contains at least one of $\alpha$, $\beta$, or $\bar{g}_i$, and so $\chi^{4C}$ contains $\mathsf{St}$.
\end{proof}
\subsection{Classical groups in characteristic $2$}
In this subsection we study certain characters of $\tilde G = \mathrm {Sp}(V) = \mathrm {Sp}_{2n}(q)$ and $G = \Omega(V)=\Omega^\epsilon_{2n}(q)$,
where $n \geq 5$ and $2|q$. These results will be used subsequently and are also of independent interest.
First we endow $V$ with a non-degenerate alternating form $(\cdot,\cdot)$, and work with its isometry group
$\tilde G = \mathrm {Sp}(V)$. We will consider the following irreducible characters of $\tilde G$:
$\bullet$ the $q/2+1$ {\it linear-Weil} characters:
$\rho^1_n$ of degree $(q^n+1)(q^n-q)/2(q-1)$, $\rho^2_n$ of
degree $(q^n-1)(q^n+q)/2(q-1)$, and $\tau^i_n$ of degree $(q^{2n}-1)/(q-1)$, $1 \leq i \leq (q-2)/2$, and
$\bullet$ the $q/2+2$ {\it unitary-Weil} characters:
$\alpha_n$ of degree $(q^n-1)(q^n-q)/2(q+1)$, $\beta_n$ of
degree $(q^n+1)(q^n+q)/2(q+1)$, and $\zeta^i_n$ of degree $(q^{2n}-1)/(q+1)$, $1 \leq i \leq q/2$;\\
see \cite[Table 1]{GT}. Then
\begin{equation}\label{eq:dec11}
\rho:=1_{\tilde G}+\rho^1_n+\rho^2_n
\end{equation}
is the rank $3$ permutation character of $\tilde G$ acting on the set of
$1$-spaces of $V$.
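As a quick numerical sanity check (not needed anywhere in the proofs), the degrees just listed are consistent with \eqref{eq:dec11}: $1+\rho^1_n(1)+\rho^2_n(1)$ must equal the number $(q^{2n}-1)/(q-1)$ of $1$-spaces of $V$. A minimal Python verification, with arbitrarily chosen test ranges:
\begin{verbatim}
# Sanity check: 1 + rho1(1) + rho2(1) equals the number of 1-spaces
# of F_q^{2n}, i.e. (q^(2n)-1)/(q-1).  Test ranges are arbitrary.
for q in [2, 4, 8, 16]:
    for n in range(5, 12):
        rho1 = (q**n + 1) * (q**n - q) // (2 * (q - 1))
        rho2 = (q**n - 1) * (q**n + q) // (2 * (q - 1))
        assert 1 + rho1 + rho2 == (q**(2 * n) - 1) // (q - 1)
print("degree identity verified")
\end{verbatim}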
The following statement is well known, see e.g. formula (1) of \cite{FST}:
\begin{lem}\label{quad1}
For $\epsilon = \pm$, the character $\pi^\epsilon$ of the permutation action of $\tilde G$ on quadratic forms of type $\epsilon$ associated to
$(\cdot,\cdot)$ is given as follows:
$$ \pi^+ = 1_{\tilde G} + \rho^2_n + \sum^{(q-2)/2}_{i=1}\tau^i_n,~~~
\pi^- = 1_{\tilde G} + \rho^1_n + \sum^{(q-2)/2}_{i=1}\tau^i_n.$$
\end{lem}
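Since $\pi^++\pi^-$ is the permutation character of $\tilde G$ on the set of all $q^{2n}$ quadratic forms associated to $(\cdot,\cdot)$, Lemma \ref{quad1} can also be sanity-checked on the level of degrees; e.g. in Python (sample ranges only):
\begin{verbatim}
# pi^+(1) + pi^-(1) should equal q^(2n), the total number of quadratic
# forms polarizing to the fixed alternating form.
for q in [2, 4, 8]:
    for n in range(3, 10):
        tau  = (q**(2*n) - 1) // (q - 1)                 # deg of each tau^i_n
        rho1 = (q**n + 1) * (q**n - q) // (2 * (q - 1))
        rho2 = (q**n - 1) * (q**n + q) // (2 * (q - 1))
        pi_plus  = 1 + rho2 + (q - 2) // 2 * tau
        pi_minus = 1 + rho1 + (q - 2) // 2 * tau
        assert pi_plus + pi_minus == q**(2*n)
\end{verbatim}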
Given any $g \in \mathrm {GL}(V)$, let
$$d(x,g):= \dim_{\overline{\mathbb F}_q}\mathrm{Ker}\bigl(g-x \cdot 1_{V \otimes_{\mathbb F_q}\overline{\mathbb F}_q}\bigr)$$
for any $x \in \overline{\mathbb F}_q^\times$, and define the {\it support} of $g$ to be
\[
\mathsf{supp}(g) := \dim(V)-\max_{x \in \overline{\mathbb F}_q^\times}d(x,g).
\]
Set
$$d(g):= \dim(V)-\mathsf{supp}(g).$$
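To make these definitions concrete, here is a small illustrative Python sketch (the example matrix is hypothetical; for simplicity we take $q=p$ prime and a matrix whose eigenvalues all lie in $\mathbb F_p$, so that the maximum defining $\mathsf{supp}(g)$ is attained already on $\mathbb F_p^\times$; in general one has to run over the eigenvalues of $g$ in $\overline{\mathbb F}_q^\times$):
\begin{verbatim}
# d(x,g) = dim ker(g - x*I) over F_p, computed as n - rank(g - x*I);
# supp(g) = dim V - max_x d(x,g).  Requires Python >= 3.8 for pow(a, -1, p).

def rank_mod_p(M, p):
    """Gaussian elimination over F_p; returns the rank of M."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)
        M[rank] = [a * inv % p for a in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def supp(g, p):
    n = len(g)
    best = max(n - rank_mod_p([[g[i][j] - (x if i == j else 0)
                                for j in range(n)] for i in range(n)], p)
               for x in range(1, p))      # x runs over F_p^*
    return n - best

# hypothetical example: diag(1,1,1,2) over F_5 has supp = 1
g = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,2]]
print(supp(g, 5))   # -> 1
\end{verbatim}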
\begin{prop}\label{rat-sp2}
Let $\tilde G = \mathrm {Sp}_{2n}(q)$ with $n \geq 3$ and $2|q$, and let $g \in \tilde G$ have support $s=\mathsf{supp}(g)$. If
$\chi \in \{\rho^1_n,\rho^2_n\}$, then
$$\frac{|\chi(g)|}{\chi(1)} \leq \frac{1}{q^{s/3}}.$$
\end{prop}
\begin{proof}
The statement is obvious if $s=0$. Suppose $s=1$. It is easy to see that in this case $g$ is a transvection, and so
$$\rho^1_n(g) = \rho^2_n(g) = \frac{q^{2n-1}-q}{2(q-1)}$$
by \cite[Corollary 7.8]{GT}, and the statement follows.
From now on we may assume $s \geq 2$. Observe that
$d:=\max_{x \in \mathbb F_q^\times}d(x,g) \leq d(g) = 2n-s$. Hence,
$$0 \leq \rho(g) = \sum_{x \in \mathbb F_q^\times}\frac{q^{d(x,g)}-1}{q-1} \leq q^d-1,$$
and so \eqref{eq:dec11} implies
$$|\rho^1_n(g)+\rho^2_n(g)| \leq q^d-1.$$
On the other hand, since $\pi^\pm(g) \geq 0$ and $\pi^++\pi^-$ is just the permutation character of $\tilde G$ acting on
$V$, Lemma \ref{quad1} implies that
$$|\rho^1_n(g)-\rho^2_n(g)| = |\pi^+(g)-\pi^-(g)| \leq \pi^+(g)+\pi^-(g) = q^{d(1,g)} \leq q^d.$$
It follows for any $i \in \{1,2\}$ that
$$|\rho^i_n(g)| \leq \bigl(|\rho^1_n(g)+\rho^2_n(g)|+|\rho^1_n(g)-\rho^2_n(g)|\bigr)/2 < q^d \leq q^{2n-s}.$$
Since $n \geq 3$, we can also check that
$$\rho^i_n(1) \geq \frac{(q^n+1)(q^n-q)}{2(q-1)} > q^{2n-4/3}.$$
Thus $|\chi(g)|/\chi(1) < q^{4/3-s} \leq q^{-s/3}$, as stated.
\end{proof}
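The lower bound $\rho^i_n(1) > q^{2n-4/3}$ used at the end of the proof is elementary (note that $\rho^1_n$ has the smaller of the two degrees); it can also be confirmed numerically, e.g. by the following throwaway Python loop with arbitrary test ranges:
\begin{verbatim}
for q in [2, 4, 8, 16]:
    for n in range(3, 12):
        deg = (q**n + 1) * (q**n - q) // (2 * (q - 1))   # rho^1_n(1)
        assert deg > q**(2*n - 4/3)
\end{verbatim}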
Next we endow $V = \mathbb F_q^{2n}$ with a non-degenerate quadratic form $\mathsf{Q}$ of type $\epsilon = \pm$ associated to
the alternating form $(\cdot,\cdot)$. Choose a Witt basis $(e_1,\ldots,e_n,f_1, \ldots, f_n)$ for $(\cdot,\cdot)$, such that
$\mathsf{Q}(e_1)=\mathsf{Q}(f_1)=0$. We may assume that $P = \mathrm{Stab}_G(\langle e_1 \rangle_{\mathbb F_q}) = QL$, where
$Q$ is elementary abelian of order $q^{2n-2}$, $L \cong \Omega^\epsilon_{2n-2}(q) \times C_{q-1}$, and
$$[P,P] = \mathrm{Stab}_G(e_1)=Q \rtimes [L,L]$$
has index $(q^n-\epsilon)(q^{n-1}+\epsilon)$ in $G$. Also consider $H := \mathrm{Stab}_G(e_1+f_1)$.
According to \cite[Theorem 1.3]{N}, $G$ has $q+1$
non-principal complex irreducible characters of degree at most $(q^n-\epsilon)(q^{n-1}+\epsilon)$, namely, $\alpha$ of degree
$(q^n-\epsilon)(q^{n-1}+\epsilon q)/(q^2-1)$, $\beta$ of degree $(q^{2n}-q^2)/(q^2-1)$,
$\bar{g}_i$ of degree $(q^n-\epsilon)(q^{n-1}+\epsilon)/(q-1)$, $1 \leq i \leq (q-2)/2$, and $\delta_j$ of degree
$(q^n-\epsilon)(q^{n-1}-\epsilon)/(q+1)$, $1 \leq j \leq q/2$.
\begin{prop}\label{dec-so2}
Let $G = \Omega^\epsilon_{2n}(q)$ with $n \geq 5$ and $2|q$, and consider $P = \mathrm{Stab}_G(e_1)$ and $H = \mathrm{Stab}_G(e_1+f_1)$ as
above. Then the following statements hold.
\begin{enumerate}[\rm(i)]
\item $\mathrm{Ind}^G_P(1_P) = 1_G + \alpha + \beta$.
\item $\mathrm{Ind}^G_{[P,P]}(1_{[P,P]}) = 1_G +\alpha+\beta + 2\sum^{(q-2)/2}_{i=1}\bar{g}_i$.
\item $\mathrm{Ind}^G_H(1_H) = 1_G +\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i+\sum^{q/2}_{j=1}\delta_j$.
\end{enumerate}
\end{prop}
\begin{proof}
(i) is well known. Next, $P/[P,P] \cong C_{q-1}$ has $q-1$ irreducible characters: $1_P$ and $(q-2)/2$ pairs of
$\{\nu_i,\overline\nu_i\}$, $1 \leq i \leq (q-2)/2$. An application of Mackey's formula shows that
$\mathrm{Ind}^G_P(\nu_i) = \mathrm{Ind}^G_P(\overline\nu_i)$ is irreducible for all $i$. Now using (i) we can write
\begin{equation}\label{eq:dec1}
\mathrm{Ind}^G_{[P,P]}(1_{[P,P]}) = \mathrm{Ind}^G_P\bigl( \mathrm{Ind}^P_{[P,P]}(1_{[P,P]}) \bigr) =
1_G+\alpha+\beta + 2\sum^{(q-2)/2}_{i=1}\mathrm{Ind}^G_P(\nu_i).
\end{equation}
On the other hand, note that $[P,P]$ has exactly $2q-1$ orbits on the set of nonzero singular vectors in $V$:
$q-1$ orbits $\{xe_1\}$ with $x \in \mathbb F_q^\times$, one orbit $\{v \in e_1^\perp \smallsetminus \langle e_1 \rangle_{\mathbb F_q} \mid \mathsf{Q}(v)=0\}$, and $(q-1)$ orbits
$\{yf_1 + v \mid v \in e_1^\perp, \mathsf{Q}(yf_1+v) =0\}$ with $y \in \mathbb F_q^\times$. Together with \eqref{eq:dec1}, this implies
that all summands in the last decomposition in \eqref{eq:dec1} are pairwise distinct.
Since $\bar{g}_i(1) = (q^n-\epsilon)(q^{n-1}+\epsilon)/(q-1) = \mathrm{Ind}^G_P(\nu_{i'})(1)$, renumbering the $\nu_i$ if necessary, we may assume
that $\mathrm{Ind}^G_P(\nu_i)=\bar{g}_i$, and (ii) follows.
\smallskip
For (iii), first note that $P$ has two orbits on the set $\mathcal {X} := \{ v \in V \mid \mathsf{Q}(v)=1\}$, namely, $\mathcal {X} \cap e_1^\perp$ and
$\mathcal {X} \smallsetminus e_1^\perp$. Since $\mathrm{Ind}^G_H(1_H)$ is the character of the permutation action of $G$ on $\mathcal {X}$, we get
\begin{equation}\label{eq:dec2}
[\mathrm{Ind}^G_P(1_P),\mathrm{Ind}^G_H(1_H)]_G = 2.
\end{equation}
Next, $[P,P]$ has $q$ orbits on $\mathcal {X}$, namely, $\mathcal {X} \cap e_1^\perp$, and $\{yf_1+w \in \mathcal {X} \mid w \in e_1^\perp\}$ with
$y \in \mathbb F_q^\times$. Thus
\begin{equation}\label{eq:dec3}
[\mathrm{Ind}^G_{[P,P]}(1_{[P,P]}),\mathrm{Ind}^G_H(1_H)]_G = q.
\end{equation}
Combining the results of (i) and (ii) with \eqref{eq:dec2}, \eqref{eq:dec3}, and again using \cite[Theorem 1.3]{N}, we can write
\begin{equation}\label{eq:dec4}
\mathrm{Ind}^G_H(1_H) = 1_G + (a\alpha + b\beta) +\sum^{(q-2)/2}_{i=1}c_i\bar{g}_i + \sum^{q/2}_{j=1}d_j\delta_j,
\end{equation}
where $a,b,c_i,d_j \in \mathbb{Z}_{\geq 0}$, $a+b=1$, $\sum_ic_i = (q-2)/2$.
\smallskip
Let $\tau$ denote the character of the permutation action of $G$ on $V \smallsetminus \{0\}$, so that
$$\tau = \mathrm{Ind}^G_{[P,P]}(1_{[P,P]}) + (q-1)\mathrm{Ind}^G_H(1_H).$$
Note that $G$ has $q^3+q^2-q$ orbits on $(V \smallsetminus \{0\}) \times (V \smallsetminus \{0\})$, namely,
$q(q-1)$ orbits of $(u,xu)$, where $x \in \mathbb F_q^\times$ and $\mathsf{Q}(u) = y \in \mathbb F_q$, and $q^3$ orbits of
$(u,v)$, where $u,v$ are linearly independent and $(\mathsf{Q}(u),(u,v),\mathsf{Q}(v)) = (x,y,z) \in \mathbb F_q^3$. In other words,
$[\tau,\tau]_G = q^3+q^2-q$. Using (ii) and \eqref{eq:dec3}, we deduce that
\begin{equation}\label{eq:dec5}
[\mathrm{Ind}^G_H(1_H),\mathrm{Ind}^G_H(1_H)]_G = q+1.
\end{equation}
In particular, if $q=2$ then $\mathrm{Ind}^G_H(1_H)$ is the sum of $3$ pairwise distinct irreducible characters. By checking the
degrees of $\alpha,\beta$ and $\delta_1$, (iii) immediately follows from \eqref{eq:dec4}.
\smallskip
Now we may assume $q=2^e \geq 4$. Let $\ell_+ = \ell(2^{ne}-1)$ denote a primitive prime divisor of $2^{ne}-1$,
which exists by \cite{Zs}. Likewise, let $\ell_- = \ell(2^{2ne}-1)$ denote a primitive prime divisor of $2^{2ne}-1$.
Then note that $\ell_\epsilon$ divides the degree of each of $\alpha$, $\bar{g}_i$, and $\delta_j$, but divides neither $[G:H]-1$ nor $\beta(1)$.
Hence \eqref{eq:dec4} implies that $(a,b)=(0,1)$. Comparing the degrees in \eqref{eq:dec4}, we also see that
$\sum_jd_j = q/2$. Now
$$q+1 = [\mathrm{Ind}^G_H(1_H),\mathrm{Ind}^G_H(1_H)]_G = 2 + \sum^{(q-2)/2}_{i=1}c_i^2 + \sum^{q/2}_{j=1}d_j^2
\geq 2 + \sum^{(q-2)/2}_{i=1}c_i + \sum^{q/2}_{j=1}d_j = 2 +\frac{q-2}{2}+\frac{q}{2},$$
yielding $c_i^2=c_i$, $d_j^2=d_j$, $c_i,d_j \in \{0,1\}$, and so $c_i = d_j = 1$, as desired.
\end{proof}
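As a consistency check on Proposition \ref{dec-so2}(iii), the degrees on the right-hand side must sum to $[G:H]$, which equals the number $q^{2n-1}-\epsilon q^{n-1}$ of vectors $v \in V$ with $\mathsf{Q}(v)=1$. A short Python verification (sample ranges only):
\begin{verbatim}
for q in [2, 4, 8]:
    for n in range(5, 10):
        for eps in [1, -1]:
            beta  = (q**(2*n) - q**2) // (q**2 - 1)
            gbar  = (q**n - eps) * (q**(n-1) + eps) // (q - 1)
            delta = (q**n - eps) * (q**(n-1) - eps) // (q + 1)
            lhs = 1 + beta + (q - 2)//2 * gbar + q//2 * delta
            assert lhs == q**(2*n - 1) - eps * q**(n - 1)
\end{verbatim}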
In the next statement, we embed $G = \Omega(V)$ in $\tilde G := \mathrm {Sp}(V)$ (the isometry group of the form $(\cdot,\cdot)$ on $V$).
\begin{prop}\label{sp-so1}
Let $n \geq 5$, $2|q$, and $\epsilon = \pm$. Then the characters $\rho^1_n$ and $\rho^2_n$ of $\mathrm {Sp}(V) \cong \mathrm {Sp}_{2n}(q)$
restrict to $G = \Omega(V) \cong \Omega^\epsilon_{2n}(q)$ as follows:
$$\begin{array}{ll}(\rho^1_n)|_{\Omega^+_{2n}(q)} = \beta + \sum^{q/2}_{j=1}\delta_j, &
(\rho^2_n)|_{\Omega^+_{2n}(q)} = 1+\alpha+\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i,\\
(\rho^1_n)|_{\Omega^-_{2n}(q)} = 1+\alpha+\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i, & (\rho^2_n)|_{\Omega^-_{2n}(q)} = \beta + \sum^{q/2}_{j=1}\delta_j.
\end{array}$$
\end{prop}
\begin{proof}
Note by \eqref{eq:dec11}
that $1_G + (\rho^1_n+\rho^2_n)|_G$ is just the character of the permutation action on the set of $1$-spaces of
$V$. Hence, by Proposition \ref{dec-so2} we have
\begin{equation}\label{eq:dec21}
\bigl( \rho^1_n+\rho^2_n \bigr)|_G = \mathrm{Ind}^G_P(1_P) + \mathrm{Ind}^G_H(1_H) -1_G = 1_G +\alpha+2\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i
+ \sum^{q/2}_{j=1}\delta_j.
\end{equation}
Furthermore, Lemma \ref{quad1} implies by Frobenius' reciprocity that
\begin{equation}\label{eq:dec22}
\bigl(\rho^2_n\bigr)|_G \mbox { contains }1_G \mbox { when }\epsilon=+, \mbox{ and }\bigl(\rho^1_n\bigr)|_G \mbox { contains }1_G \mbox { when }\epsilon=-.
\end{equation}
\smallskip
(i) First we consider the case $\epsilon = +$. If $(n,q) \neq (6,2)$, one can find a primitive prime divisor $\ell = \ell(2^{ne}-1)$, where
$q = 2^e$. If $(n,q) = (6,2)$, then set $\ell = 7$. By its choice, $\ell$ divides the degrees of $\rho^2_n$, $\alpha$, $\bar{g}_i$, and
$\delta_j$, but $\beta(1) \equiv \rho^1_n(1) \equiv -1 (\bmod\ \ell)$. Hence, \eqref{eq:dec21} and \eqref{eq:dec22} imply that
$$\bigl(\rho^2_n\bigr)|_G = 1_G +\beta +x\alpha + \sum^{(q-2)/2}_{i=1}y_i\bar{g}_i + \sum^{q/2}_{j=1}z_j\delta_j,$$
where $x,y_i,z_j \in \{0,1\}$. Setting $y:=\sum^{(q-2)/2}_{i=1}y_i$ and $z:=\sum^{q/2}_{j=1}z_j$ and comparing the degrees,
we get
$$(1-x)(q^{n-1}+q)+(q^{n-1}+1)(q+1)((q-2)/2-y) = z(q^{n-1}-1)(q-1),$$
and so $q^{n-1}+1$ divides $(1-x+2z)(q-1)$. Note that $\gcd(q-1,q^{n-1}+1)=1$ and
$0 \leq (1-x+2z)(q-1) \leq q^2-1 < q^{n-1}+1$. It follows that $x=1$, $z=0$, $y=(q-2)/2$, whence $y_i=1$ and $z_j=0$ for all $i,j$, as stated.
\smallskip
(ii) Now let $\epsilon = -$, and choose $\ell$ to be a primitive prime divisor $\ell(2^{2ne}-1)$. By its choice, $\ell$ divides the degrees of $\rho^1_n$, $\alpha$, $\bar{g}_i$, and
$\delta_j$, but $\beta(1) \equiv \rho^2_n(1) \equiv -1 (\bmod\ \ell)$. Hence, \eqref{eq:dec21} and \eqref{eq:dec22} imply that
$$\bigl(\rho^1_n\bigr)|_G = 1_G +\beta +x\alpha + \sum^{(q-2)/2}_{i=1}y_i\bar{g}_i + \sum^{q/2}_{j=1}z_j\delta_j,$$
where $x,y_i,z_j \in \{0,1\}$. Setting $y:=\sum^{(q-2)/2}_{i=1}y_i$ and $z:=\sum^{q/2}_{j=1}z_j$ and comparing the degrees,
we get
$$(1-x)(q^{n-1}-q)+(q^{n-1}-1)(q+1)((q-2)/2-y) = z(q^{n-1}+1)(q-1),$$
and so $(q^{n-1}-1)/(q-1)$ divides $1-x+2z$. Since
$0 \leq 1-x+2z \leq q+1 < (q^{n-1}-1)/(q-1)$, it follows that $x=1$, $z=0$, $y=(q-2)/2$, whence $y_i=1$ and $z_j=0$ for all $i,j$, as stated.
\end{proof}
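Proposition \ref{sp-so1} can likewise be sanity-checked on degrees: for $\epsilon=+$ one needs $\rho^2_n(1) = 1+\alpha(1)+\beta(1)+\sum_i\bar{g}_i(1)$ and $\rho^1_n(1) = \beta(1)+\sum_j\delta_j(1)$, and symmetrically for $\epsilon=-$. In Python (sample ranges only):
\begin{verbatim}
for q in [2, 4, 8]:
    for n in range(5, 10):
        rho1 = (q**n + 1) * (q**n - q) // (2 * (q - 1))
        rho2 = (q**n - 1) * (q**n + q) // (2 * (q - 1))
        for eps, big, small in [(1, rho2, rho1), (-1, rho1, rho2)]:
            alpha = (q**n - eps) * (q**(n-1) + eps*q) // (q**2 - 1)
            beta  = (q**(2*n) - q**2) // (q**2 - 1)
            gbar  = (q**n - eps) * (q**(n-1) + eps) // (q - 1)
            delta = (q**n - eps) * (q**(n-1) - eps) // (q + 1)
            assert big   == 1 + alpha + beta + (q - 2)//2 * gbar
            assert small == beta + q//2 * delta
\end{verbatim}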
For the subsequent discussion, we recall the {\it quasi-determinant} $\kappa_\epsilon: \mathrm {O}_\epsilon \to \{-1,1\}$,
where $\mathrm {O}_\epsilon:= \mathrm{GO}(V) \cong \mathrm{GO}^\epsilon_{2n}(q)$, defined via
$$\kappa_\epsilon(g) := (-1)^{\dim_{\mathbb F_q}\mathrm{Ker}(g-1_V)}.$$
It is known, see e.g. \cite[Lemma 5.8(i)]{GT}, that $\kappa_\epsilon$ is a group homomorphism, with
\begin{equation}\label{eq:kappa1}
\mathrm{Ker}(\kappa_\epsilon) = \Omega_\epsilon:= \Omega(V) \cong \Omega^\epsilon_{2n}(q).
\end{equation}
Now we prove the ``unitary'' analogue of Lemma \ref{quad1}:
\begin{lem}\label{quad2}
For $n \geq 3$ and $2|q$, the following decompositions hold:
$$\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+) = \beta_n + \sum^{q/2}_{i=1}\zeta^i_n,~~~
\mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-) = \alpha_n + \sum^{q/2}_{i=1}\zeta^i_n.$$
\end{lem}
\begin{proof}
According to formulae (10) and (4)--(6) of \cite{GT},
\begin{equation}\label{eq:dec31}
\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)+ \mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-) = \alpha_n+\beta_n + 2\sum^{q/2}_{i=1}\zeta^i_n.
\end{equation}
Hence we can write
\begin{equation}\label{eq:dec32}
\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+) = x\alpha_n+y\beta_n + \sum^{q/2}_{i=1}z_i\zeta^i_n,
\end{equation}
where $x,y,z_i \in \mathbb{Z}_{\geq 0}$, $x,y \leq 1$ and $z_i \leq 2$.
Note that, since $\pi^+= \mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(1_{\mathrm {O}_+})$, Lemma \ref{quad1} implies that
$$|\mathrm {O}_+ \backslash \tilde G/\mathrm {O}_+| = \frac{q}{2}+1.$$
Next, by Mackey's formula we have
$$[\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+),\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)]_{\tilde G} = \sum_{\mathrm {O}_+t\mathrm {O}_+ \in \mathrm {O}_+ \backslash \tilde G/\mathrm {O}_+}
[(\kappa_+)|_{\mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}},(\kappa^t_+)|_{\mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}}]_{\mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}},$$
where $\kappa^t_+(x) = \kappa_+(x^t) := \kappa_+(t^{-1}xt)$ for any $x \in \mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}$. For such an $x$, note that
\begin{equation}\label{eq:dec321}
\kappa_+(x) = 1 \Leftrightarrow 2 | \dim_{\mathbb F_q}\mathrm{Ker}(x-1_V) \Leftrightarrow 2 | \dim_{\mathbb F_q}\mathrm{Ker}(x^{t}-1_V) \Leftrightarrow
(\kappa_+)^t(x) = 1,
\end{equation}
i.e. $\kappa_+(x) = \kappa^t_+(x)$. It follows that
\begin{equation}\label{eq:dec33}
x^2+y^2+\sum^{q/2}_{i=1}z_i^2=
[\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+),\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)]_{\tilde G} = |\mathrm {O}_+ \backslash \tilde G/\mathrm {O}_+| = \frac{q}{2}+1.
\end{equation}
On the other hand, equating the character degrees in \eqref{eq:dec32} we obtain
\begin{equation}\label{eq:dec34}
\frac{q^n(q^n+1)}{2} = x\frac{(q^n-1)(q^n-q)}{2(q+1)}+y\frac{(q^n+1)(q^n+q)}{2(q+1)}+\sum^{q/2}_{i=1}z_i \cdot
\frac{q^{2n}-1}{q+1}.
\end{equation}
We claim that $x=0$. Indeed, if $(n,q) = (3,2)$, then \eqref{eq:dec34} implies that $3|x$, and so $x=0$ as $0 \leq x \leq 1$.
Assume $(n,q) \neq (3,2)$. Then we can find a primitive prime divisor $\ell = \ell(2^{2ne}-1)$ for $q = 2^e$, and note
from \eqref{eq:dec34} that $\ell|x$. Since $\ell > 2$ and $x \in \{0,1\}$, we again have $x=0$.
Now if $y=0$, then \eqref{eq:dec34} implies that $q^n(q^n+1)/2$ is divisible by $(q^{2n}-1)/(q+1)$, a contradiction. Hence
$y=1$, and from \eqref{eq:dec34} we obtain that $\sum^{q/2}_{i=1}z_i = q/2$. On the other hand,
$\sum^{q/2}_{i=1}z_i^2 = q/2$ by \eqref{eq:dec33}. Thus $\sum^{q/2}_{i=1}(z_i-1)^2 = 0$, and so $z_i = 1$ for all
$i$. Together with \eqref{eq:dec31}, this yields the two stated decompositions.
\end{proof}
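With the values $x=0$, $y=1$, $z_i=1$ just obtained, \eqref{eq:dec34} reduces to the identity $q^n(q^n+1)/2 = \beta_n(1)+(q/2)(q^{2n}-1)/(q+1)$, which is easily confirmed numerically (test ranges arbitrary):
\begin{verbatim}
for q in [2, 4, 8, 16]:
    for n in range(3, 12):
        beta_n = (q**n + 1) * (q**n + q) // (2 * (q + 1))
        zeta   = (q**(2*n) - 1) // (q + 1)
        assert q**n * (q**n + 1) // 2 == beta_n + (q // 2) * zeta
\end{verbatim}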
\begin{prop}\label{sp-so2}
Let $n \geq 5$, $2|q$, and $\epsilon = \pm$. Then the characters $\alpha_n$ and $\beta_n$ of $\mathrm {Sp}(V) \cong \mathrm {Sp}_{2n}(q)$
restrict to $G = \Omega(V) \cong \Omega^\epsilon_{2n}(q)$ as follows:
$$\begin{array}{ll}(\alpha_n)|_{\Omega^+_{2n}(q)} = \sum^{q/2}_{j=1}\delta_j, &
(\beta_n)|_{\Omega^+_{2n}(q)} = 1+\alpha + \sum^{(q-2)/2}_{i=1}\bar{g}_i,\\
(\alpha_n)|_{\Omega^-_{2n}(q)} = 1+\alpha + \sum^{(q-2)/2}_{i=1}\bar{g}_i, & (\beta_n)|_{\Omega^-_{2n}(q)} = \sum^{q/2}_{j=1}\delta_j.
\end{array}$$
In particular, the following formula holds for the irreducible character $\beta$ of $G$ of degree $(q^{2n}-q^2)/(q^2-1)$:
$$\bigl( (\rho^1_n+\rho^2_n)-(\alpha_n+\beta_n)\bigr)|_{\Omega^\epsilon_{2n}(q)} = 2\beta.$$
\end{prop}
\begin{proof}
By Mackey's formula,
$$\bigl(\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)\bigr)|_G = \sum_{Gt\mathrm {O}_+\in G \backslash \tilde G/\mathrm {O}_+}
\mathrm{Ind}^G_{G \cap t\mathrm {O}_+t^{-1}}\bigl((\kappa^t_+)|_{G \cap t\mathrm {O}_+t^{-1}}\bigr),$$
and similarly for $\pi^+=\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(1_{\mathrm {O}_+})$.
The argument in \eqref{eq:dec321} shows that $\kappa^t_+(x)=1$ for all $x \in G \cap t\mathrm {O}_+t^{-1}$, and so
$\pi^+$ and $\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)$ agree on $G$. Similarly,
$\pi^-$ and $\mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-)$ agree on $G$. It then follows from Lemmas \ref{quad1} and
\ref{quad2} that
\begin{equation}\label{eq:dec41}
\bigl( \rho^2_n-\rho^1_n\bigr)|_G = \bigl( \pi^+-\pi^-\bigr)|_G =
\bigl( \mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)- \mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-) \bigr)|_G = \bigl( \beta_n-\alpha_n\bigr)|_G.
\end{equation}
First assume that $\epsilon=+$. Then using Proposition \ref{sp-so1} and \eqref{eq:dec41} we get
$$\bigl( \beta_n-\alpha_n\bigr)|_G= 1_G+\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i-\sum^{q/2}_{j=1}\delta_j,$$
i.e.
$$\sum^{q/2}_{j=1}\delta_j+(\beta_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i + (\alpha_n)|_G.$$
Aside from $(\alpha_n)|_G$ and $(\beta_n)|_G$, all the other characters in the above equality are irreducible and
pairwise distinct. It follows that $(\alpha_n)|_G$ contains $\sum^{q/2}_{j=1}\delta_j$. Comparing the degrees, we
see that
$$(\alpha_n)|_G = \sum^{q/2}_{j=1}\delta_j,$$
which then implies that
$$(\beta_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i.$$
Now assume that $\epsilon=-$. Then again using Proposition \ref{sp-so1} and \eqref{eq:dec41} we get
$$\bigl( \alpha_n-\beta_n\bigr)|_G= 1_G+\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i-\sum^{q/2}_{j=1}\delta_j,$$
i.e.
$$\sum^{q/2}_{j=1}\delta_j+(\alpha_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i + (\beta_n)|_G.$$
Aside from $(\alpha_n)|_G$ and $(\beta_n)|_G$, all the other characters in the above equality are irreducible and
pairwise distinct. It follows that $(\beta_n)|_G$ contains $\sum^{q/2}_{j=1}\delta_j$. Comparing the degrees, we
see that
$$(\beta_n)|_G = \sum^{q/2}_{j=1}\delta_j,$$
which then implies that
$$(\alpha_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i.$$
For both $\epsilon = \pm$, the last statement now follows from \eqref{eq:dec21}.
\end{proof}
Proposition \ref{sp-so2} leads to the following explicit
formula for $\beta$, which we will show to hold for special orthogonal groups in all characteristics, and which
is of independent interest.
In this result, we let $V = \mathbb F_q^n$ be a quadratic space, $L := \mathrm {SO}(V)$ if $2 \nmid q$,
$L := \Omega(V)$ if $2|q$, and extend the action of $L$ on $V$ to $\tilde V := V \otimes_{\mathbb F_q}\mathbb F_{q^2}$, and we assume
$2 \nmid q$ if $2 \nmid n$. Also, set
$$\mu_{q-1}:= \mathbb F_q^\times,~~\mu_{q+1} := \{ x \in \mathbb F_{q^2}^\times \mid x^{q+1} = 1 \}.$$
If $2 \nmid q$, let $\chi_2^+$ be the unique linear character of order $2$ of $\mu_{q-1}$, and let $\chi_2^-$ be the unique linear character of order $2$ of $\mu_{q+1}$.
\begin{thm}\label{beta-so2}
Let $n \geq 10$, $\epsilon = \pm$, and let $q$ be any prime power. If $2|n$, let $\psi = \beta$ be the irreducible constituent of
degree $(q^{n}-q^2)/(q^2-1)$ of the rank $3$ permutation character of $L = \Omega(V)$ when $2|q$, and
of $L = \mathrm {SO}(V)$ when $2 \nmid q$, on the set of singular
$1$-spaces of its natural module $V=\mathbb F_q^{n}$. If $2 \nmid qn$, let $\psi$ be the irreducible character of
$L = \mathrm {SO}(V)$ of degree $(q^n-q)/(q^2-1)$ denoted by $D_{\mathsf{St}}$ in \cite[Proposition 5.7]{LBST}. Then for any $g \in L$ we have
$$\psi(g) = \frac{1}{2(q-1)}\sum_{\lambda \in \mu_{q-1}}q^{\dim_{\mathbb F_q}\mathrm{Ker}(g - \lambda \cdot 1_V)} -
\frac{1}{2(q+1)}\sum_{\lambda \in \mu_{q+1}}(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g - \lambda \cdot 1_{\tilde V})}
-1$$
when $2|n$, and
$$\psi(g) = \frac{1}{2(q-1)}\sum_{\lambda \in \mu_{q-1}}\chi_2^+(\lambda)q^{\dim_{\mathbb F_q}\mathrm{Ker}(g - \lambda \cdot 1_V)} +
\frac{1}{2(q+1)}\sum_{\lambda \in \mu_{q+1}}\chi^-_2(\lambda)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g - \lambda \cdot 1_{\tilde V})}
$$
when $2 \nmid qn$.
\end{thm}
\begin{proof}
In the case $2|q$, the statement follows from the last formula in Proposition \ref{sp-so2}, together with formulae (3) and (6) of \cite{GT}. Assume now that $2 \nmid q$, and set $\kappa := 1$ if $2|n$ and $\kappa := 0$ if $2 \nmid n$.
By \cite[Proposition 5.7]{LBST} (and in the notation of \cite[\S5.1]{LBST}),
$$\psi(g)=\frac{1}{|\mathrm {Sp}_2(q)|}\sum_{x \in \mathrm {Sp}_2(q)}\omega_{n}(xg)\mathsf{St}(x)-\kappa,$$
where $\omega_{n}$ denotes a reducible Weil character of $\mathrm {Sp}_{2n}(q)$ and $\mathsf{St}$ denotes the Steinberg character of
$S:= \mathrm {Sp}_2(q)$.
If $x \in S$ is not semisimple, then $\mathsf{St}(x) = 0$.
Suppose $x = \mathrm{diag}(\lambda,\lambda^{-1}) \in T_1 <S$, where $T_1 \cong C_{q-1}$ is a split torus and $\lambda \in \mu_{q-1}$.
In this case, we can view $T_1$ as $\mathrm {GL}_1(q)$, embed $G$ in $\mathrm {GL}_{n}(q)$, and view $xg$ as
an element $h=\lambda g$ in a Levi subgroup $\mathrm {GL}_{n}(q)$ of $\mathrm {Sp}_{2n}(q)$, with $\det(h) = \lambda^{n}$. It follows from
\cite[Theorem 2.4(c)]{Ge} that
$$\omega_{n}(xg) = \chi_2^+(\lambda^n) q^{\dim_{\mathbb F_q}\mathrm{Ker}(h-1)} = \chi_2^+(\lambda^n) q^{\dim_{\mathbb F_q}\mathrm{Ker}(g-\lambda^{-1})}.$$
If $\lambda \neq \pm 1$, then $|x^S| = q(q+1)$ and $\mathsf{St}(x) = 1$. If $\lambda = \pm 1$, then $|x^S| = 1$ and $\mathsf{St}(x)=q$.
Note that since $g \in \mathrm{GO}(V)$,
$$\dim_{\mathbb F_q}\mathrm{Ker}(g-\lambda^{-1}) = \dim_{\mathbb F_q}\mathrm{Ker}(\tw t g-\lambda^{-1}) = \dim_{\mathbb F_q}\mathrm{Ker}(g^{-1}-\lambda^{-1}) = \dim_{\mathbb F_q}\mathrm{Ker}(g-\lambda).$$
We also note that since $g \in \mathrm {SO}(V)$,
\begin{equation}\label{eq:kappa2}
\dim_{\mathbb F_q}\mathrm{Ker}(g-1_V) \equiv n (\bmod\ 2),~~ \dim_{\mathbb F_q}\mathrm{Ker}(g+1_V) \equiv 0 (\bmod\ 2).
\end{equation}
(Indeed, since $\det(g)=1$, each of $\mathrm{Ker}(g_s-1_V)$ and $\mathrm{Ker}(g_s+1_V)$ is a non-degenerate subspace of $V$ if nonzero, where $g=g_sg_u$ is the Jordan decomposition; furthermore, $2|\dim_{\mathbb F_q}\mathrm{Ker}(g_s+1_V)$ and
$\dim_{\mathbb F_q}\mathrm{Ker}(g_s-1_V) \equiv n (\bmod\ 2)$. Hence the claim reduces to the unipotent case $g=g_u$. In the latter case,
the number of Jordan blocks of $g_u$ of each even size is even, see \cite[\S13.1]{Car}, and the claim follows.)
Suppose $x = \mathrm{diag}(\mu,\mu^{-1}) \in T_2 <S$, where $T_2 \cong C_{q+1}$ is a non-split torus and
$\mu \in \mu_{q+1}$ with $\mu \neq \pm 1$. Then $\mathsf{St}(x) = -1$ and $|x^S| = q(q-1)$. In this case, we can view $T_2$ as
$\mathrm {GU}_1(q)$, embed $G$ in $\mathrm {GU}_{n}(q)$, and view $xg$ as
an element $h=\mu g$ in a subgroup $\mathrm {GU}_{n}(q)$ of $\mathrm {Sp}_{2n}(q)$, with $\det(h) = \mu^{n}$. It follows from
\cite[Theorem 3.3]{Ge} that
$$\omega_{n}(xg) = (-1)^n\chi_2^-(\mu^n)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(h-1)} = (-1)^n\chi_2^-(\mu^n)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g-\mu^{-1})}.$$
Altogether, we have shown that
\begin{equation}\label{eq:dec51}
\begin{aligned}\psi(g) & = \frac{1}{q^2-1}\bigl(q^{\dim_{\mathbb F_q}\mathrm{Ker}(g-1)}+\chi^+_2((-1)^n)q^{\dim_{\mathbb F_q}\mathrm{Ker}(g+1)}\bigr)\\
& +\frac{1}{2(q-1)}\sum_{\lambda \in \mu_{q-1} \smallsetminus \{\pm 1\}}\chi^+_2(\lambda^n)q^{\dim_{\mathbb F_q}\mathrm{Ker}(g -\lambda)}\\
& - \frac{(-1)^n}{2(q+1)}\sum_{\mu \in \mu_{q+1} \smallsetminus \{\pm 1\}}\chi^-_2(\mu^n)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g-\mu)}
-\kappa,\end{aligned}
\end{equation}
and the statement now follows if we use \eqref{eq:kappa2}.
\end{proof}
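As an elementary consistency check on Theorem \ref{beta-so2}, evaluating the first formula at $g=1$ (where $\dim\mathrm{Ker}(g-\lambda\cdot 1)$ is $n$ for $\lambda=1$ and $0$ for the remaining $\lambda$) must return $\psi(1)=(q^n-q^2)/(q^2-1)$ when $2|n$. A short exact-arithmetic check in Python (sample values of $q$ and even $n$ only):
\begin{verbatim}
from fractions import Fraction

# psi(1) from the first formula of the theorem: for g = 1 the kernel
# has dimension n when lambda = 1, and 0 for all other lambda.
def psi_at_identity(q, n):
    s1 = Fraction(q**n + (q - 2), 2 * (q - 1))   # sum over mu_{q-1}
    s2 = Fraction((-q)**n + q, 2 * (q + 1))      # sum over mu_{q+1}, n even
    return s1 - s2 - 1

for q in [3, 5, 9]:                              # sample odd prime powers
    for n in [10, 12, 14]:                       # sample even dimensions
        assert psi_at_identity(q, n) == Fraction(q**n - q**2, q**2 - 1)
\end{verbatim}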
\subsection{Some character estimates}
\begin{prop}\label{rat-so21}
Let $q$ be any prime power, $G = \Omega^\epsilon_{2n}(q)$ with $n \geq 5$, $\epsilon=\pm$,
and let $g \in G$ have support $s=\mathsf{supp}(g)$.
Assume that $\chi \in \{\alpha,\beta\}$ if $2 \nmid q$, and $\chi \in \{\alpha,\beta,\bar{g}_i\}$ if $2|q$. Then
$$\frac{|\chi(g)|}{\chi(1)} \leq \frac{1}{q^{s/3}}.$$
\end{prop}
\begin{proof}
(i) First we consider the case $s \geq n \geq 5$. Then
\begin{equation}\label{eq:rb1}
d(x,g) \leq 2n-s
\end{equation}
for any $x \in \overline{\mathbb F}_q^\times$. In particular,
\begin{equation}\label{eq:rb2}
0 \leq \rho(g) \leq \sum_{x \in \mathbb F_q^\times}\frac{q^{d(x,g)}-1}{q-1} \leq q^{2n-s}-1.
\end{equation}
Now, (when $2|q$) part (i) of the proof of Proposition \ref{dec-so2} shows that
$\bar{g}_i = \mathrm{Ind}^G_P(\nu_j)$ for some linear character $\nu_j$ of
$P$, and recall that $\rho = \mathrm{Ind}^G_P(1_P)$. It follows that
$$|\bar{g}_i(g)| \leq |\rho(g)| \leq q^{2n-s}-1,$$
and so $|\bar{g}_i(g)/\bar{g}_i(1)| < 1/q^{s-2} \leq q^{-3s/5}$ as $\bar{g}_i(1) = [G:P] > q^{2n-2}$. Next, using Theorem \ref{beta-so2} and
\eqref{eq:rb1} we also see that
\begin{equation}\label{eq:rb3}
|\beta(g)+1| \leq \frac{1}{2(q-1)}\sum_{x \in \mathbb F_q^\times}q^{d(x,g)}
+ \frac{1}{2(q+1)}\sum_{x \in \overline{\mathbb F}_q^\times,x^{q+1}=1}q^{d(x,g)} \leq q^{2n-s}.
\end{equation}
In particular, $|\beta(g)| \leq q^{2n-s}+1$. Since $\beta(1) = (q^{2n}-q^2)/(q^2-1)$, we deduce that
$|\beta(g)/\beta(1)| < q^{-3s/5}$. Furthermore, as $\alpha(g) = \rho(g)-(\beta(g)+1)$, we obtain from \eqref{eq:rb2}--\eqref{eq:rb3} that
$$|\alpha(g)| \leq 2q^{2n-s}-1.$$
If $s \geq 6$, then it follows that $|\alpha(g)/\alpha(1)| < q^{4-s} \leq q^{-s/3}$, since $\alpha(1) > q^{2n-3}$. Suppose that $s=n=5$.
Then we can strengthen \eqref{eq:rb3} to
$$\frac{-2q^5-(q-1)q^3}{2(q+1)} \leq \beta(g)+1 \leq q^5.$$
Together with \eqref{eq:rb2}, this implies that
$$|\alpha(g)| = |\rho(g)-(\beta(g)+1)| < q^5+q^4 < \alpha(1)/q^{s/3}$$
since $\alpha(1) \geq (q^5+1)(q^4-q)/(q^2-1)$.
\smallskip
(ii) From now on we may assume that $s \leq n-1$. As $g \in G=\Omega^\epsilon_{2n}(q)$, it follows that $d(z,g) = 2n-s$ for a unique
$z \in \{1,-1\}$. Furthermore, $2|s$. (Indeed, this has been recorded in \eqref{eq:kappa1} when $2|q$, and
in \eqref{eq:kappa2} when $2 \nmid q$.) We also have that
\begin{equation}\label{eq:rb4}
d(x,g) \leq 2n-d(z,g) =s
\end{equation}
for all $x \in \overline{\mathbb F}_q^\times \smallsetminus \{z\}$.
Assume in addition that $s \geq 4$. Using \eqref{eq:rb4}
we obtain
\begin{equation}\label{eq:rb5}
0 \leq \rho(g) \leq \frac{q^{2n-s}-1+(q-2)(q^s-1)}{q-1}.
\end{equation}
As $\rho(1)=(q^n-\epsilon)(q^{n-1}+\epsilon)/(q-1)$, it follows that $|\rho(g)/\rho(1)| < q^{-3s/5}$. As above, the same bound also applies
to $\chi=\bar{g}_i$ when $2|q$.
Next, since $2|s$, using Theorem \ref{beta-so2} and applying \eqref{eq:rb4} to $x^{q \pm 1} = 1$ and $x \neq z$, we have that
\begin{equation}\label{eq:rb6}
\frac{q^{2n-s}}{q^2-1}-q^s \cdot \frac{q}{2(q+1)} \leq \beta(g)+1 \leq \frac{q^{2n-s}}{q^2-1}+ q^s \cdot \biggl( \frac{q-2}{2(q-1)} +
\frac{q}{2(q+1)} \biggr);
\end{equation}
in particular,
$$|\beta(g)| < \frac{q^{2n-s}+q^s(q^2-q-1)}{q^2-1}.$$
Since $\beta(1) = (q^{2n}-q^2)/(q^2-1)$, we obtain that $|\beta(g)/\beta(1)| < q^{-4s/5}$. Furthermore, using
\eqref{eq:rb5}--\eqref{eq:rb6}, we can bound
$$|\alpha(g)| = |\rho(g)-(\beta(g)+1)| < \frac{q^{2n-s+1}+q^s(3q^2-3q-4)/2}{q^2-1} <\frac{\alpha(1)}{q^{2s/5}}$$
since $\alpha(1) \geq (q^n+1)(q^{n-1}-q)/(q^2-1)$.
\smallskip
(iii) Since the statement is obvious for $s=0$, it remains to consider the case $s=2$, i.e. $d(1,zg) = 2n-2$.
Using \cite[Lemma 4.9]{TZ1}, one can readily show that $g$ fixes an orthogonal decomposition $V = U \oplus U^\perp$, with
$U \subset \mathrm{Ker}(g-z \cdot 1_V)$ being non-degenerate of dimension $2n-4$, and
\begin{equation}\label{eq:rb7}
\dim_{\mathbb F_q}(U^\perp)^{zg} = 2.
\end{equation}
First we estimate $\rho(g)$. Suppose $g(v) = tv$ for some singular $0 \neq v \in V$ and $t \in \mathbb F_q^\times$. If $t \neq z$,
then $v \in U^\perp$, and \eqref{eq:rb7} implies that $g$ fixes at most $q+1$ such singular $1$-spaces
$\langle v \rangle_{\mathbb F_q}$. Likewise, $g$ fixes at most $q+1$ singular $1$-spaces $\langle v \rangle_{\mathbb F_q} \subset U^\perp$
with $g(v) = zv$. Assume now that $g(v) = zv$ with $v = u+u'$, $0 \neq u \in U$ and $u' \in U^\perp$.
As $0 = \mathsf{Q}(v) = \mathsf{Q}(u)+\mathsf{Q}(u')$, the total number of such $v$ is
$$N:=\sum_{x \in \mathbb F_q}|\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \cdot |\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}|.$$
Note that, since $U$ is a non-degenerate quadratic space of dimension $2n-4$,
$$(q^{n-2}+1)(q^{n-3}-1) \leq |\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \leq (q^{n-2}-1)(q^{n-3}+1)$$
for any $x \in \mathbb F_q$. On the other hand, \eqref{eq:rb7} implies that
$$\sum_{x \in \mathbb F_q}|\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}| = |(U^\perp)^{zg}| = q^2.$$
It follows that
$$q^2(q^{n-2}+1)(q^{n-3}-1) \leq N \leq q^2(q^{n-2}-1)(q^{n-3}+1),$$
and so
\begin{equation}\label{eq:rb8}
\frac{q^2(q^{n-2}+1)(q^{n-3}-1)}{q-1} \leq \rho(g) \leq 2q+2+\frac{q^2(q^{n-2}-1)(q^{n-3}+1)}{q-1}.
\end{equation}
In particular, when $2|q$ we have $|\bar{g}_i(g)| \leq |\rho(g)| < \rho(1)/q^{4s/5}$.
Next, applying \eqref{eq:rb6} to $s=2$ we have
$$|\beta(g)| \leq \frac{q^{2n-2}+q^2(q^2-q-1)}{q^2-1} < \frac{\beta(1)}{q^{4s/5}}.$$
Finally, using \eqref{eq:rb6} with $s=2$ and \eqref{eq:rb8}, we obtain
$$|\alpha(g)| = |\rho(g)-(\beta(g)+1)| < \frac{q^{2n-3}+q^{n+1}-q^{n-1}}{q^2-1}+(q+1) <\frac{\alpha(1)}{q^{3s/5}}.$$
\end{proof}
\begin{prop}\label{rat-sp-so22}
Let $q$ be any odd prime power, $n \geq 5$, and $\epsilon=\pm$.
Assume that $\chi \in \mathrm{Irr}(G)$, where either $G \in \{\mathrm {Sp}_{2n}(q), \Omega_{2n+1}(q)\}$ and $\chi \in \{\alpha,\beta\}$, or
$G = \Omega^\epsilon_{2n}(q)$ and $\chi \in \{\alpha,\beta,\bar{g}_i\}$. If $g \in G$ has support $s=\mathsf{supp}(g)$, then
$$\frac{|\chi(g)|}{\chi(1)} \leq \frac{1}{q^{s/3}}.$$
\end{prop}
\begin{proof}
(i) As usual, we may assume $s \geq 1$.
First we consider the case $G = \Omega^\epsilon_{2n}(q)$. Then \cite[Corollary 5.14]{NT} and \cite[Proposition 5.7]{LBST} show
(in their notation) that $\alpha=D_{1}-1_G$, $\beta = D_{\mathsf{St}}-1_G$. Furthermore if $\nu \neq 1_P$ is a linear character of $P$, then
$\mathrm{Ind}^G_P(\nu) = D_{\chi_j}$ if $\nu$ has order $>2$, and $\mathrm{Ind}^G_P(\nu) = D_{\xi_1}+D_{\xi_2}$ if $\nu$ has order $2$.
If $\chi = \alpha$ or $\beta$, then the statement is already proved in Proposition \ref{rat-so21}, whose proof also applies to
the case $\chi=\bar{g}_i = D_{\chi_j}$ (using the estimate $|\mathrm{Ind}^G_P(\nu)(g)| \leq \rho(g)$).
It remains to consider the case $\chi = \bar{g}_i = D_{\xi_j}$ for $j = 1,2$. Again the previous argument applied to
$\nu$ of order $2$ shows that
$$|D_{\xi_1}(g)+D_{\xi_2}(g)| \leq \frac{[G:P]}{q^{3s/5}} = \frac{2\chi(1)}{q^{3s/5}}.$$
On the other hand, the formula for $D_\alpha$ in \cite[Lemma 5.5]{LBST}, the character table of $\mathrm {SL}_2(q)$ \cite[Theorem 38.1]{D},
and part 1) of the proof of \cite[Proposition 5.11]{LBST} imply that
\begin{equation}\label{eq:rb21}
|D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1)q^n \cdot\sqrt{q}}{q(q^2-1)} = 2q^{n-1/2}.
\end{equation}
If $4 \leq s \leq 2n-2$, then since $\chi(1) \geq (q^n+1)(q^{n-1}-1)/2(q-1)> q^{2n-3}(q+1)/2$ it follows that
$$\begin{aligned}|\chi(g)| & \leq \bigl(|D_{\xi_1}(g)+D_{\xi_2}(g)|+|D_{\xi_1}(g)-D_{\xi_2}(g)|\bigr)/2 \\
& \leq \frac{\chi(1)}{q^{3s/5}}+q^{n-1/2}
< \frac{\chi(1)}{q^{3s/5}} + \frac{2\chi(1)}{q^{s/3-1/6}(q+1)} < \frac{\chi(1)}{q^{s/3}}.\end{aligned}$$
If $1 \leq s \leq 3$, then $s < n$, and so $2|s$ as shown in part (ii) of the proof of Proposition \ref{rat-so21}. Hence $s=2$, and
we again have
$$|\chi(g)| \leq \frac{\chi(1)}{q^{3s/5}}+q^{n-1/2} < \frac{\chi(1)}{q^{3s/5}} + \frac{2\chi(1)}{q^{s/3+17/6}} <
\frac{\chi(1)}{q^{s/3}}.$$
Finally, if $s=2n-1$, then $d(x,g) \leq 1$ for all $x \in \overline{\mathbb F}_q^\times$ by \eqref{eq:rb1}; moreover,
$d(\pm 1,g) = 0$. Hence, instead of \eqref{eq:rb21} we now have the stronger bound
$$|D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1) \cdot\sqrt{q}}{q(q^2-1)} = 2q^{-1/2},$$
whence $|\chi(g)| \leq \chi(1)q^{-3s/5}+q^{-1/2} < \chi(1)q^{-s/3}$.
\smallskip
(ii) Next we consider the case $G = \Omega_{2n+1}(q)$. Then \cite[Corollary 5.15]{NT} and \cite[Proposition 5.7]{LBST} show
(in their notation) that $\alpha=D_{\xi_1}-1_G$, $\beta = D_{\xi_2}-1_G$. Again using the formula for $D_\alpha$ in \cite[Lemma 5.5]{LBST}, the character table of $\mathrm {SL}_2(q)$ \cite[Theorem 38.1]{D},
and part 1) of the proof of \cite[Proposition 5.11]{LBST}, we obtain that
\begin{equation}\label{eq:rb22}
|\alpha(g)-\beta(g)| = |D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1)q^{n+1/2} \cdot\sqrt{q}}{q(q^2-1)} = 2q^n.
\end{equation}
Suppose in addition that $3 \leq s \leq 2n-2$. Since $d(x,g) \leq 2n+1-s$ by \eqref{eq:rb1}, we have that
$$0 \leq \rho(g)=1+\alpha(g)+\beta(g) \leq \sum_{x \in \mu_{q-1}}\frac{q^{d(x,g)}-1}{q-1} \leq q^{2n+1-s}.$$
As $\chi(1) \geq (q^n+1)(q^n-q)/2(q-1)$, it follows that
$$|\alpha(g)+\beta(g)| \leq q^{2n+1-s}-1 < \frac{2(1-1/q)q^{2-s}\chi(1)}{(1+1/q^n)(1-1/q^{n-1})} <
\frac{2(1-1/q)\chi(1)}{q^{s/3}(1-1/q^{n-1})}.$$
On the other hand, \eqref{eq:rb22} implies that
$$|\alpha(g)-\beta(g)| \leq \frac{4(1-1/q)\chi(1)}{q^{(s+4)/3}(1-1/q^{n-1})},$$
and so
$$\frac{|\chi(g)|}{\chi(1)} < \frac{(1-1/q)}{q^{s/3}(1-1/q^{n-1})}+ \frac{2(1-1/q)}{q^{(s+4)/3}(1-1/q^{n-1})} < \frac{1}{q^{s/3}}.$$
If $s=2n-1$ or $2n$, then $d(x,g) \leq 2$ for all $x \in \overline{\mathbb F}_q^\times$ by \eqref{eq:rb1}.
Hence, instead of \eqref{eq:rb22} we now have the stronger bound
$$|\alpha(g)-\beta(g)|=|D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1)q^2 \cdot\sqrt{q}}{q(q^2-1)} = 2q^{3/2},$$
whence
$$|\chi(g)| < \frac{(1-1/q)q^{2-s}\chi(1)}{(1-1/q^{n-1})} +q^{3/2} < \chi(1)q^{-s/3}.$$
It remains to consider the case $s=1,2$, i.e. $d(1,zg) = 2n$ or $2n-1$ for some $z \in \{1,-1\}$.
Using \cite[Lemma 4.9]{TZ1}, one can readily show that $g$ fixes an orthogonal decomposition $V = U \oplus U^\perp$, with
$U \subset \mathrm{Ker}(g-z \cdot 1_V)$ being non-degenerate of dimension $2n-3$, and
\begin{equation}\label{eq:rb23}
\dim_{\mathbb F_q}(U^\perp)^{zg} = 4-s.
\end{equation}
First we estimate $\rho(g)$. Suppose $g(v) = tv$ for some singular $0 \neq v \in V$ and $t \in \mathbb F_q^\times$. If $t \neq z$,
then $v \in U^\perp$, and \eqref{eq:rb23} implies that $g$ fixes at most $(q^s-1)/(q-1) \leq q+1$ such singular $1$-spaces
$\langle v \rangle_{\mathbb F_q}$. Likewise, $g$ fixes at most $(q+1)^2$ singular $1$-spaces
$\langle v \rangle_{\mathbb F_q} \subset U^\perp$ with $g(v) = zv$, since $\dim U^\perp = 4$. Assume now that $g(v) = zv$ with $v = u+u'$, $0 \neq u \in U$ and $u' \in U^\perp$. As $0 = \mathsf{Q}(v) = \mathsf{Q}(u)+\mathsf{Q}(u')$, the total number of such $v$ is
$$N:=\sum_{x \in \mathbb F_q}|\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \cdot |\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}|.$$
Note that, since $U$ is a non-degenerate quadratic space of dimension $2n-3$,
$$q^{n-2}(q^{n-2}-1) \leq |\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \leq q^{n-2}(q^{n-2}+1)$$
for any $x \in \mathbb F_q$. On the other hand, \eqref{eq:rb23} implies that
$$\sum_{x \in \mathbb F_q}|\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}| = |(U^\perp)^{zg}| = q^{4-s}.$$
It follows that
$$q^{n+2-s}(q^{n-2}-1) \leq N \leq q^{n+2-s}(q^{n-2}+1),$$
and so
$$\frac{q^{n+2-s}(q^{n-2}-1)}{q-1} \leq \rho(g)=1+\alpha(g)+\beta(g) \leq q^2+3q+2+\frac{q^{n+2-s}(q^{n-2}+1)}{q-1}.$$
Together with \eqref{eq:rb22}, this implies that
$$\frac{|\chi(g)|}{\chi(1)} \leq \frac{(q^2-1)(q+2)+q^{n+2-s}(q^{n-2}+1)+2q^n(q-1)}{(q^n+1)(q^n-q)} < \frac{1}{q^{s/2}}.$$
\smallskip
(iii) Finally, we consider the case $G = \mathrm {Sp}_{2n}(q)$. In this case, arguing similarly to the proof of \cite[Proposition 5.7]{LBST},
one can show that $\{\alpha,\beta\} = \{D^\circ_{\lambda_0},D^\circ_{\lambda_1}\}$,
where $S = \mathrm {O}^+_2(q) \cong D_{2(q-1)}$, with $\lambda_0$, $\lambda_1$ being
the two linear characters trivial at $\mathrm {SO}^+_2(q)$, and we consider the dual pair $G \times S \leq \mathrm {Sp}_{4n}(q)$.
In particular, $\chi(1) \geq (q^n+1)(q^n-q)/2(q-1) > q^{2n-4/3}$.
Now, the formula for $D_\alpha$ in \cite[Lemma 5.5]{LBST}, the character table of $S$,
and part 1) of the proof of \cite[Proposition 5.11]{LBST} imply that
\begin{equation}\label{eq:rb24}
|\alpha(g)-\beta(g)| \leq q^{(d(1,g)+d(-1,g))/2} \leq q^{2n-s}.
\end{equation}
On the other hand, using \eqref{eq:rb1} we have $0 \leq \rho(g) = \alpha(g)+\beta(g)+1 \leq q^{2n-s}-1$. In particular, when
$s \geq 2$ we have
$$|\chi(g)| \leq \bigl(|\alpha(g)+\beta(g)|+|\alpha(g)-\beta(g)|\bigr)/2 \leq q^{2n-s} < \chi(1)q^{-s/3}.$$
Assume now that $s=1$. Then $g = zu$ for some $z = \pm 1$ and unipotent $u \in G$; furthermore,
$\rho(g) = (q^{2n-1}-1)/(q-1)$. Applying also \eqref{eq:rb24}, we obtain
$$|\chi(g)| \leq \biggl(|\alpha(g)+\beta(g)|+|\alpha(g)-\beta(g)|\biggr)/2 \leq \biggl(\frac{q^{2n-1}-q}{q-1}+q^{n-1/2}\biggr)/2 < \chi(1)q^{-4s/5},$$
and the proof is complete.
\end{proof}
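The last estimate can be spot-checked numerically: the following Python fragment (sample odd $q$ and $n \geq 5$ only) verifies $\bigl((q^{2n-1}-q)/(q-1)+q^{n-1/2}\bigr)/2 < \chi(1)q^{-4/5}$, taking the smallest admissible degree $\chi(1)=(q^n+1)(q^n-q)/2(q-1)$:
\begin{verbatim}
for q in [3, 5, 7, 9]:
    for n in range(5, 10):
        chi1 = (q**n + 1) * (q**n - q) / (2 * (q - 1))
        lhs = ((q**(2*n - 1) - q) / (q - 1) + q**(n - 0.5)) / 2
        assert lhs < chi1 * q**(-0.8)
\end{verbatim}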
\section{Classical groups: Proof of Theorem \ref{main1}}\label{pfth1}
Let $G = \mathrm {Sp}(V)$ or $\Omega(V)$, where $V = V_n(q)$. Write $G = {\rm Cl}_n(q)$ to cover both cases. As before, for a semisimple element $g \in G$, define $\nu(g) = \mathsf{supp}(g)$, the codimension of the largest eigenspace of $g$ over $\overline{\mathbb F}_q$.
For $n<10$, Theorem \ref{main1} can be easily proved by exactly the same method as the proof of \cite[Theorem 2]{LST} (improving the constant $D$ in Lemma 2.3 of \cite{LST} by using better bounds for $|G|$ and $|C_G(g)|_p$).
So assume from now on that $n\ge 10$, so that the character ratio bounds in Propositions \ref{rat-so21} and \ref{rat-sp-so22} apply.
We begin with a lemma analogous to \cite[Lemma 3.2]{LST}.
\begin{lem}\label{sest} For $1\le s<n$, define
$$N_s(G) := \{g\in G_{\mathrm {ss}} : \nu(g)=s\}$$
and let $n_s(G):=|N_s(G)|$.
\begin{itemize}
\item[{\rm (i)}] If $g \in N_s(G)$ and $s<\frac{n}{2}$ then $|\mathbf{C}_G(g)|_p < q^{\frac{1}{4}((n-s)^2+s^2) - v\frac{n-1}{2}}$, where $v=0$ or $1$ according as $G$ is symplectic or orthogonal.
\item[{\rm (ii)}] If $g \in N_s(G)$ and $s\ge \frac{n}{2}$ then $|\mathbf{C}_G(g)|_p < q^{\frac{1}{4}(n^2-ns)}$.
\item[{\rm (iii)}] $\sum_{n-1 \geq s \geq n/2}n_s(G) < |G| < q^{\frac{1}{2}(n^2+n)-vn}$, where $v$ is as in $(i)$.
\item[{\rm (iv)}] If $s < n/2$, then $n_s(G) < cq^{\frac{1}{2}s(2n-s+1)+\frac{n}{2}}$,
where $c$ is an absolute constant that can be taken to be $15.2$.
\end{itemize}
\end{lem}
\begin{proof}
(i) If $\nu(g)=s<\frac{n}{2}$, then the largest eigenspace of $g$ has dimension $n-s>\frac{n}{2}$, so has eigenvalue $\pm 1$, and so $\mathbf{C}_G(g) \le {\rm Cl}_{n-s}(q) \times {\rm Cl}_s(q)$. Part (i) follows.
\vspace{2mm}
(ii) Now suppose $\nu(g) = s \ge \frac{n}{2}$, and let $E_\lambda$ ($\lambda \in \bar \mathbb F_q$) be an eigenspace of maximal dimension $n-s$.
Assume first that $\lambda \ne \pm 1$. Then letting $a$ and $b$ denote the dimensions of the $+1$- and $-1$-eigenspaces, we have
\begin{equation}\label{cent}
\mathbf{C}_G(g) \le \prod_{i=1}^t \mathrm {GL}_{d_i}(q^{k_i}) \times {\rm Cl}_a(q) \times {\rm Cl}_b(q),
\end{equation}
where $n-s = d_1 \ge d_2\ge \cdots \ge d_t$ and also $d_1 \ge a\ge b$ and $2\sum_1^t k_id_i+a+b = n$.
Hence $|\mathbf{C}_G(g)|_p \le q^D$, where
\begin{equation}\label{expd}
D = \frac{1}{2}\sum_{i=1}^t k_id_i(d_i-1) + \frac{1}{4}(a^2+b^2).
\end{equation}
If $n\ge 4d_1$, this expression is maximised when $a=b=d_1$ and $(d_1,\ldots ,d_t) = (d_1,\ldots ,d_1,r)$ with $r\le d_1$ and $k_i=1$ for all $i$. Hence in this case,
\[
D \le \frac{1}{2}(t-1)d_1(d_1-1) + \frac{1}{2}r(r-1) + \frac{1}{2}d_1^2 = \frac{1}{2}td_1^2-\frac{1}{2}(t-1)d_1+\frac{1}{2}r(r-1),
\]
and this is easily seen to be less than $\frac{1}{4}nd_1$, as required for part (ii).
Similarly, if $4d_1>n\ge 3d_1$, the expression (\ref{expd}) is maximised when $t=1$, $k_1=1$, $a=d_1$ and $b=r < d_1$; and when $3d_1>n \ge 2d_1$ (note that $n\ge 2d_1 = 2(n-s)$ by our assumption that $\nu(g) = s \ge \frac{n}{2}$), the expression (\ref{expd}) is maximised when $t=1$ and $a=r< d_1$. In each case, we see that $D< \frac{1}{4}nd_1$ as above.
Assume finally that the eigenvalue $\lambda = \pm 1$. In this case the centralizer $\mathbf{C}_G(g)$ is as in (\ref{cent}), with $n-s=a \ge d_1\ge \cdots \ge d_t$ and also $a\ge b$ and $2\sum_1^t k_id_i+a+b = n$. Again we have $|\mathbf{C}_G(g)|_p \le q^D$, with $D$ as in (\ref{expd}), and we argue as above that $D < \frac{1}{4}na = \frac{1}{4}n(n-s)$. This completes the proof of (ii).
\vspace{2mm}
(iii) This is clear.
\vspace{2mm}
(iv) If $\nu(g) = s < \frac{n}{2}$ then as in (i), the largest eigenspace of $g$ has eigenvalue $\pm 1$, so we have
$\mathbf{C}_G(g) \ge {\rm Cl}_{n-s}(q) \times T_s$, where $T_s$ is a maximal torus of ${\rm Cl}_s(q)$. Hence $|g^G| \le |G:{\rm Cl}_{n-s}(q)T_s| \le q^{\frac{1}{2}s(2n-s+1)}$. Also the number of conjugacy classes in $G$ is at most $15.2q^{n/2}$ by \cite{FG}, and (iv) follows.
\end{proof}
\begin{lem}\label{stein} Let $\chi \in \{\alpha,\beta,\bar{g}_i\}$, where $\alpha,\beta,\bar{g}_i$ are the irreducible characters of $G$ defined in Section \ref{red}. Then $\mathsf{St} \subseteq \chi^{4n}$.
\end{lem}
\begin{proof}
As in the proof of \cite[Lemma 2.3]{LST}, there are signs $\epsilon_g=\pm 1$ such that
\begin{equation}\label{useag}
\begin{array}{ll}
[\chi^l,\mathsf{St}]_G & = \dfrac{1}{|G|}\sum_{g\in G_{\mathrm {ss}}} \epsilon_g \chi^l(g)|\mathbf{C}_G(g)|_p \\
& = \dfrac{\chi^l(1)}{|G|}\left(|G|_p + \sum_{1 \neq g \in G_{\mathrm {ss}}} \epsilon_g \left(\frac{\chi(g)}{\chi(1)}\right)^l|\mathbf{C}_G(g)|_p\right).
\end{array}
\end{equation}
Hence $[\chi^l,\mathsf{St}]_G \ne 0$ provided $\Sigma_l < |G|_p$, where
\[
\Sigma_l := \sum_{1 \neq g\in G_{\mathrm {ss}}} \left|\frac{\chi(g)}{\chi(1)}\right|^l|\mathbf{C}_G(g)|_p.
\]
By Propositions \ref{rat-so21} and \ref{rat-sp-so22}, if $s = \nu(g)$ we have
\[
\frac{|\chi(g)|}{\chi(1)} \le \frac{1}{q^{s/3}}.
\]
Hence applying Lemma \ref{sest}, we have $\Sigma_l \le \Sigma_1+\Sigma_2$, where
\[
\begin{array}{l}
\Sigma_1 = \sum_{1\le s<\frac{n}{2}} cq^{\frac{1}{2}s(2n-s+1)+\frac{n}{2}} \cdot \frac{1}{q^{ls/3}} \cdot q^{\frac{1}{4}((n-s)^2+s^2) - v\frac{n-1}{2}}, \\
\Sigma_2 = \sum_{\frac{n}{2}\le s < n} q^{\frac{1}{2}(n^2+n)-vn} \cdot \frac{1}{q^{ls/3}} \cdot q^{\frac{1}{4}(n^2-ns)}.
\end{array}
\]
For a term in $\Sigma_1$, the exponent of $q$ is
\[
\frac{1}{4}n^2-v\frac{n-1}{2} + \frac{1}{2}s(n+1)+\frac{1}{2}n-\frac{ls}{3}.
\]
As $|G|_p \le q^{\frac{1}{4}n^2-v\frac{n-1}{2}}$, taking $l=4n$ this gives
\[
\begin{array}{ll}
\frac{\Sigma_1}{|G|_p} & \le \sum_{1\le s<\frac{n}{2}} cq^{\frac{1}{2}s(n+1)+\frac{n}{2}-\frac{ls}{3}} \\
& \le \sum_{1\le s<\frac{n}{2}} cq^{\frac{1}{2}n(1-\frac{5s}{3})+\frac{s}{2}}.
\end{array}
\]
Recalling that $c=15.2$, it follows that $\frac{\Sigma_1}{|G|_p} < \frac{1}{2}$ (except for small values of $q$ and $n$, such as $q=2$, $n\le 20$, in which case we obtain the same conclusion using slightly more refined estimates instead of Lemma \ref{sest}(iv)).
For a term in $\Sigma_2$, the exponent of $q$ is
\[
\frac{1}{2}(n^2+n)-vn +\frac{1}{4}n(n-s) - \frac{ls}{3},
\]
and this similarly leads to the inequality $\frac{\Sigma_2}{|G|_p} < \frac{1}{2}$ when $l=4n$.
We conclude that $\Sigma_l < |G|_p$ for $l=4n$, proving the lemma.
\end{proof}
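To illustrate the size of these quantities, the following Python fragment evaluates the displayed upper bound $\sum_{1\le s<n/2}cq^{\frac{1}{2}n(1-5s/3)+\frac{s}{2}}$ on $\Sigma_1/|G|_p$ (with $c=15.2$ and $l=4n$) in a few sample cases; the chosen pairs $(q,n)$ are arbitrary:
\begin{verbatim}
# Upper bound on Sigma_1/|G|_p from the display above, with l = 4n, c = 15.2.
c = 15.2
def sigma1_bound(q, n):
    return sum(c * q**(0.5 * n * (1 - 5 * s / 3) + s / 2)
               for s in range(1, (n + 1) // 2))      # 1 <= s < n/2

for q, n in [(2, 21), (3, 12), (4, 10), (5, 10)]:    # arbitrary sample pairs
    assert sigma1_bound(q, n) < 0.5
\end{verbatim}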
\begin{proof}[Proof of Theorem \ref{main1}]
Let $1\ne \psi \in \mathrm{Irr}(G)$. By Lemma \ref{stein} together with Lemmas \ref{mc-r2}--\ref{mc-r4}, we have $\mathsf{St} \subseteq \psi^{8n}$ for $G = \mathrm {Sp}_n(q)$, and
$\mathsf{St} \subseteq \psi^{16n}$ for $G = \Omega^\epsilon_n(q)$. Since $\mathsf{St}^2$ contains all irreducible characters by \cite{HSTZ}, the conclusion of Theorem \ref{main1} follows.
\end{proof}
\section{Alternating groups: Proof of Theorem \ref{main2}}\label{pfth2}
In this section we prove Theorem \ref{main2}.
\begin{lem}\label{staircase}
Let $n := m(m+1)/2$ with $m \in \mathbb{Z}_{\geq 6}$, and let $\chi_m := \chi^{(m,m-1,\ldots,1)}$ be the staircase character of $\mathsf{S}_n$. Then
$$\chi_m(1) \geq |\mathsf{S}_n|^{5/11}.$$
\end{lem}
\begin{proof}
We will proceed by induction on $m \geq 6$. The induction base $m=6,7$ can be checked directly. For the induction step going from
$m$ to $m+2$, note by
the hook length formula that $\chi_m(1)= n!/H_m$, where $H_m$ is the product of all the hook lengths in the Young diagram of
the staircase partition $(m,m-1, \ldots,1)$. Hence it is equivalent to prove that
$$(m(m+1)/2)! > H_m^{11/6}.$$
Since the statement holds for $m$ and $H_{m+2}/H_m = (2m+3)!!(2m+1)!!$, it suffices to prove that
\begin{equation}\label{eq:st1}
\prod^{2m+3}_{i=1}(m(m+1)/2+i) > \bigl((2m+3)!!(2m+1)!!\bigr)^{11/6}
\end{equation}
for any $m \geq 6$. Direct computation shows that \eqref{eq:st1} holds when $3 \leq m \leq 40$. When $m > 40$, note that
$$\begin{array}{ll}
\prod^{2m+3}_{i=1}(m(m+1)/2+i) & > \bigl(m(m+1)/2+1\bigr)^{2m+3}\\
& > \bigl((m+3)^{m+1}(m+2)^m\bigr)^{11/6}\\
& > \bigl((2m+3)!!(2m+1)!!\bigr)^{11/6},\end{array}$$
proving \eqref{eq:st1} and completing the induction step.
\end{proof}
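The induction base $m=6,7$ is a finite computation; for instance, the following Python snippet computes $\chi_m(1)$ via the hook length formula and checks the inequality of Lemma \ref{staircase} in exact integer arithmetic:
\begin{verbatim}
from math import factorial

def staircase_degree(m):
    """chi_m(1) via the hook length formula for (m, m-1, ..., 1)."""
    lam = list(range(m, 0, -1))
    conj = [sum(1 for part in lam if part > j) for j in range(m)]
    hooks = 1
    for i, part in enumerate(lam):
        for j in range(part):
            hooks *= part - j + conj[j] - i - 1      # hook length at (i, j)
    n = m * (m + 1) // 2
    return factorial(n) // hooks

for m in [6, 7]:
    n = m * (m + 1) // 2
    # chi_m(1) >= (n!)^(5/11)  <=>  chi_m(1)^11 >= (n!)^5
    assert staircase_degree(m)**11 >= factorial(n)**5
\end{verbatim}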
\begin{proof}[Proof of Theorem \ref{main2}]
We will make use of \cite[Theorem 1.4]{S} which states that there exists an effective absolute constant $C_1 \geq 2$ such that
\begin{equation}\label{eq:s}
\chi^{t} \mbox{ contains }\mathrm{Irr}(\mathsf{S}_n) \mbox{ whenever }t \geq C_1n\log(n)/\log(\chi(1))
\end{equation}
for every non-linear $\chi \in \mathrm{Irr}(\mathsf{S}_n)$. With this, we will prove that when $n$ is sufficiently large we have
\begin{equation}\label{eq:main2}
\varphi^{k} \mbox{ contains }\mathrm{Irr}(\mathsf{A}_n) \mbox{ whenever }k \geq Cn\log(n)/\log(\varphi(1))
\end{equation}
for every nontrivial $\varphi \in \mathrm{Irr}(\mathsf{A}_n)$, with $C=5C_1^2$.
\smallskip
(i) Consider any $n \geq 5$ and any nontrivial $\varphi \in \mathrm{Irr}(\mathsf{A}_n)$. If $\varphi$ extends to $\mathsf{S}_n$, then we are done by
\eqref{eq:s}. Hence we may assume that $\varphi$ lies under some $\chi^\lambda \in \mathrm{Irr}(\mathsf{S}_n)$, where $\lambda \vdash n$ is
self-associate, and that $n$ is sufficiently large. By \cite[Proposition 4.3]{KST}, the latter implies that
\begin{equation}\label{eq:a1}
\varphi(1) \geq 2^{(n-5)/4}.
\end{equation}
Consider the Young diagram $Y(\lambda)$ of $\lambda$, and let $A$ denote the removable node in the last row of $Y(\lambda)$.
Also let $\rho:=\chi^{\lambda \smallsetminus A} \in \mathrm{Irr}(\mathsf{S}_{n-1})$. Since $\lambda \smallsetminus A$ is not self-associate,
$\rho$ is also irreducible over $\mathsf{A}_{n-1}$. Furthermore, by Frobenius' reciprocity,
$$1 \leq [(\chi^\lambda)|_{\mathsf{S}_{n-1}},\rho]_{\mathsf{S}_{n-1}} = [\chi^\lambda,\mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\rho)]_{\mathsf{S}_n},$$
whence $2\varphi(1) = \chi^\lambda(1) \leq \mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\rho)(1) = n\rho(1)$, and so
\begin{equation}\label{eq:a2}
\rho(1) \geq (2/n)\varphi(1).
\end{equation}
It follows from \eqref{eq:a1} and \eqref{eq:a2} that when $n$ is large enough,
$$\log(\rho(1)) \geq \log(\varphi(1))-\log(n/2) \geq (9/10)\log(\varphi(1)).$$
Now we consider any integer
\begin{equation}\label{eq:a3}
s \geq \frac{10C_1}{9} \cdot \frac{n\log(n)}{\log(\varphi(1))}.
\end{equation}
This ensures that $s \geq C_1(n-1)\log(n-1)/\log(\rho(1))$, and so,
by \eqref{eq:s} applied to $\rho$,
$\rho^s$ contains $\mathrm{Irr}(\mathsf{S}_{n-1})$.
\smallskip
(ii) Next, we can find a unique $m \in \mathbb{Z}_{\geq 3}$ such that
\begin{equation}\label{eq:a4}
n_0:=m(m+1)/2 \leq n-3 < (m+1)(m+2)/2,
\end{equation}
and consider the following partition
\begin{equation}\label{eq:a5}
\mu:= (n-1-m(m-1)/2,m-1,m-2, \ldots,2,1)
\end{equation}
of $n-1$. Note that $\mu$ has $m$ rows, with the first (longest) row
$$\mu_1=n-1-m(m-1)/2 \geq m+2$$
by \eqref{eq:a4}. Hence, if $B$ is any addable
node for the Young diagram $Y(\mu)$ of $\mu$, $Y(\mu \sqcup B)$ has at most $m+1$ rows and at least $m+2$ columns, and
so is not self-associate. It follows that, for any such $B$, the character $\chi^{\mu \sqcup B}$ of $\mathsf{S}_n$ is irreducible over $\mathsf{A}_n$.
\smallskip
(iii) Recall that $\chi^\lambda|_{\mathsf{A}_n} = \varphi+\varphi^\star$ with $\varphi^\star$ being $\mathsf{S}_n$-conjugate to $\varphi$.
It suffices to prove \eqref{eq:main2} for an $\mathsf{S}_n$-conjugate of $\varphi$. As $\chi^\lambda|_{\mathsf{S}_{n-1}}$ contains
$\rho=\chi^{\lambda \smallsetminus A}$ which is irreducible over $\mathsf{A}_{n-1}$, without loss we may assume that $\varphi|_{\mathsf{A}_{n-1}}$
contains $\rho|_{\mathsf{A}_{n-1}}$. By the result of (i), $\rho^s$ contains $\chi^\mu$, with $\mu$ defined in \eqref{eq:a5}. Thus
\begin{equation}\label{eq:a6}
1 \leq [\varphi^s|_{\mathsf{A}_{n-1}},(\chi^\mu)|_{\mathsf{A}_{n-1}}]_{\mathsf{A}_{n-1}}=\bigl[\varphi^s,
\mathrm{Ind}^{\mathsf{A}_n}_{\mathsf{A}_{n-1}}\bigl((\chi^\mu)|_{\mathsf{A}_{n-1}}\bigr)\bigr]_{\mathsf{A}_n}.
\end{equation}
Also recall that $\chi^\mu$ is an $\mathsf{S}_{n-1}$-character and $\mathsf{S}_n = \mathsf{A}_n\mathsf{S}_{n-1}$. Hence
$$\mathrm{Ind}^{\mathsf{A}_n}_{\mathsf{A}_{n-1}}\bigl((\chi^\mu)|_{\mathsf{A}_{n-1}}\bigr) = \bigl(\mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\chi^\mu)\bigr)|_{\mathsf{A}_{n}}.$$
Next,
$$\mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\chi^\mu) = \sum_{B\ {\rm addable}}\chi^{\mu \sqcup B},$$
where, as shown in (ii), each such $\chi^{\mu \sqcup B}$ is irreducible over $\mathsf{A}_n$. Hence, it now follows from \eqref{eq:a6} that
there is an addable node $B_0$ for $Y(\mu)$ such that $\varphi^s$ contains $\psi|_{\mathsf{A}_n}$, with $\psi:=\chi^{\mu \sqcup B_0}$.
\smallskip
(iv) By the choice of $B_0$, $\psi|_{\mathsf{S}_{n-1}}$ contains $\chi^\mu$, whence $\psi(1) \geq \chi^\mu(1)$. Next, by \eqref{eq:a4}, we
can remove $n-1-n_0 \geq 2$ nodes from the first row to arrive at the staircase partition $(m,m-1, \ldots,1) \vdash n_0$. In particular,
$\psi|_{\mathsf{S}_{n_0}}$ contains the character $\chi_m$ of $\mathsf{S}_{n_0}$. By Lemma \ref{staircase}, for $n$ sufficiently large we have
\begin{equation}\label{eq:a7}
\log(\psi(1)) \geq \log(\chi_m(1)) \geq (5/11)\log(n_0!) \geq (2/5)n\log(n),
\end{equation}
since
$$n_0 = m(m+1)/2 \geq n-(m+3) \geq n-(5/2+\sqrt{2n-4})$$
by the choice \eqref{eq:a4} of $m$. Now we consider the integer $t := \lceil (5/2)C_1 \rceil \leq 3C_1$ (since $C_1 \geq 2$). Then
$$C_1n\log(n)/\log(\psi(1)) \leq (5/2)C_1 \leq t$$
by \eqref{eq:a7}, and so $\psi^t$ contains $\mathrm{Irr}(\mathsf{S}_n)$ by \eqref{eq:s} applied to $\psi$. In particular,
$(\psi^t)|_{\mathsf{A}_n}$ contains $\mathrm{Irr}(\mathsf{A}_n)$.
Recall from (iii) that $\varphi^s$ contains the irreducible character $\psi|_{\mathsf{A}_n}$. It follows that $\varphi^{st}$
contains $(\psi^t)|_{\mathsf{A}_n}$, and so $\varphi^{st}$ contains $\mathrm{Irr}(\mathsf{A}_n)$.
\smallskip
(v) Finally, consider any integer $k \geq Cn\log(n)/\log(\varphi(1))$ with $C=5C_1^2$. Then
$$k/t \geq k/3C_1 \geq (5/3)C_1n\log(n)/\log(\varphi(1)).$$
As $C_1\geq 1$ and $n\log(n)/\log(\varphi(1)) \geq 2$, we have that
$$(5/3-10/9)C_1n\log(n)/\log(\varphi(1)) \geq 10/9.$$
In particular, we can find an integer $s_0$ such that
$$k/t \geq s_0 \geq (10/9)C_1n\log(n)/\log(\varphi(1)).$$
As $s_0$ satisfies \eqref{eq:a3}, the result of (iv) shows that $\varphi^{s_0t}$ contains $\mathrm{Irr}(\mathsf{A}_n)$.
Now, given any $\gamma \in \mathrm{Irr}(\mathsf{A}_n)$, we can find an irreducible constituent $\delta$ of $\varphi^{k-s_0t}\overline\gamma$.
By the previous result, $\varphi^{s_0t}$ contains $\overline\delta$. It follows that $\varphi^k$ contains
$\varphi^{k-s_0t}\overline\delta$, and
$$[\varphi^{k-s_0t}\overline\delta,\gamma]_{\mathsf{A}_n}= [\varphi^{k-s_0t}\overline\gamma,\delta]_{\mathsf{A}_n} \geq 1,$$
i.e. $\varphi^k$ contains $\gamma$, and the proof of \eqref{eq:main2} is completed.
\end{proof}
\section{Products of characters}\label{pfth3}
\subsection{Products of characters in classical groups}
This is very similar to the proof of Theorem 2 of \cite{LST}. Let $G = G_r(q)$ be a simple group of Lie type of rank $r$ over
$\mathbb F_q$.
\begin{lem}\label{stdiam}
There is an absolute constant $D$ such that for any $m\ge Dr^2$ and any $\chi_1,\ldots,\chi_m \in {\rm Irr}(G)$, we have
$[\prod_1^m\chi_i,\mathsf{St}]_G\ne 0$. Indeed, $D=163$ suffices.
\end{lem}
\begin{proof}
This is proved exactly as for \cite[Lemma 2.3]{LST}, replacing the power $\chi^m$ by the product $\prod_1^m\chi_i$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{rodsax}(i)]
Take $c_1=3D$ with $D$ as in the lemma, and let $\chi_1,\ldots ,\chi_l \in {\rm Irr}(G)$ with $l=c_1r^2$. Writing $m=l/3 = Dr^2$, Lemma \ref{stdiam} shows that each of the products $\prod_1^m\chi_i$, $\prod_{m+1}^{2m}\chi_i$ and $\prod_{2m+1}^{3m}\chi_i$ contains $\mathsf{St}$. Hence $\prod_1^l\chi_i$ contains $\mathsf{St}^3$, and this contains ${\rm Irr}(G)$ by \cite[Prop. 2.1]{LST}. This completes the proof.
\end{proof}
\subsection{Products of characters in linear and unitary groups}
This is similar to the proof of Theorem 3 of \cite{LST}. Let $G = \mathrm {PSL}_n^\epsilon(q)$.
We shall need \cite[Theorem 3.1]{LST}, which states that there is a function $f:\mathbb N\to \mathbb N$ such that for any $g \in G_{\mathrm {ss}}$ with $s = \nu(g)$, and any $\chi \in {\rm Irr}(G)$, we have
\begin{equation}\label{31lst}
|\chi(g)| < f(n)\chi(1)^{1-\frac{s}{n}}.
\end{equation}
Again we begin with a lemma involving the Steinberg character.
\begin{lem}\label{ste} Let $m\in \mathbb N$ and let $\chi_1,\ldots,\chi_m \in {\rm Irr}(G)$. Set $c=44.1$, and define
\[
\begin{array}{l}
\Delta_{1m} = cf(n)^m \sum_{1\le s <n/2} q^{ns+\frac{3n}{2}-1}\left(\prod_1^m\chi_i(1)\right)^{-s/n},\\
\Delta_{2m} = f(n)^m \sum_{n/2\le s<n}q^{n^2-\frac{1}{2}n(s-1)-1}\left(\prod_1^m\chi_i(1)\right)^{-s/n}.
\end{array}
\]
If $\Delta_{1m}+\Delta_{2m}<1$, then $[\prod_1^m\chi_i,\,\mathsf{St}]_G \ne 0$.
\end{lem}
\begin{proof}
Arguing as in the proof of \cite[Lemma 3.3]{LST}, we see that $[\prod_1^m\chi_i,\,\mathsf{St}]_G \ne 0$ provided $\Delta_m <1$, where
\[
\Delta_m := \sum_{1 \leq s < n/2} cq^{ns+\frac{3n}{2}-1}\left|\prod_1^m\frac{\chi_i(g_{i,s})}{\chi_i(1)}\right| +
\sum_{n/2 \leq s < n} q^{n^2-\frac{1}{2}n(s-1)-1}\left|\prod_1^m\frac{\chi_i(g_{i,s})}{\chi_i(1)}\right|,
\]
where $g_{i,s} \in G_{\mathrm {ss}}$ is chosen such that $\nu(g_{i,s})=s$ and $|\chi_i(g_{i,s})|$ is maximal. Now application of (\ref{31lst}) gives the conclusion.
\end{proof}
\begin{lem}\label{better} There is a function $g:\mathbb N\to \mathbb N$ such that the following holds. Suppose that $\chi_1,\ldots,\chi_m \in {\rm Irr}(G)$ satisfy $\prod_1^m \chi_i(1) > |G|^3$. Then provided $q>g(n)$, we have $[\prod_1^m\chi_i,\,\mathsf{St}]_G \ne 0$.
\end{lem}
\begin{proof}
We have $|G|>\frac{1}{2}q^{n^2-2}$, so the hypothesis $\prod_1^m \chi_i(1) > |G|^3$ gives, for $s<n$,
\[
\left(\prod_1^m\chi_i(1)\right)^{-s/n} < |G|^{-3s/n} < 8^{s/n}q^{-3ns+\frac{6s}{n}} \le 8q^{-3ns+\frac{6s}{n}}.
\]
Hence
\[
\Delta_{1m} \le 8cf(n)^m \sum_{1\le s <n/2} q^{-2ns+\frac{3n}{2}+2},
\]
and
\[
\begin{array}{ll}
\Delta_{2m} & \le 8f(n)^m \sum_{n/2\le s<n}q^{n^2-\frac{1}{2}n(s-1)-1} q^{-3ns+6} \\
& \le 8f(n)^m \sum_{n/2\le s<n}q^{-\frac{3n^2}{4}+\frac{1}{2}n+5}.
\end{array}
\]
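(For the reader's convenience, the exponent in the second estimate is obtained as follows: for $n/2\le s<n$,
\[
n^2-\tfrac{1}{2}n(s-1)-1-3ns+6 \;=\; n^2-\tfrac{7}{2}ns+\tfrac{1}{2}n+5 \;\le\; -\tfrac{3n^2}{4}+\tfrac{1}{2}n+5,
\]
using $s\ge n/2$ in the last inequality.)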
Now the conclusion follows from Lemma \ref{ste} (using some slight refinements of the above inequalities for $n\le 4$).
\end{proof}
\begin{proof}[Proof of Theorem \ref{rodsax}(ii)]
Assume $\chi_1,\ldots,\chi_l \in {\rm Irr}(G)$ satisfy $\prod_1^l \chi_i(1) > |G|^{10}$. Since $\chi_i(1) < |G|^{1/2}$ for all $i$, there are disjoint subsets $I_1,I_2,I_3$ of $\{1,\ldots ,l\}$ such that $\prod_{i\in I_k} \chi_i(1) > |G|^3$ for $k=1,2,3$. Then $\prod_{i\in I_k} \chi_i$ contains $\mathsf{St}$ for each $k$, by Lemma \ref{better}, and so $\prod_1^l\chi_i$ contains $\mathsf{St}^3$, hence contains ${\rm Irr}(G)$, completing the proof.
\end{proof}
\subsection{Products of characters in symmetric and alternating groups}
\begin{prop}\label{rs2-an}
Let $G \in \{\mathsf{S}_n,\mathsf{A}_n\}$, $l \in \mathbb{Z}_{\geq 1}$, and let $\chi_1,\chi_2, \ldots,\chi_l \in \mathrm{Irr}(G)$ with $\chi_i(1) > 1$ for all $i$.
\begin{enumerate}[\rm(i)]
\item If $l \geq 8n-11$, then $\bigl(\prod^l_{i=1}\chi_i\bigr)^{2}$ contains $\mathrm{Irr}(G)$.
\item Suppose that, for each $1 \leq i \leq l$, there exists some $j \neq i$ such that $\chi_j = \chi_i$. If $l \geq 24n-33$ then
$\prod^l_{i=1}\chi_i$ contains $\mathrm{Irr}(G)$.
\end{enumerate}
\end{prop}
\begin{proof}
(i) Let $\chi^\lambda$ denote the irreducible character of $\mathsf{S}_n$ labeled by the partition $\lambda \vdash n$. A key result established in the proof of
\cite[Theorem 5]{LST} is that, for any $i$ there exists
$$\alpha_i \in \left\{\chi^{(n-1,1)},\chi^{(n-2,2)},\chi^{(n-2,1^2)},\chi^{(n-3,3)}\right\}$$
such that $\chi_i^2$ contains $(\alpha_i)|_G$. Since $l \geq 8n-11$, there must be some
$$\beta \in \left\{\chi^{(n-1,1)},\chi^{(n-2,2)},\chi^{(n-2,1^2)},\chi^{(n-3,3)}\right\}$$
such that $\beta=\alpha_i$ for at least $2n-2$ distinct values of $i$. It follows that $\bigl(\prod^l_{i=1}\chi_i\bigr)^{2}=\bar{g}\delta$,
where $\bar{g} := \beta^{2n-2}|_G$, and $\delta$ is a character of $G$. By \cite[Theorem 5]{LST}, $\beta^{2n-2}$ contains
$\mathrm{Irr}(\mathsf{S}_n)$, whence $\bar{g}$ contains $\mathrm{Irr}(G)$. Now the arguments in the last paragraph of the proof of
Theorem \ref{main2} show that $\bar{g}\delta$ contains $\mathrm{Irr}(G)$ as well.
\smallskip
(ii) Note that the assumptions imply, after a suitable relabeling, that $\prod^l_{i=1}\chi_i$ contains $\sigma\lambda$,
where $\lambda$ is a character of $G$ and
$$\sigma= \prod^{8n-11}_{i=1}\chi_i^2.$$
(Indeed, any subproduct $\chi_{i_1}\ldots\chi_{i_t}$ with $t> 1$ and $\chi_{i_1}=\ldots =\chi_{i_t}$ yields a term
$(\chi_{i_1}^2)^{\lfloor t/2 \rfloor}$ in $\sigma$.) By (i), $\sigma$ contains $\mathrm{Irr}(G)$, and so we are done as above.
\end{proof}
\section{Introduction\label{sec:intro}}
The magnetic field is believed to play an essential role in various astrophysical phenomena, including the formation of stars and planets (\citealt{2012ARA&A..50...29C}). The alignment of dust grains with the magnetic field induces polarization of starlight and of thermal dust emission. The polarization vectors of starlight are parallel to the magnetic field, while those of thermal dust are perpendicular to the magnetic field. Thus, dust polarization has become a popular technique to constrain the magnetic field direction and strength (\citealt{2007JQSRT.106..225L}; \citealt{2015ARA&A..53..501A}).
Observations have reported an anti-correlation trend of the fractional polarization of thermal dust emission with the column density of the gas in molecular clouds (e.g., \citealt{1998ApJ...499L..93A}; \citealt{2008ApJ...674..304W}; \citealt{2015A&A...576A.105P}; \citealt{2016ApJ...824..134F}; \citealt{2017ApJ...837..161S,2019ApJ...882..113S}; \citealt{2018arXiv180706212P}; \citealt{2019ApJ...872..187C}). This trend is explained by the loss of grain alignment toward dense cores (\citealt{2008ApJ...674..304W}) or by the turbulent structure of magnetic field within the scale of the beam size (see \citealt{2015psps.book..147J}; \citealt{2015A&A...576A.105P}).
A popular theory describing grain alignment is RAdiative Torques (hereafter referred to as RATs) (\citealt{2007MNRAS.378..910L}; see \citealt{2007JQSRT.106..225L}; \citealt{2015ARA&A..53..501A} for reviews). One of the key predictions of the RAT theory is that the polarization degree correlates with the intensity of the radiation field (or, equivalently, the dust temperature $T_{\rm d}$). This prediction was numerically demonstrated by \cite{2020ApJ...896...44L}. However, observations revealed that the dust polarization degree does not always increase with $T_{\rm d}$. For example, \cite{2018arXiv180706212P} showed that the polarization degree at 850 $\mu$m, measured at 40' spatial resolution by the \textit{Planck} satellite toward four molecular regions in the Gould Belt cloud (Aquila Rift, Cham-Musca, Orion, and Ophiuchus), decreases for $T_{\rm d}>19~\,{\rm K}$ (see their Figure 18). Additionally, far-infrared polarimetric data observed with the High-resolution Airborne Wideband Camera Plus (HAWC+) instrument (\citealt{2018JAI.....740008H}) onboard the Stratospheric Observatory for Infrared Astronomy (SOFIA) toward the molecular cloud Ophiuchus A (\citealt{2019ApJ...882..113S}) at 89 $\mu$m (7.8'' spatial resolution) and 154 $\mu$m (13.6'' spatial resolution) also show a decrease of the polarization degree for $T_{\rm d}>32\,{\rm K}$ (see Section \ref{sec:obs} below). These observational features challenge the popular RAT alignment theory.
Dust grain-size distribution is an important parameter when it comes to interpreting the polarization of dust. The grain size distribution is expected to evolve from the diffuse interstellar medium (ISM) to dense molecular clouds (MCs) due to grain accretion of gas species and grain-grain collisions (\citealt{2013MNRAS.434L..70H}). Recently, \cite{2019NatAs...3..766H} discovered that a large grain exposed to a strong radiation field could be disrupted into small fragments due to centrifugal stress induced by suprathermal rotation by RATs. This effect is termed Radiative Torques Disruption (RATD) (see \citealt{2020arXiv200616084H} for a review). Since RATs are stronger for larger grains
(\citealt{2007MNRAS.378..910L}; \citealt{2008MNRAS.388..117H}), RATD
is more efficient for large grains than smaller ones. As shown
in \cite{2019ApJ...876...13H}, the RATD mechanism is much faster than
grain shattering and thus determines the upper cutoff of the
grain size distribution in the ISM.
\cite{2020ApJ...896...44L} carried out numerical modeling of the multi-wavelength polarization of thermal dust emission from grains aligned by RATs. They show that the polarization degree at 850 $\mu$m first increases with increasing dust temperature. However, when RATD is accounted for, they find that the polarization degree decreases at higher dust temperatures, which differs from the classical RAT prediction. The level of the decline is found to depend on the tensile strength, which is determined by the internal structure of dust grains (\citealt{2019ApJ...876...13H}). Interestingly, accounting for RATD, the model predicts the same $P(\%)-T_{\rm d}$ trend as reported by the \textit{Planck} data (\citealt{2018arXiv180706212P}) at the same wavelength as mentioned above. The success of the joint effect of RAT alignment and RATD in explaining the {\it Planck} data motivates us to use this approach to better interpret the SOFIA/HAWC+ data.
Coming back to the SOFIA/HAWC+ observations toward $\rho$ Oph-A in bands C (89 $\mu$m) and D (154 $\mu$m) mentioned above, \cite{2019ApJ...882..113S} mainly studied the variation of the ratio of the polarization degrees ($P_{\rm D}(\%)/P_{\rm C}(\%)$) with respect to the dust temperature, rather than the polarization degree itself. Furthermore, the authors showed that the classical RAT mechanism was able to explain the increasing (i.e., positive) part of the ratio curve, but discarded the decreasing (negative) part (see their Figure 6d). In this study, we will: (1) use this dataset to show the correlation of the polarization degree itself with dust temperature; and (2) extend the improved polarized thermal dust model introduced by \cite{2020ApJ...896...44L} to interpret these SOFIA/HAWC+ observational trends.
This paper is structured as follows. We present the archival SOFIA/HAWC+ data toward $\rho$ Oph-A at $89~\mu$m and $154~\mu$m in Section \ref{sec:obs}. We describe our modeling method of polarized thermal dust emission by aligned grains in Section \ref{sec:model}. In Section \ref{sec:compare}, we compare our numerical results obtained for the $\rho$ Oph-A cloud with observational data. A summary of our findings and conclusions is presented in Section \ref{sec:discussion}.
\section{Observations toward $\rho$ Oph-A} \label{sec:obs}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{f1a.pdf}
\includegraphics[width=0.45\textwidth]{f1b.pdf}
\includegraphics[width=0.45\textwidth]{f1c.pdf}
\includegraphics[width=0.45\textwidth]{f1d.pdf}
\caption{Maps of polarization degree and dust temperature of $\rho$ Oph-A. (a) and (b) The polarization degrees measured in bands C and D of HAWC+/SOFIA at their native resolutions. (c) and (d) The maps of the H$_{2}$ column density ($N$) and the dust temperature ($T_{\rm d}$) derived from 70, 100, 160$\,\mu$m PACS/\textit{Herschel} data. The star symbol marks the position of Oph S1. The black filled circles show the beam size. The physical scale assumes a distance of 140 pc.}
\label{fig:polametric_maps}
\end{figure*}
$\rho$ Oph-A is a molecular cloud in $\rho$ Ophiuchi, one of the closest dark cloud complexes and star-forming regions. The distance to this complex is reported to be $\sim$ 120--160 pc \citep{1981A&A....99..346C, 1989A&A...216...44D, 1998A&A...338..897K, 2004AJ....127.1029R, 2008ApJ...675L..29L, 2008A&A...480..785L, 2008AN....329...10M, 2008ApJ...679..512S, 2017ApJ...834..141O}. This region is significantly influenced by high-energy radiation from the high-mass star Oph-S1, a B association star (\citealt{1977AJ.....82..198V}; \citealt{1988ApJ...335..940A}; \citealt{1989ApJ...338..902L, 1989ApJ...338..925L}; \citealt{2003PASJ...55..981H}). Among the several dark cloud cores in $\rho$ Ophiuchi, Oph-A is identified as one of the warmest, compared to the Oph-B and C regions. Several studies, e.g., the $Herschel$, $Spitzer$, and James Clerk Maxwell Telescope (JCMT) Gould Belt surveys (\citealt{2010A&A...518L.102A}; \citealt{2009ApJS..181..321E}; \citealt{2007PASP..119..855W}), include this region for various investigations of dust and gas properties.
This cloud complex is also widely studied in multi-wavelength imaging and polarimetry. Recent attempts were made to map magnetic fields in the Oph-A region using near-IR and sub-mm polarization measurements by \cite{2015ApJS..220...17K, 2018ApJ...859....4K} and far-IR measurements by \cite{2019ApJ...882..113S}, respectively. Oph-A is one of the best laboratories for understanding multi-band dust polarization in the context of high-energy radiation, providing an opportunity to investigate RATs in detail.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{f2.pdf}
\caption{2D histogram of the dust polarization degree and dust temperature at 89 $\mu$m (left panel) and 154 $\mu$m (right panel). The data are grouped into 52 bins; the gray lines show the weighted mean in each bin, and the error bars represent the standard deviation within the bin. The black dashed lines show the best fit of a piecewise linear function to the data. The maps of the dust temperature and of the polarization at 89 $\mu$m are smoothed to a FWHM of $13''.6$.}
\label{fig:hist2d_maps}
\end{figure*}
\subsection{Polarization maps} \label{sec:pol_map}
In this work, we use the archival FIR polarimetric data observed by SOFIA/HAWC+. These data sets are introduced in \cite{2019ApJ...882..113S}. The observations were made in 2017 using two bands of the HAWC+ instrument, namely C (89 $\mu$m) and D (154 $\mu$m). The angular resolutions are $7''.8$ and $13''.6$, respectively. The polarization degree maps in these bands are shown in Figure \ref{fig:polametric_maps}(a,b)\footnote{We smoothed the $7''.8$ band C and the $11''.4$ dust temperature maps to the $13''.6$ band D resolution using the python class \textsc{Gaussian2DKernel}.}. We select the common sky positions in which data are detected in both bands. The local polarization degree varies significantly across the $\rho$ Oph-A cloud, with a median value of 7.5$\%$ in band C and 5.0$\%$ in band D, as discussed in \cite{2019ApJ...882..113S}. Figure \ref{fig:polametric_maps}(a,b) shows a tight spatial correlation between the polarization degrees in the two bands, except in the southernmost area ($T_{\rm d} \simeq 25\,{\rm K}$), where the data at 89$\,\mu$m are more polarized than at 154$\,\mu$m. The reason for such a difference is beyond the scope of this work because we do not have enough information to investigate it quantitatively. However, a possible explanation could be that there are a warmer outer component (with a local temperature larger than $25\,{\rm K}$) and a colder inner component (with a local temperature smaller than $25\,{\rm K}$) along the line of sight (LOS), as proposed in \cite{2019ApJ...882..113S}. In the warmer component, which favors enhancing the shorter-wavelength polarization (band C), grains are well exposed to radiation, so the alignment is efficient, leading to a larger value of $P_{\rm C}(\%)$. On the contrary, the colder component along the LOS favors emission at longer wavelengths (band D), but grain alignment there is less effective due to shielding, so the value of $P_{\rm D}(\%)$ becomes smaller. A star symbol marks the high-mass star Oph S1.
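For concreteness, the beam matching described in the footnote can be sketched as follows; this is a minimal illustration, assuming the \textsc{astropy} convolution tools, a hypothetical pixel scale of $3''.4$, and a placeholder array standing in for the band C map (the kernel width follows from subtracting the beams in quadrature, $\sqrt{13.6^2-7.8^2}\simeq 11''.1$ of FWHM):
\begin{verbatim}
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve

PIX_SCALE = 3.4                    # arcsec per pixel (hypothetical)
FWHM_IN, FWHM_OUT = 7.8, 13.6      # band C and band D beams, arcsec

# FWHM of the Gaussian that degrades the input beam to the output beam:
fwhm_kernel = np.sqrt(FWHM_OUT**2 - FWHM_IN**2)
sigma_pix = fwhm_kernel / (2.0 * np.sqrt(2.0 * np.log(2.0))) / PIX_SCALE

band_c_map = np.random.random((64, 64))   # placeholder for the band C map
band_c_smoothed = convolve(band_c_map, Gaussian2DKernel(x_stddev=sigma_pix))
\end{verbatim}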
\subsection{Map of dust temperature and gas density}
We adopt the dust temperature ($T_{\rm d}$) and gas column density ($N$) maps of \cite{2019ApJ...882..113S}. These maps were generated by fitting a modified thermal spectral energy distribution (SED) to each pixel using the 70, 100, and 160$\,\mu$m \textit{Herschel}/PACS data (\citealt{2010A&A...518L...2P}), with the exponential index of the dust opacity fixed at 1.6. Figure \ref{fig:polametric_maps}(c,d) shows the gas density and dust temperature maps in the same region where HAWC+ detected data. The high-mass star Oph S1 warms up the surrounding environment, causing a large temperature gradient, i.e., from $\simeq 45\,{\rm K}$ near Oph S1 down to $\simeq 20\,{\rm K}$ at the edge of the cloud. On the contrary, the gas is densest at the edge of the map and becomes more diffuse toward Oph S1.
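A per-pixel fit of this kind can be illustrated as follows; this is a minimal sketch assuming optically thin modified blackbody emission, $I_\nu \propto N\,\nu^{1.6}B_\nu(T_{\rm d})$, with made-up intensities for a single pixel (the actual maps use calibrated PACS photometry and a full opacity law):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23           # SI constants
NU = C / (np.array([70.0, 100.0, 160.0]) * 1e-6)   # PACS bands, Hz

def modified_bb(nu, amp, Td, beta_op=1.6):
    """Optically thin greybody: amp * nu^beta_op * B_nu(Td)."""
    bnu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * Td))
    return amp * nu**beta_op * bnu

# Made-up intensities for one pixel (arbitrary units, 2% perturbation):
I_obs = modified_bb(NU, 1e-20, 25.0) * (1 + 0.02 * np.array([1, -1, 1]))
(amp_fit, Td_fit), _ = curve_fit(modified_bb, NU, I_obs, p0=(1e-20, 20.0))
print(Td_fit)   # recovered dust temperature; amp_fit traces N
\end{verbatim}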
\subsection{Dust polarization and temperature}
Figure \ref{fig:hist2d_maps} shows 2D histograms, made of 52 bins, of the dust polarization degree versus dust temperature in band C (left panel) and band D (right panel). They share the same features: (1) the polarization degree increases as the dust temperature increases up to $T_{\rm d} \simeq T_{\rm crit}$ (i.e., the positive-slope region), and (2) the polarization degree decreases at higher dust temperatures (i.e., the negative-slope region). In the positive-slope region, the polarization degree in band C is higher than in band D, while it is lower in the negative-slope region, as also shown by the fractional polarization ratio in Figure 6d of \cite{2019ApJ...882..113S}. In other words, the polarization degree at the shorter wavelength (89 $\mu$m) is higher than at the longer wavelength (154 $\mu$m) in the denser region (i.e., at the edge of the polarimetric map), while the opposite holds in the less dense region (close to the central star) of $\rho$ Oph-A.
Using the RAT theory, the spherical model of \cite{2019ApJ...882..113S} could explain the increase (decrease) of the $P_{\rm D}/P_{\rm C}$ ratio with respect to dust temperature (gas column density) in the dense ($N>10^{21.75}\rm cm^{-2}$) and cold ($T_{\rm d} \leq 32-34\,{\rm K}$) region (see their Figure 6). However, this model could not explain the observational trend in the more diffuse and hotter region. We fitted the data with a piecewise linear function\footnote{We used the python-package \textsc{pwlf} (piecewise linear fitting).}. The best fits (i.e., black dashed lines) show that the transition takes place at $T_{\rm crit} \simeq 25\,{\rm K}$ for band C and $\simeq 32\,{\rm K}$ for band D. From a statistical point of view, the reason for the low $T_{\rm crit}$ in band C is that there is an excess of the polarization degree at $T_{\rm d} \simeq 25\,{\rm K}$, as mentioned in Section \ref{sec:pol_map}, which makes the piecewise linear fit peak toward this value of $T_{\rm d}$. From our theoretical point of view, however, the transition from positive to negative slope at $\simeq 25\,{\rm K}$ is unlikely to be physical because it would require an extremely small tensile strength of the grains (see Section \ref{sec:compare}). In addition, the polarization ratio $P_{\rm D}/P_{\rm C}$ changes its slope at $T_{\rm d}\simeq 32-34\,{\rm K}$, as shown in \cite{2019ApJ...882..113S}.
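The piecewise fit can be reproduced with the same package; a minimal sketch with synthetic data standing in for the binned points of Figure \ref{fig:hist2d_maps} (the break near $32\,{\rm K}$ is put in by hand here, purely for illustration):
\begin{verbatim}
import numpy as np
import pwlf

# Synthetic binned data mimicking a rise up to ~32 K and a decline beyond:
Td = np.linspace(20.0, 45.0, 52)
P = np.where(Td < 32.0, 2.0 + 0.4 * (Td - 20.0), 6.8 - 0.3 * (Td - 32.0))
P = P + np.random.default_rng(0).normal(0.0, 0.3, Td.size)

model = pwlf.PiecewiseLinFit(Td, P)
breaks = model.fit(2)        # two linear segments -> one interior breakpoint
print(breaks[1])             # estimate of T_crit
print(model.predict(np.array([25.0, 35.0])))
\end{verbatim}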
\section{Modelling thermal dust polarization} \label{sec:model}
The multi-wavelength polarization model of thermal dust emission is described in detail in \cite{2020ApJ...896...44L}. The schematic of the model is illustrated in Figure \ref{fig:model_schem}. The radiative source (e.g., an O/B star) is denoted by a star symbol surrounded by a cloud. The radiation strength ($U$) decreases with distance into the cloud. In the following, we describe the model used to calculate the polarization of thermal dust emission from this cloud.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f3.pdf}
\caption{Schematic illustration of the model. A molecular cloud is irradiated by the central star, so that the radiation strength (equivalent to the dust temperature) decreases from $U_{0}$ to $U_{\rm ISRF}$. The color gradient indicates the increase of the gas density away from the central star.}
\label{fig:model_schem}
\end{figure}
\subsection{Fractional polarization of thermal emission}
Dust grains are heated by the radiation and re-emit in the thermal range. The fractional polarization of the thermal dust emission is the ratio of the polarized intensity ($I_{\rm pol}$) to the total emission intensity ($I_{\rm em}$), which yields
\begin{eqnarray} \label{eq:pol_degree}
P(\%) = 100\times \frac{I_{\rm pol}}{I_{\rm em}}.
\ena
Assuming a dust environment containing carbonaceous and silicate grains, the total emission intensity is given by
\begin{eqnarray}
\frac{I_{\rm em}(\lambda)}{N_{{\rm H}}} = \sum_{j=\rm sil,car} &&\int^{a_{\rm max}}_{a_{\rm min}} Q_{\rm ext}\pi a^{2} \nonumber\\
&\times&\int dT B_{\lambda}(T_{\rm d})\frac{dP}{dT}\frac{1}{n_{\rm H}}\frac{dn_{j}}{da}da.~~~
\ena
If silicate and carbon grains are separate populations, then, as paramagnetic grains, silicates can align with the ambient magnetic field while carbon grains cannot (\citealt{2016ApJ...831..159H}). Thus, the polarized intensity resulting from their alignment is given by
\begin{eqnarray}
\frac{I_{\rm pol}(\lambda)}{N_{\rm H}}= &&\int^{a_{\rm max}}_{a_{\rm min}} f(a)Q_{\rm pol}\pi a^{2} \nonumber\\
&\times&\int dT B_{\lambda}(T_{\rm d})\frac{dP}{dT}\frac{1}{n_{\rm H}}\frac{dn_{sil}}{da}da, ~~~
\ena
where $B_{\lambda}(T_{\rm d})$ is the black-body radiation at dust temperature $T_{\rm d}$, $dP/dT$ is the distribution of dust temperature, $f(a)$ is the alignment function, $Q_{\rm ext}$ is the extinction coefficient, $Q_{\rm pol}$ is the polarization coefficient, and $dn/da$ is the grain-size distribution. The dust temperature distribution depends on the grain size and radiation strength, and is computed with the DustEM code (\citealt{2011A&A...525A.103C}; see, e.g., Figure 8 in \citealt{2020ApJ...896...44L}). The extinction and polarization coefficients are computed with the DDSCAT model (\citealt{1994JOSAA..11.1491D, 2008JOSAA..25.2693D}; \citealt{2012OExpr..20.1247F}) for a prolate spheroidal grain shape with an axial ratio of 1/3.
If silicate and carbon grains are mixed together (e.g., \citealt{2013A&A...558A..62J}), as may happen in dense clouds after many cycles of photo-processing, coagulation, shattering, accretion, and erosion, carbon grains could be aligned with the ambient magnetic field and their thermal emission could be polarized. In the simplest case, assuming these grain populations have the same alignment parameters (i.e., $a_{\rm align}$, $f(a)$), the total polarized intensity is
\begin{eqnarray}
\frac{I_{\rm pol}(\lambda)}{N_{\rm H}} = \sum_{j=\rm sil,car} &&\int^{a_{\rm max}}_{a_{\rm min}} f(a)Q_{\rm pol}\pi a^{2} \nonumber\\
&\times&\int dT B_{\lambda}(T_{\rm d})\frac{dP}{dT}\frac{1}{n_{\rm H}}\frac{dn_{j}}{da}da.
\ena
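These integrals are straightforward to evaluate numerically. Below is a minimal single-temperature sketch in which the full temperature distribution $dP/dT$ (from DustEM) is collapsed to the equilibrium $T_{\rm d}(a)$, and the DDSCAT efficiencies are replaced by the assumed constants $Q_{\rm ext}=1$ and $Q_{\rm pol}/Q_{\rm ext}=0.1$; it only illustrates how the size distribution and the alignment function enter $P(\%)$:
\begin{verbatim}
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16    # cgs constants

def planck_lambda(lam_cm, T):
    """Planck function B_lambda(T), cgs."""
    return (2.0 * H * C**2 / lam_cm**5
            / np.expm1(H * C / (lam_cm * KB * T)))

def pol_degree(lam_um, U=10.0, beta=-3.5, a_align=0.02e-4,
               a_max=0.1e-4, a_min=1e-7, f_min=1e-3, f_max=1.0):
    """P(%) for a power-law dn/da ~ a^beta of aligned silicates."""
    lam = lam_um * 1e-4                              # cm
    a = np.logspace(np.log10(a_min), np.log10(a_max), 400)
    Td = 16.4 * (a / 1e-5)**(1.0 / 15.0) * U**(1.0 / 6.0)
    f = f_min + (f_max - f_min) * (1.0 - np.exp(-(0.5 * a / a_align)**3))
    kernel = np.pi * a**2 * planck_lambda(lam, Td) * a**beta
    # trapezoidal integration over grain size; the constant C cancels:
    I_em = np.sum(0.5 * (kernel[1:] + kernel[:-1]) * np.diff(a))
    kpol = 0.1 * f * kernel                  # Q_pol/Q_ext = 0.1 (assumed)
    I_pol = np.sum(0.5 * (kpol[1:] + kpol[:-1]) * np.diff(a))
    return 100.0 * I_pol / I_em

print(pol_degree(89.0), pol_degree(154.0))
\end{verbatim}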
\subsection{Radiative torques disruption and grain-size distribution} \label{sec:RATD}
Let us consider a radiation field with energy density $u_{\rm rad} (\,{\rm erg} \,{\rm cm}^{-3})$, mean wavelength $\bar{\lambda}$, and anisotropy degree $\gamma$. Its strength is defined by the dimensionless parameter $U=u_{\rm rad}/u_{\rm ISRF}$, where $u_{\rm ISRF}=8.64\times 10^{-13}\,{\rm erg}\,{\rm cm}^{-3}$ is the radiation energy density of the interstellar radiation field (ISRF) in the solar neighborhood (\citealt{1983A&A...128..212M}). This radiation field can spin a dust grain of size $a$ and density $\rho$ up to the rotational rate\footnote{Note that $\omega_{\rm RAT}/\omega_{\rm T} \sim 1/(1+F_{\rm IR})$, not $\sim (1+F_{\rm IR})$ as in the typo in \cite{2020ApJ...896...44L}, Equation (3).}
\begin{eqnarray} \label{eq:omega_RAT}
\frac{\omega_{\rm RAT}}{\omega_{\rm T}} \simeq &&2.9\times 10^{2} \hat{\rho}^{0.5} \gamma a^{3.2}_{-5} U\left(\frac{10^{3} \rm{cm^{-3}}}{n_{{\rm H}}}\right)\left(\frac{\bar{\lambda}}{0.5 \rm{\mu m}}\right)^{-1.7} \nonumber\\
&\times& \left(\frac{20 \,{\rm K}}{T_{\rm gas}}\right)\left(\frac{1}{1+F_{\rm IR}}\right),
\ena
where $a_{-5}=a/(10^{-5} \rm cm)$, $\hat{\rho}=\rho/(3 \rm \,g\, cm^{-3})$, and $n_{{\rm H}}, T_{\rm gas}$ are the gas density and temperature. $\omega_{\rm T}=(k_{\rm B}T_{\rm gas}/I)^{0.5}$ is the thermal angular velocity, with $I=8\pi \rho a^{5}/15$ the moment of inertia of the grain. A rotating grain is damped by gas collisions and IR emission (see \citealt{2019ApJ...876...13H}). The dimensionless parameter $F_{\rm IR}$, which describes the ratio of the IR damping to the collisional damping,\footnote{The factor is corrected to be 0.4 from Equation (4) in \cite{2020ApJ...896...44L}.} is defined as
\begin{eqnarray}
F_{\rm IR} \simeq 0.4\left(\frac{U^{2/3}}{a_{-5}}\right)\left(\frac{30 \rm{cm^{-3}}}{n_{{\rm H}}}\right)\left(\frac{100\,{\rm K}}{T_{\rm gas}}\right)^{1/2}.
\ena
A grain rotating at angular velocity $\omega$ develops a tensile stress $S=\rho \omega^{2}a^{2}/4$ on the materials making up the grain. Thus, the maximum rotational velocity that a grain of tensile strength $S_{\rm max}$ can withstand is:
\begin{eqnarray} \label{eq:omega_crit}
\omega_{\rm crit} = \frac{2}{a}\left(\frac{S_{\rm max}}{\rho}\right)^{1/2} \simeq \frac{3.6\times 10^{8}}{a_{-5}}S^{1/2}_{\rm max,7}\hat{\rho}^{-1/2},
\ena
where $S_{\rm max,7}=S_{\rm max}/(10^{7} \rm erg \,{\rm cm}^{-3})$.
One can see from Equation (\ref{eq:omega_RAT}) that the stronger the radiation field and the larger the grain size, the faster the grain rotates. A strong radiation field can thus generate such fast rotation that the induced stress disrupts large grains spontaneously. This disruption mechanism, named RATD, was discovered by \cite{2019NatAs...3..766H}. From Equations (\ref{eq:omega_RAT}) and (\ref{eq:omega_crit}), we can derive the critical size above which grains are disrupted:
\begin{eqnarray}
\left(\frac{a_{\rm disr}}{0.1\,\rm \mu m}\right)^{2.7} \simeq 5.1\gamma^{-1}_{0.1}U^{-1/3}\bar{\lambda}^{1.7}_{0.5}S^{1/2}_{\rm max,7},
\ena
where $\bar{\lambda}_{0.5} = \bar{\lambda}/(0.5\,\mu \rm m)$.
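The disruption-size relation above can be read off numerically; a minimal sketch (with the fiducial gas parameters absorbed into the prefactor, as written above):
\begin{verbatim}
import numpy as np

def a_disr_um(U, Smax=1e7, gamma=1.0, lam_bar_um=0.3):
    """Disruption size a_disr in micron, from the relation above."""
    rhs = (5.1 / (gamma / 0.1) * U**(-1.0 / 3.0)
           * (lam_bar_um / 0.5)**1.7 * (Smax / 1e7)**0.5)
    return 0.1 * rhs**(1.0 / 2.7)

# A larger U (hotter dust) disrupts smaller grains; compact grains
# (larger Smax) survive to larger sizes:
for U in (1.0, 10.0, 100.0):
    print(U, a_disr_um(U, Smax=1e7), a_disr_um(U, Smax=1e9))
\end{verbatim}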
Dust grains are disrupted efficiently (for $a$ greater than $a_{\rm disr}$) in stronger radiation fields. The disruption of dust grains by RATD can modify the grain-size distribution. Since only the largest grains are affected by the RATD mechanism, RATD determines the upper limit of the size distribution. The disruption is thus expected to enhance the abundance of smaller grains, resulting in a steeper grain-size distribution than in the standard ISM. In the particular case of the $\rho$ Oph-A cloud, \cite{2015A&A...578A.131L} furthermore showed that the power index of the grain-size distribution varies across the cloud. In this work, we adopt a power-law grain-size distribution for both the original large grains and the smaller grains produced by disruption, with a power-law index $\beta$:
\begin{eqnarray}
\frac{1}{n_{{\rm H}}}\frac{dn_{\rm sil,car}}{da}=C_{\rm sil,car}a^{\beta} \ \ \ \rm{(a_{\rm min}\leq a \leq a_{\rm max})},
\ena
where $C_{\rm sil}$ and $C_{\rm car}$ are the normalization constants for silicate and carbonaceous grains, respectively. The smallest grain size is chosen as $a_{\rm min}=10~\AA$, while the maximum size is constrained by the RATD mechanism (i.e., $a_{\rm max}=a_{\rm disr}$). The normalization constants are determined through the dust-to-gas mass ratio $M_{\rm d/g}$ (see \citealt{2020arXiv200906958C}; \citealt{2020ApJ...893..138T}) as
\begin{eqnarray}
\sum_{\rm j=sil,car} C_{j}\rho_{j} &=& \frac{(4+\beta)M_{\rm d/g} m_{\rm gas}}{\frac{4}{3}\pi (a^{4+\beta}_{\rm max}-a^{4+\beta}_{\rm min})} ~~~~~~~\rm{for\ \beta \neq -4} \\ \nonumber
\sum_{\rm j=sil,car} C_{j}\rho_{j} &=& \frac{M_{\rm d/g}m_{\rm gas}}{\frac{4}{3}\pi(\ln a_{\rm max} - \ln a_{\rm min})} ~~~\rm{for\ \beta = -4},
\ena
where $C_{\rm sil}/C_{\rm car}$ is adopted as 1.12 (\citealt{1984ApJ...285...89D}), and the dust-to-gas mass ratio $M_{\rm d/g}$ is fixed at 0.01 throughout this work. The latter assumption is close to the values derived from X-ray observations, $\simeq 0.011 - 0.0125$ (\citealt{2003A&A...408..581V}), or from gas tracers, $\simeq 0.0114$ (\citealt{2015A&A...578A.131L}).
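The normalization above can be implemented directly; a minimal sketch, where $m_{\rm gas}\simeq m_{\rm H}$ and the bulk densities ($3.3$ and $2.2\,{\rm g}\,{\rm cm}^{-3}$ for silicate and carbon, respectively) are assumed values:
\begin{verbatim}
import numpy as np

M_H = 1.674e-24               # g, proxy for m_gas (assumed)
RHO_SIL, RHO_CAR = 3.3, 2.2   # bulk densities, g cm^-3 (assumed)
RATIO = 1.12                  # C_sil / C_car (Draine & Lee 1984)

def norm_constants(beta, a_max_cm, a_min_cm=1e-7, Mdg=0.01):
    """Return (C_sil, C_car) for dn/da = C a^beta."""
    if abs(beta + 4.0) > 1e-12:
        denom = (4.0 / 3.0) * np.pi * (a_max_cm**(4 + beta)
                                       - a_min_cm**(4 + beta))
        total = (4.0 + beta) * Mdg * M_H / denom
    else:
        denom = (4.0 / 3.0) * np.pi * np.log(a_max_cm / a_min_cm)
        total = Mdg * M_H / denom
    # total = C_sil*rho_sil + C_car*rho_car with C_sil = RATIO*C_car:
    C_car = total / (RATIO * RHO_SIL + RHO_CAR)
    return RATIO * C_car, C_car

print(norm_constants(beta=-3.5, a_max_cm=0.25e-4))
\end{verbatim}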
\subsection{Grain alignment by RATs}
An anisotropic radiation field can align dust grains
via the RATs mechanism (see \citealt{2007JQSRT.106..225L}; \citealt{2015ARA&A..53..501A} for reviews). In the unified theory
of RAT alignment, grains are first spun up to suprathermal rotation and then driven into alignment with the ambient magnetic field by superparamagnetic relaxation within grains having iron inclusions (\citealt{2016ApJ...831..159H}). Therefore, grains are only efficiently aligned
when they can rotate suprathermally. The alignment size ($a_{\rm align}$) is determined by the condition $\omega_{\rm RAT}(a_{\rm align}) = 3\omega_{\rm T}$, as in \cite{2008MNRAS.388..117H}. From Equation (\ref{eq:omega_RAT}), we have:
\begin{eqnarray}
a_{\rm align} \simeq &&0.024\hat{\rho}^{-5/32} \gamma^{-5/16} U^{-5/16} \left(\frac{10^{3} \,{\rm cm}^{-3}}{n_{{\rm H}}}\right)^{-5/16} \\ \nonumber
&&\times \left(\frac{\bar{\lambda}}{0.5\rm \mu m}\right)^{17/32} \left(\frac{20\,{\rm K}}{T_{\rm gas}}\right)^{-5/16}\left(\frac{1}{1+F_{\rm IR}}\right)^{-5/16} ~\rm{\mu m},
\ena
which implies $a_{\rm align} \sim 0.02\,\mu$m for a dense ISM with $\gamma=1.0$, $U=1$, and $\bar{\lambda}=0.3\,\mu$m. In this work, we adopt the alignment function as in \cite{2020ApJ...896...44L}:
\begin{eqnarray} \label{eq:fa}
f(a)=f_{\rm min}+(f_{\rm max}-f_{\rm min})\left\{1-\exp{\left[-\left(\frac{0.5a}{a_{\rm align}}\right)^{3}\right]}\right\}.~~~
\ena
For grains with $a\ll a_{\rm align}$, the alignment is minimal, with $f(a)\simeq f_{\rm min}=10^{-3}$, while the alignment degree reaches its maximum $f_{\rm max}$ for $a\gg a_{\rm align}$. This parametric function agrees with the results obtained from inverse modeling of interstellar polarization data (\citealt{2009ApJ...696....1D}; \citealt{2014ApJ...790....6H}). For a model in which only silicate grains are aligned, modeling requires $f_{\rm max}=1$, while for a mixture model in which both carbon and silicate grains are aligned, it requires $f_{\rm max}<1$ (\citealt{2009ApJ...696....1D}; \citealt{2018A&A...610A..16G}).
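Solving $\omega_{\rm RAT}(a_{\rm align})=3\omega_{\rm T}$ numerically reproduces the estimate quoted above; a minimal sketch for $\hat{\rho}=1$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def omega_ratio(a_um, U=1.0, nH=1e3, Tgas=20.0,
                gamma=1.0, lam_bar_um=0.3):
    """omega_RAT / omega_T, with F_IR as given above, for rho_hat = 1."""
    a5 = a_um / 0.1                  # a_-5 = a / (1e-5 cm)
    FIR = 0.4 * U**(2.0 / 3.0) / a5 * (30.0 / nH) * (100.0 / Tgas)**0.5
    return (2.9e2 * gamma * a5**3.2 * U * (1e3 / nH)
            * (lam_bar_um / 0.5)**(-1.7) * (20.0 / Tgas) / (1.0 + FIR))

a_align = brentq(lambda a: omega_ratio(a) - 3.0, 1e-3, 1.0)
print(a_align)    # ~0.02 micron, as quoted in the text

# Alignment function above, with f_max = 1:
f = lambda a_um: 1e-3 + (1 - 1e-3) * (1 - np.exp(-(0.5 * a_um / a_align)**3))
\end{verbatim}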
\section{Application to $\rho$ Oph-A} \label{sec:compare}
\subsection{Numerical setup}
As discussed in Section \ref{sec:model}, the parameters of the model include the gas properties: the gas number density ($n_{{\rm H}}$) and gas temperature ($T_{\rm gas}$); the dust properties: size ($a$), shape, internal structure (i.e., tensile strength $S_{\rm max}$), and the power index of the size distribution ($\beta$); and the ambient properties: the radiation field strength $U$ (which is equivalent to the dust temperature $T_{\rm d}$), the mean wavelength ($\bar{\lambda}$), and the anisotropy degree ($\gamma$) of the radiation field.
Figure \ref{fig:polametric_maps}c shows that the gas is denser at the edge of the polarimetric map area and more diffuse close to the Oph S1 star. We derive the relation between the gas number density and the dust temperature by assuming that the dust temperature decreases linearly from $45\,{\rm K}$ down to $20\,{\rm K}$ at the edge of the polarimetric map area, with the gas number density calculated from a spherical model as in Section 3.5 of \cite{2019ApJ...882..113S}. This relation is shown in Figure \ref{fig:nH_Td}. Throughout this work, we fix the gas temperature at $T_{\rm gas}=20$ K, which is fairly common for dense molecular clouds.
In a dense molecular cloud, large grains are expected to be present thanks to the coagulation process. We set the initial maximum grain size to $1\,\mu$m; the RATD mechanism then constrains the actual maximum value. The smallest grain size is kept fixed at $10\,\AA$. The internal structure of grains is characterized by their tensile strength (e.g., large composite grains have $S_{\rm max}\simeq 10^{7}\,{\rm erg} \,{\rm cm}^{-3}$; stronger grains have a higher value of $S_{\rm max}$), which is a free parameter. The grain-size distribution could change across the $\rho$ Oph-A cloud (\citealt{2015A&A...578A.131L}), so we vary the power index $\beta$ as another free parameter. In our model, the local value of the radiation strength is determined from the dust temperature shown in Figure \ref{fig:polametric_maps}d via the relation $T_{\rm d}=16.4 a^{1/15}_{-5}U^{1/6}$K (\citealt{2011piim.book.....D}). The mean wavelength is $\bar{\lambda}\simeq 0.3\,\mu$m, corresponding to a B-like star with $T_{\ast}\simeq 1.5\times 10^{4}$K. The anisotropy degree is $\gamma= 1$ for the unidirectional radiation field from a nearby star.
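In practice, each observed $T_{\rm d}$ pixel is converted to a radiation strength by inverting the quoted relation; a minimal helper, evaluated for a representative grain size of $0.1\,\mu$m:
\begin{verbatim}
def U_from_Td(Td_K, a_um=0.1):
    """Invert T_d = 16.4 a_{-5}^{1/15} U^{1/6} K (Draine 2011)."""
    a5 = a_um / 0.1
    return (Td_K / (16.4 * a5**(1.0 / 15.0)))**6

print(U_from_Td(20.0), U_from_Td(45.0))   # ~3.3 and ~4.3e2
\end{verbatim}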
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{f4.pdf}
\caption{Relation between the local gas number density and the local dust temperature.}
\label{fig:nH_Td}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{f5a.pdf}
\includegraphics[width=0.45\textwidth]{f5b.pdf}
\caption{Polarization spectrum of thermal dust emission calculated from grain alignment by RATs only (without RATD) for different grain temperatures $T_{\rm d}$, assuming a size distribution power index $\beta=-3.5$ (left panel) and $\beta=-4.0$ (right panel). Higher dust temperatures result in a higher polarization degree and a smaller peak wavelength of the spectrum. A steeper size distribution leads to a lower polarization degree (right panel). Only silicate grains are assumed to be aligned, and carbonaceous grains are randomly oriented.}
\label{fig:disruptionoff}
\includegraphics[width=0.45\textwidth]{f6a.pdf}
\includegraphics[width=0.45\textwidth]{f6b.pdf}
\caption{Polarization spectrum of thermal dust emission calculated with both grain alignment and disruption by RATs for two values of the tensile strength. The RATD effect decreases the polarization degree for $T_{\rm d}>33.7\ \,{\rm K}$ (left) and for $T_{\rm d}>37.9\,{\rm K}$ (right). The decline is more substantial for composite grains (left panel) than for more compact grains (right panel).}
\label{fig:disruptionon}
\includegraphics[width=0.46\textwidth]{f7a.pdf}
\includegraphics[width=0.45\textwidth]{f7b.pdf}
\caption{Same as Figure \ref{fig:disruptionon} (left panel) but for a mixture of silicate and carbon grains aligned with $f_{\rm max}=0.3$ (left panel) and $f_{\rm max}=0.5$ (right panel). The disruption effect also sets in once $T_{\rm d}>34\,{\rm K}$; however, the spectrum shows a flat feature. A higher $f_{\rm max}$ leads to a higher polarization degree.}
\label{fig:disruptionon_silcar}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{f8a.pdf}
\includegraphics[width=0.45\textwidth]{f8b.pdf}
\caption{Variation of the polarization degree with the grain temperature, computed at 89 $\mu$m (left panel) and 154 $\mu$m (right panel) with and without the RATD effect for a given grain-size distribution. Without RATD, the polarization degree monotonically increases as the dust temperature increases (dotted blue line). With RATD, the polarization degree first increases and then decreases when the dust temperature exceeds some critical value. This value (labeled A, B, and C) is lower for weaker grains and higher for stronger grains. Only silicate grains are assumed to be aligned.}
\label{fig:pol_Td_Smaxfixed}
\includegraphics[width=0.45\textwidth]{f9a.pdf}
\includegraphics[width=0.45\textwidth]{f9b.pdf}
\caption{Effect of the size-dependent tensile strength on the fractional polarization emission of silicate grains. The dotted blue line and the dashed orange line are computed for a fixed $S_{\rm max}$ as in Figure \ref{fig:pol_Td_Smaxfixed}. The solid black line is the model prediction for a size-dependent $S_{\rm max}$ (see text for details).}
\label{fig:pol_Td_Smaxvaried}
\includegraphics[width=0.45\textwidth]{f10a.pdf}
\includegraphics[width=0.45\textwidth]{f10b.pdf}
\caption{Same as Figure \ref{fig:pol_Td_Smaxfixed}, but both silicate and carbon grains are assumed to be aligned with $f_{\rm max}=0.5$. The trend and the critical temperature are the same but the decline is less steep and the polarization amplitude is higher than in the case of silicate grains alone.}
\label{fig:pol_Td_Smaxfixed_carsil}
\end{figure*}
\subsection{Numerical results} \label{sec:numerical_results}
Here, we show the numerical results for the multi-wavelength polarization degree of thermal dust emission using the RAT theory in two cases: without disruption (classical RATs) and with disruption, for comparison.
Figure \ref{fig:disruptionoff} shows the polarization spectra obtained with grain alignment by RATs only (without RATD), computed for several values of the dust temperature and different grain-size distributions, i.e., $\beta=-3.5$ (left panel) and $\beta=-4.0$ (right panel). One can see that (1) the polarization degree increases as the dust temperature increases, and (2) the polarization degree is lower for lower values of $\beta$ at the same $T_{\rm d}$. The first effect arises because a higher dust temperature (equivalent to a higher radiation strength) produces larger torques acting on dust grains, which decreases the alignment size $a_{\rm align}$ and thus increases the polarization degree of dust emission. Moreover, for a lower $\beta$, the dust mass contained in large grains is smaller, decreasing the polarization degree of the thermal dust emission, which is dominantly produced by aligned large grains. This explains the second effect.
Figure \ref{fig:disruptionon} shows the polarization spectra obtained with both grain alignment and disruption by RATs (with RATD), assuming different values of the tensile strength, i.e., $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ (left panel) and $S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$ (right panel). In the left panel, the low-$T_{\rm d}$ curves are the same as in Figure \ref{fig:disruptionoff} (blue, orange, green, and dashed-dotted red lines). However, differing from Figure \ref{fig:disruptionoff}, the polarization degree decreases as the dust temperature increases beyond a critical value (i.e., $\simeq 34\,{\rm K}$; the dotted violet and dashed brown lines). A higher $S_{\rm max}$ shifts the disruption to a higher critical dust temperature (i.e., $\simeq 38\,{\rm K}$; the dashed brown line). The reason is that dust grains exposed to strong radiation (where the dust temperature is high, see Figure \ref{fig:polametric_maps}d) can be spun up extremely fast by strong radiative torques while the damping is inefficient (because of the low gas density, Figure \ref{fig:polametric_maps}c), resulting in radiative torque disruption (RATD) as described in Section \ref{sec:RATD}. For $T_{\rm d}$ lower than the critical temperature, on the contrary, the radiative torques are weaker and the damping process is more substantial (because the gas is denser), so that RATD cannot occur, and the results are then the same as for the classical RAT calculations.
The disruption leads to a drop in the polarization degree. The critical temperature above which RATD occurs, and the depth of the decline, depend on the internal structure of the grains as controlled by $S_{\rm max}$. Composite grains ($S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$; Figure \ref{fig:disruptionon}, left panel) are more easily disrupted than compact grains ($S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$; Figure \ref{fig:disruptionon}, right panel), resulting in a more significant decrease of the polarization degree.
Figure \ref{fig:disruptionon_silcar} shows the polarization spectrum for the case of mixed silicate and carbon grains in which both grain populations are aligned by RATs. Similar to Figure \ref{fig:disruptionon}, the disruption occurs for $T_{\rm d}>34\,{\rm K}$. In this case, the spectrum shows an increase followed by a plateau, which differs from Figure \ref{fig:disruptionon}. The reason is that the polarization degree is the ratio of the polarized intensity ($I_{\rm pol}$) to the total intensity ($I_{\rm em}$) (Equation \ref{eq:pol_degree}). Since the $T_{\rm d}$ of silicate grains is lower than that of carbon grains, their spectral slopes differ from each other. When only silicate grains are aligned, the different spectral slopes of $I_{\rm pol}$ and $I_{\rm em}$ result in a sloped polarization spectrum (see, e.g., Figure \ref{fig:disruptionon}). When both silicate and carbon grains are aligned, $I_{\rm pol}$ and $I_{\rm em}$ differ only by the degree of grain alignment, which results in a flat spectrum. The degree of grain alignment is set by $f_{\rm max}$ (Equation \ref{eq:fa}). For a combination of carbon and silicate grains, imperfectly aligned grains ($f_{\rm max}<1$) can reproduce observations (see, e.g., \citealt{2009ApJ...696....1D}; \citealt{2018A&A...610A..16G}). Grains with a higher value of $f_{\rm max}$ (right panel) produce more polarized thermal emission than those with a lower value of $f_{\rm max}$ (left panel).
Figure \ref{fig:pol_Td_Smaxfixed} shows the polarization degree at 89 $\mu$m (left panel) and 154 $\mu$m (right panel) with respect to the dust temperature. In the case without RATD (dotted lines), the polarization degree first increases rapidly with increasing dust temperature and then changes slowly (as shown in Figure \ref{fig:disruptionoff}). Accounting for RATD, the polarization degree first increases with $T_{\rm d}$ and then rapidly declines once the dust temperature exceeds a critical value, which depends on the grains' tensile strength as shown in Figure \ref{fig:disruptionon}. The critical dust temperature is lower for weaker grains (i.e., lower values of $S_{\rm max}$) because of more effective disruption, which leads to a deeper decrease of the polarization degree in comparison with stronger grains.
Above, we assumed that all grains have the same tensile strength ($S_{\rm max}$). However, the tensile strength of composite grains scales with the radius of the monomers as $a_{p}^{-2}$ (see \citealt{2019ApJ...876...13H} for a detailed demonstration), which implies that large grains (comprising many monomers) are more breakable than smaller grains, which presumably have a compact structure. As an example, we set $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ for all grains with size $a\geq 0.1\,\mu$m and $S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$ for smaller grains. The results are shown as the black solid lines in Figure \ref{fig:pol_Td_Smaxvaried}. The polarization degree again increases and then decreases with dust temperature. However, for $T_{\rm d}>T_{\rm crit}$ (i.e., beyond the position B), its amplitude is higher than in the case of fixed $S_{\rm max}$. The reason is as follows. When RATD does not occur, the polarization is higher for higher dust temperature/stronger radiation. When the dust temperature is just enough for RATD to occur, the disruption mostly affects the largest grains (i.e., those with low $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ in this example), so that the curve follows the case of fixed $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ (e.g., the BB1 slope). As the dust temperature increases further, RATD can affect smaller grains (i.e., higher $S_{\rm max}$). Because the decline of the polarization is smaller for higher $S_{\rm max}$, there is a short increasing interval in the polarization (see the B1, B2 segment). Finally, once RATD only affects ``strong'' grains, the trend of the polarization follows the fixed $S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$ case, as shown in Figure \ref{fig:pol_Td_Smaxfixed}.
Figure \ref{fig:pol_Td_Smaxfixed_carsil} shows the polarization degree of thermal dust as a function of the grain temperature, assuming that both carbon and silicate grains are aligned. The results generally show that the polarization degree drops at the same critical dust temperature as in Figure \ref{fig:pol_Td_Smaxfixed}, in which only silicate grains are aligned. However, the mixed grain model results in a higher polarization, as well as a shallower decline than in the case of silicate grains alone, due to the contribution of aligned carbon grains. Using the $T_{\rm d}-n_{{\rm H}}$ relation, we varied the value of $n_{{\rm H}}$ by 10$\%$ but did not see a significant change (i.e., the correlation coefficient is $\simeq 0.99$). However, a different $n_{{\rm H}}-T_{\rm d}$ relation could have a more significant effect.
\subsection{Interpretation of observations}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{f11.pdf}
\caption{Comparison of the polarization degree of dust emission from our models with observations at 89 $\mu$m (left panel) and at 154 $\mu$m (right panel). Background colored points are the observed polarization degrees (Figure \ref{fig:hist2d_maps}). Colored lines are the models for different values of the power index $\beta$. The dashed line shows the results without RATD, while the solid lines show the results with RATD. The tensile strength is $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$. Only silicate grains are aligned, and carbon grains are randomly oriented.}
\label{fig:fits_obs_Smax1e7}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{f12.pdf}
\caption{Similar to Figure \ref{fig:fits_obs_Smax1e7} but for a combination of aligned carbonaceous and silicate grains. The same range of $\beta$ matches the observations better with a degree of alignment $f_{\rm max}=0.35$.}
\label{fig:fits_obs_Smax1e7_carsil}
\end{figure*}
The critical dust temperature $T_{\rm crit}\simeq 30-34\,{\rm K}$, above which the polarized thermal dust emission drops for $S_{\rm max}=10^{6}-10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ (see Figures \ref{fig:pol_Td_Smaxfixed}, \ref{fig:pol_Td_Smaxfixed_carsil}), is consistent with the observations (see Figure \ref{fig:hist2d_maps}); therefore, Figure \ref{fig:fits_obs_Smax1e7} shows only the numerical results for $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$, with a variation of the silicate grain-size distribution power index $\beta$, overlaid on the observational data. For illustration, we also show the results from the RAT model without the disruption effect (dashed line), which we denote as the classical RAT theory. Since the RAT theory implies that a stronger radiation field exerts larger torques on grains, resulting in higher polarization, the classical RAT model can only produce an increase of the dust polarization degree with dust temperature and fails to explain its decrease beyond $T_{\rm crit}$ (as discussed in Section \ref{sec:numerical_results}). When the rotational disruption mechanism is incorporated into RATs (solid lines), the model can reproduce both the increasing and decreasing features of the observations. For $T_{\rm d}<T_{\rm crit}$, the disruption does not proceed; hence, the model behaves exactly as classical RATs, which accounts for the increase of the polarization degree. For $T_{\rm d}\geq T_{\rm crit}$, on the contrary, the disruption occurs, so that large grains are disrupted into many smaller fragments. The enhancement of smaller grains causes a decrease in the polarization degree at these FIR wavelengths.
Furthermore, the different solid lines correspond to different values of the grain-size distribution power index $\beta$. In the case of silicate grains alone, a simple $\chi^{2}$ calculation, as in Table \ref{tab:chi2}, shows that the minimum occurs at $\beta \simeq -4.0$ for the observational data at 89 $\mu$m, while it is slightly steeper, $\beta \simeq -4.1$, for the 154 $\mu$m data. Hence, the slope of the size distribution is steeper than the MRN size distribution for the standard interstellar medium (\citealt{1977ApJ...217..425M}), which is evidence of the enhancement of small grains by RATD. The 154 $\mu$m observations probe different (more embedded) layers of $\rho$ Oph-A than the 89 $\mu$m observations do; thereby the size distribution could be slightly different. Polarimetric data at longer wavelengths (e.g., the 850 $\mu$m JCMT/SCUBA-2 observations, see \citealt{2018ApJ...859....4K}), which trace larger grains, are desirable for a more comprehensive picture.
Figure \ref{fig:fits_obs_Smax1e7_carsil} shows the comparison with observations for a mixture of carbon and silicate grains. As shown in Section \ref{sec:numerical_results}, both the grain-size distribution ($\beta$) and the degree of alignment ($f_{\rm max}$) control the amplitude of the polarization degree, but they do not affect the trend of the spectrum. We found that the same range of $\beta$ as in Figure \ref{fig:fits_obs_Smax1e7} also nicely fits the observational trend with $f_{\rm max}\simeq 0.35$. In this case, the $\chi^{2}$ calculation in Table \ref{tab:chi2} indicates that $\beta \simeq -4.0$ and $\beta \simeq -4.1$ also give the minimum $\chi^{2}$ for the observed data at $89\,\mu$m and $154\,\mu$m, respectively.
\begin{table}
\centering
\caption{$\chi^{2}$ of the models with only silicate grains aligned (Figure \ref{fig:fits_obs_Smax1e7}) and with a combination of aligned carbonaceous and silicate grains (Figure \ref{fig:fits_obs_Smax1e7_carsil}) compared to the observations, computed as}
\label{tab:chi2}
\begin{tabular}{ccc|cc}
\multicolumn{5}{c}{$\chi^{2}=\frac{1}{N}\sum^{N}_{i=1} (P^{i}_{\rm obs} - P_{\rm mod})^{2}/P^{i}_{\rm obs}$} \\
\multicolumn{5}{c}{with $N$ the number of data points} \\
\\
\hline
{}&\multicolumn{2}{c|}{$\chi^{2}$ (89$\,\mu$m)} & \multicolumn{2}{c}{$\chi^{2}$ (154$\,\mu$m)} \\
$\beta$ & sil ($f_{\rm max}=1$) & car+sil ($f_{\rm max}=0.35$) & sil ($f_{\rm max}=1$) & car+sil ($f_{\rm max}=0.35$) \\
\hline
-3.5 & 4.99 & 3.41 & 10.62 & 7.40 \\
-3.6 & 3.45 & 2.65 & 7.60 & 5.69 \\
-3.7 & 2.36 & 2.03 & 5.12 & 4.15 \\
-3.8 & 1.68 & 1.59 & 3.25 & 2.86 \\
-3.9 & 1.37 & 1.35 & 1.98 & 1.89 \\
-4.0 & 1.34 & 1.31 & 1.26 & 1.28 \\
-4.1 & 1.53 & 1.44 & 1.01 & 1.02 \\
-4.2 & 1.86 & 1.71 & 1.09 & 1.04 \\
-4.3 & 2.26 & 2.08 & 1.41 & 1.28 \\
-4.4 & 2.71 & 2.50 & 1.84 & 1.67 \\
-4.5 & 3.15 & 2.94 & 2.33 & 2.12 \\
\hline
\end{tabular}
\end{table}
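The table metric is straightforward to evaluate; a minimal sketch with made-up binned polarization degrees (in percent):
\begin{verbatim}
import numpy as np

def chi2(P_obs, P_mod):
    """Mean of (P_obs - P_mod)^2 / P_obs over the N data points."""
    P_obs = np.asarray(P_obs, dtype=float)
    return np.mean((P_obs - P_mod)**2 / P_obs)

print(chi2([7.1, 6.5, 5.9], 6.4))   # toy example
\end{verbatim}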
\subsection{Limitations of the model}
Our model's primary and most sensitive input parameters are the local gas density and the local dust temperature. The first controls the damping of the rotating grains, while the second sets their rotation rate. The value of the gas density is derived from a spherical model, whereas the value of the dust temperature is adopted from observations. Therefore, our results contain uncertainties, and we would like to address here the main limitations of our model. First, the adopted value of the dust temperature is, in fact, the projection on the plane of the sky; the actual value could be higher. Second, the dust temperature and gas density maps are derived from only three FIR bands of \textit{Herschel}/PACS ($70\,\mu$m, $100\,\mu$m, and $160\,\mu$m). The derivation could be more accurate if (sub)millimeter and radio bands were taken into account, as was done in \cite{2019ApJ...872..187C}. However, we expect that accounting for local variations of the dust temperature and gas number density could explain the observational scatter, but should not change the trend or our conclusions.
Because our main input parameters are the local values, our prescription will be easy to incorporate into more elaborate models that have better physical treatments for the gas and dust properties, such as 3D radiative dust modeling codes (e.g., \citealt{2012ascl.soft02015D}; \citealt{2015A&A...578A.131L}).
Finally, we note that the magnetic field geometry is assumed not to vary along the lines of sight toward $\rho$ Oph-A in the modeling. The effect of a turbulent magnetic field would reduce the predicted polarization degree, but the trend of $P(\%)$ vs. $T_{\rm d}$ would not be affected. Indeed, the inferred magnetic field direction shown in Figure 2 of \cite{2019ApJ...882..113S} indicates coherent magnetic field lines in $\rho$ Oph-A. The turbulence, therefore, may occur only on very small scales.
\section{Summary and conclusions} \label{sec:discussion}
We presented and interpreted the relation between the fractional polarization of thermal dust emission and the dust temperature in the $\rho$ Oph-A molecular cloud, using the archival SOFIA/HAWC+ observations at 89 $\mu$m and 154 $\mu$m. The observed fractional polarization first increases with increasing dust temperature and then decreases once the dust temperature exceeds $\simeq 25-32\,{\rm K}$. This is similar to what is seen in the {\it Planck} data for other clouds (\citealt{2018arXiv180706212P}). This trend differs from the prediction of the classical RAT theory and represents a challenge to grain alignment theory.
We calculated the polarization degree of thermal dust emission by simultaneously considering grain alignment and rotational disruption (RATD) induced by RATs. The RATD mechanism relies on the extremely fast rotation of large grains exposed to a strong radiation field (or, equivalently, at high dust temperature). For a sufficiently high rotation rate, the centrifugal force can exceed the binding force that holds the grain's structure together and disrupt the large grain into smaller species. Since RATs are stronger for larger grains, the RATD mechanism constrains the upper limit of the grain-size distribution. The efficiency of RATD also depends on the grain tensile strength ($S_{\rm max}$), which is determined by its internal structure. A grain with a compact structure has a high value of $S_{\rm max}\simeq 10^{9}\,{\rm erg} \,{\rm cm}^{-3}$, while a composite structure has a lower value of $S_{\rm max} \simeq 10^{6}-10^{7}\,{\rm erg} \,{\rm cm}^{-3}$, and a porous structure has an even lower $S_{\rm max}<10^{6}\,{\rm erg} \,{\rm cm}^{-3}$. Accounting for this disruption effect, we can reproduce the drop in the fractional polarization of thermal dust emission with respect to dust temperature above a critical value, which depends on the tensile strength of the grains. The success of the polarization model with RATD and a low tensile strength suggests a composite grain structure rather than a compact one, in agreement with \cite{2020ApJ...896...44L}.
We successfully reproduced the observed $P(\%)-T_{\rm d}$ trend in $\rho$ Oph-A, both for the case in which only silicate grains align with the magnetic field and for the case in which both carbon and silicate grains align, assuming that the grain-size distribution produced by RATD follows a power law. With the parameters adopted in this work, our results indicate that composite grains with a size distribution steeper than the standard MRN distribution (i.e., $\beta<-3.5$) can reproduce the observational data, which agrees well with \cite{2015A&A...578A.131L}. Polarimetric data at longer wavelengths would help us better understand grain alignment and disruption induced by RATs. In forthcoming work, we will combine these FIR data with the 450 $\mu$m and 850 $\mu$m data (\citealt{2018ApJ...859....4K}) observed by JCMT to study the polarization spectrum.
We thank the anonymous referee for helpful comments that improved the impact and the presentation of this paper. This research is based on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NNA17BF53C, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 0901 to the University of Stuttgart. Financial support for this work was provided by NASA through award 4$\_$0152 issued by USRA. T.H. is funded by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) through a Mid-career Research Program (2019R1A2C1087045).
A.G. is supported by the Programme National ``Physique et Chimie du Milieu Interstellaire'' (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES. A.S. acknowledges support from the NSF through grant AST-1715876.
\section{Introduction\label{sec:intro}}
The magnetic field is believed to play an essential role in various astrophysical phenomena, including the formation of stars and planets (\citealt{2012ARA&A..50...29C}). The alignment of dust grains with the magnetic field induces polarization of starlight and of thermal dust emission. The polarization vectors of starlight are parallel to the magnetic field, while those of thermal dust are perpendicular to the magnetic field. Thus, dust polarization has become a popular technique to constrain the magnetic field direction and strength (\citealt{2007JQSRT.106..225L}; \citealt{2015ARA&A..53..501A}).
Observations have reported an anti-correlation trend of the fractional polarization of thermal dust emission with the column density of the gas in molecular clouds (e.g., \citealt{1998ApJ...499L..93A}; \citealt{2008ApJ...674..304W}; \citealt{2015A&A...576A.105P}; \citealt{2016ApJ...824..134F}; \citealt{2017ApJ...837..161S,2019ApJ...882..113S}; \citealt{2018arXiv180706212P}; \citealt{2019ApJ...872..187C}). This trend is explained by the loss of grain alignment toward dense cores (\citealt{2008ApJ...674..304W}) or by the turbulent structure of magnetic field within the scale of the beam size (see \citealt{2015psps.book..147J}; \citealt{2015A&A...576A.105P}).
A popular theory describing grain alignment is RAdiative Torques (hereafter referred to as RATs) (\citealt{2007MNRAS.378..910L}; see \citealt{2007JQSRT.106..225L}; \citealt{2015ARA&A..53..501A} for reviews). One of the key predictions of the RAT theory is that the polarization degree correlates with the intensity of the radiation field (or equivalently dust temperature $T_{\rm d}$). This prediction was numerically demonstrated by \cite{2020ApJ...896...44L}. However, observations revealed that the dust polarization degree does not always increase with $T_{\rm d}$. For example, \cite{2018arXiv180706212P} showed that the 40' spatial resolution polarization degree at 850 $\mu$m, measured by the \textit{Planck} satellite toward four molecular regions, including Aquila Rift, Cham-Musca, Orion, and Ophiuchus in the Gould belt cloud, decreases for $T_{\rm d}>19~\,{\rm K}$ (see their Figure 18). Additionally, far-Infrared polarimetric data observed by the High-resolution Airborne Wide band Camera Plus (HAWC+) instrument (\citealt{2018JAI.....740008H}) onboard the Stratosphere Observatory for Infrared Astronomy (SOFIA) toward the molecular cloud Ophiuchus A (\citealt{2019ApJ...882..113S}) at 89 $\mu$m (7.8'' spatial resolution) and 154 $\mu$m (13.6'' spatial resolution) also reported the decrease of the polarization degree for $T_{\rm d}>32\,{\rm K}$ (see Section \ref{sec:obs} below). These observational features are challenging the popular RAT alignment theory.
Dust grain-size distribution is an important parameter when it comes to interpreting the polarization of dust. The grain size distribution is expected to evolve from the diffuse interstellar medium (ISM) to dense molecular clouds (MCs) due to grain accretion of gas species and grain-grain collisions (\citealt{2013MNRAS.434L..70H}). Recently, \cite{2019NatAs...3..766H} discovered that a large grain exposed to a strong radiation field could be disrupted into small fragments due to centrifugal stress induced by suprathermal rotation by RATs. This effect is termed Radiative Torques Disruption (RATD) (see \citealt{2020arXiv200616084H} for a review). Since RATs are stronger for larger grains
(\citealt{2007MNRAS.378..910L}; \citealt{2008MNRAS.388..117H}), RATD
is more efficient for large grains than smaller ones. As shown
in \cite{2019ApJ...876...13H}, the RATD mechanism is much faster than
grain shattering and thus determines the upper cutoff of the
grain size distribution in the ISM.
\cite{2020ApJ...896...44L} carried out numerical modeling of the multi-wavelength polarization of thermal dust emission from grains aligned by RATs. They showed that the polarization degree at 850 $\mu$m first increases with increasing dust temperature. However, when RATD is accounted for, they found that the polarization degree decreases at higher dust temperature, which differs from the classical RAT prediction. The level of the decline is found to depend on the tensile strength, which is determined by the internal structure of dust grains (\citealt{2019ApJ...876...13H}). Interestingly, accounting for RATD, the model predicts the same $P(\%)-T_{\rm d}$ trend as reported by the \textit{Planck} data (\citealt{2018arXiv180706212P}) at the same wavelength, as mentioned above. The success of the joint effect of RAT alignment and RATD in explaining the {\it Planck} data motivates us to use this approach to better interpret the SOFIA/HAWC+ data.
Coming back to the SOFIA/HAWC+ observations toward $\rho$ Oph-A in bands C (89 $\mu$m) and D (154 $\mu$m) mentioned above, \cite{2019ApJ...882..113S} mainly studied the variation of the polarization degree ratio ($P_{\rm D}(\%)/P_{\rm C}(\%)$) with dust temperature, rather than the polarization degree itself. Furthermore, the authors showed that the classical RAT mechanism was able to explain the increasing (i.e., positive-slope) part of the ratio curve but not the decreasing (negative-slope) part (see their Figure 6d). In this study, we will: (1) use this dataset to show the correlation of the polarization degree itself with dust temperature; and (2) extend the improved polarized thermal dust model introduced by \cite{2020ApJ...896...44L} to interpret these SOFIA/HAWC+ observational trends.
This paper is structured as follows. We present the archival SOFIA/HAWC+ data of $\rho$ Oph-A at $89~\mu$m and $154~\mu$m in Section \ref{sec:obs}. We describe our modeling method for the polarized thermal dust emission from aligned grains in Section \ref{sec:model}. In Section \ref{sec:compare}, we compare our numerical results for the $\rho$ Oph-A cloud with the observational data. A summary of our findings and conclusions is presented in Section \ref{sec:discussion}.
\section{Observations toward $\rho$ Oph-A} \label{sec:obs}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{f1a.pdf}
\includegraphics[width=0.45\textwidth]{f1b.pdf}
\includegraphics[width=0.45\textwidth]{f1c.pdf}
\includegraphics[width=0.45\textwidth]{f1d.pdf}
\caption{Maps of polarization degree and dust temperature of $\rho$ Oph-A. (a) and (b): the polarization degrees measured in bands C and D of HAWC+/SOFIA at their native resolutions. (c) and (d): the maps of the H$_{2}$ column density ($N$) and the dust temperature ($T_{\rm d}$) derived from 70, 100, 160$\,\mu$m PACS/\textit{Herschel} data. The star symbol marks the position of Oph S1. The black filled circles show the beam size. The physical scale assumes a distance of 140 pc.}
\label{fig:polametric_maps}
\end{figure*}
$\rho$ Oph-A is a molecular cloud in $\rho$ Ophiuchi, one of the closest dark cloud complexes and star-forming regions. The distance to this complex is reported to be $\sim$ 120--160 pc \citep{1981A&A....99..346C, 1989A&A...216...44D, 1998A&A...338..897K, 2004AJ....127.1029R, 2008ApJ...675L..29L, 2008A&A...480..785L, 2008AN....329...10M, 2008ApJ...679..512S, 2017ApJ...834..141O}. This region is significantly influenced by high-energy radiation from the high-mass star Oph-S1, a B-type star (\citealt{1977AJ.....82..198V}; \citealt{1988ApJ...335..940A}; \citealt{1989ApJ...338..902L, 1989ApJ...338..925L}; \citealt{2003PASJ...55..981H}). Among the several dark cloud cores in $\rho$ Ophiuchi, Oph-A is identified as one of the warmest, compared to the Oph-B and C regions. Several surveys, e.g., the $Herschel$, $Spitzer$, and James Clerk Maxwell Telescope (JCMT) Gould belt surveys (\citealt{2010A&A...518L.102A}; \citealt{2009ApJS..181..321E}; \citealt{2007PASP..119..855W}), include this region in various investigations of dust and gas properties.
This cloud complex is also widely studied in multi-wavelength imaging and polarimetry. Recent attempts were made to map magnetic fields in the Oph-A region using near-IR and sub-mm polarization measurements by \cite{2015ApJS..220...17K, 2018ApJ...859....4K} and far-IR measurements by \cite{2019ApJ...882..113S}, respectively. Oph-A is one of the best laboratories for understanding multi-band dust polarization in the context of high-energy radiation, offering an opportunity to investigate RATs in detail.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{f2.pdf}
\caption{2D histogram of the dust polarization degree versus dust temperature at 89 $\mu$m (left panel) and 154 $\mu$m (right panel). Each diagram is made of 52 bins; the gray lines show the binned weighted mean of the data, and the error bars represent the standard deviation within each bin. The black dashed lines show the best fit of a piecewise linear function to the data. The maps of the dust temperature and polarization at 89 $\mu$m are smoothed to a FWHM of $13''.6$.}
\label{fig:hist2d_maps}
\end{figure*}
\subsection{Polarization maps} \label{sec:pol_map}
In this work, we use archival FIR polarimetric data observed by SOFIA/HAWC+. These data sets are introduced in \cite{2019ApJ...882..113S}. The observations were made in 2017 using two bands of the HAWC+ instrument, namely C (89 $\mu$m) and D (154 $\mu$m), with angular resolutions of $7''.8$ and $13''.6$, respectively. The polarization degree maps in these bands are shown in Figure \ref{fig:polametric_maps}(a,b)\footnote{We smoothed the $7''.8$ band C and the $11''.4$ dust temperature maps to the $13''.6$ band D resolution using the \textsc{Gaussian2DKernel} class of the \textsc{astropy} python package.}. We select the common sky positions in which data are detected in both bands. The local polarization degree varies significantly across the $\rho$ Oph-A cloud, with median values of 7.5$\%$ in band C and 5.0$\%$ in band D, as discussed in \cite{2019ApJ...882..113S}. Figure \ref{fig:polametric_maps}(a,b) shows a "tight" spatial correlation of the polarization degree between the two bands, except in the southernmost area ($T_{\rm d} \simeq 25\,{\rm K}$), where the data at 89$\,\mu$m are more polarized than at 154$\,\mu$m. The reason for such a difference is beyond this work's scope, because we do not have enough information to investigate it quantitatively. However, a possible explanation could be that there is a warmer outer component (with local temperature larger than $25\,{\rm K}$) and a colder inner component (with local temperature smaller than $25\,{\rm K}$) along the line of sight (LOS), as proposed in \cite{2019ApJ...882..113S}. In the warmer component, which favors enhancing the shorter-wavelength polarization (band C), grains are well exposed to radiation; the alignment is therefore efficient, causing a larger value of $P_{\rm C}(\%)$. On the contrary, the colder component along the LOS favors emission at longer wavelengths (band D), but the grain alignment is less effective due to shielding; therefore, the value of $P_{\rm D}(\%)$ becomes smaller. The star symbol marks the high-mass star Oph S1.
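As an illustration of the smoothing procedure mentioned in the footnote above, a minimal Python sketch is given below (assuming \textsc{astropy}; the array name \texttt{p\_C} and the pixel scale are hypothetical). The kernel width follows from degrading the beam in quadrature, i.e. ${\rm FWHM}_{\rm kernel}=\sqrt{13.6^2-7.8^2}\,''$.
\begin{verbatim}
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve

def smooth_to_resolution(image, fwhm_in, fwhm_out, pix_scale):
    # FWHM of the Gaussian kernel that degrades the beam in quadrature
    fwhm_kernel = np.sqrt(fwhm_out**2 - fwhm_in**2)
    # convert FWHM (arcsec) to a Gaussian sigma in pixels
    sigma_pix = fwhm_kernel / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_scale
    kernel = Gaussian2DKernel(x_stddev=sigma_pix)
    return convolve(image, kernel, boundary="extend")

# e.g., band C (7.8'') smoothed to band D (13.6'') on a 1''/pixel grid:
# p_C_smoothed = smooth_to_resolution(p_C, 7.8, 13.6, 1.0)
\end{verbatim}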
\subsection{Map of dust temperature and gas density}
We adopt the dust temperature ($T_{\rm d}$) and gas column density ($N$) maps of \cite{2019ApJ...882..113S}. These maps were generated by fitting a modified black-body spectral energy distribution (SED) to each pixel using the 70, 100, and 160$\,\mu$m \textit{Herschel}/PACS data (\citealt{2010A&A...518L...2P}), with the dust opacity spectral index fixed at 1.6. Figure \ref{fig:polametric_maps}(c,d) shows the gas column density and dust temperature maps in the same region where HAWC+ detected data. The high-mass star Oph S1 warms up the surrounding environment, causing a large temperature gradient, i.e., from $\simeq 45\,{\rm K}$ near Oph S1 down to $\simeq 20\,{\rm K}$ at the edge of the cloud. On the contrary, the gas is densest at the edge of the map and becomes more diffuse toward Oph S1.
\subsection{Dust polarization and temperature}
Figure \ref{fig:hist2d_maps} shows 2D histograms of the dust polarization degree versus dust temperature in band C (left panel) and band D (right panel), made of 52 bins. They share the same features: (1) the polarization degree increases with dust temperature up to $T_{\rm d} \simeq T_{\rm crit}$ (the positive-slope region), and (2) the polarization degree decreases at higher dust temperature (the negative-slope region). In the positive-slope region, the polarization degree in band C is higher than in band D, while it is lower in the negative-slope region, as was also shown by the fractional polarization ratio in Figure 6d of \cite{2019ApJ...882..113S}. In other words, the polarization degree at the shorter wavelength (89 $\mu$m) is higher than at the longer wavelength (154 $\mu$m) in the denser region (i.e., at the edge of the polarimetric map), while it is the opposite in the less dense region (close to the central star) in $\rho$ Oph-A.
Using the RAT theory, the spherical model of \cite{2019ApJ...882..113S} could explain the increase (decrease) of the $P_{\rm D}/P_{\rm C}$ ratio with respect to dust temperature (gas column density) in the dense ($N>10^{21.75}\rm cm^{-2}$) and cold ($T_{\rm d} \leq 32-34\,{\rm K}$) region (see their Figure 6). However, this model could not explain the observational trend in the more diffuse and hotter region. We fitted the data with a piecewise linear function\footnote{We used the python-package \textsc{pwlf} (piecewise linear fitting).}. The best fits (black dashed lines) show that the transition takes place at $T_{\rm crit} \simeq 25\,{\rm K}$ for band C and $\simeq 32\,{\rm K}$ for band D. From a statistical point of view, the low $T_{\rm crit}$ in band C arises because there is an excess of the polarization degree at $T_{\rm d} \simeq 25\,{\rm K}$, as mentioned in Section \ref{sec:pol_map}, which pulls the break of the piecewise linear fit toward this value of $T_{\rm d}$. From our theoretical point of view, however, a transition from positive to negative slope at $\simeq 25\,{\rm K}$ is unlikely to be physical, because it would require an extremely small grain tensile strength (see Section \ref{sec:compare}). In addition, the polarization ratio $P_{\rm D}/P_{\rm C}$ changes its slope at $T_{\rm d}\simeq 32-34\,{\rm K}$, as shown in \cite{2019ApJ...882..113S}.
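For concreteness, the piecewise linear fit used above can be reproduced with the \textsc{pwlf} package as in the following sketch (the arrays \texttt{Td} and \texttt{P} are hypothetical placeholders for the binned dust temperature and polarization degree of one band):
\begin{verbatim}
import numpy as np
import pwlf  # piecewise linear fitting

# Td, P: binned dust temperature (K) and polarization degree (%)
model = pwlf.PiecewiseLinFit(Td, P)
breaks = model.fit(2)   # two linear segments -> one interior break
T_crit = breaks[1]      # interior breakpoint, our estimate of T_crit

T_grid = np.linspace(Td.min(), Td.max(), 200)
P_fit = model.predict(T_grid)  # best-fit piecewise linear curve
\end{verbatim}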
\section{Modelling thermal dust polarization} \label{sec:model}
The multi-wavelength polarization model of the thermal dust emission is described in detail in \cite{2020ApJ...896...44L}. The schematic of the model is illustrated in Figure \ref{fig:model_schem}. The radiative source (e.g., an O/B star) is denoted by a star symbol surrounded by a cloud. The radiation strength ($U$) decreases deeper into the cloud. In what follows, we describe the model used to calculate the polarization of thermal dust emission from this cloud.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{f3.pdf}
\caption{Schematic illustration of the model. A molecular cloud is irradiated by the central star, so that the radiation strength (equivalently, the dust temperature) decreases from $U_{0}$ to $U_{\rm ISRF}$. The color gradient indicates the increase of the gas density away from the central star.}
\label{fig:model_schem}
\end{figure}
\subsection{Fractional polarization of thermal emission}
Dust grains are heated by the radiation field and re-emit thermally. The fractional polarization of the thermal dust emission is the ratio of the polarized intensity ($I_{\rm pol}$) to the total emission intensity ($I_{\rm em}$), which yields
\begin{eqnarray} \label{eq:pol_degree}
P(\%) = 100\times \frac{I_{\rm pol}}{I_{\rm em}}.
\ena
Assuming a dust environment containing carbonaceous and silicate grains, the total emission intensity is given by
\begin{eqnarray}
\frac{I_{\rm em}(\lambda)}{N_{{\rm H}}} = \sum_{j=\rm sil,car} &&\int^{a_{\rm max}}_{a_{\rm min}} Q_{\rm ext}\pi a^{2} \nonumber\\
&\times&\int dT B_{\lambda}(T_{\rm d})\frac{dP}{dT}\frac{1}{n_{\rm H}}\frac{dn_{j}}{da}da.~~~
\ena
If silicate and carbon grains constitute separate populations, then, being paramagnetic, silicate grains can align with the ambient magnetic field while carbon grains cannot (\citealt{2016ApJ...831..159H}). Thus, the polarized intensity resulting from their alignment is given by
\begin{eqnarray}
\frac{I_{\rm pol}(\lambda)}{N_{\rm H}}= &&\int^{a_{\rm max}}_{a_{\rm min}} f(a)Q_{\rm pol}\pi a^{2} \nonumber\\
&\times&\int dT B_{\lambda}(T_{\rm d})\frac{dP}{dT}\frac{1}{n_{\rm H}}\frac{dn_{sil}}{da}da, ~~~
\ena
where $B_{\lambda}(T_{\rm d})$ is the black-body radiation at dust temperature $T_{\rm d}$, $dP/dT$ is the distribution of dust temperature, $f(a)$ is the alignment function, $Q_{\rm ext}$ is the extinction coefficient, $Q_{\rm pol}$ is the polarization coefficient, and $dn/da$ is the grain-size distribution. The dust temperature distribution depends on the grain size and radiation strength and is computed with the DustEM code (\citealt{2011A&A...525A.103C}, see e.g., Figure 8 in \citealt{2020ApJ...896...44L}). The extinction and polarization coefficients are computed with the DDSCAT code (\citealt{1994JOSAA..11.1491D, 2008JOSAA..25.2693D}; \citealt{2012OExpr..20.1247F}) for a prolate spheroidal grain shape with an axial ratio of 1/3.
If silicate and carbon grains are mixed together (e.g., \citealt{2013A&A...558A..62J}), as may occur in dense clouds after many cycles of photo-processing, coagulation, shattering, accretion, and erosion, carbon grains could be aligned with the ambient magnetic field and their thermal emission could be polarized. For the simplest case, assuming these grain populations have the same alignment parameters (i.e., $a_{\rm align}$, $f(a)$), the total polarized intensity is
\begin{eqnarray}
\frac{I_{\rm pol}(\lambda)}{N_{\rm H}} = \sum_{j=\rm sil,car} &&\int^{a_{\rm max}}_{a_{\rm min}} f(a)Q_{\rm pol}\pi a^{2} \nonumber\\
&\times&\int dT B_{\lambda}(T_{\rm d})\frac{dP}{dT}\frac{1}{n_{\rm H}}\frac{dn_{j}}{da}da.
\ena
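To make the structure of the above integrals concrete, the following minimal Python sketch evaluates $P(\%)$ for a single grain population under strong simplifying assumptions that we state explicitly: a delta-function temperature distribution $dP/dT$ (one equilibrium temperature per grain size) and user-supplied arrays for $Q_{\rm ext}$, $Q_{\rm pol}$, $f(a)$ and $dn/da$ (hypothetical inputs; in practice these come from DustEM and DDSCAT).
\begin{verbatim}
import numpy as np

h, c, kB = 6.626e-27, 2.998e10, 1.381e-16  # Planck, c, Boltzmann (cgs)

def B_lambda(lam, T):
    # Planck function B_lambda(T) in cgs; lam in cm, T in K
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def trapz(y, x):
    # simple trapezoidal rule over the grain-size grid
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def pol_fraction(lam, a, Td, f_a, dnda, Qext, Qpol):
    # P(%) for one population; Td, f_a, dnda, Qext, Qpol are arrays over a
    w = np.pi * a**2 * B_lambda(lam, Td) * dnda  # common integrand weight
    I_em = trapz(Qext * w, a)
    I_pol = trapz(f_a * Qpol * w, a)
    return 100.0 * I_pol / I_em
\end{verbatim}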
\subsection{Radiative torques disruption and grain-size distribution} \label{sec:RATD}
Let us consider a radiation field with energy density $u_{\rm rad} (\,{\rm erg} \,{\rm cm}^{-3})$, mean wavelength $\bar{\lambda}$, and anisotropy degree $\gamma$. Its strength is defined by the dimensionless parameter $U=u_{\rm rad}/u_{\rm ISRF}$, where $u_{\rm ISRF}=8.64\times 10^{-13}\,{\rm erg}\,{\rm cm}^{-3}$ is the radiation energy density of the interstellar radiation field (ISRF) in the solar neighborhood (\citealt{1983A&A...128..212M}). This radiation field can spin a dust grain of size $a$ and density $\rho$ up to the rotational rate\footnote{Note that $\omega_{\rm RAT}/\omega_{\rm T} \sim 1/(1+F_{\rm IR})$, not $\sim (1+F_{\rm IR})$ as the typo in \cite{2020ApJ...896...44L}, Equation (3).}
\begin{eqnarray} \label{eq:omega_RAT}
\frac{\omega_{\rm RAT}}{\omega_{\rm T}} \simeq &&2.9\times 10^{2} \hat{\rho}^{0.5} \gamma a^{3.2}_{-5} U\left(\frac{10^{3} \rm{cm^{-3}}}{n_{{\rm H}}}\right)\left(\frac{\bar{\lambda}}{0.5 \rm{\mu m}}\right)^{-1.7} \nonumber\\
&\times& \left(\frac{20 \,{\rm K}}{T_{\rm gas}}\right)\left(\frac{1}{1+F_{\rm IR}}\right),
\ena
where $a_{-5}=a/(10^{-5} \rm cm)$, $\hat{\rho}=\rho/(3\,{\rm g}\,{\rm cm}^{-3})$, and $n_{{\rm H}}$ and $T_{\rm gas}$ are the gas density and temperature. $\omega_{\rm T}=(k_{\rm B}T_{\rm gas}/I)^{0.5}$ is the thermal angular velocity, with $I=8\pi \rho a^{5}/15$ the moment of inertia of the grain. A rotating grain is damped by gas collisions and IR emission (see \citealt{2019ApJ...876...13H}). The dimensionless parameter $F_{\rm IR}$, which describes the ratio of the IR damping to the collisional damping\footnote{The factor is corrected to be 0.4 from Equation (4) in \cite{2020ApJ...896...44L}.}, is defined as
\begin{eqnarray}
F_{\rm IR} \simeq 0.4\left(\frac{U^{2/3}}{a_{-5}}\right)\left(\frac{30 \rm{cm^{-3}}}{n_{{\rm H}}}\right)\left(\frac{100\,{\rm K}}{T_{\rm gas}}\right)^{1/2}.
\ena
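Equation (\ref{eq:omega_RAT}) and the damping ratio above are simple enough to transcribe directly; a minimal sketch, in the same normalized units as the text, is:
\begin{verbatim}
import numpy as np

def F_IR(U, a_um, nH, Tgas):
    # IR-to-collisional damping ratio; a_um in micron, nH in cm^-3
    a5 = a_um / 0.1  # a_{-5} = a / (1e-5 cm)
    return 0.4 * U**(2.0 / 3.0) / a5 * (30.0 / nH) * (100.0 / Tgas)**0.5

def omega_ratio(a_um, U, nH, Tgas, lam_bar_um=0.5, gamma=1.0, rho_hat=1.0):
    # omega_RAT / omega_T from the spin-up formula above
    a5 = a_um / 0.1
    return (2.9e2 * rho_hat**0.5 * gamma * a5**3.2 * U
            * (1.0e3 / nH) * (lam_bar_um / 0.5)**(-1.7)
            * (20.0 / Tgas) / (1.0 + F_IR(U, a_um, nH, Tgas)))
\end{verbatim}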
A grain rotating at angular velocity $\omega$ experiences a tensile stress $S=\rho \omega^{2}a^{2}/4$ on the material making up the grain. Thus, the maximum rotational velocity that a grain can withstand is:
\begin{eqnarray} \label{eq:omega_crit}
\omega_{\rm crit} = \frac{2}{a}\left(\frac{S_{\rm max}}{\rho}\right)^{1/2} \simeq \frac{3.6\times 10^{8}}{a_{-5}}S^{1/2}_{\rm max,7}\hat{\rho}^{-1/2},
\ena
where $S_{\rm max,7}=S_{\rm max}/(10^{7} \rm erg \,{\rm cm}^{-3})$.
One can see from Equation (\ref{eq:omega_RAT}) that the stronger the radiation field and the larger the grain size, the faster the grain rotates. A strong radiation field can thus drive rotation fast enough that the induced stress spontaneously disrupts large grains. This disruption mechanism, named RATD, was discovered by \cite{2019NatAs...3..766H}. From Equations (\ref{eq:omega_RAT}) and (\ref{eq:omega_crit}), we can derive the critical size above which grains are disrupted:
\begin{eqnarray}
\left(\frac{a_{\rm disr}}{0.1\,\rm \mu m}\right)^{2.7} \simeq 5.1\gamma^{-1}_{0.1}U^{-1/3}\bar{\lambda}^{1.7}_{0.5}S^{1/2}_{\rm max,7},
\ena
where $\bar{\lambda}_{0.5} = \bar{\lambda}/(0.5\,\mu \rm m)$.
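The disruption size follows by inverting the relation above; a one-function transcription (numerical prefactors copied from the text) reads:
\begin{verbatim}
def a_disr_um(U, Smax=1e7, lam_bar_um=0.5, gamma=1.0):
    # disruption size in micron; Smax in erg cm^-3
    rhs = (5.1 / (gamma / 0.1) * U**(-1.0 / 3.0)
           * (lam_bar_um / 0.5)**1.7 * (Smax / 1e7)**0.5)
    return 0.1 * rhs**(1.0 / 2.7)
\end{verbatim}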
Dust grains are disrupted efficiently (for $a$ greater than $a_{\rm disr}$) in stronger radiation fields. The disruption of dust grains by RATD can modify the grain-size distribution. Since only the largest grains are affected by the RATD mechanism, RATD determines the upper limit of the size distribution. The disruption is thus expected to enhance the abundance of smaller grains, resulting in a steeper grain-size distribution than in the standard ISM. In the particular case of the $\rho$ Oph-A cloud, \cite{2015A&A...578A.131L} furthermore showed that the power index of the grain-size distribution varies across the cloud. In this work, we adopt a power-law grain-size distribution for both the original large grains and the smaller grains produced by disruption, with a power-law index $\beta$:
\begin{eqnarray}
\frac{1}{n_{{\rm H}}}\frac{dn_{\rm sil,car}}{da}=C_{\rm sil,car}a^{\beta} \ \ \ \rm{(a_{\rm min}\leq a \leq a_{\rm max})},
\ena
where $C_{\rm sil}$ and $C_{\rm car}$ are the normalization constants for silicate and carbonaceous grains, respectively. The smallest grain size is chosen as $a_{\rm min}=10~\AA$, while the maximum size is constrained by the RATD mechanism (i.e., $a_{\rm max}=a_{\rm disr}$). The normalization constants are determined through the dust-to-gas mass ratio $M_{\rm d/g}$ (see \citealt{2020arXiv200906958C}; \citealt{2020ApJ...893..138T}) as
\begin{eqnarray}
\sum_{\rm j=sil,car} C_{j}\rho_{j} &=& \frac{(4+\beta)M_{\rm d/g} m_{\rm gas}}{\frac{4}{3}\pi (a^{4+\beta}_{\rm max}-a^{4+\beta}_{\rm min})} ~~~~~~~\rm{for\ \beta \neq -4} \\ \nonumber
\sum_{\rm j=sil,car} C_{j}\rho_{j} &=& \frac{M_{\rm d/g}m_{\rm gas}}{\frac{4}{3}\pi(\ln a_{\rm max} - \ln a_{\rm min})} ~~~\rm{for\ \beta = -4}.
\ena
where the ratio $C_{\rm sil}/C_{\rm car}$ is adopted as 1.12 (\citealt{1984ApJ...285...89D}), and the dust-to-gas mass ratio $M_{\rm d/g}$ is fixed at 0.01 throughout this work. The latter assumption is close to the values derived from X-ray observations, $\simeq 0.011 - 0.0125$ (\citealt{2003A&A...408..581V}), or from gas tracers, $\simeq 0.0114$ (\citealt{2015A&A...578A.131L}).
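As a sketch, the normalization constants can be evaluated as follows (cgs units; the gas mass per hydrogen atom and the material densities of silicate and carbon grains are illustrative assumptions, not values quoted in the text):
\begin{verbatim}
import numpy as np

def size_dist_norms(beta, a_min, a_max, Mdg=0.01,
                    rho_sil=3.3, rho_car=2.2, ratio=1.12):
    # returns (C_sil, C_car); a_min, a_max in cm
    m_gas = 1.4 * 1.6726e-24  # assumed gas mass per H (g), incl. He
    if abs(beta + 4.0) > 1e-8:
        total = ((4.0 + beta) * Mdg * m_gas
                 / ((4.0 / 3.0) * np.pi
                    * (a_max**(4.0 + beta) - a_min**(4.0 + beta))))
    else:
        total = (Mdg * m_gas
                 / ((4.0 / 3.0) * np.pi * np.log(a_max / a_min)))
    # total = C_sil*rho_sil + C_car*rho_car with C_sil = ratio * C_car
    C_car = total / (ratio * rho_sil + rho_car)
    C_sil = ratio * C_car
    return C_sil, C_car
\end{verbatim}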
\subsection{Grain alignment by RATs}
An anisotropic radiation field can align dust grains via the RAT mechanism (see \citealt{2007JQSRT.106..225L}; \citealt{2015ARA&A..53..501A} for reviews). In the unified theory of RAT alignment, grains are first spun up to suprathermal rotation and then driven into alignment with the ambient magnetic field by superparamagnetic relaxation within grains having iron inclusions (\citealt{2016ApJ...831..159H}). Therefore, grains are only efficiently aligned when they can rotate suprathermally. The alignment size ($a_{\rm align}$) is determined by the condition $\omega_{\rm RAT}(a_{\rm align}) = 3\omega_{T}$, as in \cite{2008MNRAS.388..117H}. From Equation (\ref{eq:omega_RAT}), we have:
\begin{eqnarray}
a_{\rm align} \simeq &&0.024\hat{\rho}^{-5/32} \gamma^{-5/16} U^{-5/16} \left(\frac{10^{3} \,{\rm cm}^{-3}}{n_{{\rm H}}}\right)^{-5/16} \\ \nonumber
&&\times \left(\frac{\bar{\lambda}}{0.5\rm \mu m}\right)^{17/32} \left(\frac{20\,{\rm K}}{T_{\rm gas}}\right)^{-5/16}\left(\frac{1}{1+F_{\rm IR}}\right)^{-5/16} ~\rm{\mu m},
\ena
which implies $a_{\rm align} \sim 0.02\,\mu$m for a dense ISM with $\gamma=1.0$, $U=1$, and $\bar{\lambda}=0.3\,\mu$m. In this work, we adopt the alignment function as in \cite{2020ApJ...896...44L}:
\begin{eqnarray} \label{eq:fa}
f(a)=f_{\rm min}+(f_{\rm max}-f_{\rm min})\left\{1-\exp{\left[-\left(\frac{0.5a}{a_{\rm align}}\right)^{3}\right]}\right\}.~~~
\ena
For grains with $a\ll a_{\rm align}$, the alignment is minimal, $f_{\rm min}=10^{-3}$, while the alignment degree reaches its maximum $f_{\rm max}$ for $a\gg a_{\rm align}$. This parametric function agrees with the results obtained from inverse modeling of interstellar polarization data (\citealt{2009ApJ...696....1D}; \citealt{2014ApJ...790....6H}). For the model with only silicate grains aligned, modeling requires $f_{\rm max}=1$, while for a mixture model in which both carbon and silicate grains are aligned, it requires $f_{\rm max}<1$ (\citealt{2009ApJ...696....1D}; \citealt{2018A&A...610A..16G}).
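A direct transcription of the alignment size and the alignment function, useful for reproducing the models below, is sketched here (same normalized units as above; \texttt{FIR} can be taken from the damping-ratio function of Section \ref{sec:RATD}):
\begin{verbatim}
import numpy as np

def a_align_um(U, nH, Tgas=20.0, lam_bar_um=0.5, gamma=1.0,
               rho_hat=1.0, FIR=0.0):
    # alignment size in micron, from omega_RAT(a_align) = 3 omega_T
    return (0.024 * rho_hat**(-5.0 / 32.0) * gamma**(-5.0 / 16.0)
            * U**(-5.0 / 16.0) * (1.0e3 / nH)**(-5.0 / 16.0)
            * (lam_bar_um / 0.5)**(17.0 / 32.0)
            * (20.0 / Tgas)**(-5.0 / 16.0) * (1.0 + FIR)**(5.0 / 16.0))

def f_align(a_um, a_al_um, f_min=1e-3, f_max=1.0):
    # alignment function f(a)
    return f_min + (f_max - f_min) * (1.0
           - np.exp(-(0.5 * a_um / a_al_um)**3))
\end{verbatim}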
\section{Application to $\rho$ Oph-A} \label{sec:compare}
\subsection{Numerical setup}
As discussed in Section \ref{sec:model}, the parameters of the model include the gas properties: gas number density ($n_{{\rm H}}$) and gas temperature ($T_{\rm gas}$); the dust properties: size ($a$), shape, internal structure (i.e., tensile strength $S_{\rm max}$) and size distribution power index ($\beta$); and the ambient properties: radiation field strength $U$ (which is in fact equivalent to the dust temperature $T_{\rm d}$), mean wavelength ($\bar{\lambda}$) and an anisotropy degree ($\gamma$) of the radiation field.
Figure \ref{fig:polametric_maps}c shows that the gas is denser at the edge of the polarimetric map area and more diffuse close to the Oph S1 star. We derive the relation between the gas number density and the dust temperature by assuming that the dust temperature decreases linearly from $45\,{\rm K}$ down to $20\,{\rm K}$ at the edge of the polarimetric map area, with the gas number density calculated from a spherical model as in Section 3.5 of \cite{2019ApJ...882..113S}. This relation is shown in Figure \ref{fig:nH_Td}. Throughout this work, we fix the gas temperature at $T_{\rm gas}=20$ K, a fairly typical value for dense molecular clouds.
In a dense molecular cloud, large grains are expected to be present thanks to the coagulation process. We set the initial maximum grain size to $1\,\mu$m; the RATD mechanism then constrains the actual maximum value. The smallest grain size is kept fixed at $10\,\AA$. The internal structure of grains is characterized by their tensile strength (e.g., large composite grains have $S_{\rm max}\simeq 10^{7}\,{\rm erg} \,{\rm cm}^{-3}$; stronger grains have a higher value of $S_{\rm max}$), which is a free parameter. The grain-size distribution could change across the $\rho$ Oph-A cloud (\citealt{2015A&A...578A.131L}); thus we vary the power index $\beta$ as another free parameter. In our model, the local value of the radiation strength is determined from the dust temperature shown in Figure \ref{fig:polametric_maps}d via the relation $T_{\rm d}=16.4\, a^{1/15}_{-5}U^{1/6}\,$K (\citealt{2011piim.book.....D}). The mean wavelength is $\bar{\lambda}\simeq 0.3\,\mu$m, corresponding to a B-like star with $T_{\ast}\simeq 1.5\times 10^{4}\,$K. The anisotropy degree is $\gamma= 1$ for the unidirectional radiation field from a nearby star.
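For instance, the radiation strength entering the model can be obtained by inverting the $T_{\rm d}-U$ relation quoted above; a minimal sketch is:
\begin{verbatim}
def U_from_Td(Td, a_um=0.1):
    # invert T_d = 16.4 a_{-5}^{1/15} U^{1/6} K (Draine 2011)
    a5 = a_um / 0.1
    return (Td / (16.4 * a5**(1.0 / 15.0)))**6
\end{verbatim}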
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{f4.pdf}
\caption{Relation between the local gas number density and the local dust temperature.}
\label{fig:nH_Td}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{f5a.pdf}
\includegraphics[width=0.45\textwidth]{f5b.pdf}
\caption{Polarization spectrum of thermal dust emission calculated from grain alignment by RATs only (without RATD) for different grain temperatures, $T_{\rm d}$, assuming a size-distribution power index $\beta$=-3.5 (left panel) and $\beta$=-4.0 (right panel). Higher dust temperatures result in a higher polarization degree and a smaller peak wavelength of the spectrum. A steeper size distribution leads to a lower polarization degree (right panel). Only silicate grains are assumed to be aligned, while carbonaceous grains are randomly oriented.}
\label{fig:disruptionoff}
\includegraphics[width=0.45\textwidth]{f6a.pdf}
\includegraphics[width=0.45\textwidth]{f6b.pdf}
\caption{Polarization spectrum of thermal dust emission calculated with both grain alignment and disruption by RATs for two values of the tensile strength. The RATD effect decreases the polarization degree for $T_{\rm d}>33.7\,{\rm K}$ (left) and for $T_{\rm d}>37.9\,{\rm K}$ (right). The decline is more substantial for composite grains (left panel) than for more compact grains (right panel).}
\label{fig:disruptionon}
\includegraphics[width=0.46\textwidth]{f7a.pdf}
\includegraphics[width=0.45\textwidth]{f7b.pdf}
\caption{Same as Figure \ref{fig:disruptionon} (left panel) but for a mixture of silicate and carbon grains aligned with $f_{\rm max}=0.3$ (left panel) and $f_{\rm max}=0.5$ (right panel). The disruption effect again sets in once $T_{\rm d}>34\,{\rm K}$; however, the spectrum shows a flat feature. A higher $f_{\rm max}$ leads to a higher polarization degree.}
\label{fig:disruptionon_silcar}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{f8a.pdf}
\includegraphics[width=0.45\textwidth]{f8b.pdf}
\caption{Variation of the polarization degree with the grain temperature, computed at 89 $\mu$m (left panel) and 154 $\mu$m (right panel) with and without the RATD effect for a given grain-size distribution. Without RATD, the polarization degree monotonically increases as the dust temperature increases (dotted blue line). With RATD, the polarization degree first increases and then decreases once the dust temperature exceeds a critical value. This value (labeled A, B, and C) is lower for weaker grains and larger for stronger grains. Only silicate grains are assumed to be aligned.}
\label{fig:pol_Td_Smaxfixed}
\includegraphics[width=0.45\textwidth]{f9a.pdf}
\includegraphics[width=0.45\textwidth]{f9b.pdf}
\caption{Effect of the size-dependent tensile strength on the fractional polarization emission of silicate grains. The dotted blue line and the dashed orange line are computed for a fixed $S_{\rm max}$ as in Figure \ref{fig:pol_Td_Smaxfixed}. The solid black line is the model prediction for a size-dependent $S_{\rm max}$ (see text for details).}
\label{fig:pol_Td_Smaxvaried}
\includegraphics[width=0.45\textwidth]{f10a.pdf}
\includegraphics[width=0.45\textwidth]{f10b.pdf}
\caption{Same as Figure \ref{fig:pol_Td_Smaxfixed}, but both silicate and carbon grains are assumed to be aligned with $f_{\rm max}=0.5$. The trend and the critical temperature are the same but the decline is less steep and the polarization amplitude is higher than in the case of silicate grains alone.}
\label{fig:pol_Td_Smaxfixed_carsil}
\end{figure*}
\subsection{Numerical results} \label{sec:numerical_results}
Here, we show the numerical results of the multi-wavelength polarization degree of thermal dust emission using RATs theory in two cases: without disruption (namely classical RATs) and with disruption for comparison.
Figure \ref{fig:disruptionoff} shows the polarization spectra obtained with grain alignment by RATs only (without RATD), computed for several values of the dust temperature and for two grain-size distributions, i.e., $\beta=-3.5$ (left panel) and $\beta=-4.0$ (right panel). One can see that (1) the polarization degree increases with dust temperature, and (2) the polarization degree is lower for lower values of $\beta$ at the same $T_{\rm d}$. The first effect arises because a higher dust temperature (equivalent to a higher radiation strength) means larger torques acting on dust grains, which decreases the alignment size $a_{\rm align}$ and thus increases the polarization degree of dust emission. Moreover, for a lower $\beta$, less dust mass is contained in large grains, decreasing the polarization degree of the thermal dust emission, which is dominantly produced by the aligned large grains. This explains the second effect.
Figure \ref{fig:disruptionon} shows the polarization spectra obtained with both grain alignment and disruption by RATs (with RATD), assuming two values of the tensile strength, i.e., $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ (left panel) and $S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$ (right panel). In the left panel, the low-$T_{\rm d}$ curves are the same as in Figure \ref{fig:disruptionoff} (blue, orange, green, and dashed-dotted red lines). However, in contrast to Figure \ref{fig:disruptionoff}, the polarization degree decreases as the dust temperature increases beyond a critical value (i.e., $\simeq 34\,{\rm K}$, the dotted violet and dashed brown lines). A higher $S_{\rm max}$ shifts the disruption to a higher critical dust temperature (i.e., $\simeq 38\,{\rm K}$, the dashed brown line). The reason is that dust grains exposed to strong radiation (where the dust temperature is high, see Figure \ref{fig:polametric_maps}d) can be spun up extremely fast by strong radiative torques while the damping is inefficient (because of the low gas density, Figure \ref{fig:polametric_maps}c), resulting in rotational disruption (RATD), as described in Section \ref{sec:RATD}. For $T_{\rm d}$ below the critical temperature, on the contrary, the radiative torques are weaker and the damping is more substantial (because the gas is denser), so RATD cannot occur and the results are the same as for the classical RAT calculations.
The disruption leads to a drop in the polarization degree. The critical temperature above which RATD occurs and the magnitude of the decline depend on the internal structure of the grains, controlled by $S_{\rm max}$. Composite grains ($S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$) are more easily disrupted than compact grains ($S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$), resulting in a more significant decrease of the polarization degree (Figure \ref{fig:disruptionon}, left versus right panel).
Figure \ref{fig:disruptionon_silcar} shows the polarization spectrum for the case of mixed silicate and carbon grains, in which both grain populations are aligned by RATs. Similar to Figure \ref{fig:disruptionon}, the disruption occurs for $T_{\rm d}>34\,{\rm K}$. In this case, the spectrum shows an increase followed by a plateau, which differs from Figure \ref{fig:disruptionon}. The reason is that the polarization degree is the ratio of the polarized intensity ($I_{\rm pol}$) to the total intensity ($I_{\rm em}$) (Equation \ref{eq:pol_degree}). Since the $T_{\rm d}$ of silicate grains is lower than that of carbon grains, their spectral slopes differ from each other. When only silicate grains are aligned, the different spectral slopes of $I_{\rm pol}$ and $I_{\rm em}$ result in a sloped polarization spectrum (see e.g., Figure \ref{fig:disruptionon}). When both silicate and carbon grains are aligned, $I_{\rm pol}$ and $I_{\rm em}$ differ only by a factor given by the degree of grain alignment, which results in a flat spectrum. The degree of grain alignment is set by $f_{\rm max}$ (Equation \ref{eq:fa}). For a combination of carbon and silicate grains, imperfectly aligned grains ($f_{\rm max}<1$) can reproduce observations (see e.g., \citealt{2009ApJ...696....1D}; \citealt{2018A&A...610A..16G}). Grains with a higher value of $f_{\rm max}$ (right panel) produce more polarized thermal emission than those with a lower value of $f_{\rm max}$ (left panel).
Figure \ref{fig:pol_Td_Smaxfixed} shows the polarization degree at 89 $\mu$m (left panel) and 154 $\mu$m (right panel) with respect to dust temperature. In the case without RATD (dotted lines), the polarization degree first increases rapidly with increasing dust temperature and then changes slowly (as shown in Figure \ref{fig:disruptionoff}). Accounting for RATD, the polarization degree first increases with $T_{\rm d}$ and then rapidly declines once the dust temperature exceeds a critical value, which depends on the grains' tensile strength, as shown in Figure \ref{fig:disruptionon}. The critical dust temperature is lower for weaker grains (i.e., a lower value of $S_{\rm max}$) because of more effective disruption, which leads to a deeper decrease of the polarization degree in comparison to stronger grains.
Above, we assumed that all grains have the same tensile strength ($S_{\rm max}$). However, the tensile strength of composite grains scales with the radius of the monomers as $a_{p}^{-2}$ (see \citealt{2019ApJ...876...13H} for a detailed demonstration), which implies that large grains (comprising many monomers) are more breakable than smaller grains, which presumably have a compact structure. As an example, we set $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ for all grains with size $a\geq 0.1\,\mu$m and $S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$ for smaller grains. The results are shown as the black solid lines in Figure \ref{fig:pol_Td_Smaxvaried}. The polarization degree again increases and then decreases with dust temperature. However, for $T_{\rm d}>T_{\rm crit}$ (i.e., beyond the position B), its amplitude is higher than in the case of fixed $S_{\rm max}$. The reason is as follows. When RATD does not occur, the polarization is higher for higher dust temperature/stronger radiation. When the dust temperature is just high enough for RATD to occur, the disruption mostly affects the largest grains (i.e., low $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ in this example), so that the curve follows the case of fixed $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ (e.g., the BB1 slope). As the dust temperature increases further, RATD can affect smaller grains (i.e., higher $S_{\rm max}$). Because the decline of polarization is smaller for higher $S_{\rm max}$, there is a short increasing interval in the polarization (see the B1--B2 segment). Finally, once RATD only affects "strong" grains, the trend of the polarization follows the fixed $S_{\rm max}=10^{9}\,{\rm erg} \,{\rm cm}^{-3}$ case, as shown in Figure \ref{fig:pol_Td_Smaxfixed}.
Figure \ref{fig:pol_Td_Smaxfixed_carsil} shows the polarization degree of thermal dust as a function of the grain temperature, assuming that both carbon and silicate grains are aligned. The polarization degree drops at the same critical dust temperature as in Figure \ref{fig:pol_Td_Smaxfixed}, in which only silicate grains are aligned. However, the mixed grain model results in a higher polarization degree and a weaker decline than the single-population case, due to the contribution of aligned carbon grains. Within the $T_{\rm d}-n_{{\rm H}}$ relation, we varied the value of $n_{{\rm H}}$ by 10$\%$ but do not see a significant change (i.e., the correlation coefficient is $\simeq 0.99$). However, a different $n_{{\rm H}}-T_{\rm d}$ relation could have a more significant effect.
\subsection{Interpretation of observations}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{f11.pdf}
\caption{Comparison of the polarization degree of dust emission from our models with observations at 89 $\mu$m (left panel) and at 154 $\mu$m (right panel). Background colored points are the observed polarization degrees (Figure \ref{fig:hist2d_maps}). Colored lines are the models for different values of the power index $\beta$. The dashed line shows the results without RATD, while solid lines show results with RATD. The tensile strength is $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$. Only silicate grains are aligned, and carbon grains are randomly oriented.}
\label{fig:fits_obs_Smax1e7}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{f12.pdf}
\caption{Similar to Figure \ref{fig:fits_obs_Smax1e7} but for a combination of aligned carbonaceous and silicate grains. The same range of $\beta$ matches the observations better with a degree of alignment $f_{\rm max}=0.35$.}
\label{fig:fits_obs_Smax1e7_carsil}
\end{figure*}
Since the critical dust temperature $T_{\rm crit}\simeq 30-34\,{\rm K}$, above which the polarized thermal dust emission drops for $S_{\rm max}=10^{6}-10^{7}\,{\rm erg} \,{\rm cm}^{-3}$ (see Figures \ref{fig:pol_Td_Smaxfixed}, \ref{fig:pol_Td_Smaxfixed_carsil}), is consistent with observations (see Figure \ref{fig:hist2d_maps}), Figure \ref{fig:fits_obs_Smax1e7} shows only the numerical results for $S_{\rm max}=10^{7}\,{\rm erg} \,{\rm cm}^{-3}$, with a variation of the silicate grain-size distribution power index $\beta$, overlaid on the observational data. For illustration, we also show the results from the RAT model without the disruption effect (dashed line), which we denote as the classical RAT theory. Since RAT theory implies that a stronger radiation field exerts larger torques on grains, resulting in higher polarization, the classical RAT model can only produce an increase of the dust polarization degree with dust temperature and fails to explain its decrease beyond $T_{\rm crit}$ (as discussed in Section \ref{sec:numerical_results}). When the rotational disruption mechanism is incorporated into RATs (solid lines), the model can reproduce both the increasing and decreasing features of the observations. For $T_{\rm d}<T_{\rm crit}$, the disruption does not proceed; hence, the model behaves exactly as classical RATs, which accounts for the increase of the polarization degree. For $T_{\rm d}\geq T_{\rm crit}$, on the contrary, disruption occurs, so that large grains are fragmented into many smaller pieces. The enhancement of smaller grains causes a decrease in the polarization degree at these FIR wavelengths.
Furthermore, different solid lines correspond to different values of the grain-size distribution power index $\beta$. In the case of silicate grains alone, a simple $\chi^{2}$ calculation, as in Table \ref{tab:chi2}, shows that the minimum occurs at $\beta \simeq -4.0$ for the observational data at 89 $\mu$m, while it is slightly lower, $\beta \simeq -4.1$, for the 154 $\mu$m data. Hence, the slope of the size distribution is steeper than the MRN size distribution for the standard interstellar medium (\citealt{1977ApJ...217..425M}), which is evidence of the enhancement of small grains by RATD. The 154 $\mu$m observations probe different (more embedded) layers of $\rho$ Oph-A than the 89 $\mu$m observations do, so the size distribution could be slightly different. Polarimetric data at longer wavelengths (e.g., 850 $\mu$m JCMT/SCUBA-2 observations, see \citealt{2018ApJ...859....4K}), which trace larger grains, are desirable to get a more comprehensive picture.
Figure \ref{fig:fits_obs_Smax1e7_carsil} shows the comparison with observations for a mixture of carbon and silicate grains. As shown in Section \ref{sec:numerical_results}, both the grain-size distribution ($\beta$) and the degree of alignment ($f_{\rm max}$) control the amplitude of the polarization degree, but they do not affect the spectral trend. We found that the same range of $\beta$ as in Figure \ref{fig:fits_obs_Smax1e7} also nicely fits the observational trend with $f_{\rm max}\simeq 0.35$. In this case, the $\chi^{2}$ calculation in Table \ref{tab:chi2} indicates that $\beta \simeq -4.0$ and $\beta \simeq -4.1$ again give the minimum $\chi^{2}$ for the observed data at $89\,\mu$m and $154\,\mu$m, respectively.
\begin{table}
\centering
\caption{$\chi^{2}$ of the models with only silicate grains aligned (Figure \ref{fig:fits_obs_Smax1e7}) and with a combination of aligned carbonaceous and silicate grains (Figure \ref{fig:fits_obs_Smax1e7_carsil}) compared to observations, computed as}
\label{tab:chi2}
\begin{tabular}{ccc|cc}
\multicolumn{5}{c}{$\chi^{2}=\frac{1}{N}\sum^{N}_{i} (P^{i}_{\rm obs} - P_{\rm mod})^{2}/P^{i}_{\rm obs}$} \\
\multicolumn{5}{c}{with $N$ the number of data points} \\
\\
\hline
{}&\multicolumn{2}{c|}{$\chi^{2}$ (89$\,\mu$m)} & \multicolumn{2}{c}{$\chi^{2}$ (154$\,\mu$m)} \\
$\beta$ & sil ($f_{\rm max}=1$) & car+sil ($f_{\rm max}=0.35$) & sil ($f_{\rm max}=1$) & car+sil ($f_{\rm max}=0.35$) \\
\hline
-3.5 & 4.99 & 3.41 & 10.62 & 7.40 \\
-3.6 & 3.45 & 2.65 & 7.60 & 5.69 \\
-3.7 & 2.36 & 2.03 & 5.12 & 4.15 \\
-3.8 & 1.68 & 1.59 & 3.25 & 2.86 \\
-3.9 & 1.37 & 1.35 & 1.98 & 1.89 \\
-4.0 & 1.34 & 1.31 & 1.26 & 1.28 \\
-4.1 & 1.53 & 1.44 & 1.01 & 1.02 \\
-4.2 & 1.86 & 1.71 & 1.09 & 1.04 \\
-4.3 & 2.26 & 2.08 & 1.41 & 1.28 \\
-4.4 & 2.71 & 2.50 & 1.84 & 1.67 \\
-4.5 & 3.15 & 2.94 & 2.33 & 2.12 \\
\hline
\end{tabular}
\end{table}
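For reproducibility, the metric defined in Table \ref{tab:chi2} can be evaluated as in the sketch below (the array names are hypothetical; the model curve is assumed to have been interpolated onto the observed $T_{\rm d}$ values):
\begin{verbatim}
import numpy as np

def chi2(P_obs, P_mod):
    # (1/N) sum_i (P_obs^i - P_mod)^2 / P_obs^i, as in Table 1
    P_obs = np.asarray(P_obs, dtype=float)
    P_mod = np.asarray(P_mod, dtype=float)
    return np.mean((P_obs - P_mod)**2 / P_obs)
\end{verbatim}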
\subsection{Limitations of the model}
Our model's primary and most sensitive input parameters are the local gas column density and the local dust temperature. The first controls the damping of the rotating grains, while the second sets their rotation rate. The value of the gas column density is derived from a spherical model, whereas the dust temperature is adopted from observations. Therefore, our results contain uncertainties, and we would like to address here the main limitations of our model. First, the adopted dust temperature is, in fact, a projection on the plane of the sky; the actual value could be higher. Second, the dust temperature and gas density maps are derived from only three FIR bands of \textit{Herschel}/PACS ($70\,\mu$m, $100\,\mu$m, and $160\,\mu$m). The derivation could be more accurate if (sub)millimeter and radio bands were taken into account, as was done in \cite{2019ApJ...872..187C}. However, we expect that accounting for local variations of dust temperature and gas number density could explain the observational scatter, but should not change the trend or our conclusions.
Because our main input parameters are the local values, our prescription will be easy to incorporate into more elaborate models that have better physical treatments for the gas and dust properties, such as 3D radiative dust modeling codes (e.g., \citealt{2012ascl.soft02015D}; \citealt{2015A&A...578A.131L}).
Finally, we note that the magnetic field geometry is assumed not to vary along the line of sight toward $\rho$ Oph-A in the modeling. A turbulent magnetic field would reduce the predicted polarization degree, but the $P(\%)$ vs. $T_{\rm d}$ trend would not be affected. Moreover, the inferred magnetic field direction shown in Figure 2 of \cite{2019ApJ...882..113S} indicates coherent magnetic field lines in $\rho$ Oph-A. Turbulence, therefore, may occur only at very small scales.
\section{Summary and conclusions} \label{sec:discussion}
We showed and interpreted the relation between the fractional polarization of thermal dust emission and the dust temperature in the $\rho$ Oph-A molecular cloud using archival SOFIA/HAWC+ observations at 89 $\mu$m and 154 $\mu$m. The observed fractional polarization first increases with increasing dust temperature and then decreases once the dust temperature exceeds $\simeq 25-32\,{\rm K}$. This is similar to what is seen in {\it Planck} data for other clouds (\citealt{2018arXiv180706212P}). This trend differs from the prediction of the classical RAT theory and represents a challenge to grain alignment theory.
We calculated the polarization degree of thermal dust emission by simultaneously considering grain alignment and rotational disruption (RATD) induced by RATs. The RATD mechanism relies on the extremely fast rotation of large grains exposed to a strong radiation field (or, equivalently, at high dust temperature). For a sufficiently high rotation rate, the centrifugal force can exceed the binding force that holds the grain together, disrupting the large grain into smaller fragments. Since RATs are stronger for larger grains, the RATD mechanism constrains the upper limit of the grain-size distribution. The efficiency of RATD also depends on the grain tensile strength ($S_{\rm max}$), which is determined by the grain's internal structure. A grain with a compact structure has a high value of $S_{\rm max}\simeq 10^{9}\,{\rm erg} \,{\rm cm}^{-3}$, while a composite structure has a lower value of $S_{\rm max} \simeq 10^{6}-10^{7}\,{\rm erg} \,{\rm cm}^{-3}$, and a porous structure has an even lower $S_{\rm max}<10^{6}\,{\rm erg} \,{\rm cm}^{-3}$. Accounting for this disruption effect, we can reproduce the drop in the fractional polarization of thermal dust emission with dust temperature above a critical value, which depends on the tensile strength of the grains. The success of the polarization model with RATD and a low tensile strength suggests a composite grain structure rather than a compact one, in agreement with \cite{2020ApJ...896...44L}.
We successfully reproduced the observed $P(\%)-T_{\rm d}$ trend in $\rho$ Oph-A by considering both the case in which only silicate grains align with the magnetic field and the case in which carbon and silicate grains align, assuming that the grain-size distribution produced by RATD follows a power law. With the parameters adopted in this work, our results indicate that composite grains with a size-distribution power index steeper than the standard MRN distribution (i.e., $\beta<-3.5$) can reproduce the observational data, in good agreement with \cite{2015A&A...578A.131L}. Polarimetric data at longer wavelengths would help us better understand grain alignment and disruption induced by RATs. In forthcoming work, we will combine these FIR data with 450 $\mu$m and 850 $\mu$m data observed by JCMT (\citealt{2018ApJ...859....4K}) to study the polarization spectrum.
We thank the anonymous referee for helpful comments that improved the impact and the presentation of this paper. This research is based on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NNA17BF53C, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 0901 to the University of Stuttgart. Financial support for this work was provided by NASA through award 4$\_$0152 issued by USRA. T.H. is funded by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) through a Mid-career Research Program (2019R1A2C1087045). A.G. is supported by the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP, co-funded by CEA and CNES. A.S. acknowledges support from the NSF through grant AST-1715876.
\section{Introduction}
Graphene has been of intense theoretical and experimental interest due to its unusual electronic properties \cite{Novoselov04(art1)}. Since a single sheet of graphene is a zero-gap semiconductor, much effort was directed toward engineering a gap in its electronic spectrum by controlling its lateral size and shape. Close to the Dirac points, the electrons can be effectively modeled by a massless Dirac equation, showing that they behave as chiral particles \cite{Castro09n}. The characteristic features of the massless Dirac electrons in graphene and their linear energy dispersion
are at the origin of its unique electronic properties, which could be of great importance in nanoelectronic applications \cite{Zhang05,Chung10}.
The electronic band structure of graphene
involves two nodal zero-gap points $(K, K')$, called Dirac points, in the first Brillouin zone, at which the conduction and valence bands touch. This leads to a number of unusual electronic properties, such as its high electric conductivity \cite{Geim09, Geim07}.
However, there is a general argument against the possibility of confining electrons electrostatically in graphene, due to Klein tunneling, which hinders the use of this marvelous material in electronic switching devices that require gate control over charge carriers \cite{Katsnelson06}. Thus,
pristine graphene quantum dots (GQDs) allow electrons to escape from any confining electrostatic potential and do not support quantum bound states in an electrostatically confined quantum dot. A large amount of research effort was deployed to create, in various ways, a band gap that allows for charge confinement in GQDs. Such a gap, along with the characteristic size of the quantum dot, usually in the range of 1-10 nm, would enable control over a wide range of applications, including highly tunable physicochemical and fluorescence properties as well as other electrical and optoelectronic properties.
The recent advances in controlled manufacturing of high quality GQDs as well as its strictly two-dimensional nature have established graphene as an exceptional candidate for future nano electronic devices \cite{Geim09,Beenakker08,Peres10}. For this reason the experimental activity aimed at confining electrons in GQDs \cite{Bunch05} had an upsurge in recent years.
On the other hand, the application of a magnetic flux in GQDs allows one to control and strengthen the electrostatic confinement of fermions \cite{Julia13}. It was shown that the magnetic flux shifts the kinematic angular momentum to integer values, hence allowing for states that cannot be confined by electrostatic gate potentials alone \cite{Bardarson09}.
Theoretically, in the absence of a spectral gap, it has been shown that an electrostatically confined QD can only accommodate quasi-bound states
\cite{Matulis08}. At the Dirac point, i.e.
at energy $E=0$, where the valence and conduction bands touch, electronic transport through QDs of certain shapes has also been considered \cite{Bardarson09}. In this particular situation, strong resonances in the two-terminal conductance have been predicted. However, in the presence of a spectral gap, true bound states have been obtained \cite{Trauzettel07,Recher09}. The physical methods used to open a gap in the energy spectrum of graphene are of vital importance for future potential applications \cite{Geim09}.
We study the electrostatic confinement of electrons in a quantum dot of gapped graphene, surrounded by a sheet of undoped graphene, in the presence of a magnetic flux. We assume that the quantum dot edge smearing is much smaller than the Fermi wavelength of the electrons and much larger than the graphene lattice constant, to ensure the validity of our continuum model \cite{Martin14}.
Solving the Dirac equation in each region and applying the continuity of the spinors at the boundaries enables us to determine the solutions in each region of space and the corresponding energy spectrum. Subsequently, we use the asymptotic behavior of the Hankel functions for large arguments to study approximately the density of states (DOS) as a function of the magnetic flux $\phi$, energy gap $\Delta$, and applied electrostatic potential $V$. We numerically compute the DOS for suitable choices of the physical parameters and investigate the different oscillatory behaviors and resonances, as well as the dependence of the DOS peaks on the quantum momentum numbers.
The manuscript is organized as follows. In section 2, we present our theoretical model describing electrostatically confined Dirac fermions and give the energy spectrum in each region of our system. In section 3, we introduce the scattering matrix formalism to determine the DOS in terms of the various physical parameters; we then compute the DOS and present our results, which reflect the effect of the magnetic flux and energy gap on the resonant peaks in the DOS. In section 4, we further discuss different numerical results related to the density of states, and we conclude in the final section.
\section{Theoretical model}
We consider a quantum dot (QD) defined by a gate with finite-carrier density and surrounded by a sheet of undoped graphene, which is connected to a metallic contact in the form of a ring as depicted in Figure 1.
\begin{figure}[H]\centering
\includegraphics[width=6cm,height=5cm]{fig1}
\caption{\sf (color online) Gate-defined graphene quantum dot (gold color) surrounded by an intrinsic graphene sheet and coupled to a source and drain reservoirs (gray color). \label{f1}}
\end{figure}
For a Dirac electron in a circular electrostatically defined quantum dot in gapped graphene, the single-valley Hamiltonian can be written as
\begin{equation}
\label{eq:Dirac}
H=v_F (\vec p+e\vec{A})\cdot\vec \sigma+V(r) \mathbb{I}+\Delta\sigma_z
\end{equation}
such that the potential barrier $V(r)$ and energy gap $\Delta(r)$ are defined by
\begin{equation}\label{e2}
V(r)=
\left\{%
\begin{array}{ll}
-\hbar v_F V_0, & r<R\\
- \hbar v_F V_\infty, & r>L \\
0, & \mbox{elsewhere} \\
\end{array}%
\right., \qquad
\Delta(r)=
\left\{%
\begin{array}{ll}
\Delta, & r< R \\
0, & \mbox{elsewhere} \\
\end{array}%
\right.
\end{equation}
where $v_F = 10^6$ m/s is the Fermi velocity, $\vec p=(p_x,p_y)$ is the momentum operator, and
$\sigma_i$ are the Pauli matrices in the basis of the two sublattices of $A$ and $B$ atoms.
We choose the parameters $V_0$ and $V_{\infty}$ to be positive, such that the dot and lead regions are electron-doped. The metallic contact for $r > L$ is modeled by taking the limit $V_{\infty} \to \infty$. The chief reason for our choice of a piecewise uniform potential is to simplify the analytic calculations.
In the polar coordinate system $(r,\theta)$, we introduce the vector potential that generates a solenoid type of magnetic flux
$
\vec{A}(r)=\frac{\hbar}{e}\frac{\phi}{r}\vec{e}_\theta
$
so that the magnetic flux $\phi$ is measured in units of flux quantum $h/e$ and $\vec{e}_\theta$ is the unit vector along the azimuthal direction.
Now the Hamiltonian \eqref{eq:Dirac} takes the form
\begin{equation}
\label{eq:Hpolar}
H = \left(\begin{array}{cc} V(r)+\Delta&D_-\\D_+&V(r)-\Delta\\\end{array}\right)
\end{equation}
where the ladder operators are given by
\begin{equation}
D_{\pm}= -i \hbar v_F e^{\pm i \theta} \left(\partial_r \pm i \frac{1}{r} \partial_{\theta} \mp \frac{\phi}{r} \right).
\end{equation}
Since the total angular momentum $J_z=L_z+\hbar\sigma_z/2$ commutes with the Hamiltonian \eqref{eq:Dirac}, we look for eigenspinors that are common eigenvectors of both $H$ and $J_z$. These are
\begin{equation}\label{e2}
\Psi(r,\theta)=
e^{im\theta}\left(%
\begin{array}{c}
e^{-i\theta/2}\chi_1(r) \\
i e^{i\theta/2}\chi_2(r) \\
\end{array}%
\right)
\end{equation}
where $m=\pm1/2,\pm3/2, \cdots$ are the eigenvalues of $J_z$ (in units of $\hbar$).
In the forthcoming analysis, we solve the Dirac equation $H \Psi = E \Psi$ in the three regions: $0 < r < R$, $R < r < L$ and $r > L$.
We obtain
\begin{equation}\label{e8}
\left(\frac{\partial}{\partial r}+\frac{1}{r} \left(m+\phi+\frac{1}{2}\right)\right)\chi_2(r)=(\epsilon-V_i-\delta)\chi_1(r)
\end{equation}
\begin{equation}\label{e88}
\left(-\frac{\partial}{\partial r}+\frac{1}{r} \left(m+\phi-\frac{1}{2}\right)\right)\chi_1(r)=(\epsilon-V_i+\delta)\chi_2(r)
\end{equation}
where we use the dimensionless parameters $\epsilon=\frac{E}{\hbar v_F}$, $V_i=\frac{V}{\hbar v_F}$, and $\delta=\frac{\Delta}{\hbar v_F}$. For the region $R<r<L$ and when $\epsilon=0$, the radial components have the forms
\begin{equation}
\label{psinoenergy}
\chi_1(r)= a_{+} r^{m+\phi-\frac{1}{2}}, \qquad \chi_2(r)= a_{-} r^{-m-\phi-\frac{1}{2}}.
\end{equation}
To avoid divergence, we impose the constraints $a_{+}=0$ for $m>0$ and $a_{-}=0$ for $m<0$.
Now we consider our system in the absence of magnetic flux $\phi=0$. Then \eqref{e8} and \eqref{e88} reduce to the following equations
\begin{equation}\label{e8b}
\left[\frac{\partial}{\partial r}+\frac{1}{r} \left(m+\frac{1}{2}\right)\right]\chi_2(r)=(\epsilon-V_i-\delta)\chi_1(r)
\end{equation}
\begin{equation}\label{e88b}
\left[-\frac{\partial}{\partial r}+\frac{1}{r} \left(m-\frac{1}{2}\right)\right]\chi_1(r)=(\epsilon-V_i+\delta)\chi_2(r).
\end{equation}
Injecting \eqref{e8b} into \eqref{e88b}, we get a second-order differential equation for $\chi_1(\rho)$
\begin{equation}\label{e9}
\left[\rho^2 \frac{\partial^2}{\partial \rho^2}+\rho
\frac{\partial}{\partial \rho}+ \rho^2 - \left(m-\frac{1}{2}\right)^2
\right]\chi_1(\rho)=0
\end{equation}
where we have set the variable $\rho=\kappa r$ and the wave number $\kappa$ is defined,
according to each region, by
\begin{equation}\label{kappa}
\kappa =\begin{cases}
\kappa_0= \sqrt{|(\epsilon+V_0)^2-\delta^2|}, & r<R\\
\kappa=\epsilon, & R < r < L \\
\kappa_{\infty}=\epsilon+V_\infty, &r > L \\
\end{cases}
\end{equation}
Equation \eqref{e9} admits the Hankel functions of the first kind, $H^{+}_n\left(\rho\right)$, and of the second kind, $H^{-}_n\left(\rho\right)$, as solutions. Combining the above, we end up with the eigenspinors
\begin{equation} \label{eq:psiref}
\psi_{\kappa,m}^{\pm}(r) =e^{i m \theta} \sqrt{\frac{\kappa}{4\pi}}\begin{pmatrix}
e^{-i\theta/2 }H^\pm_{|m|-1/2}(\kappa r) \\
i~\mathrm{sign}(m)e^{i\theta/2}H^{\pm}_{|m|+1/2}(\kappa r)
\end{pmatrix}
\end{equation}
With the requirement that the wave function is regular at $r=0$, we have the solution inside the quantum dot $r<R$
\begin{equation} \label{eq:psidot}
\psi_{\kappa,m}(r) =e^{i m \theta} \sqrt{\frac{\kappa}{4\pi}}\begin{pmatrix}
e^{-i\theta/2 }J_{|m|-1/2}(\kappa r) \\
i~\mathrm{sign}(m)e^{i\theta/2}J_{|m|+1/2}(\kappa r)
\end{pmatrix}.
\end{equation}
Note that the Hankel functions are related to the Bessel $J_n$ and Neumann $Y_n$ functions by the relations
$
H_n^{(\pm)}=J_n\pm iY_n.$
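As a quick illustrative check (ours, not part of the derivation), these relations can be verified numerically with the Bessel routines of \texttt{scipy}; the order and grid below are arbitrary:
\begin{verbatim}
# Minimal numerical check of H_n^(+-) = J_n +- i Y_n (scipy conventions).
import numpy as np
from scipy.special import jv, yv, hankel1, hankel2

n = 1                                  # order |m| - 1/2 = 1 for m = 3/2
x = np.linspace(0.5, 10.0, 50)
assert np.allclose(hankel1(n, x), jv(n, x) + 1j * yv(n, x))
assert np.allclose(hankel2(n, x), jv(n, x) - 1j * yv(n, x))
\end{verbatim}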
The presence of the flux $\phi=1/2$ modifies the eigenspinors \eqref{eq:psiref}, because the canonical angular momentum is replaced by the kinematic one, $J_{z,\mathrm{kin}}=J_z+\hbar/2$. We label the new basis states by the integer indices $\mu= m+1/2$, the eigenvalues of $J_{z,\mathrm{kin}}$.
For nonzero $\mu$, the eigenspinors now read as
\begin{equation} \label{eq:psirefflux}
\psi_{\kappa,\mu}^{\pm}(r) =\sqrt{\frac{\kappa}{4\pi}}\begin{pmatrix}
e^{i (\mu-1) \theta }H^\pm_{|\mu|-1/2}(\kappa r) \\
i~\mathrm{sign}(\mu)e^{i\mu \theta}H^{\pm}_{|\mu|+1/2}(\kappa r)
\end{pmatrix}
\end{equation}
Note that the half-integer Bessel functions are $Y_{1/2}(x)= - J_{-1/2}(x)= -\sqrt{\frac{2}{\pi x}} \cos x$ and $Y_{-1/2}(x)=J_{1/2}(x)=\sqrt{\frac{2}{\pi x}}\sin x$, and that the combination $\frac{\cos(\kappa r)}{\sqrt{r}}$ diverges at the origin.
Then for $\mu=0$, we have the eigenspinors
\begin{equation} \label{eq:psirefflux0}
\psi_{\kappa,0}^{\pm}(r) =\frac{e^{\pm i \kappa r}}{\sqrt{8 \pi^2 r}}\begin{pmatrix}
\pm e^{- i \theta } \\
1
\end{pmatrix}.
\end{equation}
In what follows, we show how the above results can be used to analyze the density of states
of our system, distinguishing two cases: without and with magnetic flux.
\section{Density of states}
To give a better understanding of the basic features of our system, let us
investigate the density of states (DOS). For this,
we introduce the local DOS $\nu(r,\epsilon)$
that is given in terms of the scattering matrix $\mathcal{S}(\epsilon)$ \cite{Langer1961,Buttiker1993,Buttiker1994}
\begin{equation}
\nu(r,\epsilon) = \frac{1}{2 \pi i \hbar v_F } \mbox{Tr}\, {\cal S}^{\dagger} \left( \frac{\delta {\cal S}}{\delta V(r)}+\frac{\delta {\cal S}}{\delta \Delta(r)}\right)
\end{equation}
such that $\mathcal{S}(\epsilon)$ can be determined using the boundary conditions.
Now
to get the total DOS, we simply integrate over the region $r < L$
to end up with
\begin{equation}
\nu_{dot}(\epsilon) = \frac{1}{2 \pi i \hbar v_F}
\int_{r < L} \mbox{Tr}{\cal S}^\dagger \left( \frac{\delta {\cal S}}{\delta V(r)}+\frac{\delta {\cal S}}{\delta \Delta(r)}\right)~dr.
\label{eq:nuWS}
\end{equation}
To calculate $\nu_{\rm dot}$ at zero energy $(\epsilon=0)$ as a function of the quantum dot parameters, it suffices to solve the Dirac equation associated with the Hamiltonian \eqref{eq:Dirac} at small but finite energy $\epsilon$, and to determine the scattering matrix $\mathcal{S}$. This will be done by considering the zero and nonzero magnetic flux cases.
\subsection{Zero magnetic flux}
In the present case and for $r > L$, the eigenspinors can be written as a linear combination of the two solutions of \eqref{eq:psiref}
\begin{equation}
\label{eq:psi}
\psi_{\epsilon,m}(r)= a_{m}(\epsilon) \psi^{-}_{k_\infty,m} (r)+b_{m}(\epsilon) \psi^{+}_{k_\infty,m} (r).
\end{equation}
To determine the coefficients $a_{m}(\epsilon)$ and $b_{m}(\epsilon)$, we use the boundary conditions at the interfaces $r=R$ and $r=L$, together with regularity at $r=0$. This allows us to obtain
\begin{equation}
\label{eq:scatt}
b_{m}(\epsilon) = \mathcal{S}_{m}(\epsilon) a_{m}(\epsilon)
\end{equation}
such that the scattering matrix $\mathcal{S}_{m}(\epsilon)$ reads as
\begin{equation}
\label{S}
\mathcal{S}_{m}(\epsilon) = - \frac{\det D^{(-)}}{\det D^{(+)}}
\end{equation}
where both matrices are given by
\begin{equation}
\label{D12}
D^{(+,-)}=
\begin{pmatrix}
0 & \sqrt{\kappa} H^{(+)}_{|m|-\frac{1}{2}} (\kappa R) &\sqrt{\kappa} H^{(-)}_{|m|-\frac{1}{2}} (\kappa R) & \sqrt{\kappa_0} J_{|m|-\frac{1}{2}}(\kappa_0 R) \\
0 & \sqrt{\kappa} H^{(+)}_{|m|+\frac{1}{2}} (\kappa R) & \sqrt{\kappa} H^{(-)}_{|m|+\frac{1}{2}} (\kappa R) & \sqrt{\kappa_0} J_{|m|+\frac{1}{2}}(\kappa_0 R) \\
\sqrt{\kappa_\infty} H^{(-,+)}_{|m|-\frac{1}{2}} (\kappa_\infty L) & - \sqrt{\kappa} H^{(+)}_{|m|-\frac{1}{2}} (\kappa L) & - \sqrt{\kappa} H^{(-)}_{|m|-\frac{1}{2}} (\kappa L) & 0 \\
\sqrt{\kappa_\infty} H^{(-,+)}_{|m|+\frac{1}{2}} (\kappa_\infty L) & - \sqrt{\kappa} H^{(+)}_{|m|+\frac{1}{2}} (\kappa L) & - \sqrt{\kappa} H^{(-)}_{|m|+\frac{1}{2}} (\kappa L) & 0
\end{pmatrix}.
\end{equation}
We consider the limit of a highly doped lead, $\kappa_{\infty} L\gg 1$, and approximate the Hankel functions by their asymptotic behavior for large arguments,
\begin{equation}
H^{(\pm)}_n(x)\approx (2/\pi x)^{1/2} e^{\pm i(x-n\frac{\pi}{2}-\frac{\pi}{4})}
\end{equation}
which is valid in the lead region $r > L$.
For small arguments, we have
\begin{equation}
J_{n}(x)\sim \frac{1}{n!} \left(\frac{x}{2}\right)^{n},
\qquad
Y_{n}(x) \sim \left\{\begin{array}{c} -\frac{\Gamma(n)}{\pi}\left(\frac{2}{x}\right)^n,~~n>0\\ \frac{2}{\pi}\ln\left(\gamma_E\frac{x}{2}\right),~~n=0
\end{array} \right.
\end{equation}
where $\ln\gamma_E=0.577\cdots$ is Euler's constant. For negative integer order, we have the relations $J_{-n}=(-1)^n J_n$ and $Y_{-n}=(-1)^n Y_n$. For small energy $\epsilon$, we can expand the scattering matrix in powers of $\kappa$ in the region $R <r <L$ and choose $\chi_{1,2}(r)\propto J_{n}(\kappa r)$ regular at $r=0$. We then find
\begin{align}
\mathcal{S}_{m}(\epsilon) = e^{-2 i \kappa_{\infty} L + i |m| \pi}
\left[ \mathcal{S}_{m}^{(0)} + \kappa \mathcal{S}_{m}^{(1 )}
+ {\cal O}(\epsilon^2) \right]
\end{align}
such that
\begin{equation}
\label{eq:calS}
\mathcal{S}^{(0)}_{m}=\frac{L^{2|m|}+i {\cal J}_m R^{2|m|}}
{L^{2|m|} -i {\cal J}_m R^{2|m|}}
\end{equation}
and $\mathcal{S}_{m}^{(1)}$ takes the following form for $m\neq \pm\frac{1}{2}$
\begin{equation}
\label{eq:calS1m}
\mathcal{S}^{(1)}_{m}=
\displaystyle
-\frac{2 i L}{2|m|-1} {\cal S}^{(0)}_m
+
\frac{8 i |m| L^{4 |m|+1} +2 i[(2|m|+1){\cal J}_m^2-(2|m|-1)]R^{2|m|+1} L^{2|m|}}{(4 |m|^2 - 1)(L^{2 |m|} - i {\cal J}_m R^{2|m|})^2}
\end{equation}
while for $m=\pm \frac{1}{2}$
\begin{equation}
\label{eq:S1m12}
\mathcal{S}^{(1)}_{\pm 1/2}=
\frac{i L (L^2-R^2)+2 i{\cal J}_{\frac{1}{2}}^2 R^2 L \ln (L/R)}{(L-i{\cal J}_{\frac{1}{2}}R)^2}
\end{equation}
where ${\cal J}_m$ is given by
\begin{equation}
{\cal J}_m=\frac{J_{|m|+1/2}(\kappa_0 R)}{J_{|m|-1/2}(\kappa_0 R)}.
\end{equation}
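Since the problem is effectively a one-channel scattering problem, $\mathcal{S}^{(0)}_{m}$ must be a pure phase. The following Python sketch (our own illustration; the parameter values are arbitrary) evaluates ${\cal J}_m$ and $\mathcal{S}^{(0)}_{m}$ from the expressions above and confirms $|\mathcal{S}^{(0)}_{m}|=1$, which holds because ${\cal J}_m$ is real:
\begin{verbatim}
# Evaluate J_m and S_m^(0); |S_m^(0)| = 1 since J_m is a real ratio.
import numpy as np
from scipy.special import jv

def S0(m, kappa0, R, L):
    Jm = jv(abs(m) + 0.5, kappa0 * R) / jv(abs(m) - 0.5, kappa0 * R)
    num = L**(2 * abs(m)) + 1j * Jm * R**(2 * abs(m))
    return num / np.conj(num)          # denominator = conjugate of numerator

print(abs(S0(m=0.5, kappa0=3.0, R=1.0, L=5.0)))   # -> 1.0 (pure phase)
\end{verbatim}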
We now use \eqref{eq:nuWS} to calculate the DOS $\nu_{\rm dot}$ at zero energy for both cases $m\neq \pm\frac{1}{2}$ and $m= \pm\frac{1}{2}$. Our calculation shows
\begin{equation}
\label{eq:deltanu}
\nu_{\rm dot} = \frac{1}{2 \pi i \hbar v_F} \sum_{m}
~ \mathcal{S}^{(0)*}_m \left[ \frac{\partial \mathcal{S}^{(0)}_m}{\partial \kappa_0}
+ \mathcal{S}^{(1)}_m \right] \left[\frac{\partial \kappa_0}{\partial V_0}-\frac{\partial \kappa_0}{\partial \delta}\right].
\end{equation}
The first term in \eqref{eq:deltanu} represents the integral of the local DOS inside the QD region ($r < R$), while the second one is its integral over the undoped layer that separates the QD from the metallic contact \cite{Martin14}.
On the other hand, at zero energy, using the continuity of the eigenspinors \eqref{eq:psidot} and \eqref{psinoenergy} at $r = R$, we find the resonance condition
\begin{equation}
\label{eq:resonanceposition}
J_{|m|-1/2}(\kappa'_0 R)=0
\end{equation}
which defines the resonance values $\kappa'_0$ of $\kappa_0$.
In the limit $R \ll L$, the DOS exhibits isolated resonances at the gate values satisfying this condition. Close to a resonance, we can write
\begin{equation}
{\cal J}_{m}\approx \frac{-1}{R(\kappa_0 - \kappa'_0)}
\end{equation}
showing that the DOS has a Lorentzian dependence on $\kappa_0$.
Now
for $|m| \neq 1/2$, the zero-energy DOS takes the form
\begin{equation}
\label{eq:deltanures}
\nu_{dot} = \frac{4 R |m|}{\pi \hbar v_F (2|m|-1)}
\frac{\Gamma}{4 R^2 (\kappa_0 - \kappa'_0)^2 + \Gamma^2} \frac{|V_0-\delta|}{\kappa_0}
\end{equation}
whereas for $|m| = 1/2$, it reads as
\begin{equation}
\nu_{\rm dot} = \frac{2 R}{\pi \hbar v_F}
\left(1 + \ln \frac{L}{R} \right)
\frac{\Gamma}{4 R^2 (\kappa_0 - \kappa'_0)^2 + \Gamma^2} \frac{|V_0-\delta|}{\kappa_0}
\end{equation}
where we have set the parameter of our theory as $\kappa_0=\sqrt{|V_0^2-\delta^2|}$ and the dimensionless resonance width is given by
\begin{equation}
\label{eq:Gamma}
\Gamma=2\left( \frac{R}{L}\right)^{2|m|}.
\end{equation}
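To make these expressions concrete, the following Python sketch (ours; units with $\hbar v_F = 1$, arbitrary parameter values, and the first few isolated resonances simply summed) locates the $m=1/2$ resonances from the zeros of $J_0$, cf. \eqref{eq:resonanceposition}, and evaluates the corresponding Lorentzian DOS:
\begin{verbatim}
# Zero-energy DOS near the m = 1/2 resonances (units: hbar*v_F = 1).
import numpy as np
from scipy.special import jn_zeros

R, L, delta = 1.0, 5.0, 0.0
Gamma = 2 * (R / L)                    # Gamma = 2 (R/L)^{2|m|}, |m| = 1/2
kp = jn_zeros(0, 3) / R                # resonances: J_0(kappa0' R) = 0

def dos_half(V0):
    k0 = np.sqrt(np.abs(V0**2 - delta**2))
    lor = Gamma / (4 * R**2 * (k0 - kp[:, None])**2 + Gamma**2)
    return (2*R/np.pi) * (1 + np.log(L/R)) * lor.sum(0) * np.abs(V0 - delta) / k0

V0 = np.linspace(1.0, 10.0, 2000)
print(V0[np.argmax(dos_half(V0))])     # close to jn_zeros(0, 1)/R = 2.405
\end{verbatim}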
\subsection{Non zero magnetic flux}
We now investigate the density of states for a gapped graphene quantum dot in the presence of the magnetic flux, such that the eigenspinors are those in \eqref{eq:psirefflux} and \eqref{eq:psirefflux0}, labeled by the kinematic angular momentum quantum number $\mu$.
The states with zero kinematic angular momentum need to be discussed separately in the presence and absence of magnetic flux. We first discuss the states with $\mu\neq 0$, for which the magnetic field only leads to slight modifications. Effectively, one finds that the results \eqref{S} and \eqref{eq:calS1m} remain valid, as long as the half-integer index $m$ is replaced by the integer index $\mu$. For $\mu \neq 0$, the calculation of bound states proceeds in the same way as without flux, and we find that the resonance condition is given by
\begin{equation}
\label{eq40}
J_{|\mu|-1/2}(\kappa'_0 R)=0.
\end{equation}
We conclude that, if the quantum dot and the surrounding undoped graphene layer are contacted to source and drain reservoirs, the width $\Gamma$ of the resonances is
\begin{equation}
\label{eq41}
\Gamma=2(R/L)^{2|\mu|}.
\end{equation}
For the case $\mu=0$, regularity of the wave function at the origin is not sufficient to determine the scattering matrix $\mathcal{S}_0(\epsilon)$. Taking a flux line of finite diameter, one finds that the wave function has to vanish at the origin \cite{heinl2013}. The calculation of $\mathcal{S}_0(\epsilon)$ is then straightforward and leads to the following result
\begin{equation}
S_0=e^{-2i(\kappa_{\infty}-\kappa_0)R} e^{-2i(\kappa_{\infty}-\kappa)(L-R)}
\end{equation}
where $\kappa$, $\kappa_0$ and $\kappa_{\infty}$ are given in \eqref{kappa}
as functions of the gate voltage $V$ and the energy gap $\Delta$. Note that by setting $\Delta=0$, we recover the DOS derived in \cite{Martin14}.
The results obtained so far will now be analyzed numerically, to emphasize the main features
of our system and underline the influence of the energy gap on the quantum dot.
\section{Results and discussions}
We study the influence of the energy gap $\delta$ and of the magnetic flux $\phi=1/2$, at incident energy $\epsilon=0$, on the bound states of an electrostatically confined graphene quantum dot of radius $R$ with contact size $L$.
Because the parameter of our theory is $\kappa_0=\sqrt{|V_0^2-\delta^2|}$, we choose to analyze numerically the DOS versus the gate voltage $V_0R$ under suitable conditions
on the physical parameters.
More precisely, we consider
particular values of the ratio $R/L \in \{0.05, 0.07, 0.1, 0.15, 0.2\}$,
of the energy gap in the range $|\delta R|\leq 6$,
and of the angular quantum numbers, $m=1/2, 3/2$ for zero flux and $\mu=1, 2$ for nonzero flux.\\
\begin{figure}[!hbt]
\centering
\includegraphics[width=0.49\linewidth]{fig01gp0-1}\includegraphics[width=0.49\linewidth]{fig01gp2-4}\\\includegraphics[width=0.49\linewidth]{fig02gp0-1}\includegraphics[width=0.49\linewidth]{fig02gp2-4}
\caption{\sf (color online) The DOS as a function of the gate voltage $V_0 R$ at incident energy $\epsilon =0$ for ratio $R/L = 0.2$ and different values of the energy gap $\delta R$. The resonances are labeled according to their angular momentum $m=\pm 1/2, \cdots, \pm 9/2$. (a): $\delta R=0, 0.5, 1$. (b): $\delta R= 0, -0.5,-1$. (c): $\delta R= 2, 3, 4$. (d): $\delta R=-2, -3, -4$.}
\label{f2}
\end{figure}
The DOS for a circular quantum dot as a function of the gate voltage $V_0R$ at $\epsilon =0$, for $R/L = 0.2$ and different values of the energy gap $\delta R$, is shown in Figure \ref{f2}. We observe that the DOS exhibits an oscillatory behavior with the appearance of resonance peaks, which are labeled according to their angular momentum $m$. When $\delta R$ is positive, increasing $\delta$ decreases the amplitude of the DOS and shifts the resonances to the right, see Figure \ref{f2}(a,c). For negative values, the amplitude and width increase as the absolute value of $\delta$ increases, and the resonance peaks move towards the left, see Figure \ref{f2}(b,d). Note that for $\delta=0$, the positions of the resonances as well as their widths and amplitudes are in agreement with the results obtained in the literature \cite{Martin14,Bardarson2009,Titov2010}. It is also clearly seen that for higher values of
$m$
the resonances disappear and sharp peaks take their place.
\begin{figure}[h]\centering
\includegraphics[width=0.5\linewidth]{fig03gp0-6}\includegraphics[width=0.5\linewidth]{fig03gpnegative0-6}\\
\includegraphics[width=0.5\linewidth]{fig04gp0-6}\includegraphics[width=0.5\linewidth]{fig04gpnegative0-6}
\caption{\sf (color online) The DOS as a function of the gate voltage $V_{0} R$ at incident energy $\epsilon =0$ for ratio $R/L=0.2$ and different values of the energy gap $\delta R$. (a): $\delta R=0, 1, 2, 3, 4, 5, 6$ and (b): $\delta R=0, -1, -2, -3, -4, -5, -6$ for the first resonance $m=1/2$. (c): $\delta R=0, 1, 2, 3,4, 5, 6$ and (d): $\delta R=0, -1, -2, -3, -4, -5, -6$ for the second resonance $m=3/2$.}\label{f3}
\end{figure}
The plots of the DOS clearly show the first resonance $m=1/2$ and the second resonance $m=3/2$ for different values of $\delta R$. We deduce that the resonance characteristics depend on both the sign and the magnitude of $\delta$. Indeed, from Figure \ref{f3}(a,c), for positive $\delta$ the resonance positions shift while the amplitude and width of the resonances decrease as $\delta R$ increases. From Figure \ref{f3}(b,d), for negative $\delta$, sharp peaks appear at the locations corresponding to the chosen values of $|\delta R|$.
We also notice that the first and second resonances are doubled once the gate voltage exceeds the gap, $V_0 R\geq |\delta R|=3$ (Figure \ref{f3}(a,c)) and $V_0 R\geq |\delta R|=4$ (Figure \ref{f3}(b,d)). The DOS exhibits an oscillatory behavior whose amplitude decreases with increasing $V_0R$ for $\delta\geq 0$ \cite{Martin14} and increases for $\delta\leq 0$. Moreover, the width of the second resonance (Figure \ref{f3}(c,d)) is very small compared to that of the first resonance (Figure \ref{f3}(a,b)).
\begin{figure}[h]\centering
\includegraphics[width=0.33\linewidth]{Fig05gp0}\includegraphics[width=0.33\linewidth]{Fig05gp4}\includegraphics[width=0.33\linewidth]{Fig05gp-4}
\caption{\sf(color online) The DOS as a function of the gate voltage $V_{0} R$ at incident energy $\epsilon=0$ and first resonance $m=1/2$ for different values of the energy gap and
ratio $R/L$. (a): $\delta R=0$, (b): $\delta R=4$ and (c): $\delta R=-4$.}
\label{f4}
\end{figure}
To show how the first resonance $m=1/2$ behaves when we modify the energy gap $\delta R$ and increase the contact size $L$ at fixed radius $R$, we present the DOS as a function of the gate voltage $V_0 R$ in Figure \ref{f4}, with (a): $\delta R=0$, (b): $\delta R=4$ and (c): $\delta R=-4$.
It is clearly seen that when $R/L$ is very small the DOS saturates at its maximum, which can be explained by the weak coupling between the QD and the metallic contact \cite{Martin14}. Comparing Figures \ref{f4}(a,b,c), we notice that the resonance amplitudes become very large for very small $R/L$ and negative values of $\delta R$.\\
\begin{figure}[H]\centering
\includegraphics[width=0.5\linewidth]{fig06gp0-1}\includegraphics[width=0.5\linewidth]{fig06gp2-4}\\\includegraphics[width=0.5\linewidth]{fig07gp0-1}\includegraphics[width=0.5\linewidth]{fig07gp2-4}
\caption{\sf(color online) The DOS as a function of the gate voltage $V_0 R$ at incident energy $\epsilon=0$ and magnetic flux $\phi=1/2$ for $R/L=0.2$ and different values of the energy gap $\delta R$. Here the resonances are labeled according to their angular momentum $\mu=m+1/2$, with $\mu=\pm 1, \cdots,\pm 4$. (a): $\delta R =0, 0.5, 1$. (b): $\delta R=0, -0.5, -1$. (c): $\delta R =2, 3, 4$. (d): $\delta R=-2, -3, -4$.}
\label{f5}
\end{figure}
In Figure \ref{f5}, we show the DOS as a function of the gate voltage $V_0 R$ for $R/L=0.2$ and different values of $\delta R$, where the resonances are labeled according to their angular momentum $\mu=m+1/2$. We observe that the amplitude of the DOS decreases
as we increase $\delta R$, with a shift to the right for positive $\delta R$ (Figure \ref{f5}(a,c)) and a shift to the left for negative $\delta R$ (Figure \ref{f5}(b,d)). The presence of the magnetic flux eliminates the resonance corresponding to $m=-1/2$, that is, no normalizable bound state exists for this value of $m$. \\
\begin{figure}[h]\centering
\includegraphics[width=0.5\linewidth]{fig08gp0-6}\includegraphics[width=0.5\linewidth]{fig08gpnegative0-6}\\
\includegraphics[width=0.5\linewidth]{fig09gp0-6}\includegraphics[width=0.5\linewidth]{fig09gpnegative0-6}
\caption{\sf (color online) The DOS as a function of the gate voltage $V_0 R$ at incident energy $\epsilon=0$
and magnetic flux $\phi=1/2$ for $R/L=0.2$ and different values of the energy gap $\delta R$. (a): $\delta R =0, 1, 2, 3, 4, 5, 6$ and (b): $\delta R =0, -1, -2, -3, -4, -5, -6$ for the first resonance $\mu=1$. (c): $\delta R=0, 1, 2, 3, 4, 5, 6$ and (d): $\delta R =0, -1, -2, -3, -4, -5, -6$ for the second resonance $\mu=2$.}\label{f6}
\end{figure}
In Figure \ref{f6}, we show the DOS as a function of the gate voltage $V_0 R$ in the presence of the magnetic flux $\phi=1/2$ at incident energy $\epsilon=0$, for $R/L=0.2$ and different values of the energy gap $\delta R$. Figure \ref{f6}(a) corresponds to $\delta R=0, 1, 2, 3, 4, 5, 6$ with $\mu=1$ (first resonance), and shows that
the DOS exhibits oscillations whose amplitudes decrease upon increasing the energy gap $\delta R$. In addition,
there is a doubling of the peaks starting from the situation with $\delta R=4$. In Figure \ref{f6}(b) we choose the values $\delta R=0, -1, -2, -3, -4, -5, -6$ with $\mu=1$ (first resonance); one sees the same behavior as in Figure \ref{f6}(a), except that the amplitude of the DOS increases when the absolute value of $\delta R$ increases. For the values $\delta R=0, 1, 2, 3, 4, 5, 6$ and $\mu=2$ (second resonance), Figure \ref{f6}(c) shows the appearance of peaks for each value of $\delta R$. The height of the peaks decreases when $\delta R$ increases, and we again notice a doubling of the resonances starting from the situation with $\delta R=5$. Now for $\delta R=0, -1, -2, -3, -4, -5, -6$ and $\mu=2$ (second resonance), Figure \ref{f6}(d) presents the same behavior as Figure \ref{f6}(c), except that the oscillation amplitudes increase when the absolute value of $\delta R$ increases. Comparing Figure \ref{f3} (absence of magnetic flux) and Figure \ref{f6} (presence of magnetic flux), we conclude that in the presence of magnetic flux the energy gap increases the heights and decreases the widths of the resonances.
In Figure \ref{f7}, we show how the first resonance $\mu=1$ behaves when we modify the energy gap $\delta R$ at $\epsilon=0$ and $\phi=1/2$. The plot shows that the resonance width increases, while the height decreases, when the contact size $L$ increases
at fixed $R$. From Figure \ref{f7}(b,c), we notice that introducing $\delta R$ in the presence of magnetic flux amplifies the resonance height by a factor of 10 as compared to
Figures \ref{f4}(b,c) for $m=1/2$ (zero magnetic flux).\\
\begin{figure}[h]\centering
\includegraphics[width=0.33\linewidth]{Fig10gp0}\includegraphics[width=0.33\linewidth]{Fig10gp4}\includegraphics[width=0.33\linewidth]{Fig10gp-4}
\caption{\sf(color online) The DOS as a function of the gate voltage $V_0 R$ at incident energy $\epsilon=0$ and first resonance $\mu=1$ for different values of the energy gap $\delta R$ and ratio $R/L$. (a): $\delta R=0$, (b): $\delta R=4$ and (c): $\delta R=-4$.}\label{f7}
\end{figure}
In comparison to the DOS analysis reported in \cite{Martin14}, several comments are in order.
We observe that introducing an energy gap $\delta$ in a graphene quantum dot of radius $R$ with magnetic flux $\phi=1/2$ changes the resonance properties of the DOS. More precisely, the amplitudes and widths of the resonances decrease for $\delta>0$,
while they increase otherwise, and the positions of the resonances undergo changes.
In addition, we observe the appearance of resonances and peaks when
$\delta$ exceeds a critical value, which can be fixed according to each considered configuration of the physical parameters.
In summary, the energy gap $\delta$ amplifies the DOS in the presence of magnetic flux and therefore we conclude that it can be used as a tunable parameter to control the properties of our system. Of course the DOS results obtained in \cite{Martin14} can be recovered by switching off $\delta$.
\section{Conclusion}
We have studied the confinement of charge carriers in a graphene quantum dot
surrounded by a sheet of undoped graphene and connected to a metallic contact, in the presence of an energy gap and a magnetic flux.
We have solved the two-band Dirac Hamiltonian in the vicinity of the $K$ and $K'$ valleys and obtained analytically the energy spectrum solutions in the three regions composing our system.
Using the asymptotic behavior of the
Hankel functions for large arguments, we have derived an approximate formula for the density of states (DOS) as a function of the magnetic flux, the energy gap and the applied electrostatic potential. We have
found the resonance conditions at zero energy under suitable boundary conditions.
We have shown that the DOS exhibits an oscillatory behavior which reflects the
appearance of resonances. The amplitude of the DOS resonances was found to decrease, with a shift to the right, when $\delta$ increases for $\delta>0$. On the other hand, when $\delta$ is negative the resonance peaks shift to the left. It was also observed that
for higher values of the angular momentum $m$ the resonances disappear and sharp peaks take their place, both in the presence and absence of magnetic flux.
We have shown that the presence of magnetic flux eliminates the resonance corresponding to $m=-\frac{1}{2}$, while the resonances corresponding to $m\neq -\frac{1}{2}$ become sharper with an amplified amplitude.
\section*{Acknowledgments}
The generous support provided by the Saudi Center for Theoretical Physics (SCTP) is highly appreciated by all authors.
AJ and HB also acknowledge the support of King Fahd University of Petroleum and Minerals under research group project RG181001.
\section{Introduction}
Battlesnake is an extension of the traditional snake arcade game where multiple snakes compete against one another for food and survival.
The last surviving snake is the winner of the game.
Competitors traditionally develop heuristics, such as the A* search algorithm \cite{russell2002artificial} and tree search \cite{mci/Schier2019}, to seek food, enemy heads, and their own tails.
Meanwhile, Reinforcement Learning (RL), which learns a policy by interacting with an environment through trial-and-error, has been naturally adopted to tackle such sequential problems.
Recent advances in deep RL further allows modelling such decision making problems with high-dimensional visual perceptual inputs made up of thousands of pixels \cite{mnih2015human}.
In this paper, we study how to utilise deep RL in conjunction with human knowledge to train Battlesnake agents.
Battlesnake focuses on a particular branch of RL where multiple agents learn to interact within the same environment.
Such systems are typically characterised as multi-agent RL problems \cite{littman1994markov, bu2008comprehensive, bucsoniu2010multi}.
Multi-agent RL paradigms can be divided into three categories according to the problem setting: fully competitive, fully cooperative, and a mix of the two.
Battlesnake falls in the fully competitive setting \cite{silver2017mastering}, in which the outcome of a game is determined only by the last surviving snake, suggesting that each snake is tasked to maximise its own reward while minimising its opponents' rewards.
Developers with superior domain knowledge build snakes with unique strategies and heuristics.
Providing RL agents with this knowledge can drastically improve the policy \cite{christiano2017deep, abel2017agent, saunders2018trial}.
This is also known as human-in-the-loop learning (HILL).
Human intuition can be provided in the form of feedback \cite{arakawa2018dqn,xiao2020fresh}, teachers \cite{abel2017agent,zhang2019leveraging}, and overseers \cite{saunders2018trial}.
Including human intuition has been shown to simplify the RL problem, speed up training, and prevent catastrophic actions \cite{abel2017agent, saunders2018trial}.
To the best of our knowledge, there exists no standard benchmark to evaluate and compare the aforementioned HILL methods, especially in a multi-agent setting.
In particular, including human intuition is not limited to interactions with the environment, but also extends to interactions with other agents.
To fill this gap, we propose Battlesnake as a testbed.
This is motivated by the fact that the progression of a Battlesnake game is straight-forward, and engineering heuristics-based rules are easy to conceptualise, visualise, and develop.
Examples of heuristics include handcrafted rules to avoid hitting walls or to eat smaller snakes.
We present a standardised training-deployment-testing framework to facilitate HILL RL studies, which allows users to utilise a suite of state-of-the-art RL algorithms.
We identify a number of standard heuristics and demonstrate baseline techniques to incorporate them as human feedback.
Our framework is \textit{agent-agnostic} and \textit{heuristics-agnostic} such that researchers can design their own HILL RL algorithms, train their models, and demonstrate in the real Battlesnake competition.
The code is available at http://github.com/REDACTED.
\section{Related works}
\textbf{Multi-agent Reinforcement Learning:} Much recent research work has been done in multi-agent RL.
We refer to \citet{zhang2019multi} and \citet{nguyen2020deep} for a comprehensive review for recent developments.
In particular, multi-agent RL in a fully competitive setting like Battlesnake is typically modelled as a zero-sum Markov game \cite{littman1994markov, silver2017mastering}, through which the goal is to achieve an approximate Nash equilibrium.
The framework of centralised training with decentralised action has been utilised for actor-critic algorithms to find such an equilibrium \cite{foerster2016learning, lowe2017multi}, where the critic can observe the joint state and actions of all agents.
Another line of research focuses on designing environments to study and evaluate the multi-agent RL agents.
For example, Keepaway soccer \cite{stone2005keepaway} and its extension \cite{kalyanakrishnan2006half,hausknecht2016half} provide a simulated football environment.
A set of gridworld-like environments has been developed to encompass various multi-agent tasks, covering both continuous \cite{lowe2017multi} and discrete \cite{yang2018mean, zheng2018magent} control problems.
\citet{resnick2018pommerman} proposed \textit{Pommerman}, a game stylistically similar to the Nintendo game Bomberman, as a playground for bench-marking.
The game uses low dimensional symbolic state interpretations as the system input.
More recent work uses StarCraft as a multi-agent learning testbed \cite{samvelyan2019starcraft, vinyals2019grandmaster} and handles partially observability and high-dimensional inputs.
The competitive environments described in \cite{yang2018mean} trained agents with different RL algorithms and were evaluated with the rewards and the win rate when the agents competed against one another.
In \citet{samvelyan2019starcraft}, agents were trained to compete against the in-built StarCraft II AI and the performance of the agents were evaluated with the win rate against the in-built AI.
In \citet{resnick2018pommerman}, Pommerman agents are evaluated on a leaderboard where agents compete against one another in a free-for-all format.
Compared to environments such as StarCraft II, Battlesnake is less complex and far cheaper computationally; the Honor of Kings framework \cite{ye2019mastering}, for instance, runs on a total of 600,000 CPU cores and 1,064 GPUs.
\textbf{Human-in-the-loop Reinforcement Learning:} The data hungry nature of RL has prompted researchers to develop techniques to leverage human knowledge for RL tasks \cite{zhang2019leveraging}.
Often, human information is passed along in the form of human intervention \cite{saunders2018trial}, reward shaping \cite{knox2009interactively,knox2012reinforcement,warnell2018deep,arakawa2018dqn,xiao2020fresh} and policy evaluation \cite{griffith2013policy,macglashan2017interactive,arumugam2019deep}.
Our work is closely related to the agent-agnostic framework proposed by \citet{abel2017agent}.
The framework contains a protocol program that controls the human interaction with the agent, through which human can perform learning interventions such as state manipulation, action pruning, and reward shaping.
Action pruning is a technique to bar possible actions from the policy based on engineered rules.
Reward shaping is the act of specifically designing a reward function.
In particular, the authors found that action pruning simplifies the RL problem and was effective to prevent catastrophic actions \cite{saunders2018trial}.
\textbf{Our contributions:} Battlesnake fills the need for a standardised benchmark for HILL in a multi-agent environment.
Leveraging the Battlesnake arena\footnote{https://play.battlesnake.com/arena/global/}, a leaderboard (similar to Pommerman \cite{resnick2018pommerman}) can be used to evaluate the algorithms.
Unlike Pommerman, the Battlesnake leaderboard also contains other types of AI bots (e.g., decision-tree-based algorithms).
Several works that used the Pommerman environment leveraged simple action filters to prevent the agent from committing suicide \cite{meisheri2019accelerating} or killing teammates \cite{gao2019skynet}.
These rules are largely limited to conditional statements of simple heuristics.
Rules that provide guidance on how to address competing agents or perform complex interactions with the environment were not utilised.
Since Battlesnake is easy to visualise and were traditionally developed with heuristics based rules, it is clear that providing human intuition into the Battlesnake RL agents could be greatly beneficial.
In this work, we provide a framework to develop, train and evaluate HILL in Battlesnake.
By providing a standardised framework to develop and assess HILL, novel techniques to incorporate such sophisticated rules can be developed.
\section{Proposed Framework for the Battlesnake challenge}
\begin{figure}[h!]
\centering
\includegraphics[width=0.48\textwidth]{Images/bs_framework.jpg}
\caption{The development, training, and deployment framework of the Battlesnake challenge}
\label{fig:battlesnake_framework}
\end{figure}
The framework design diagram of the Battlesnake challenge is presented in Figure \ref{fig:battlesnake_framework}.
The framework includes an offline main HILL RL training loop with an environment that simulates Battlesnake.
The trained agent can then be deployed online to interface with the Battlesnake engine, where it is integrated with ad hoc heuristics that alter the actions provided to the engine.
Note that these HILL components can be flexibly configured, so that researchers can focus on the algorithm design.
\textbf{Human intuition:}
Human intuition could be injected at different stages, during RL training or at inference time (Figure \ref{fig:battlesnake_framework}).
This could be in the form of altering the actions or rewards.
For more details please see Section \ref{sec:heuristics}.
\subsection{Battlesnake description}
We first provide a detailed description of Battlesnake game logic.
A typical game in Battlesnake consists of three to five snakes on a board ranging from $7 \times 7$ (small), $11 \times 11$ (medium) to $19 \times 19$ (large).
At the start of the game, $N$ snakes are randomly distributed along the boundaries of the board, each with health of $100$.
There is one piece of food randomly distributed at the same time.
At each turn, the health of every snake is decreased by one and each snake reacts to the environment by indicating whether it will move up, down, left or right;
food is then randomly spawned.
Unlike the traditional snake game, if a snake is facing up and its next action is to move down, the game considers the snake to have hit its own body and it is eliminated from the game.
This is known as a \textit{forbidden move}.
If a snake eats a food, its health will be returned to 100 and its length will grow by one.
If a snake hits another snake's head, the shorter of the two snakes is eliminated from the game. This is referred to as \textit{eating another snake}.
In addition, a snake is eliminated from the game if it: 1) goes out of the boundaries of the map, 2) hits another snake's body, 3) hits its own body, or 4) has a health of 0.
The last surviving snake becomes the winner.
\subsection{Battlesnake as a Reinforcement Learning Environment}
\label{sec:bs_rl}
\begin{figure}[h!]
\centering
\includegraphics[width=0.43\textwidth]{Images/rl.png}
\caption{Modelling Battlesnake with reinforcement learning}
\label{fig:rl_battlesnake}
\end{figure}
We consider a standard Markov game \cite{littman1994markov} regime to model the interaction between multiple Battlesnake agents with the environment.
The Markov game is specified by a tuple $M = (\mathcal{N}, \mathcal{S},\{\mathcal{A}^i\}_{i\in\mathcal{N}},\mathcal{T},\{R^i\}_{i\in\mathcal{N}},\gamma)$, where $\mathcal{N} = \{1,\dots,N\}, N> 1$ denotes the set of agents, $\mathcal{S}$ is the state space observed by all agents and $\mathcal{A}^i$ is the action space of agent $i$.
$\mathcal{T}: \mathcal{S}\times\mathcal{A}^1\times\cdots\mathcal{A}^N\times\mathcal{S} \rightarrow [0,1]$ denotes the transition function that maps a state $s_t\in \mathcal{S}$ and action $a_t^i\in\mathcal{A}^i$ pair for each agent $i$ to a probability distribution over the next state $s_{t+1} \in \mathcal{S}$.
The environment emits a reward $R^i: \mathcal{S}\times\mathcal{A}^1\times\cdots\mathcal{A}^N\rightarrow \mathbb{R}$ on each transition for each agent $i$; $\gamma$ denotes the discount factor.
Figure \ref{fig:rl_battlesnake} illustrates our setup, in which the agent interacts with the environment over the OpenAI Gym interface \cite{brockman2016openai}. Components in the MDP are given as follows:
\textbf{State:} We provide the Battlesnake simulator with an image-based observation space; the state $s_t$ at time $t$ represents the spatial distribution of all the snakes and the food.
Agent (snake) $i$ is represented by a list of coordinates ($x \in \mathbb{R}^2$), $\textbf{x}^i = [x_1^i, x_2^i ... x_{L_i}^i]$ where $L_i$ is the length of snake $i$.
The $N$ snakes are collectively referred to with $\textbf{X}$ such that $\textbf{X} = [\textbf{x}^1 ... \textbf{x}^i ...\textbf{x}^N]$.
Food $\textbf{F}$ is represented as $[x^1, x^2 \dots x^{M^t}]$ where $x$ is a coordinate corresponding to the location of the food and $M^t$ represents the number of food at time $t$.
The state for agent $i$, $s_{t}^{i}$, is organised as a grid where $s_{t}^{i} \in \mathbb{R}^{w \times h \times 3}$, and $w$ and $h$ are the size of the map.
Channels $c \in [0, 1, 2]$ in $s_{t}^{i}[c]$ represents the food, position of agent $i$, and the positions of other agents, respectively.
Specifically, for $c = 0$, $s_{t}^{i}[j, k, 0] = 1$ if $(j, k) \in \textbf{F}$ and $0$ otherwise.
The state for $c = 0$ should be identical for all agents.
Channel $c = 1$ provides the position of agent $i$, with $s_{t}^{i}[j, k, 1] = 1~\forall (j, k) \in \textbf{x}^i$ and $0$ otherwise.
Also, we set $s_{t}^{i}[j_h, k_h, 1] = 5$ where ($j_h, k_h$) denotes the head of agent $i$.
Finally, channel $c = 2$ is defined by $s_{t}^{i}[j, k, 2] = 1~\forall (j, k) \in \textbf{x}^{i'}$, where $\textbf{x}^{i'}$ denotes all other agents in $\textbf{X}$ with $i' \ne i$.
Similarly, the heads of the snakes in $\textbf{x}^{i'}$ are set to $5$.
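As an illustration, a minimal sketch of this encoding is given below (ours, not the official package; the function name and the head-first body ordering are assumptions):
\begin{verbatim}
# Build the (w, h, 3) state tensor s_t^i described above.
import numpy as np

def build_state(i, snakes, food, w, h):
    """snakes: list of coordinate lists [(x, y), ...], head first;
    food: list of (x, y) food coordinates."""
    s = np.zeros((w, h, 3))
    for (x, y) in food:                   # channel 0: food
        s[x, y, 0] = 1
    for j, body in enumerate(snakes):
        c = 1 if j == i else 2            # channel 1: agent i; 2: others
        for (x, y) in body:
            s[x, y, c] = 1
        s[body[0][0], body[0][1], c] = 5  # heads are marked with 5
    return s
\end{verbatim}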
\textbf{Action:} The action space $\mathcal{A}^i$ is identical for each agent $i$ and corresponds to the direction the agent moves towards in the next turn.
That is, $\mathcal{A} = [0, 1, 2, 3]$ corresponding to up, down, left, and right.
Thus the joint action space is defined as $\textbf{a}_t = [a_t^1, a_t^2 ... a_t^N]$.
\textbf{Reward:} We apply the same reward formulation for all the agents.
Specifically, a negative reward $-1$ is imposed if a snake dies,
and the last snake alive is assigned with a reward $1$.
We grant a small reward $\epsilon$ to each snake when it survives another turn, with the intuition of promoting the snakes to move and grow.
Users can also design their own reward functions, such as eating food and killing another snake.
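A minimal sketch of this reward assignment (ours; the value of $\epsilon$ is an assumption) is:
\begin{verbatim}
# Per-turn rewards: -1 on death, +1 for the last snake alive,
# and epsilon for surviving another turn.
EPS = 0.01  # assumed value of the small survival reward

def rewards(alive_before, alive_after):
    r = {}
    for i in alive_before:
        if i not in alive_after:
            r[i] = -1.0            # snake i died this turn
        elif len(alive_after) == 1:
            r[i] = 1.0             # last snake standing wins
        else:
            r[i] = EPS             # survived another turn
    return r
\end{verbatim}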
\subsection{Training Algorithm}
We train each snake's RL agent independently using the Proximal Policy Optimisation (PPO) algorithm \cite{schulman2017proximal}. It is a widely used, modern on-policy actor-critic algorithm that has presented stable performances in many of the recent successes in deep RL. The algorithm employs two neural networks during training -- a policy network and a value network.
The policy network interacts with the Battlesnake environment and outputs a distribution over the discrete actions given the state. The value network estimates the expected cumulative discounted reward using generalised advantage estimation \cite{schulman2015high}.
Note that while we present our results with this algorithm, the proposed framework can be used with various discrete-action state-of-the-art algorithms such as QMIX \cite{rashid2018qmix} and SAC \cite{haarnoja2018soft}.
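For concreteness, a hedged sketch of launching such independent PPO training with RLlib \cite{liang2017rllib} is shown below; the environment id, the agent naming scheme, and the exact configuration keys are assumptions and may differ across RLlib versions:
\begin{verbatim}
# Independent PPO training for 5 snakes with RLlib/Tune (sketch).
import ray
from ray import tune
from gym import spaces

obs_space = spaces.Box(0, 5, shape=(11, 11, 3))   # image state above
act_space = spaces.Discrete(4)                    # up/down/left/right

ray.init()
tune.run(
    "PPO",
    stop={"timesteps_total": 1_000_000},
    config={
        "env": "battlesnake-v0",   # assumed registered multi-agent env
        "num_workers": 4,
        "multiagent": {
            "policies": {f"snake_{i}": (None, obs_space, act_space, {})
                         for i in range(5)},
            # assumes the env names its agents "snake_0" ... "snake_4"
            "policy_mapping_fn": lambda agent_id: agent_id,
        },
    },
)
\end{verbatim}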
\subsection{Heuristics with human-in-the-loop learning}
\label{sec:heuristics}
In this work, we identify several human-engineered heuristic rules that can be included in Battlesnake agents as a part of the HILL challenge.
The philosophy is to provide agents with information regarding which actions to \textit{avoid} and to guide agents towards basic skills such as heading to food when starving.
Specifically, we provide the following heuristics:
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Avoid hitting the walls.
\item Avoid moving in the opposite direction to the one the snake is facing (i.e., \textit{forbidden moves}).
\item Moving towards and eating food when the snake health is low (i.e., to prevent \textit{starving}).
\item Killing another snake (e.g., eating another snake or trapping another snake).
\end{enumerate}
\begin{table}[h!]
\centering
\begin{tabular}{c | c c c}
\hline
Rule & Prevention/ & Interaction & Training \\
&Promotion & & phase \\ [0.5ex]
\hline\hline
1 & Prev. & Env. & Early \\
2 & Prev. & Env. & Early \\
3 & Promo. & Env. & Middle \\
4 & Promo. & Agents & Late \\ [1ex]
\hline
\end{tabular}
\caption{Properties of the heuristics}
\label{table:rule_properties}
\end{table}
Table \ref{table:rule_properties} provides an overview of the properties of each rule.
Prevention/promotion refers to \textit{action prevention} or \textit{action promotion}.
Action prevention rules forbid certain actions from the action space, similar to the rules used to prevent catastrophic actions in \cite{abel2017agent,saunders2018trial} and the action filters in \cite{gao2019skynet,meisheri2019accelerating}.
In most cases, action prevention rules could be resolved with a single conditional statement.
On the other hand, the described action promoting rules are more complex as they typically require multiple steps to achieve.
\textit{Interaction} describes whether the rules involve interacting with the environment or with other agents.
For example, to be successful with rule 4, agent $i$ clearly has to anticipate the movements of other agents in order to kill them.
Finally, training phase indicates when the rules become more important.
Rules 1 and 2 are necessary for basic navigation and movement;
they prevent the agents from committing ``suicide" in the \textit{early} phases of training and are essential throughout the duration of training.
Rule 3 is necessary for survival after the early phases of training, when basic navigation has been learnt.
Rule 4 requires high level strategies once the snakes have no issues with surviving.
While human rules are instilled with the goal of accelerating the learning procedure, they can also be biased and limit the snakes' performance.
For instance, Rule 3 could lead to snakes staying far from each other, whereas Rule 4 could result in over-aggressive snakes.
To this end, we design our platform such that the impact of the heuristics can be controlled and even removed once an agent acquires some basic skills.
We now describe three baseline HILL methods to include the heuristics rules into the RL agents.
\subsubsection{In-training action masking}
\label{sec:heuristics:action_masking}
\begin{figure}[h]
\centering
\centering
\includegraphics[width=0.4\textwidth]{Images/in-training-action_mask.jpg}
\caption{In-training action masking}
\label{fig:action_masking}
\end{figure}
Let $H(s_t^i)$ be the output of the human-engineered rules, with $H(s_t^i) \in \{0, 1\}^{|\mathcal{A}|}$, where $1$ corresponds to a valid action and $0$ corresponds to an action to be \textit{masked}. The masked action is then
\begin{equation}
\label{eqn:action_masking}
a_t^{*i} = \operatorname{argmax}_{a_t^i} \left[a_t^i + w \times (1 - H(s_t^i))\right]
\end{equation}
where $a_t^i$ denotes the vector of action scores from the agent, $a_t^{*i}$ denotes the masked action, $w\times$ denotes broadcasting of $w$, and $w = -\epsilon^{-1}$ is a large negative constant.
Equation \ref{eqn:action_masking} is used to replace the action computation steps during training.
Action masking is most applicable to catastrophic-action prevention and single-step heuristics.
As such, it was applied to rules 1 and 2.
To prevent snakes from hitting a wall (i.e., rule 1), $H(s_t^i)$ is $0$ for actions that will lead the snake to hit a wall and $1$ otherwise.
For rule 2, to stop forbidden moves, $H(s_t^i)$ is $0$ for the actions that point towards the direction of the second element in the snake's body and $1$ otherwise.
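A possible implementation of $H(s_t^i)$ for rules 1 and 2 is sketched below (ours; the action ordering and the coordinate convention, with $y$ increasing upwards, are assumptions):
\begin{verbatim}
# H(s) for rules 1 (walls) and 2 (forbidden moves); 1 = valid, 0 = masked.
import numpy as np

MOVES = {0: (0, 1), 1: (0, -1), 2: (-1, 0), 3: (1, 0)}  # up/down/left/right

def heuristic_mask(head, neck, w, h):
    H = np.ones(4)
    for a, (dx, dy) in MOVES.items():
        nx, ny = head[0] + dx, head[1] + dy
        if not (0 <= nx < w and 0 <= ny < h):
            H[a] = 0          # rule 1: this action would hit a wall
        if neck is not None and (nx, ny) == tuple(neck):
            H[a] = 0          # rule 2: forbidden move onto the neck
    return H
\end{verbatim}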
\subsubsection{Ad hoc action masking}
\begin{figure}[h]
\centering
\centering
\includegraphics[width=0.4\textwidth]{Images/ad hoc action mask.jpg}
\caption{Ad hoc action masking}
\label{fig:ad_hoc_action_masking}
\end{figure}
Ad hoc action masking is similar to in-training action masking; however, the heuristics are only applied during inference.
Specifically, given $H(s_t^i)$ with $H(s_t^i) \in \{0, 1\}^{|\mathcal{A}|}$, $a_t^{*i} = \operatorname{argmax}_{a_t^i} [a_t^i * H(s_t^i)]$, where $*$ denotes element-wise multiplication.
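Since action probabilities are non-negative, the element-wise product followed by an argmax realises this masking at inference time; a one-line sketch (ours):
\begin{verbatim}
# Ad hoc masking at inference: argmax of (action probabilities * H).
import numpy as np

def masked_action(action_probs, H):
    return int(np.argmax(action_probs * H))

masked_action(np.array([0.1, 0.6, 0.2, 0.1]),
              np.array([1, 0, 1, 1]))   # -> 2 (action 1 is masked out)
\end{verbatim}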
\subsubsection{Reward manipulation}
Reward manipulation incorporates human intuition by specifically designing a reward function that encourages events corresponding to the heuristics \cite{abel2017agent}.
During training, the heuristics-based reward function is defined as $\hat{R}(s_t^i, a_t^i) = \hat{r}_t^i$, where $\hat{r}_t^i$ is the heuristics-based reward.
This reward is fed into the learning process via $r_t^{*i} = r_t^i + \hat{r}_t^i$, and $r_t^{*i}$ is used as part of the experience rollout.
For example, penalties ($\hat{r}_t^i < 0$) can be provided whenever a snake hits a wall, to account for rule 1.
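As a sketch (ours; the penalty value is an assumption), rule 1 can be encoded as follows, reusing the \texttt{MOVES} table defined above:
\begin{verbatim}
# Heuristics-based shaping for rule 1: r* = r + r_hat.
WALL_PENALTY = -0.5   # assumed shaping value

def shaped_reward(r, head, action, w, h):
    dx, dy = MOVES[action]
    nx, ny = head[0] + dx, head[1] + dy
    r_hat = WALL_PENALTY if not (0 <= nx < w and 0 <= ny < h) else 0.0
    return r + r_hat
\end{verbatim}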
\section{Experiments}
\subsection{Implementation details}
The source code of the Battlesnake package is available at https://github.com/REDACTED.
This package contains the code for the gym, RL training scripts, heuristics implementations, a heuristics developer/simulator, and code to deploy trained agents to compete in the Battlesnake arena (see Figure \ref{fig:battlesnake_framework}).
Examples of using the Battlesnake package to train agents were developed with the RL package RLlib \cite{liang2017rllib}.
\subsection{Evaluation}
There are two main avenues of evaluation for the Battlesnake HILL multi-agent RL challenge.
The first avenue evaluates the performance of the Battlesnakes during training.
The \textit{episode length} could be used to evaluate how long the agents survived for.
To investigate the baseline performance, we experimented with 1) a map size of $11 \times 11$ comparing the performance of 3, 5, and 7 agents, and 2) 5 agents comparing the performance on map sizes of $7 \times 7$, $11 \times 11$, and $19 \times 19$.
To evaluate HILL, the episode length as well as the frequency of the events that each heuristic aims to prevent (or encourage) are collected.
Specifically, the frequencies of 1) hitting a wall, 2) forbidden moves, 3) starving, and 4) one snake killing another are recorded.
To investigate the baseline performance, the map size and the number of agents were fixed at $11 \times 11$ and 5, respectively.
We investigated the effects of in-training action masking and reward manipulation during training.
The episode length is presented for action masking of rules 1 and 2 (as these heuristics are easy to implement with action masking).
The episode length and the frequency of each event are presented to assess reward manipulation for the heuristics.
The second avenue of evaluation occurs during inference, where the \textit{win rate} within the Battlesnake arena is measured.
The performance of the snakes can act as a leaderboard to evaluate the RL agents' performance.
The performance of the baselines were assessed in the Battlesnake arena.
We compared the performance of agents with in-training action masking, ad hoc action masking, reward manipulation, and no HILL for rule 2.
Note that we only evaluated rule 2, both to reduce the number of comparisons and because forbidden moves can occur at every step.
Specifically, we ran 30 games with the four snakes in the Battlesnake arena.
The last surviving snake was given four points, the second last surviving snake was given three points, and so on.
The total score was used to assess the performance of the snakes.
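This placement-based scoring can be summarised with a small helper (ours):
\begin{verbatim}
# Arena scoring: 4 points to the winner, 3 to the second-last
# surviving snake, and so on, totalled over the 30 games.
def arena_scores(elimination_orders):
    """elimination_orders: per game, snake ids ordered from
    first-eliminated to last-surviving."""
    totals = {}
    for order in elimination_orders:
        for rank, snake in enumerate(order, start=1):
            totals[snake] = totals.get(snake, 0) + rank
    return totals
\end{verbatim}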
\section{Results}
\subsection{Multi-agent reinforcement learning}
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{Images/iterate_agent_num episode_len.png}
\caption{Experiments varying the number of agents on a $11 \times 11$ map}
\label{fig:results_MARL}
\end{figure}
The results of varying the number of agents are presented in Figure \ref{fig:results_MARL}.
We observe that, in general, the mean episode length rapidly increases during the first 100 thousand training steps.
After $\approx200$ thousand steps, the gains per time step start to diminish.
We can observe that the maximum episode length for 3 agents is distinctively better than for 5 and 7 agents.
This is likely because games with 3 agents leave more space to roam around the map compared to games with 5 or 7 agents.
Similarly, when investigating the effect of map size, we observed that the episode lengths of 5 agents on larger maps were longer than on smaller maps.
\subsection{Human-in-the-loop learning}
\subsubsection{In-training action masking}
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{Images/results_action_masking.png}
\caption{Episode length of snakes with different in-training action masking schemes}
\label{fig:results_AM:episode}
\end{figure}
As shown in Figure \ref{fig:results_AM:episode}, the episode length with the forbidden-move (rule 2) in-training action mask is higher than without action masking.
A slight increase is also present when using the wall-hitting action mask.
\subsubsection{Reward manipulation}
As an example, the frequencies of forbidden moves (rule 2) and starving (rule 3) are presented in Figure \ref{fig:results_RM}.
We observe a slight decrease in the frequency of forbidden moves (Figure \ref{fig:results_RM:forbidden}).
From Figure \ref{fig:results_RM:starving}, the chance of starvation increases as training progresses.
This is because once the snakes comprehend basic navigation skills, episodes last long enough for their health to diminish to 0.
Reward manipulation with rule 3 slightly lowered the agents' chance of starving; however, starvation was not eliminated by reward manipulation.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.38\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/reward_shaping_for_bin.png}
\caption{ }
\label{fig:results_RM:forbidden}
\end{subfigure}
\begin{subfigure}[b]{0.38\textwidth}
\centering
\includegraphics[width=\textwidth]{Images/reward_shaping_starving.png}
\caption{ }
\label{fig:results_RM:starving}
\end{subfigure}
\caption{Experiments with reward manipulation for rule 2 and 3 with 5 agents on a $11 \times 11$ map.
(a) Forbidden moves reward manipulation, (b) Starving reward manipulation.}
\label{fig:results_RM}
\end{figure}
\subsection{Arena testing}
Four agents, with 1) in-training action masking, 2) reward manipulation, 3) ad hoc action masking, and 4) no HILL, were deployed in the Battlesnake arena to assess their performance.
Specifically the agents with HILL were provided with rule 2 to prevent forbidden moves.
Our experiments indicated that, on average, the agent with ad hoc action masking won the most (score = 3.133).
The agent with no HILL (score = 2.76) was closely followed by the agent with reward manipulation (score = 2.6).
Finally, the agent with in-training action masking performed the worst at inference time (score = 1.433).
\section{Conclusion}
In conclusion, we presented Battlesnake as a framework for investigating human-in-the-loop multi-agent RL.
Methods for evaluating the training progression of this competitive multi-agent RL problem were presented, along with results for baseline methods.
Furthermore, we recommended several heuristics-based rules so that researchers can investigate human-in-the-loop RL.
The results of baseline methods such as action masking and reward manipulation were presented, with a discussion of their varying degrees of success.
Future work will include developing additional heuristics-based rules, methods to leverage such information, and the inclusion of non-RL agents in both training and the arena.
This paper presents methods to evaluate the training progression internally with multiple agents; however, the most definitive method of evaluation will be to compete in the global Battlesnake arena.
The arena features a Kaggle-like leaderboard where developers utilising different techniques can upload their snakes.
The code described in this paper is open source and provides the infrastructure to compete in the global arena.
We look forward to contributions from researchers and developers building new RL-based Battlesnakes.
\section{Introduction}
Battlesnake is an extension of the traditional \textit{Snake} arcade game where multiple snakes compete against one another for food and survival.
The last surviving snake is the winner of the game.
Competitors traditionally develop heuristics, such as the A* search algorithm \cite{russell2002artificial} and tree search \cite{mci/Schier2019}, to seek food, enemy heads, and their own tails.
Meanwhile, Reinforcement Learning (RL), which learns a policy by interacting with an environment through trial-and-error, has been naturally adopted to tackle such sequential problems.
Recent advances in deep RL further allows modelling such decision making problems with high-dimensional visual perceptual inputs made up of thousands of pixels \cite{mnih2015human}.
Battlesnake focuses on a particular branch of RL where multiple agents learn to interact within the same environment.
It is denoted as a multi-agent RL problem \cite{littman1994markov, bu2008comprehensive, bucsoniu2010multi}.
The game fits the fully competitive setting \cite{silver2017mastering}, in which each agent is tasked to maximise its own reward while minimising its opponents' rewards.
Developers with superior domain knowledge build snakes with unique strategies and heuristics.
Having Human-In-the-Loop Learning (HILL) aids the policy training and prevents catastrophic actions \cite{christiano2017deep, abel2017agent}.
Human intuition could be provided as feedback \cite{arakawa2018dqn,xiao2020fresh}, teachers \cite{abel2017agent,zhang2019leveraging}, and overseers \cite{saunders2018trial}.
It can also steer agents towards more optimal learning by identifying important features and omitting misleading ones, subsequently reducing the dimensionality of the state space \cite{abel2017agent}.
However, these methods incorporate human guidance in a single agent setting.
To the best of our knowledge, there exists no playground designated to evaluate multi-agent RL algorithms with HILL.
To fill in this gap, we introduce the Battlesnake Challenge, an accessible and standardised framework that allows researchers to effectively train and evaluate their multi-agent RL algorithms with various HILL methods.
We choose to use Battlesnake as the underlying game engine because it can be seamlessly integrated to the multi-agent RL research direction, while remaining intuitive and lightweight. Besides, the rules governing the game are relatively simple, but the resulting strategies can still be complex. Such setting facilitates the use of human knowledge to provide policy training guidance.
Specifically, we offer a simulated Battlesnake environment for offline training, after which the snakes can be deployed to the cloud to compete with other snakes in the Battlesnake global arena\footnote{https://play.battlesnake.com/arena/global/. The arena hosts all types of AI bots, RL or non-RL, to compete against each other.}.
To accommodate human knowledge, we identify a number of standard heuristics and further demonstrate the baseline techniques to incorporate them into the agent.
Our proposed framework is \textit{agent-agnostic} and \textit{heuristics-agnostic} such that researchers can design their own algorithms, train their models, and demonstrate in the real Battlesnake competition.
We hope that the Battlesnake Challenge will serve as a testbed to encourage research into multi-agent RL and HILL.
Our contributions are as follows:
\begin{itemize}
\item We propose an end-to-end training-deployment-testing framework that consists of an offline module for multi-agent RL training with HILL, and an online module for performance evaluation against other public agents.
\item On the multi-agent RL aspect, we develop a simulator for the Battlesnake problem through a proper design of state, action and reward. On HILL aspect, we identify a set of baseline heuristics that can be instilled to improve the agents during training or after deployment.
\item We validate the proposed framework and baseline heuristics with our preliminary experiments.
We investigate how different incorporation methods affect task performance both offline and online.
Our results show that agents with HILL outperform agents without HILL and careful reward manipulation performs the best among our proposed heuristics in the online Battlesnake arena.
\item We open-source the Battlesnake Challenge framework at \texttt{http://github.com/awslabs/sagemaker-battlesnake-ai} to encourage broader research directions.
\end{itemize}
\section{Related works}
\textbf{Multi-agent RL Testbed:} There is a growing number of studies focusing on designing environments to evaluate the agents' performance with the advancements in the multi-agent RL regime \cite{zhang2019multi, nguyen2020deep}.
For example, Keepaway soccer \cite{stone2005keepaway} and its extension \cite{kalyanakrishnan2006half,hausknecht2016half} provide a simulated football environment.
Meanwhile, a set of gridworld-like environments has been developed to encompass various multi-agent tasks, covering both continuous \cite{lowe2017multi} and discrete \cite{yang2018mean, zheng2018magent} control problems.
\citet{resnick2018pommerman} proposed \textit{Pommerman},
a game stylistically similar to the Nintendo game Bomberman,
as a playground for bench-marking different agents.
The system uses low-dimensional symbolic state interpretations as input, and the authors built an online leaderboard where researchers can submit their agents and compete against one another.
It, however, allows only up to four agents and does not include a mechanism for adding human intuition to the agents.
More recent work considers real-time strategy games that require complex environments and controls.
The StarCraft Multi-Agent Challenge (SMAC) \cite{samvelyan2019starcraft, vinyals2019grandmaster}, developed on top of \textit{StarCraft II}, focuses on handling partial observability and high-dimensional inputs.
It aims to serve as a benchmark for \textit{cooperative} multi-agent RL, rather than the \textit{competitive} setting as in Battlesnake.
In contrast, \citet{ye2019mastering} presented a 1v1 game mode using multi-agent RL in a competitive setting through the game \textit{Honor of Kings}.
They developed an off-policy RL system architecture for scalable training and eliminated invalid actions to improve the training efficiency.
Nonetheless, the complexities inherent in both of these games require specific game knowledge, making it difficult for general researchers to develop and evaluate their multi-agent RL algorithms.
\textbf{Human-in-the-loop Reinforcement Learning:} The data hungry nature of RL has prompted researchers to develop techniques to leverage human knowledge for RL tasks \cite{zhang2019leveraging}.
Often, human information is passed along in the form of human intervention \cite{saunders2018trial}, reward shaping \cite{knox2008tamer, knox2009interactively,knox2012reinforcement,warnell2018deep,arakawa2018dqn,xiao2020fresh} and policy evaluation \cite{griffith2013policy,macglashan2017interactive,arumugam2019deep}.
In principle, human intuition can be injected in two ways: 1) by evaluating the actions during training through real-time feedback or intervention; and 2) by defining handcrafted rules prior to training to alter the agent's behaviour based on human intuition.
TAMER+RL \cite{knox2010combining} is an example of the first method, in which humans provide a reward given a state-action pair.
An example of the second method is the agent-agnostic framework proposed by \citet{abel2017agent}.
The framework contains a protocol program that controls the human interaction with a single agent.
Humans can perform learning interventions with handcrafted rules that alter the transition dynamics given the current state and action, with methods including action masking, reward manipulation, etc.
Action masking is a technique to bar possible actions from the policy based on engineered rules.
Reward manipulation is the act of specifically designing a reward function.
In particular, the authors found that action masking simplifies the RL problem and is effective in preventing catastrophic actions \cite{saunders2018trial}.
Considering that Battlesnakes are traditionally developed with handcrafted rules, we extend the framework to a multi-agent RL setting.
\section{Description of the Battlesnake Challenge}
The architecture design of the Battlesnake Challenge is presented in Figure \ref{fig:battlesnake_framework}.
Our framework includes an offline RL training module equipped with a simulated Battlesnake environment,
as will be described in Section \ref{subsec:env}.
The framework is designed to work with handcrafted rules \cite{abel2017agent} as well as more complex heuristics \cite{russell2002artificial,mci/Schier2019} that are programmed in advance.
The preprogrammed heuristics are independent of the RL algorithm and could be incorporated into the RL training module to assist the training procedures.
Once the trained agent is deployed online, it can make inferences and interface with the Battlesnake engine to play in the Battlesnake arena.
We integrate the inference with optional ad-hoc heuristics to enable action overwriting, through which human experts can perform safety checks.
We use the Battlesnake arena to evaluate different combinations of in-training and ad-hoc human guidance by allowing the agents to compete against each other.
\begin{figure}[h!]
\centering
\includegraphics[width=0.43\textwidth]{Images/bs_framework.jpg}
\caption{Overview of the Battlesnake Challenge with human-in-the-loop multi-agent reinforcement learning. Human knowledge injected in the offline training phase can affect the state ($s_t$), action ($a_t$) and reward ($r_t$) at timestep $t$. Once the agents are deployed, ad-hoc heuristics can overwrite the action ($a_t$) at each inference timestep $t$.}
\label{fig:battlesnake_framework}
\end{figure}
\subsection{Battlesnake description}
\label{subsec:env}
We first provide a detailed description of Battlesnake game logic.
A typical game in Battlesnake consists of three to five snakes on a board ranging from $7 \times 7$ (small), $11 \times 11$ (medium) to $19 \times 19$ (large).
At the start of the game, $N$ snakes are randomly distributed along the boundaries of the board, each with health of $100$.
There is one piece of food randomly distributed at the same time.
At each turn, the health of every snake decreases by one and each snake reacts to the environment by indicating whether it will move up, down, left or right;
food is then randomly spawned.
Unlike the traditional Snake game, if a snake is facing up and its next action is to move down, the game considers the snake to have hit its own body and it is eliminated from the game.
This is known as a \textit{forbidden move}.
If a snake eats food, its health is restored to 100 and its length grows by one.
If a snake hits another snake's head, the shorter of the two snakes is eliminated from the game (if the sizes of the two snakes are the same, both the snakes are eliminated).
This is referred to as \textit{eating another snake}.
In addition, a snake is eliminated from the game if it: 1) goes out of the boundaries of the map, 2) hits another snake's body, 3) hits its own body, or 4) has a health of 0.
The last surviving snake becomes the winner.
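To make the turn logic concrete, the following is a minimal Python sketch of the rules above; the \texttt{Snake} class and \texttt{step\_turn} helper are illustrative names rather than part of the released package, and we assume each snake's new head position has already been prepended to its body by the chosen action.
\begin{verbatim}
class Snake:
    def __init__(self, body):
        self.body = list(body)      # [(x, y), ...]; body[0] is the head
        self.health = 100
        self.alive = True

def step_turn(snakes, food, size):
    for s in [s for s in snakes if s.alive]:
        s.health -= 1               # health decreases by one each turn
        if s.body[0] in food:       # eating restores health to 100 and,
            s.health = 100          # by not popping the tail, grows the
            food.remove(s.body[0])  # snake by one
        else:
            s.body.pop()
    for s in [s for s in snakes if s.alive]:
        x, y = s.body[0]
        off_board = not (0 <= x < size and 0 <= y < size)
        hit_body = any(s.body[0] in o.body[1:]
                       for o in snakes if o.alive)
        if off_board or hit_body or s.health <= 0:
            s.alive = False         # eliminations 1)-4) above
    alive = [s for s in snakes if s.alive]
    for a in alive:                 # head-to-head: the shorter snake
        for b in alive:             # dies; equal lengths eliminate both
            if a is not b and a.body[0] == b.body[0]:
                if len(a.body) <= len(b.body): a.alive = False
                if len(b.body) <= len(a.body): b.alive = False
\end{verbatim}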
\subsection{Battlesnake as a Reinforcement Learning Environment}
\label{sec:bs_rl}
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{Images/rl.png}
\caption{Modelling Battlesnake with reinforcement learning.
}
\label{fig:rl_battlesnake}
\end{figure}
We consider a standard Markov game \cite{littman1994markov} to model the interaction between multiple Battlesnake agents with the environment.
Each agent represents one snake in the game.
The Markov game is specified by a tuple $M = (\mathcal{N}, \mathcal{S},\{\mathcal{A}^i\}_{i\in\mathcal{N}},\mathcal{T},\{R^i\}_{i\in\mathcal{N}},\gamma)$, where $\mathcal{N} = \{1,\dots,N\}, N> 1$ denotes the set of agents,
$\mathcal{S}$ is the state space observed by all agents and $\mathcal{A}^i$ is the action space of agent $i$.
$\mathcal{T}: \mathcal{S}\times\mathcal{A}^1\times\cdots\times\mathcal{A}^N\times\mathcal{S} \rightarrow [0,1]$ denotes the transition function that maps a state $s_t\in \mathcal{S}$ and action $a_t^i\in\mathcal{A}^i$ pair for each agent $i$ to a probability distribution over the next state $s_{t+1} \in \mathcal{S}$.
The environment emits a reward $R^i: \mathcal{S}\times\mathcal{A}^1\times\cdots\times\mathcal{A}^N\rightarrow \mathbb{R}$ on each transition for each agent $i$; $\gamma$ denotes the discount factor.
Figure \ref{fig:rl_battlesnake} illustrates the setup in which the agent interacts with the simulator over the OpenAI Gym interface \cite{brockman2016openai}. The components of the Markov game are given as follows:
\textbf{State:} We provide the Battlesnake simulator a gridworld based state space $s_t$ at time $t$ to represent the spatial distribution of all the snakes and food.
Agent $i$ is represented by a list of coordinates, $\textbf{x}^i = [x_1^i, x_2^i, \dots, x_{L_i}^i]$ where $x_j^i \in \mathbb{R}^2$ $\forall j \in 1, \dots, L_i$ and $L_i$ is the length of snake $i$.
The $N$ snakes are collectively referred to as $\textbf{X} = [\textbf{x}^1, \dots, \textbf{x}^N]$.
Food $\textbf{F}$ is represented as $[y^1, y^2 \dots y^{M^t}]$ where $y^j\in\mathbb{R}^2$ is the coordinate corresponding to the location of the food $j$ and $M^t$ represents the number of food at time $t$.
To ingest the information of other agents, we organise the state for agent $i$, $s_{t}^{i}$ as a grid with 3 channels where $s_{t}^{i} \in \mathbb{R}^{w \times h \times 3}$, and $w$ and $h$ are the width and height of the map.
Specifically, assume $s_{t}^{i}[j, k, c]$ describes the state value at coordinate $(j, k)$ on the map of channel $c$ for agent $i$ at time $t$.
Channels $c \in \{0, 1, 2\}$ represent the positions of food, agent $i$, and the other agents, respectively.
For $c = 0$, $s_{t}^{i}[j, k, 0] = 1$ if $(j, k) \in \textbf{F}$ and $0$ otherwise.
The state for $c = 0$ should be identical for all agents.
Similarly, $c = 1$ provides the position of agent $i$ where $s_{t}^{i}[j, k, 1] = 1, \forall (j, k) \in \textbf{x}^i$ and $0$ otherwise.
Also, we set $s_{t}^{i}[j_h, k_h, 1] = 5$ where ($j_h, k_h$) denotes the head of agent $i$.
We choose $5$ experimentally to provide a larger differentiation between the body and the head of the snake.
Finally, $c = 2$ is defined as $s_{t}^{i}[j, k, 2] = 1, \forall (j, k) \in \textbf{x}^{i'}$ where $\textbf{x}^{i'}$ are all other agents in $\textbf{X}$ where $i' \ne i$, and the heads of the snakes in $\textbf{x}^{i'}$ are set to $5$ as well.
It is worth mentioning that we deliberately choose to formulate the state space in a gridworld fashion rather than using image pixels or complex embedded features. We believe this makes RL training easier and encourages research to focus on developing heuristics with HILL.
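As an illustration, one possible NumPy encoding of $s_t^i$ is sketched below; the helper name \texttt{encode\_state} is hypothetical.
\begin{verbatim}
import numpy as np

def encode_state(i, snakes, food, w, h, head_value=5):
    """s_t^i in R^{w x h x 3}: c=0 food, c=1 agent i, c=2 others."""
    s = np.zeros((w, h, 3), dtype=np.float32)
    for (x, y) in food:                    # channel 0: food positions
        s[x, y, 0] = 1.0
    for j, body in enumerate(snakes):      # snakes[j]: [(x, y), ...],
        c = 1 if j == i else 2             # head first
        for (x, y) in body:
            s[x, y, c] = 1.0               # body cells set to 1
        hx, hy = body[0]
        s[hx, hy, c] = head_value          # heads marked with 5
    return s
\end{verbatim}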
\textbf{Action:} The action space $\mathcal{A}^i$ is identical for each agent $i$, representing the direction the agent moves towards in the next turn.
Namely, $a_t^i \in \{0, 1, 2, 3\}$ corresponds to up, down, left, and right at time $t$ for agent $i$.
Thus the joint action is defined as $\textbf{a}_t = [a_t^1, a_t^2, \dots, a_t^N]$.
\textbf{Reward:} The overall goal is to become the last surviving snake.
By default, we apply the same reward formulation for all the agents.
Specifically, a negative reward $-1$ is imposed if a snake dies,
and the last snake alive is assigned with a reward $1$.
We grant a small reward $\epsilon=0.002$ to each snake when it survives another turn, with the intuition of encouraging the snakes to move and grow.
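The default per-step reward assignment can be summarised by the following minimal sketch (the function name is ours):
\begin{verbatim}
def default_reward(just_died, is_last_survivor, eps=0.002):
    if just_died:
        return -1.0        # death penalty
    if is_last_survivor:
        return 1.0         # reward for winning the game
    return eps             # small per-turn survival bonus
\end{verbatim}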
\subsection{Training Algorithm}
We train each snake's policy independently using the Proximal Policy Optimisation (PPO) algorithm \cite{schulman2017proximal}. It is a widely used, modern on-policy actor-critic algorithm that has demonstrated stable performance in many recent successes in deep RL. The algorithm employs two neural networks during training -- a policy network and a value network.
The policy network interacts with the Battlesnake environment and outputs a distribution over the discrete actions given the state. The value network estimates the expected cumulative discounted reward using generalised advantage estimation \cite{schulman2015high}.
Note that while we use PPO to conduct experiments, the proposed framework is \textit{agent-agnostic} and can fit various discrete action-based state-of-the-art algorithms such as QMIX \cite{rashid2018qmix} and SAC \cite{haarnoja2018soft}.
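For concreteness, a training run of this kind could be configured with RLlib's multi-agent API roughly as below; the environment identifier, agent naming and hyperparameters are placeholders, not the exact ones used in our released package.
\begin{verbatim}
import ray
from ray import tune
from gym.spaces import Box, Discrete
import numpy as np

N = 5
obs_space = Box(0.0, 5.0, shape=(11, 11, 3), dtype=np.float32)
act_space = Discrete(4)        # up, down, left, right
# one independently trained PPO policy per snake
policies = {"snake_%d" % k: (None, obs_space, act_space, {})
            for k in range(N)}

ray.init()
tune.run("PPO",
         stop={"timesteps_total": int(1e7)},
         config={
             "env": "BattlesnakeEnv-v0",   # placeholder env id
             "multiagent": {
                 "policies": policies,
                 "policy_mapping_fn": lambda agent_id: agent_id,
             },
         })
\end{verbatim}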
\subsection{Heuristics with human-in-the-loop learning}
\label{sec:heuristics}
In principle, our proposed framework is \textit{heuristic-agnostic} such that researchers can develop various heuristics and bring them into the agent training process.
For the purpose of illustration, in this work we identify several human engineered heuristics that we used as part of the challenge.
The philosophy is to provide agents with information regarding what actions to \textit{avoid} and to guide agents towards basic skills such as heading to food when starving.
Specifically, we provide the following heuristics:
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item Avoid hitting the walls.
\item Avoid moving in the direction opposite to the one the snake is facing (\textit{forbidden moves}).
\item Move towards and eat food when the snake's health is low to prevent \textit{starving}.
\item Kill another snake (e.g., by eating another snake or trapping another snake).
\end{enumerate}
\begin{table}[h!]
\centering
\begin{tabular}{c | c c c}
\hline
Rule & Prevention/ & Interaction & Training \\
&Promotion & & phase \\ [0.5ex]
\hline\hline
1 & Prev. & Env. & Early \\
2 & Prev. & Env. & Early \\
3 & Promo. & Env. & Middle \\
4 & Promo. & Agents & Late \\ [1ex]
\hline
\end{tabular}
\caption{Properties of the heuristics. Prev. refers to action prevention; Promo. refers to action promotion. Interaction describes whether the rules are interacting with the environment or other agents. Training phase indicates when the rules become more important.}
\label{table:rule_properties}
\end{table}
Table \ref{table:rule_properties} provides an overview of the properties of each rule.
Prevention/promotion refers to \textit{action prevention} or \textit{action promotion}.
Action prevention rules help eliminate unreasonable actions, similar to the use of catastrophic actions prevention \cite{abel2017agent,saunders2018trial} and action filters \cite{gao2019skynet,meisheri2019accelerating}.
In most cases, action prevention rules could be resolved with a single conditional statement.
Meanwhile, the described action promotion rules are more complex as they require multiple steps to achieve.
Interaction describes whether the rules are interacting with the environment or other agents.
For example, to manage rule 4, an agent would have to anticipate the movements of other agents in order to kill them.
Finally, training phase indicates when the rules become more important.
Rules 1 and 2 are necessary for basic navigation and movement;
they prevent the agents from committing ``suicide" in the early phases of training and are essential throughout the duration of training.
Rule 3 is necessary for survival after the early phases of training, when basic navigation is learnt.
Rule 4 requires high-level strategies once the snakes have no issues with survival.
While human rules are instilled with the goal of accelerating the learning procedure, they can also be biased and limit the snakes' performance.
For instance, rule 3 could lead to snakes focusing too much on food, whereas rule 4 could result in over-aggressive snakes.
To this end, we design our platform such that the impact of the heuristics can be controlled and even removed once an agent acquires some basic skills.
Such a break-down of heuristic impact can also be phrased as a curriculum learning method, where a logical ordering or hierarchy of simple skills is learnt during training \cite{matiisen2019teacher, portelas2019teacher}.
We now describe three methods to incorporate the heuristic rules into RL learning.
\begin{figure}[h]
\centering
\centering
\includegraphics[width=0.4\textwidth]{Images/in-training-action_mask.jpg}
\caption{In-training action masking. }
\label{fig:action_masking}
\end{figure}
\textbf{In-training action masking: }
We first consider the case where the human heuristics are injected during training and the agent adjusts its policy accordingly (Figure \ref{fig:action_masking}).
In particular, we incorporate the feedback at the final output layers of the policy, where invalid actions are masked out of the softmax by scaling the probability to zero.
The actual action $a_t^*$ taken after the masking is then passed into the simulated environment to generate the new states.
Action masking applies to catastrophic action prevention and single-step heuristics.
As such, we apply it to rules 1 and 2.
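The masking step itself can be sketched as follows: logits of invalid actions are pushed to $-\infty$ so that their softmax probability becomes zero (helper names are illustrative).
\begin{verbatim}
import numpy as np

def masked_policy(logits, valid_mask):
    """logits: (4,) raw policy outputs; valid_mask: (4,) booleans
    produced by rules 1 and 2 for the current state."""
    masked = np.where(valid_mask, logits, -np.inf)
    z = np.exp(masked - masked[valid_mask].max())  # stable softmax
    probs = z / z.sum()                            # invalid -> 0
    return np.random.choice(4, p=probs)            # sampled a_t^*
\end{verbatim}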
\begin{figure}[h]
\centering
\centering
\includegraphics[width=0.4\textwidth]{Images/ad_hoc_action_mask.jpg}
\caption{Ad-hoc action overwriting.}
\label{fig:ad_hoc_action_masking}
\end{figure}
\textbf{Ad-hoc action overwriting: }
A trained agent can be deployed to interact with the Battlesnake Engine and compete with other snakes.
Ad-hoc action overwriting is often applied in this scenario to enhance robustness and guarantee performance.
As shown in Figure \ref{fig:ad_hoc_action_masking}, it is different from in-training action masking in the sense that the heuristics are only applied during inference.
In our proposed Battlesnake Challenge framework, the actual actions taken and corresponding next states are not used to update the policy in real-time as the agent is already deployed. However, the experiences can be stored for the purpose of policy evaluation.
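A sketch of the overwriting logic at inference time is given below; all names are hypothetical.
\begin{verbatim}
def safe_action(policy, state, violates_rule, fallback):
    a = policy(state)              # action proposed by the trained agent
    if violates_rule(state, a):    # e.g. rule 2: a forbidden move?
        a = fallback(state)        # overwrite with a rule-compliant one
    return a                       # experiences may be stored for
                                   # offline policy evaluation
\end{verbatim}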
\textbf{Reward manipulation:}
Reward manipulation incorporates human intuition by specifically designing a reward function to encourage events corresponding to the heuristics.
During training, the reward function is defined as $\hat{R}(s_t^i, a_t^i) = \hat{r}_t^i$ where $\hat{r}_t^i$ denotes the heuristics based reward.
$\hat{r}_t^i$ is then fed into the learning process with the final reward function defined as $r_t^{*i} = r_t^i + \hat{r}_t^i$.
For instance, we add a penalty term ($\hat{r}_t^i = -0.4$) in our experiments whenever a snake hits a wall to account for rule 1.
Please note that the sign of $\hat{r}_t^i$ changes if the heuristic is action-promoting.
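For rule 1, the shaped reward used in our experiments can be sketched as follows (the function name is ours):
\begin{verbatim}
def shaped_reward(base_reward, hit_wall):
    r_hat = -0.4 if hit_wall else 0.0   # heuristic term \hat{r}_t^i
    return base_reward + r_hat          # r_t^{*i} = r_t^i + \hat{r}_t^i
\end{verbatim}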
\section{Experiments}
\subsection{Implementation details}
We open-source our proposed framework and provide implementation for each component presented in Figure \ref{fig:battlesnake_framework}.
Specifically, our package features a simulated gym environment, a heuristics developer, and the orchestration code to deploy trained agents to the Battlesnake arena.
In addition, we provide a suite of training examples using the RL package RLlib \cite{liang2017rllib} within the Amazon SageMaker RL package.
\subsection{Evaluation}
There are two main avenues to evaluate the RL agent with HILL.
The first avenue evaluates agent performance during training.
The second avenue is to use the leader board in the Battlesnake arena, in which a deployed agent competes against other snakes with black-box mechanisms.
For the first avenue, we collect the maximum \textit{episode length} of all agents as a metric for evaluation.
Episode length provides an indication of how long the agents survive, and it is consistent across different reward manipulation schemes.
We first investigate the baseline performance with map sizes of $7 \times 7$, $11 \times 11$, and $19 \times 19$ comparing the performances of training with 4, 5, and 6 agents without human intuition.
We aim to verify that our simulator is properly formulated such that the snakes' performance improves over training.
We then compare the performances of heuristic incorporation methods described in Section \ref{sec:heuristics}.
In addition to the episode length, we also record the frequency of events that each heuristic is designed to prevent or encourage, to evaluate how well the heuristics are incorporated.
From here onwards, we fix the map size at $11 \times 11$ and the number of agents at 5.
For the second avenue, we evaluate the performances of the described baseline heuristics in the arena. We wish to call out that we do not compare against existing mature snakes.
The focus of this study is to showcase how each component in our proposed framework can be modified with minimal effort, rather than to develop a sophisticated fine-tuned agent that beats the best performing snakes.
Particularly, we use rule 2 as an example to compare the performance of the trained agents with in-training action masking, ad-hoc action overwriting, reward manipulation, and vanilla training with no HILL.
For each baseline incorporation method, we randomly select one policy for deployment.
Note that the ad-hoc action overwriting agent uses the vanilla trained agent as the base agent.
For the in-training action masking agent, we apply the same masking logic during inference to ensure consistency in the state transition dynamics.
We conduct two experiments for the arena testing.
The first experiment consists of $30$ games with the four snakes in the arena.
The last surviving snake is given four points, the second last surviving snake is given three points, and so on.
We also record the frequency of forbidden moves.
In the second experiment, the four snakes compete in a 1 vs. 1 format.
Each pair of snakes plays $10$ games, leading to a total of 60 games.
The winning snake gets 1 point and the losing snake gets 0 points.
\section{Results}
In this section, we present experimental results for the components included in the Battlesnake Challenge.
We first demonstrate the performance of the multi-agent RL agents trained in our simulated environment, and then move to performance evaluation with HILL both offline during training and online in the arena.
\textbf{Multi-agent reinforcement learning:}
The results of varying the number of agents and the map size are presented in Figure \ref{fig:results_MARL}.
We train three different instances of each case with different random seeds.
The solid curves correspond to the mean and the shaded region to the minimum and maximum values over the trials.
We observe that after 1M steps of training, the episode length increases from 0 to an average of 175.
After 2M steps of training, the episode length continues to grow steadily.
We can observe that the maximum episode length for 4 agents is distinctly better than that for 5 and 6 agents.
This is not surprising, as 4 agents have more space to roam around the map compared to 5 or 6 agents.
Similarly, when investigating the effect of the map size, we observe that the episode length of 5 agents on larger maps is longer than on smaller maps.
\begin{figure}[h!]
\centering
\includegraphics[width=0.32\textwidth]{Images/iterate_agent_map_episode_len.png}
\caption{Episode lengths when (a) varying the number of agents on an $11 \times 11$ map and (b) varying the map size with five agents.}
\label{fig:results_MARL}
\end{figure}
\subsection{Human-in-the-loop during training}
\textbf{In-training action masking:}
Our results, as presented in Figure \ref{fig:results_AM}, show that agents with in-training action masking outperform the agent without it.
Action masking with rule 2 (forbidden moves) has the best performance, reaching an episode length of more than 350 at 10M steps of training.
Action masking with rule 1 (wall hitting) achieves an episode length of about 300 at 10M steps whereas the agent with no heuristics achieves a bit more than 250.
This verifies that including action masking improves the sample efficiency and thus accelerates policy training.
\begin{figure}[h!]
\centering
\includegraphics[width=0.35\textwidth]{Images/results_action_masking.png}
\caption{Experiments with action masking for rules 1 and 2 with 5 agents on an $11 \times 11$ map.}
\label{fig:results_AM}
\end{figure}
\textbf{Reward manipulation:}
We present the frequency of forbidden moves (rule 2) in Figure \ref{fig:results_RM}.
At the beginning of training, the agents perform an average of 10 forbidden moves, which is the leading cause of death for the agents.
After training for around 1M time steps, the mean frequency of forbidden moves drops to around 3.
We can observe a slight reduction in the frequency of forbidden moves when comparing the agents with and without reward manipulation.
However, as evident from the figure, the agents also manage to learn to avoid forbidden moves even without reward manipulation.
\begin{figure}[h!]
\centering
\includegraphics[width=0.35\textwidth]{Images/reward_shaping_for_bin.png}
\caption{Experiments with reward manipulation for rule 2 (forbidden moves) with 5 agents on an $11 \times 11$ map.}
\label{fig:results_RM}
\end{figure}
In Figure \ref{fig:results_comparison}, we can see that agents with in-training action masking achieve longer episode lengths than agents with reward manipulation or no heuristics.
This observation is consistent with Figure \ref{fig:results_AM} where forbidden move action masking improved policy training.
\begin{figure}[h!]
\centering
\includegraphics[width=0.35\textwidth]{Images/results_intraining_comparison.png}
\caption{Comparison between the performance of in-training action masking, reward manipulation, and no HILL with 5 agents on an $11 \times 11$ map.}
\label{fig:results_comparison}
\end{figure}
\subsection{Arena testing}
Table \ref{table:arena_score} shows the performances of the four agents in the Battlesnake arena to address rule 2.
The four agents correspond to in-training action masking, ad-hoc action overwriting, reward manipulation, and vanilla training with no heuristics.
Each agent is trained on an $11 \times 11$ map for 2500 episodes.
The same agents are then tested in a 1 vs. 1 competition, with results presented in Table \ref{table:arena_1v1}.
We observe a consistent performance from the two tables.
As expected, no HILL has the worst performance in the arena.
With mean scores of 2.533 and 2.333 for in-training action masking and ad-hoc action overwriting respectively, performance of the former is slightly higher.
This aligns with the trend shown in Figure \ref{fig:results_AM}.
It is interesting to note that the agent with reward manipulation has the best performance in the arena, suggesting possible sub-optimal actions introduced during action overwriting.
\begin{table}[h!]
\centering
\begin{tabular}{l | c c}
\hline
HILL type & Arena score & \%Forbidden moves \\ [0.5ex]
\hline\hline
No HILL & $2.200 \pm 0.846$ & 26.6\%\\
In-training AM & $2.533 \pm 1.074$ & 0\%\\
RM & $2.900 \pm 1.296$ & 13.3\%\\
Ad-hoc AO& $2.333 \pm 1.154$ & 0 \% \\
\end{tabular}
\caption{Scores in the Battlesnake arena and the \% of deaths caused by forbidden moves. AM refers to action masking, RM refers to reward manipulation, and AO refers to action overwriting.}
\label{table:arena_score}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{r | c c c c}
& No HILL & IT AM & RM & AH AO \\ [0.5ex]
\hline
No HILL & - & 4 & 3 & 6\\
IT AM & 6 & - & 1 & 2 \\
RM & 7 & 9 & - & 1 \\
AH AO& 4 & 8 & 9 & - \\
\end{tabular}
\caption{Scores (with respect to the rows) in the Battlesnake arena in a 1 vs 1 format.
IT AM, AH AO and RM refer to in-training action masking, Ad-hoc action overwriting and reward manipulation respectively.}
\label{table:arena_1v1}
\end{table}
\section{Conclusion}
We introduced the Battlesnake Challenge, a framework to effectively experiment and evaluate multi-agent reinforcement learning with human-in-the-loop.
We formulated Battlesnake into a multi-agent RL problem and identified a set of heuristics-based rules to facilitate standardised human-in-the-loop RL research.
We presented the performance of three heuristics incorporation methods.
Our results suggest that the effectiveness of these methods differs between offline training and online inference.
Overall, agents with HILL perform better than agents without HILL.
During training, action masking improves the agents' survivability, leading to increased episode lengths.
In contrast, in the Battlesnake arena, the agent with reward manipulation outperforms the agent with in-training action masking.
All of the code is readily available at our git repository.
We look forward to contributions from the researchers and developers to build new RL based Battlesnakes.
\section{Introduction}
Relative pose estimation from two views of a camera or a multi-camera system is regarded as a fundamental problem in computer vision~\cite{HartleyZisserman-472,scaramuzza2011visual,kazik2012real,schoenberger2016sfm,guan2018visual}, which plays an important role in simultaneous localization and mapping (SLAM), visual odometry (VO) and structure-from-motion (SfM). Thus, improving the accuracy, efficiency and robustness of relative pose estimation algorithms is always an important research topic~\cite{hee2014relative,ventura2015efficient,sweeney2015computing,Agarwal2017,barath2018five,Silveira_2019_CVPR}. Motivated by the fact that multi-camera systems are already available in self-driving cars, micro aerial vehicles or augmented reality headsets, this paper investigates the problem of estimating the relative pose of multi-camera systems from affine correspondences, see Fig.~\ref{fig:AffineTransformation}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.85\linewidth]{figure/AffineTransformation_CrossFeature.png}
\end{center}
\caption{An affine correspondence in camera $C_i$ between consecutive frames $k$ and $k+1$. The local affine transformation $\mathbf{A}$ relates the infinitesimal patches around point correspondence (${\mathbf{x}}_{ij}$, ${\mathbf{x}}'_{ij}$).}
\label{fig:AffineTransformation}
\end{figure}
Since a multi-camera system contains multiple individual cameras connected by being fixed to a single rigid body, it has the advantage of large field-of-view and high accuracy. The main difference of a multi-camera system and a standard pinhole camera is the absence of a single projection center. A multi-camera system is modeled by the generalized camera model. The light rays that pass through a multi-camera system are expressed as Pl\"{u}cker lines and the epipolar constraint of the Pl\"{u}cker lines is described by the generalized essential matrix~\cite{pless2003using}.
Most of the state-of-the-art SLAM and SfM pipelines using a multi-camera system~\cite{hane20173d,heng2019project} follow the same procedure consisting of three major steps~\cite{scaramuzza2011visual}: first, a feature matching algorithm is applied to establish image point correspondences between two frames. Then a robust estimation framework, \emph{e.g.} the Random Sample Consensus (RANSAC)~\cite{fischler1981random}, is applied to find the pose parameters and remove outlier matches. Finally, the final relative pose between the two frames is estimated using all RANSAC inliers. The reliability and robustness of such a scheme is heavily dependent on the outlier removal step. In addition, the outlier removal process has to be efficient, which directly affects the real-time performance of SLAM and SfM. The computational complexity and, thus, the processing time of the RANSAC procedure depends exponentially on the number of points required for the estimation. Therefore, exploring the minimal solutions for relative pose estimation of multi-camera system is of significant importance and has received sustained attention~\cite{henrikstewenius2005solutions,li2008linear,hee2014relative,sweeney2014solving,ventura2015efficient,sweeney2015computing,kneip2016generalized,liu2017robust}.
The idea of deriving minimal solutions for relative pose estimation of multi-camera systems ranges back to the work of Stew{\'e}nius \emph{et al.} with the 6-point method~\cite{henrikstewenius2005solutions}. Then other classical works have been subsequently proposed, such as the 17-point linear method~\cite{li2008linear} and techniques based on iterative optimization~\cite{kneip2014efficient}. Moreover, the minimal number of necessary points can be further reduced by taking additional motion constraints into account or using other sensors, like an inertial measurement unit (IMU). For example, two point correspondences are sufficient for the ego-motion estimation of a multi-camera system by exploiting the Ackermann motion model constraints of wheeled vehicles~\cite{hee2013motion}. For vehicles equipped with a multi-camera system and an IMU, the relative motion can be estimated from four point correspondences by exploiting the known vertical direction from the IMU measurements, \emph{i.e.}, roll and pitch angles~\cite{hee2014relative,liu2017robust}.
All of the previously mentioned relative pose solvers estimate the pose parameters from a set of point correspondences, \emph{e.g.}, coming from SIFT~\cite{Lowe2004Distinctive} or SURF~\cite{Bay2008346} detectors. However, as has been clearly shown in several recently published papers~\cite{bentolila2014conic,raposo2016theory,barath2018efficient,eichhardt2018affine}, using more informative features, \emph{e.g.}, affine correspondences, improves the estimation procedure both in terms of accuracy and efficiency. An affine correspondence is composed of a point correspondence and a 2$\times$2 affine transformation. Since they contain more information about the underlying surface geometry than point correspondences, affine correspondences enable relative pose estimation from fewer correspondences. In this paper, we focus on the relative pose estimation of a multi-camera system from affine correspondences instead of point correspondences. Four novel solutions are proposed:
\begin{itemize}
\item A new minimal solver is proposed which requires two affine correspondences to estimate the general motion of a multi-camera system which has 6 degrees of freedom (6DOF). In contrast, state-of-the-art solvers use six point correspondences~\cite{henrikstewenius2005solutions,kneip2014efficient,ventura2015efficient}.
\item When the motion is planar (\emph{i.e.}, the body to which the cameras are fixed moves on a plane; 3DOF), a single affine correspondence is sufficient to recover the planar motion of a multi-camera system. In order to deal with the degenerate case of the 1AC solver, we also propose a new method to estimate the relative pose from two affine correspondences. The point-based solution requires two point pairs, but only under the Ackermann motion model~\cite{hee2013motion}.
\item A fourth solver is proposed for the case when the vertical direction is known (4DOF), \emph{e.g.}, from an IMU attached to the multi-camera system. We show that two affine correspondences are required to recover the relative pose. In contrast, the point-based solver requires four correspondences~\cite{hee2014relative,sweeney2014solving,liu2017robust}.
\end{itemize}
\section{\label{sec:relatedwork}Related Work}
There has been much interest in using multi-camera systems in both academic and industrial communities. The most common case is that a set of cameras, particularly with non-overlapping views, are mounted rigidly on self-driving vehicles, unmanned aerial vehicles (UAV) or AR headsets.
Due to the absence of a single center of projection, the camera model of multi-camera systems is different from the standard pinhole camera. Pless proposed to express the light rays as Pl\"{u}cker lines and derived the generalized camera model which has become a standard representation for the multi-camera systems~\cite{pless2003using}. Stew{\'e}nius~\emph{et al.} proposed the first minimal solution to estimate the relative pose of a multi-camera system from 6 point correspondences, which produces up to 64 solutions~\cite{henrikstewenius2005solutions}. Kim~\emph{et al.} later proposed several approaches for motion estimation using second-order cone programming~\cite{kim2007visual} or branch-and-bound techniques~\cite{kim2009motion}. Lim~\emph{et al.} presented the antipodal epipolar constraint and estimated the relative motion by using antipodal points~\cite{lim2010estimating}. Li~\emph{et al.} provided several linear solvers to compute the relative pose, among which the most commonly used one requires 17 point correspondences~\cite{li2008linear}. Kneip and Li proposed an iterative approach for the relative pose estimation based on eigenvalue minimization~\cite{kneip2014efficient}. Ventura~\emph{et al.} used first-order approximation of the relative
rotation to simplify the problem and estimated the relative pose from 6 point correspondences~\cite{ventura2015efficient}.
By considering additional motion constraints or using additional information provided by an IMU, the number of required point correspondences can be further reduced. Lee~\emph{et al.} presented a minimal solution with two point correspondences for the ego-motion estimation of a multi-camera system, which constrains the relative motion by the Ackermann motion model~\cite{hee2013motion}. In addition, a variety of algorithms have been proposed when a common direction of the multi-camera system is known,~\emph{i.e.}, an IMU provides the roll and pitch angles of the multi-camera system. The relative pose estimation with known vertical direction requires a minimum of 4 point correspondences~\cite{hee2014relative,sweeney2014solving,liu2017robust}.
Exploiting the additional affine parameters besides the image coordinates has recently been proposed for the relative pose estimation of monocular cameras, which reduces the number of required points significantly. Bentolila and Francos estimated the fundamental matrix from three ACs~\cite{bentolila2014conic}. Raposo and Barreto computed homography and essential matrix using two ACs~\cite{raposo2016theory}. Barath and Hajder derived the constraints between the local affine transformation and the essential matrix and recovered the essential matrix from two ACs~\cite{barath2018efficient}. Eichhardt and Chetverikov~\cite{eichhardt2018affine} also estimated the relative pose from two ACs, which is applicable to arbitrary central-projection models. Hajder and Barath~\cite{hajder2019relative} and Guan~\emph{et al.}~\cite{Guan2020CVPR} proposed several minimal solutions for relative pose from a single AC under the planar motion assumption or with knowledge of a vertical direction. The above-mentioned works are only suitable for a monocular perspective camera, rather than multiple perspective cameras fixed to a single body. In this paper, we focus on estimating the relative pose of a multi-camera system from a minimal number of ACs.
\section{\label{sec:6DOFmotion}Relative Pose Estimation under General Motion}
A multi-camera system is made up of individual cameras denoted by $C_i$, as shown in Fig.~\ref{fig:AffineTransformation}. Its extrinsic parameters expressed in a multi-camera reference frame are represented as $(\mathbf{R}_i,\mathbf{t}_i)$. For general motion, there is a 3DOF rotation and a 3DOF translation between two reference frames at time $k$ and $k+1$. Rotation $\mathbf{R}$ using Cayley parameterization and translation $\mathbf{t}$ can be written as:
\begin{equation}
\begin{aligned}
&\mathbf{R} = \frac{1}{1+q_x^2+q_y^2+q_z^2} \cdot \\ &\begin{bmatrix}{1+q_x^2-q_y^2-q_z^2}&{2{q_x}{q_y}-2{q_z}}&{2{q_y}+2{q_x}{q_z}}\\
{2{q_x}{q_y}+2{q_z}}&{1-q_x^2+q_y^2-q_z^2}&{2{q_y}{q_z}-2{q_x}}\\
{2{q_x}{q_z}-2{q_y}}&{2{q_x}+2{q_y}{q_z}}&{1-q_x^2-q_y^2+q_z^2}
\end{bmatrix},\\
\end{aligned}
\label{eq:R6dof1}
\end{equation}
\begin{equation}
\mathbf{t} = \begin{bmatrix}
{t_x}& \
{t_y}& \
{t_z}
\end{bmatrix}^T,
\label{eq:T6dof1}
\end{equation}
where $[1,q_x,q_y,q_z]^T$ is a homogeneous quaternion vector. Note that 180 degree rotations are prohibited in Cayley parameterization, but this is a rare case for consecutive frames.
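As a sanity check, Eq.~\eqref{eq:R6dof1} coincides with the classical Cayley transform $\mathbf{R} = (\mathbf{I}-[\mathbf{q}]_\times)^{-1}(\mathbf{I}+[\mathbf{q}]_\times)$ with $\mathbf{q}=[q_x,q_y,q_z]^T$, which the following small NumPy snippet verifies numerically:
\begin{verbatim}
import numpy as np

def skew(q):
    return np.array([[0, -q[2], q[1]],
                     [q[2], 0, -q[0]],
                     [-q[1], q[0], 0.0]])

def cayley(q):
    S = skew(np.asarray(q, float))
    return np.linalg.solve(np.eye(3) - S, np.eye(3) + S)

R = cayley([0.1, -0.2, 0.05])
assert np.allclose(R @ R.T, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # proper rotation
\end{verbatim}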
\subsection{Generalized camera model}
We give a brief description of generalized camera model (GCM)~\cite{pless2003using}. Let us denote an affine correspondence in camera $C_i$ between consecutive frames $k$ and $k+1$ as $({\mathbf{x}}_{ij}, {\mathbf{x}}'_{ij}, \mathbf{A})$, where ${\mathbf{x}}_{ij}$ and ${\mathbf{x}}'_{ij}$ are the normalized homogeneous image coordinates of feature point $j$ and $\mathbf{A}$ is a 2$\times$2 local affine transformation. Indices $i$ and $j$ are the camera and point index, respectively. The related local affine transformation $\mathbf{A}$ is a 2$\times$2 linear transformation which relates the infinitesimal patches around ${\mathbf{x}}_{ij}$ and ${\mathbf{x}}'_{ij}$~\cite{barath2018five}.
The normalized homogeneous image coordinates $({\mathbf{p}}_{ij}, {\mathbf{p}}'_{ij})$ expressed in the multi-camera reference frame are given as
\begin{equation}
{\mathbf{p}}_{ij} = {\mathbf{R}_i}{\mathbf{x}}_{ij},\qquad
{\mathbf{p}}'_{ij} = {\mathbf{R}_i}{\mathbf{x}}'_{ij}.
\label{eq:imagecoord6dof}
\end{equation}
The unit direction of rays $({\mathbf{u}}_{ij}, {\mathbf{u}}'_{ij})$ expressed in the multi-camera reference frame are given as: ${\mathbf{u}}_{ij} = {\mathbf{p}}_{ij}/{{\|}{{\mathbf{p}}_{ij}}{\|}}$,${\mathbf{u}}'_{ij} = {\mathbf{p}}'_{ij}/{{\|}{{\mathbf{p}}'_{ij}}{\|}}$. The 6-dimensional vector Pl\"{u}cker lines corresponding to the rays are denoted as ${\mathbf{l}}_{ij} = [{\mathbf{u}}_{ij}^T, \ ({\mathbf{t}}_i\times {\mathbf{u}}_{ij})^T]^T$, ${\mathbf{l}}'_{ij} = [{{\mathbf{u}}'_{ij}}^T, \ ({\mathbf{t}}_i\times {\mathbf{u}}'_{ij})^T]^T$. The generalized epipolar constraint is written as~\cite{pless2003using}
\begin{equation}
{{\mathbf{l}}'^T_{ij}}
\begin{bmatrix} {{{\left[ {\mathbf{t}} \right]}_ \times }{\mathbf{R}}}, & {\mathbf{R}} \\ {\mathbf{R}}, & {\mathbf{0}} \end{bmatrix}
{{\mathbf{l}}_{ij}} = 0,
\label{GECS6dof}
\end{equation}
where ${{\mathbf{l}}_{ij}}$ and ${{\mathbf{l}}'_{ij}}$ are the Pl\"{u}cker lines in the two consecutive frames at times $k$ and $k+1$, respectively.
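For reference, the constraint of Eq.~\eqref{GECS6dof} can be evaluated numerically as in the following Python sketch; the helper names are ours, and all inputs are assumed to be expressed in the multi-camera reference frame.
\begin{verbatim}
import numpy as np

def plucker(u, t_cam):
    """6-vector line from unit ray direction u and camera center."""
    return np.concatenate([u, np.cross(t_cam, u)])

def gec_residual(l, l_p, R, t):
    """l'^T [[t]_x R, R; R, 0] l; zero for noise-free data."""
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0.0]])
    E_g = np.block([[tx @ R, R], [R, np.zeros((3, 3))]])
    return l_p @ E_g @ l
\end{verbatim}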
\subsection{Affine transformation constraint}
We denote the transition matrix of camera coordinate system $C_i$ between consecutive frames $k$ and $k+1$ as $(\mathbf{R}_{Ci},\mathbf{t}_{Ci})$, which is represented as:
{ \begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}_{Ci}}&{\mathbf{t}_{Ci}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}^{-1}\begin{bmatrix}{\mathbf{R}}&{\mathbf{t}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} \\
& \qquad \ \ \ =\begin{bmatrix}{{\mathbf{R}_{i}^T}{\mathbf{R}}{\mathbf{R}_{i}}}& \ {{\mathbf{R}_{i}^T}{\mathbf{R}}{\mathbf{t}_{i}}+{\mathbf{R}_{i}^T}{\mathbf{t}}-{\mathbf{R}_{i}^T}{\mathbf{t}_{i}}}\\
{{\mathbf{0}}}& \ {1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:transformationmatrix6dof}
\end{equation}}\\
The essential matrix $\mathbf{E}$ between two frames of camera $C_i$ is given as:
\begin{equation}
\begin{aligned}
\mathbf{E} = [\mathbf{t}_{Ci}]_{\times}\mathbf{R}_{Ci}
= {\mathbf{R}_{i}^T}[{\mathbf{R}_{i}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}}{\mathbf{R}_{i}}},
\end{aligned}
\label{eq:E6dof}
\end{equation}
where $\left[{\mathbf{R}_{i}}\mathbf{t}_{Ci}\right]_{\times}={\mathbf{R}}[{\mathbf{t}_{i}}]_{\times}{{\mathbf{R}}^T} + [{\mathbf{t}}]_{\times} - [{\mathbf{t}_{i}}]_{\times}$. The relationship of essential matrix $\mathbf{E}$ and local affine transformation $\mathbf{A}$ is formulated as follows~\cite{barath2018efficient}:
\begin{equation}
(\mathbf{E}^{T}{\mathbf{x}}'_{ij})_{(1:2)} = -(\hat{\mathbf{A}}^{T}\mathbf{E}{\mathbf{x}}_{ij})_{(1:2)},
\label{eq:E6dof_Ac1}
\end{equation}
where $\mathbf{n}_{ij}\triangleq{\mathbf{E}^{T}{\mathbf{x}}'_{ij}}$ and $\mathbf{n}'_{ij}\triangleq{\mathbf{E}{\mathbf{x}}_{ij}}$ denote the epipolar lines in their implicit form in the frames of camera $C_i$ at times $k$ and $k+1$. The subscripts 1 and 2 denote the first and second equations of the equation system, respectively. $\hat{\mathbf{A}}$ is a $3\times3$ matrix: $\hat{\mathbf{A}} = [\mathbf{A} \ \mathbf{0}; \mathbf{0} \ 0]$. By substituting Eq.~\eqref{eq:E6dof} into Eq.~\eqref{eq:E6dof_Ac1}, we obtain:
\begin{eqnarray}
\begin{aligned}
({\mathbf{R}_{i}^T}{\mathbf{R}^T}&{[{\mathbf{R}_{i}}\mathbf{t}_{Ci}]_{\times}^T}{\mathbf{R}_{i}}{\mathbf{x}}'_{ij})_{(1:2)} \\
&= -(\hat{\mathbf{A}}^{T}{\mathbf{R}_{i}^T}[{\mathbf{R}_{i}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}}{\mathbf{R}_{i}}}{\mathbf{x}}_{ij})_{(1:2)}.
\end{aligned}
\label{eq:E6dof_Ac2}
\end{eqnarray}
Based on Eq.~\eqref{eq:imagecoord6dof}, the above equation is reformulated and expanded as follows:
\begin{equation}
\begin{aligned}
({\mathbf{R}_{i}^T}&([{\mathbf{t}_{i}}]_{\times}{\mathbf{R}}^T + {\mathbf{R}^T}[{\mathbf{t}}]_{\times} - {\mathbf{R}^T}[{\mathbf{t}_{i}}]_{\times}){\mathbf{p}}'_{ij})_{(1:2)} = \\
&(\hat{\mathbf{A}}^{T}{\mathbf{R}_{i}^T}({\mathbf{R}}[{\mathbf{t}_{i}}]_{\times} + [{\mathbf{t}}]_{\times}{\mathbf{R}} - [{\mathbf{t}_{i}}]_{\times}{\mathbf{R}}){\mathbf{p}}_{ij})_{(1:2)}.
\end{aligned}
\label{eq:E6dof_Ac6}
\end{equation}
Equation~\eqref{eq:E6dof_Ac6} expresses the epipolar constraints that a local affine transformation implies on the $i$-th camera of a multi-camera system between two consecutive frames $k$ and $k+1$.
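As a small numerical sketch, the two additional constraints of Eq.~\eqref{eq:E6dof_Ac1} can be evaluated for a candidate essential matrix as follows; the function name is ours.
\begin{verbatim}
import numpy as np

def ac_residual(E, x, x_p, A):
    """x, x_p: normalized homogeneous points; A: 2x2 affinity.
    Returns the two scalar residuals of Eq. (eq:E6dof_Ac1)."""
    A_hat = np.zeros((3, 3))
    A_hat[:2, :2] = A                  # \hat{A} = [A 0; 0 0]
    return (E.T @ x_p)[:2] + (A_hat.T @ E @ x)[:2]
\end{verbatim}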
\subsection{Solution using Gr\"{o}bner basis method}
For an affine correspondence $({\mathbf{x}}_{ij}, {\mathbf{x}}'_{ij}, \mathbf{A})$, we get three polynomials in six unknowns $\{q_x, q_y, q_z, t_x, t_y, t_z\}$ from Eqs.~\eqref{GECS6dof} and~\eqref{eq:E6dof_Ac6}. Thus two affine correspondences are enough to recover the relative pose of a multi-camera system under 6DOF general motion. The hidden variable resultant method~\cite{cox2013ideals} is used to solve for the unknowns, see supplementary material for details. The obtained solver is, however, too large and therefore slow and numerically unstable; experiments confirmed this instability, so no further experiments or comparisons with it are presented in the paper.
We furthermore investigate the special cases of multi-camera motion, \emph{i.e.},
planar motion and motion with known vertical direction, see Fig.~\ref{fig:Specialcases}. We will show that these two special cases can be efficiently solved with affine correspondences.
\section{\label{sec:planarmotion}Relative Pose Estimation Under Planar Motion}
\begin{figure}[ht]
\begin{center}
\subfigure[Planar motion]
{
\includegraphics[width=0.31\linewidth]{figure/PlanarMotion.png}
}
\hspace{0.2in}
\subfigure[Motion with known vertical direction]
{
\includegraphics[width=0.55\linewidth]{figure/KnownVerticalDirection.png}
}
\end{center}
\caption{Special cases of multi-camera motion: (a) Planar motion between two multi-camera reference frames in top-view. There are three unknowns: yaw angle $\theta$, translation direction $\phi$ and translation distance $\rho$. (b) Motion with known vertical direction. There are four unknowns: a Y-axis rotation $\mathbf{R}_{y}$ and 3D translation $\tilde{\mathbf{t}} =[{\tilde{t}_x}, {\tilde{t}_y}, {\tilde{t}_z}]^T$.}
\label{fig:Specialcases}
\end{figure}
When assuming that the body, to which the camera system is rigidly fixed, moves on a planar surface (as visualized in Fig.~\ref{fig:Specialcases}(a)), there are only a Y-axis rotation and a 2D translation between the reference frames $k$ and $k+1$. Similar to Eqs.~\eqref{eq:R6dof1} and~\eqref{eq:T6dof1}, the rotation $\mathbf{R}=\mathbf{R}_{y}$ and the translation $\mathbf{t}$ from frame $k$ to $k+1$ are written as:
\begin{equation}
\begin{aligned}
\mathbf{R}_{y} & = \frac{1}{1+{q_y^2}}\begin{bmatrix}{1-{q_y^2}}&0&{-2{q_y}}\\
0&1+{q_y^2}&0\\
{2{q_y}}&0&{1-{q_y^2}}
\end{bmatrix}, \\
\mathbf{t} & = \begin{bmatrix}
{t_x}& \
{0}& \
{t_z}
\end{bmatrix}^T.
\end{aligned}
\label{eq:Ryt1}
\end{equation}
where ${q_y}=\tan(\frac{\theta}{2})$, $t_x={\rho\sin{(\phi)}}$, $t_z={-\rho\cos{(\phi)}}$, and $\rho$ is the distance between the two multi-camera reference frames.
\subsection{Solution by reduction to a single polynomial}
By substituting Eq.~\eqref{eq:Ryt1} into Eqs.~\eqref{GECS6dof} and~\eqref{eq:E6dof_Ac6}, we get an equation system of three polynomials for 3 unknowns $q_y$, $t_x$ and $t_z$.
Since an AC generally provides 3 independent constraints for relative pose, a single affine correspondence is sufficient to recover the planar motion of a multi-camera system. Three independent constraints from an affine correspondence are stacked into 3 equations in 3 unknowns:
\begin{equation}
\frac{1}{1+{q_y^2}}\underbrace {\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}\\
{M_{21}}& {M_{22}}& {M_{23}}\\
{M_{31}}& {M_{32}}& {M_{33}}
\end{bmatrix}}_{{\mathbf{M}}\left( {{q_y}} \right)}
\begin{bmatrix}
{{{t}_x}}\\
{{{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}},
\label{eq:euq_q1}
\end{equation}
where the elements $M_{ij}$ $(i=1,\ldots,3; j=1,\ldots,3)$ of the coefficient matrix ${\mathbf{M}(q_y)}$ are formed by the polynomial coefficients and one unknown variable $q_y$, see supplementary material for details. Since ${\mathbf{M}(q_y)}/(1+{q_y^2})$ is a square matrix, Eq.~\eqref{eq:euq_q1} has a non-trivial solution only if the determinant of ${\mathbf{M}(q_y)}/(1+{q_y^2})$ is zero. The expansion of $\det({\mathbf{M}(q_y)}/(1+{q_y^2}))=0$ gives a 4-degree univariate polynomial:
\begin{eqnarray}
\begin{aligned}
\quot(\textstyle \sum_{i=0}^6 w_i q_y^i, {q_y^2}+1) = 0,
\end{aligned}
\label{eq:euq_q2}
\end{eqnarray}
where $\quot(a, b)$ denotes the quotient of $a$ divided by $b$, and $w_{0},\ldots,w_{6}$ are formed by a Pl\"{u}cker line correspondence and an affine transformation between the corresponding feature points. This univariate polynomial leads to an explicit analytic solution with a maximum of 4 real roots. Once the solutions for $q_y$ are found, the remaining unknowns $t_x$ and $t_z$ are obtained by substituting $q_y$ into ${\mathbf{M}(q_y)}$ and solving the linear system via its null vector. Finally, the rotation matrix $\mathbf{R}_{y}$ is recovered from Eq.~\eqref{eq:Ryt1}.
However, we prove that the solver relying on one AC has a degenerate case, \emph{i.e.}, when the distances between the motion plane and the optical centers of the individual cameras are equal, see supplementary material for details. This degenerate case often happens in the self-driving scenario. To overcome this issue, two affine correspondences are used to estimate the relative pose. For example, the first and second constraints of the first affine correspondence and the first constraint of the second affine correspondence are stacked into 3 equations in 3 unknowns, just as in Eq.~\eqref{eq:euq_q1}. The solution procedure remains the same, except that the code constructing the coefficient matrix ${\mathbf{M}(q_y)}$ is replaced.
An interesting fact in this case is that only three of the equations provided by the two affine correspondences are used. Although two affine correspondences are required as a minimal sample for this solver in the RANSAC loop, it is possible to run a consistency check on them: for an outlier-free planar motion hypothesis, the three remaining equations of the two affine correspondences also have to be fulfilled. Solutions which do not fulfill them are preemptively rejected. This gives a significant computational advantage over a regular 2-point method, such as the solver with the Ackermann motion assumption~\cite{hee2013motion}, because inconsistent samples can be detected directly without testing against all the other affine correspondences.
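A compact numerical realisation of this solver is sketched below, assuming a routine \texttt{build\_M} (hypothetical) that assembles $\mathbf{M}(q_y)$ with entries at most quadratic in $q_y$, as in Eq.~\eqref{eq:Ryt1}: the degree-6 determinant is recovered by sampling, the factor $(q_y^2+1)$ is divided out, and $(t_x, t_z)$ follow from the null vector.
\begin{verbatim}
import numpy as np

def solve_planar(build_M):
    qs = np.linspace(-3, 3, 9)          # samples fix a degree-6 poly
    dets = [np.linalg.det(build_M(q)) for q in qs]
    p = np.polyfit(qs, dets, 6)         # det M(q_y)
    quartic, _ = np.polydiv(p, [1.0, 0.0, 1.0])  # / (q_y^2 + 1)
    sols = []
    for q in np.roots(quartic):
        if abs(q.imag) > 1e-9:
            continue                    # keep up to 4 real roots
        _, _, Vt = np.linalg.svd(build_M(q.real))
        v = Vt[-1] / Vt[-1, -1]         # null vector, scaled so v[2]=1
        sols.append((q.real, v[0], v[1]))   # (q_y, t_x, t_z)
    return sols
\end{verbatim}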
\section{\label{sec:knownverticaldirection}Relative Pose Estimation with Known Vertical Direction}
In this section, a minimal solution using two affine correspondences is proposed for the relative motion estimation of multi-camera systems with known vertical direction, see Fig.~\ref{fig:Specialcases}(b). In this case, an IMU is coupled with the multi-camera system and the relative rotation between the IMU and the reference frame is known. The IMU provides the roll and pitch angles of the reference frame, so the reference frame can be aligned with the measured gravity direction, such that the X-Z-plane of the aligned reference frame is parallel to the ground plane and the Y-axis is parallel to the gravity direction. The rotation $\mathbf{R}_{\text{\text{imu}}}$ aligning the reference frame to the aligned reference frame is written as:
\begin{equation}
\begin{aligned}
&\mathbf{R}_{\text{\text{imu}}} = \mathbf{R}_{p}\mathbf{R}_{r} \\
&= \begin{bmatrix}1&0&0\\
0&\cos(\theta_p)&{\sin(\theta_p)}\\
0&{-\sin(\theta_p)}&{\cos(\theta_p)}
\end{bmatrix}\begin{bmatrix}
{\cos(\theta_r)}&{\sin(\theta_r)}&0\\
{ -\sin(\theta_r)}&{\cos(\theta_r)}&0\\
0&0&1
\end{bmatrix}, \nonumber
\end{aligned}
\label{eq:RxRz}
\end{equation}
where $\theta_r$ and $\theta_p$ are the roll and pitch angles provided by the coupled IMU, respectively. Thus, there are only a Y-axis rotation $\mathbf{R}=\mathbf{R}_{y}$ and a 3D translation $\tilde{\mathbf{t}}= {{{\mathbf{R}}}'_{\text{imu}}}{\mathbf{t}} =[{\tilde{t}_x}, {\tilde{t}_y}, {\tilde{t}_z}]^T$ to be estimated between the aligned multi-camera reference frames at times $k$ and $k+1$.
\subsection{Generalized camera model}
Let us denote the rotation matrices from the roll and pitch angles of the two corresponding multi-camera reference frames
at time $k$ and $k+1$ as $\mathbf{R}_{\text{\text{imu}}}$ and $\mathbf{R}'_{\text{\text{imu}}}$. The relative rotation between two multi-camera reference frames can now be given as:
\begin{equation}
{\mathbf{R}} = {(\mathbf{R}'_{\text{\text{imu}}})^T}{\mathbf{R}_{y}}{\mathbf{R}_{\text{\text{imu}}}}.
\label{eq:Rv}
\end{equation}
Substituting Eq.~\eqref{eq:Rv} into Eq.~\eqref{GECS6dof} yields:
{{\begin{equation}
\begin{aligned}
{\underbrace {\left(\begin{bmatrix}
{{{{\mathbf{R}}}'_{\text{\text{imu}}}}}& {\mathbf{0}}\\
{{\mathbf{0}}}& {{{{\mathbf{R}}}'_{\text{\text{imu}}}}}\\
\end{bmatrix}{\mathbf{l}'_{ij}} \right)^T}_{\tilde{\mathbf{l}}'_{ij}}}
&\begin{bmatrix}{{{\left[ {{\tilde{\mathbf{t}}}} \right]}_ \times } {{\mathbf{R}}_y}}&{{{\mathbf{R}}_y}}\\
{{{\mathbf{R}}_y}}&{\mathbf{0}}
\end{bmatrix} \cdot\\
&{\underbrace {\left(\begin{bmatrix}
{{{\mathbf{R}}_{\text{\text{imu}}}}}& {\mathbf{0}}\\
{{\mathbf{0}}}& {{{\mathbf{R}}_{\text{\text{imu}}}}}\\
\end{bmatrix}{\mathbf{l}_{ij}} \right)}_{\tilde{\mathbf{l}}_{ij}}}= 0,
\end{aligned}
\label{eq:GECSIMU}
\end{equation}}}\\
where ${\tilde{\mathbf{l}}_{ij}} \leftrightarrow {\tilde{\mathbf{l}}'_{ij}}$ are the corresponding Pl\"{u}cker lines expressed in the aligned multi-camera reference frame.
\subsection{Affine transformation constraint}
In this case, the transition matrix of the camera coordinate system $C_i$ between consecutive frames $k$ and $k+1$ is represented as
{\begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}_{Ci}}&\ {\mathbf{t}_{Ci}}\\
{{\mathbf{0}}}&\ {1}\\
\end{bmatrix}
= \left(\begin{bmatrix}{\mathbf{R}'_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\right)^{-1} \cdot\\
& \qquad \qquad \quad \ \ \begin{bmatrix}{\mathbf{R}_{y}}&{\tilde{\mathbf{t}}}\\
{{\mathbf{0}}}&{1}
\end{bmatrix}
\left(\begin{bmatrix}{\mathbf{R}_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\right),
\end{aligned}
\label{eq:transformationmatrix_Ev}
\end{equation}}\\
where we define
{\begin{eqnarray}
\begin{aligned}
&\begin{bmatrix}
{\tilde{\mathbf{R}}_{\text{imu}}}&{\tilde{\mathbf{t}}_{\text{imu}}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix},\\
&\begin{bmatrix}
{\tilde{\mathbf{R}}'_{\text{imu}}}&{\tilde{\mathbf{t}}'_{\text{imu}}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}'_{\text{imu}}}&{\mathbf{0}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:R_imuNew}
\end{eqnarray}}\\
By substituting Eq.~\eqref{eq:R_imuNew} into Eq.~\eqref{eq:transformationmatrix_Ev}, we obtain
{\begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}_{Ci}}&{\mathbf{t}_{Ci}}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}\\
&=\begin{bmatrix}{({\tilde{\mathbf{R}}'_{\text{imu}}})^T{\mathbf{R}_{y}}{\tilde{\mathbf{R}}_{\text{imu}}}}& {{({\tilde{\mathbf{R}}'_{\text{imu}}})^T}({\mathbf{R}_{y}}{\tilde{\mathbf{t}}_{\text{imu}}}+{\tilde{\mathbf{t}}}-{\tilde{\mathbf{t}}'_{\text{imu}}})}\\
{{\mathbf{0}}}&{1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:transformationmatrix_Ev2}
\end{equation}}\\
The essential matrix $\mathbf{E}$ between two frames of camera $C_i$ is given as
\begin{equation}
\begin{aligned}
\mathbf{E} = [\mathbf{t}_{Ci}]_{\times}\mathbf{R}_{Ci} = {({\tilde{\mathbf{R}}'_{\text{imu}}})^T}[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}_{y}}{\tilde{\mathbf{R}}_{\text{imu}}}},
\end{aligned}
\label{eq:Ev}
\end{equation}
where $[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}={{\mathbf{R}_{y}}[\tilde{\mathbf{t}}_{\text{imu}}]_{\times}{\mathbf{R}_{y}^T}} + [\tilde{\mathbf{t}}]_{\times} - [\tilde{\mathbf{t}}'_{\text{imu}}]_{\times}$. By substituting Eq.~\eqref{eq:Ev} into Eq.~\eqref{eq:E6dof_Ac1}, we obtain
{\begin{eqnarray}
\begin{aligned}
({\tilde{\mathbf{R}}_{\text{imu}}^T}&{\mathbf{R}_{y}^T}{[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}^T}{{\tilde{\mathbf{R}}'_{\text{imu}}}}{\mathbf{x}}'_{ij})_{(1:2)} = \\ &-(\hat{\mathbf{A}}^{T}{({\tilde{\mathbf{R}}'_{\text{imu}}})^T}[{\tilde{\mathbf{R}}'_{\text{imu}}}\mathbf{t}_{Ci}]_{\times}{{\mathbf{R}_{y}}{\tilde{\mathbf{R}}_{\text{imu}}}}{\mathbf{x}}_{ij})_{(1:2)}.
\end{aligned}
\label{eq:Ev_Ac}
\end{eqnarray}}
We denote the normalized homogeneous image coordinates expressed in the aligned multi-camera reference frame as $(\tilde{{\mathbf{p}}}_{ij}, {\tilde{\mathbf{p}}}'_{ij})$, which are given as
\begin{equation}
\tilde{{\mathbf{p}}}_{ij} = {\tilde{\mathbf{R}}_{\text{imu}}}{\mathbf{x}}_{ij},\qquad
\tilde{{\mathbf{p}}}'_{ij} = {{\tilde{\mathbf{R}}'_{\text{imu}}}}{\mathbf{x}}'_{ij}.
\label{eq:Ev_alignedimage}
\end{equation}
Based on the above equation, Eq.~\eqref{eq:Ev_Ac} is rewritten and expanded as follows:
\begin{equation}
\begin{aligned}
&({\tilde{\mathbf{R}}_{\text{imu}}^T}([{\tilde{\mathbf{t}}_{\text{imu}}}]_{\times}{\mathbf{R}_{y}^T} + {\mathbf{R}_{y}^T}[{\tilde{\mathbf{t}}}]_{\times} - {\mathbf{R}_{y}^T}[{\tilde{\mathbf{t}}'_{\text{imu}}}]_{\times}){\tilde{{\mathbf{p}}}'_{ij}})_{(1:2)} = \\
&(\hat{\mathbf{A}}^{T}{({\tilde{\mathbf{R}}'_{\text{imu}}})^T}({\mathbf{R}_{y}}[{\tilde{\mathbf{t}}_{\text{imu}}}]_{\times} + [{\tilde{\mathbf{t}}}]_{\times}{\mathbf{R}_{y}} - [{\tilde{\mathbf{t}}'_{\text{imu}}}]_{\times}{\mathbf{R}_{y}}){{\tilde{{\mathbf{p}}}}_{ij}})_{(1:2)}.
\end{aligned}
\label{eq:Ev_Ac2}
\end{equation}
\subsection{Solution by reduction to a single polynomial}
Based on Eqs.~\eqref{eq:GECSIMU} and \eqref{eq:Ev_Ac2}, one AC yields an equation system of three polynomials in the 4 unknowns $q_y$, $\tilde{t}_x$, $\tilde{t}_y$ and $\tilde{t}_z$. Recall that one AC provides three independent constraints. Thus, one more equation is required, which can be taken from a second affine correspondence. In principle, any of the equations of the second AC can be chosen from Eqs.~\eqref{eq:GECSIMU} and \eqref{eq:Ev_Ac2}. For example, the three constraints of the first affine correspondence and the first constraint of the second affine correspondence are stacked into 4 equations in 4 unknowns:
\begin{equation}
\frac{1}{1+{q_y^2}}\underbrace{\begin{bmatrix}
{\tilde{M}_{11}}&{\tilde{M}_{12}}&{\tilde{M}_{13}}&{\tilde{M}_{14}}\\
{\tilde{M}_{21}}&{\tilde{M}_{22}}&{\tilde{M}_{23}}&{\tilde{M}_{24}}\\
{\tilde{M}_{31}}&{\tilde{M}_{32}}&{\tilde{M}_{33}}&{\tilde{M}_{34}}\\
{\tilde{M}_{41}}&{\tilde{M}_{42}}&{\tilde{M}_{43}}&{\tilde{M}_{44}}
\end{bmatrix} }_{\tilde{\mathbf{M}}\left( {q_y} \right)}
\begin{bmatrix}
{{\tilde{t}_x}}\\
{{\tilde{t}_y}}\\
{{\tilde{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}},
\label{eq:euq_Ev1}
\end{equation}
where the elements $\tilde{M}_{ij}$ $(i=1,\ldots,4; j=1,\ldots,4)$ of the coefficient matrix $\tilde{\mathbf{M}}({q_y})$ are formed by the polynomial coefficients and the single unknown variable $q_y$; see the supplementary material for details. Since $\tilde{\mathbf{M}}({q_y})/(1+{q_y^2})$ is a square matrix, Eq.~\eqref{eq:euq_Ev1} has a non-trivial solution only if its determinant is zero. Expanding $\det({\tilde{\mathbf{M}}({q_y})}/(1+{q_y^2}))=0$ and dividing out the factor ${q_y^2}+1$ gives a univariate polynomial of degree 6:
{\begin{equation}
\begin{aligned}
\quot(\textstyle \sum_{i=0}^8 \tilde{w}_i q_y^i, {q_y^2}+1) = 0,
\end{aligned}
\label{eq:euq_Evq}
\end{equation}}\\
where $\tilde{w}_{0},\ldots,\tilde{w}_{8}$ are formed by two Pl\"{u}cker line correspondences and two affine transformations between the corresponding feature points.
This univariate polynomial leads to a closed-form solution with a maximum of 6 real roots. Equation~\eqref{eq:euq_Evq} can be efficiently solved by the companion matrix method~\cite{cox2013ideals} or Sturm bracketing method~\cite{nister2004efficient}. Once $q_y$ has been obtained, the rotation matrix $\mathbf{R}_{y}$ is recovered from Eq.~\eqref{eq:Ryt1}.
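For illustration, the companion matrix approach can be sketched as follows. This is a minimal sketch of ours assuming the Eigen library; the function name \texttt{realRoots} is illustrative and this is not necessarily the exact implementation used in our experiments.
\begin{verbatim}
// Real roots of w0 + w1*q + ... + w6*q^6 via the
// companion matrix; assumes coeffs(6) != 0.
#include <Eigen/Dense>
#include <cmath>
#include <vector>

std::vector<double> realRoots(
    const Eigen::Matrix<double, 7, 1>& coeffs) {
  Eigen::Matrix<double, 6, 6> C =
      Eigen::Matrix<double, 6, 6>::Zero();
  C.block<5, 5>(1, 0).setIdentity();  // ones on sub-diagonal
  // Last column: -w_i / w_6 (monic normalization).
  C.col(5) = -coeffs.head<6>() / coeffs(6);
  Eigen::EigenSolver<Eigen::Matrix<double, 6, 6>> es(C);
  std::vector<double> roots;
  for (int i = 0; i < 6; ++i)
    if (std::abs(es.eigenvalues()(i).imag()) < 1e-10)
      roots.push_back(es.eigenvalues()(i).real());
  return roots;
}
\end{verbatim}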
For the relative pose between the two multi-camera reference frames at times $k$ and $k+1$, the rotation matrix $\mathbf{R}$ is recovered from Eq.~\eqref{eq:Rv} and the translation is computed by $\mathbf{t} = ({{{\mathbf{R}}}'_{\text{imu}}})^T \mathbf{\tilde{t}}$. Note that the two remaining equations of the second affine correspondence can also be used in the preemptive hypothesis tests, which detect and reject inconsistent samples directly.
\section{\label{sec:experiments}Experiments}
In this section, we conduct extensive experiments on both synthetic and real-world data to evaluate the performance of the proposed methods. Our solvers are compared with state-of-the-art methods.
For relative pose estimation under planar motion, the solvers using 1 AC and 2 ACs proposed in Section~\ref{sec:planarmotion} are referred to as the \texttt{1AC~plane} method and the \texttt{2AC~plane} method, respectively. The accuracy of \texttt{1AC~plane} and \texttt{2AC~plane} is compared with that of \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient} and \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, which are provided in
the OpenGV library~\cite{kneip2014opengv}. Since the Ackermann motion model is restrictive in practice and usually requires a post-relaxation~\cite{hee2013motion,liu2017robust}, the methods using the Ackermann motion model are not compared in this paper.
For relative pose estimation with known vertical direction, the solver proposed in Section~\ref{sec:knownverticaldirection} is referred to as the \texttt{2AC method}. We compare the accuracy of the \texttt{2AC method} with \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}.
The proposed methods \texttt{1AC~plane}, \texttt{2AC~plane} and \texttt{2AC method} take about 3.6, 3.6 and 17.8~$\mu s$, respectively, in our C++ implementations. Due to space limitations, the efficiency comparison and stability study are provided in the supplementary material. In the experiments, all the solvers are implemented within RANSAC to reject outliers. The relative pose which produces the highest number of inliers is chosen. The confidence of RANSAC is set to 0.99 and the inlier threshold angle is set to $0.1^\circ$, following the definition in OpenGV~\cite{kneip2014opengv}. We also show the feasibility of our methods on the \texttt{KITTI} dataset~\cite{geiger2013vision}. This experiment demonstrates that our methods are well suited for visual odometry in road driving scenarios.
\subsection{Experiments on synthetic data}
We simulate a two-camera rig by following the KITTI autonomous driving platform. The baseline length between the two simulated cameras is set to 1 meter and the cameras are installed at different heights. The multi-camera reference frame is placed at the middle of the camera rig and the translation between two multi-camera reference frames is 3 meters. The resolution of the cameras is 640 $\times$ 480 pixels and the focal lengths are 400 pixels. The principal points are set to the image center (320, 240).
The synthetic scene is composed of a ground plane and 50 random planes. All 3D planes are randomly generated within the range of $-5$ to 5 meters (X-axis direction), $-5$ to 5 meters (Y-axis direction), and 10 to 20 meters (Z-axis direction), expressed in the respective axes of the multi-camera reference frame. We randomly choose 50 ACs from the ground plane and one AC from each random plane, so that 100 ACs are generated in total. For each AC, a random 3D point from a plane is reprojected onto the two cameras to obtain the image point pair. The corresponding affine transformation is obtained by the following procedure: first, an implicit homography is calculated for each plane from four random, non-collinear, additional 3D points of the same plane, by projecting them onto the cameras, adding Gaussian noise with the same standard deviation as for the image point pair to their image coordinates, and finally estimating the homography; the affine transformation is then the first-order approximation of this noisy homography at the image point pair. The 3D points initializing both the image point pair and the homography are selected randomly considering both the image size and the range of the synthetic scene. Note that the homography could be calculated directly from the plane normal and distance; however, using four projected additional random 3D points enables an indirect but geometrically interpretable way of adding noise to the affine transformation~\cite{barath2019homography}.
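For the reader's convenience, we recall the closed-form relation behind this first-order approximation (see, e.g.,~\cite{barath2019homography}): if $\mathbf{H}=[h_{kl}]$ maps $(u,v)$ to $(u',v')$ and $s = h_{31}u + h_{32}v + h_{33}$, then
\begin{equation}
\mathbf{A} = \frac{\partial (u',v')}{\partial (u,v)} = \frac{1}{s}
\begin{bmatrix}
h_{11} - h_{31}u' & h_{12} - h_{32}u'\\
h_{21} - h_{31}v' & h_{22} - h_{32}v'
\end{bmatrix}.
\end{equation}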
A total of 1000 trials are carried out in the synthetic experiment. In each trial, 100 ACs are generated randomly. The ACs for the methods are selected randomly and the error is measured on the relative pose which produces the most inliers within the RANSAC scheme. This also allows us to select the best candidate from multiple solutions. The median over all trials is used to assess the rotation and translation errors. The rotation error is computed as the angular difference between the ground truth rotation and the estimated rotation: ${\varepsilon _{\bf{R}}} = \arccos ((\trace({\mathbf{R}_{gt}}{{\mathbf{R}^T}}) - 1)/2)$, where $\mathbf{R}_{gt}$ and ${\mathbf{R}}$ are the ground truth and estimated rotation matrices. Following the definition in~\cite{quan1999linear,hee2014relative}, the translation error is defined as ${\varepsilon _{\bf{t}}} = 2\left\| ({{\mathbf{t}_{gt}}}-{\mathbf{t}})\right\|/(\left\| {\mathbf{t}_{gt}} \right\| + \left\| {{\mathbf{t}}} \right\|)$, where $\mathbf{t}_{gt}$ and ${\mathbf{t}}$ are the ground truth and estimated translations.
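For concreteness, these two metrics can be computed as follows (a minimal sketch of ours assuming the Eigen library):
\begin{verbatim}
// Rotation and translation error metrics used above.
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// eps_R = arccos((trace(R_gt * R^T) - 1) / 2), radians.
double rotationError(const Eigen::Matrix3d& R_gt,
                     const Eigen::Matrix3d& R) {
  double c = ((R_gt * R.transpose()).trace() - 1.0) / 2.0;
  // Clamp against numerical noise before acos.
  return std::acos(std::min(1.0, std::max(-1.0, c)));
}

// eps_t = 2 * ||t_gt - t|| / (||t_gt|| + ||t||).
double translationError(const Eigen::Vector3d& t_gt,
                        const Eigen::Vector3d& t) {
  return 2.0 * (t_gt - t).norm() / (t_gt.norm() + t.norm());
}
\end{verbatim}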
\subsubsection{Planar motion estimation}
In this scenario, the planar motion of the multi-camera system is described by ($\theta$, $\phi$), see Fig.~\ref{fig:Specialcases}(a). The magnitudes of both angles range from $-10^\circ$ to $10^\circ$. The image noise is Gaussian with a standard deviation ranging from $0$ to $2$ pixels. Figure~\ref{fig:RT_planar}(a) $\sim$ (c) show the performance of the proposed \texttt{1AC~plane} and \texttt{2AC~plane} methods against image noise. The \texttt{2AC~plane} method performs better than the comparative methods under perfect planar motion. In comparison with the \texttt{2AC~plane} method, the \texttt{1AC~plane} method has similar performance in rotation estimation, but performs slightly worse in translation estimation. In Fig.~\ref{fig:RT_planar}(c) and (f), we also plot the translation direction error as an additional evaluation. It is interesting to see that the \texttt{1AC~plane} method also performs better than the comparative methods in translation direction estimation.
\begin{figure}[tbp]
\begin{center}
\subfigure[\scriptsize{${\varepsilon_{\bf{R}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_stdPix_R_add1AC.pdf}
}
\subfigure[\scriptsize{${\varepsilon_{\bf{t}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_stdPix_T_add1AC.pdf}
}
\subfigure[\scriptsize{Translation direction error with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_stdPix_T_degree_add1AC.pdf}
}
\subfigure[\scriptsize{${\varepsilon_{\bf{R}}}$ with non-planar motion noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_nonPlanar_R_add1AC.pdf}
}
\subfigure[\scriptsize{${\varepsilon_{\bf{t}}}$ with non-planar motion noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_nonPlanar_T_add1AC.pdf}
}
\subfigure[\scriptsize{Translation direction error with non-planar motion noise}]
{
\includegraphics[width=0.303\linewidth]{figure/PlaneMotion_nonPlanar_T_degree_add1AC.pdf}
}
\end{center}
\caption{Rotation and translation error under planar motion. (a) $\sim$ (c): vary image noise under perfect planar motion. (d) $\sim$ (f): vary non-planar motion noise and fix the standard deviation of image noise at $1.0$ pixel.}
\label{fig:RT_planar}
\end{figure}
We also evaluate the accuracy of the proposed \texttt{1AC~plane} and \texttt{2AC~plane} methods for increasing non-planar motion noise. The non-planar components of a 6DOF relative pose, including the X-axis rotation, the Z-axis rotation and the direction of the YZ-plane translation~\cite{choi2018fast}, are randomly generated and added to the motion of the multi-camera system. The magnitude of the non-planar motion noise ranges from $0^\circ$ to $1^\circ$ and the standard deviation of the image noise is set to $1.0$ pixel. Figures~\ref{fig:RT_planar}(d) $\sim$ (f) show the performance of the proposed \texttt{1AC~plane} and \texttt{2AC~plane} methods against non-planar motion noise. The methods \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} handle the 6DOF motion case and are thus not affected by noise in the planarity assumption. It can be seen that the rotation estimate of the \texttt{2AC~plane} method is more accurate than those of the comparative methods when the non-planar motion noise is less than $0.3^\circ$. Since the translation direction estimated by the \texttt{2AC~plane} method in Fig.~\ref{fig:RT_planar}(f) remains satisfactory, the main reason for the poor performance of the translation estimation is that the metric scale estimation is sensitive to non-planar motion noise. In comparison with the \texttt{2AC~plane} method, the \texttt{1AC~plane} method has similar performance in rotation estimation, but performs poorly in translation estimation. The translation accuracy decreases significantly when the non-planar motion noise exceeds $0.2^\circ$.
Both the \texttt{1AC~plane} method and the \texttt{2AC~plane} method have a significant computational advantage over the comparative methods, because the efficient solver for the 4-degree polynomial equation takes only about 3.6~$\mu s$. A further advantage of the \texttt{2AC~plane} method is the speed-up gained by the preemptive hypothesis tests, which detect and reject inconsistent samples directly. Compared with testing on the other affine correspondences, the preemptive hypothesis tests speed up the procedure by more than three times while leading to the same accuracy of relative pose estimation.
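To illustrate the mechanism, a minimal sketch of such a test is given below (ours; \texttt{Candidate} and \texttt{residual} are illustrative names, not our actual interface): each candidate solution is kept only if the constraints of the second AC that were left out of the stacked system are satisfied up to a tolerance.
\begin{verbatim}
// Preemptive hypothesis test: reject candidates that
// violate the unused constraints of the second AC.
#include <cmath>
#include <functional>
#include <vector>

struct Candidate { double qy, tx, ty, tz; };

std::vector<Candidate> preemptiveFilter(
    const std::vector<Candidate>& candidates,
    const std::function<double(const Candidate&, int)>&
        residual,  // value of the k-th unused constraint
    double tol) {
  std::vector<Candidate> kept;
  for (const auto& c : candidates)
    if (std::abs(residual(c, 0)) < tol &&
        std::abs(residual(c, 1)) < tol)
      kept.push_back(c);
  return kept;
}
\end{verbatim}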
\subsubsection{Motion with known vertical direction}
In this set of experiments, the translation direction between two multi-camera reference frames is chosen to produce either forward, sideways or random motions. In addition, the second reference frame is rotated around three axes in order and the rotation angles range from $-10^\circ$ to $10^\circ$. With the assumption that the roll and pitch angles are known, the multi-camera reference frame is aligned with the gravity direction. Due to space limitations, we only show the results for random motion. The results for forward and sideways motions are shown in the supplementary material. Figure~\ref{fig:RT_1AC}(a) and (d) show the performance of the \texttt{2AC~method} against image noise with perfect IMU data in the random motion case. It can be seen that the proposed method is robust to image noise and performs better than the comparative methods.
\begin{figure}[tbp]
\begin{center}
\subfigure[\scriptsize{${\varepsilon _{\bf{R}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdPix_R.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{R}}}$ with pitch angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAx_R.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{R}}}$ with roll angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAz_R.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{t}}}$ with image noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdPix_T.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{t}}}$ with pitch angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAx_T.pdf}
}
\subfigure[\scriptsize{${\varepsilon _{\bf{t}}}$ with roll angle noise}]
{
\includegraphics[width=0.303\linewidth]{figure/RandomMotion_stdAz_T.pdf}
}
\end{center}
\caption{Rotation and translation error under random motion with known vertical direction. The upper row: rotation error, the bottom row: translation error. (a)(d): vary image noise. (b)(e) and (c)(f): vary IMU angle noise and fix the standard deviation of image noise at $1.0$ pixel.}
\label{fig:RT_1AC}
\end{figure}
Figure~\ref{fig:RT_1AC}(b)(e) and (c)(f) show the performance of the proposed \texttt{2AC~method} against IMU noise in the random motion case, while the standard deviation of the image noise is fixed at $1.0$ pixel. Note that the methods \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} are not influenced by IMU noise, because these methods do not use the known vertical direction as a prior. It is interesting to see that our method outperforms the methods \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} in the random motion case, even when the IMU noise is around $0.8^\circ$. In addition, the proposed \texttt{2AC~method} also performs better than the methods \texttt{4pt-Lee}, \texttt{4pt-Sweeney} and \texttt{4pt-Liu}, which likewise use the known vertical direction as a prior. The results under forward and sideways motion also demonstrate that the \texttt{2AC~method} generally performs better than all comparative methods against image noise and provides comparable accuracy for increasing IMU noise. It is worth mentioning that, with the help of the preemptive hypothesis tests, the relative pose estimation with the proposed \texttt{2AC~method} solver is sped up by more than three times while leading to similarly accurate relative poses.
\subsection{Experiments on real data}
We test the performance of our methods on the \texttt{KITTI} dataset~\cite{geiger2013vision}, which consists of successive video frames from a forward-facing stereo camera. We ignore the overlap in their fields of view and treat the rig as a general multi-camera system. The sequences labeled from 0 to 10, which have ground truth, are used for the evaluation; in total, the methods were tested on 23000 image pairs. The affine correspondences between consecutive frames in each camera are established by applying ASIFT~\cite{morel2009asift}. They can also be obtained by MSER~\cite{matas2004robust}, which is slightly less accurate but much faster~\cite{barath2016accurate}. The affine correspondences across the two cameras are not matched and the metric scale is not estimated, as the movement between consecutive frames is small. Besides, integrating the IMU acceleration over time is more suitable for recovering the metric scale~\cite{NutziWeiss-411}. All the solvers have been integrated into a RANSAC scheme.
\begin{table*}[htbp]
\caption{Rotation and translation error on \texttt{KITTI} sequences (unit: degree).}
\begin{center}
\setlength{\tabcolsep}{0.9mm}{
\scalebox{1.0}{
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\footnotesize{Seq.}} & \footnotesize{17pt-Li~\cite{li2008linear}} & \footnotesize{8pt-Kneip~\cite{kneip2014efficient}} & \footnotesize{6pt-}{\footnotesize{St.}}~\footnotesize{\cite{henrikstewenius2005solutions}} & \footnotesize{4pt-Lee~\cite{hee2014relative}} & \footnotesize{4pt-Sw.~\cite{sweeney2014solving}}& \footnotesize{4pt-Liu~\cite{liu2017robust}}& \footnotesize{\textbf{2AC~plane}}& \footnotesize{\textbf{2AC method}} \\
\cline{2-9}
& ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$ & ${\varepsilon _{\bf{R}}}$\qquad\ ${\varepsilon _{\bf{t}}}$\\
\hline
00& 0.139 \ 2.412 & 0.130 \ 2.400& 0.229 \ 4.007 & 0.065 \ 2.469 & 0.050 \ 2.190 & 0.066 \ 2.519 & 0.280 \ 2.243 &\textbf{0.031} \ \textbf{1.738} \\
\rowcolor{gray!10}01& 0.158 \ 5.231 & 0.171 \ 4.102& 0.762 \ 41.19 & 0.137 \ 4.782 & 0.125 \ 11.91 & 0.105 \ 3.781 & 0.168 \ 2.486 &\textbf{0.025} \ \textbf{1.428} \\
02& 0.123 \ 1.740 & 0.126 \ 1.739& 0.186 \ 2.508 & 0.057 \ 1.825 & 0.044 \ 1.579 & 0.057 \ 1.821 & 0.213 \ 1.975 &\textbf{0.030} \ \textbf{1.558} \\
\rowcolor{gray!10}03& 0.115 \ 2.744 & 0.108 \ 2.805& 0.265 \ 6.191 & 0.064 \ 3.116 & 0.069 \ 3.712 & 0.062 \ 3.258 & 0.238 \ \textbf{1.849} &\textbf{0.037} \ 1.888 \\
04& 0.099 \ 1.560 & 0.116 \ 1.746& 0.202 \ 3.619 & 0.050 \ 1.564 & 0.051 \ 1.708 & 0.045 \ 1.635 & 0.116 \ 1.768 &\textbf{0.020} \ \textbf{1.228} \\
\rowcolor{gray!10}05& 0.119 \ 2.289 & 0.112 \ 2.281& 0.199 \ 4.155 & 0.054 \ 2.337 & 0.052 \ 2.544 & 0.056 \ 2.406 & 0.185 \ 2.354 &\textbf{0.022} \ \textbf{1.532} \\
06& 0.116 \ 2.071 & 0.118 \ 1.862& 0.168 \ 2.739 & 0.053 \ 1.757 & 0.092 \ 2.721 & 0.056 \ 1.760 & 0.137 \ 2.247 &\textbf{0.023} \ \textbf{1.303} \\
\rowcolor{gray!10}07& 0.119 \ 3.002 & 0.112 \ 3.029& 0.245 \ 6.397 & 0.058 \ 2.810 & 0.065 \ 4.554 & 0.054 \ 3.048 & 0.173 \ 2.902 &\textbf{0.023} \ \textbf{1.820} \\
08& 0.116 \ 2.386 & 0.111 \ 2.349& 0.196 \ 3.909 & 0.051 \ 2.433 & 0.046 \ 2.422 & 0.053 \ 2.457 & 0.203 \ 2.569 &\textbf{0.024} \ \textbf{1.911} \\
\rowcolor{gray!10}09& 0.133 \ 1.977 & 0.125 \ 1.806& 0.179 \ 2.592 & 0.056 \ 1.838 & 0.046 \ 1.656 & 0.058 \ 1.793 & 0.189 \ 1.997 &\textbf{0.027} \ \textbf{1.440} \\
10& 0.127 \ 1.889 & 0.115 \ 1.893& 0.201 \ 2.781 & 0.052 \ 1.932 & 0.040 \ 1.658 & 0.058 \ 1.888 & 0.223 \ 2.296 &\textbf{0.025} \ \textbf{1.586} \\
\hline
\end{tabular}}}
\end{center}
\label{VerticalRTErrror}
\end{table*}
\begin{table*}[htbp]
\caption{Runtime of RANSAC averaged over \texttt{KITTI} sequences combined with different solvers (unit:~$s$).}
\begin{center}
\setlength{\tabcolsep}{0.9mm}{
\scalebox{0.95}{
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
\small{Methods} & \footnotesize{17pt-Li~\cite{li2008linear}} & \footnotesize{8pt-Kneip~\cite{kneip2014efficient}} & \footnotesize{6pt-}{\footnotesize{St.}}~\footnotesize{\cite{henrikstewenius2005solutions}} &\footnotesize{4pt-Lee~\cite{hee2014relative}} & \footnotesize{4pt-Sw.~\cite{sweeney2014solving}}& \footnotesize{4pt-Liu~\cite{liu2017robust}} & \footnotesize{\textbf{2AC~plane}}& \footnotesize{\textbf{2AC method}} \\
\hline
\small{Mean time }& 52.82 & 10.36 & 79.76& 0.85& 0.63& 0.45& \textbf{0.07} & 0.09\\
\hline
\small{Standard deviation}& 2.62 & 1.59 & 4.52& 0.093& 0.057& 0.058& \textbf{0.0071} & 0.0086\\
\hline
\end{tabular}}}
\end{center}
\label{RANSACTime}
\end{table*}
\begin{figure*}[!h]
\begin{center}
\subfigure[\texttt{8pt-Kneip}]
{
\includegraphics[width=0.285\linewidth]{figure/8pt_Kneip.pdf}
}
\subfigure[\texttt{4pt-Sweeney}]
{
\includegraphics[width=0.285\linewidth]{figure/4pt_Sweeney.pdf}
}
\subfigure[\texttt{2AC~method}]
{
\includegraphics[width=0.34\linewidth]{figure/Ev_2AC.pdf}
}
\end{center}
\caption{Estimated trajectories without any post-refinement. The relative pose measurements between consecutive frames are directly concatenated. Colored curves are the estimated trajectories of \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and the \texttt{2AC~method}. Black curves with stars are the ground truth trajectories. Best viewed in color.}
\label{fig:trajectory}
\end{figure*}
The proposed \texttt{2AC~plane} method and \texttt{2AC method} are compared against \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}. Since the \texttt{KITTI} dataset is captured by a stereo rig whose cameras are mounted at the same height, which is a degenerate case for the \texttt{1AC~plane} method, this method is not evaluated in this experiment. For the \texttt{2AC~plane} method, the estimation results are also compared with the 6DOF ground truth of the relative pose, even though this method only estimates the two angles ($\theta$, $\phi$) under the planar motion assumption. For the \texttt{2AC method}, the roll and pitch angles obtained from the ground truth data are used to simulate IMU measurements, which align the multi-camera reference frame with the gravity direction. To ensure the fairness of the experiment, the roll and pitch angles are also provided to the methods \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney}~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}. The results of the rotation and translation estimation are shown in Table~\ref{VerticalRTErrror}. The runtime of RANSAC averaged over the \texttt{KITTI} sequences combined with different solvers is shown in Table~\ref{RANSACTime}.
The \texttt{2AC method} offers the best overall performance among all the methods. The \texttt{6pt-Stew{\'e}nius} method performs poorly on sequence 01, because this sequence is a highway with few trackable close objects, and this method always fails to select the best candidate from multiple solutions under forward motion in the RANSAC scheme. Besides, it is interesting to see that the translation accuracy of the \texttt{2AC~plane} method generally outperforms the \texttt{6pt-Stew{\'e}nius} method, even though the planar motion assumption does not fit the \texttt{KITTI} dataset well. Due to their computational efficiency, both the \texttt{2AC~plane} method and the \texttt{2AC method} are well suited for finding a correct inlier set, which is then used for accurate motion estimation in visual odometry.
To visualize the comparison results, the estimated trajectory for sequence 00 is plotted in Fig.~\ref{fig:trajectory}. The frame-to-frame relative pose measurements are directly concatenated without any post-refinement. The trajectory of the \texttt{2AC~method} is compared with the two best performing comparison methods on sequence 00 according to Table~\ref{VerticalRTErrror}: the \texttt{8pt-Kneip} method for the 6DOF motion case and the \texttt{4pt-Sweeney} method for the 4DOF motion case. Since none of the methods is able to estimate the scale reliably, in particular for the many straight parts of the trajectory, the ground truth scale is used to plot the trajectories. The trajectories are then aligned with the ground truth and the color along the trajectory encodes the absolute trajectory error (ATE)~\cite{sturm2012benchmark}. Even though all trajectories show a significant accumulation of drift, it can still be seen that the proposed \texttt{2AC~method} has the smallest ATE among the compared trajectories.
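For reference, the ATE computation can be sketched as follows (a minimal sketch of ours assuming the Eigen library; the rigid alignment uses the Umeyama method without scale estimation, since the ground-truth scale is already applied):
\begin{verbatim}
// Absolute trajectory error after rigid alignment of the
// estimated positions (3 x N) to ground truth (3 x N).
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <vector>

std::vector<double> absoluteTrajectoryError(
    const Eigen::Matrix3Xd& est, const Eigen::Matrix3Xd& gt) {
  Eigen::Matrix4d T =
      Eigen::umeyama(est, gt, /*with_scaling=*/false);
  std::vector<double> ate;
  ate.reserve(est.cols());
  for (Eigen::Index i = 0; i < est.cols(); ++i) {
    Eigen::Vector3d p = T.topLeftCorner<3, 3>() * est.col(i) +
                        T.topRightCorner<3, 1>();
    ate.push_back((p - gt.col(i)).norm());
  }
  return ate;
}
\end{verbatim}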
\section{\label{sec:conclusion}Conclusion}
By exploiting the affine parameters, we have proposed four solutions for the relative pose estimation of a multi-camera system. A minimum of two affine correspondences is needed to estimate the 6DOF relative pose of a multi-camera system.
Under the planar motion assumption, we presented two solvers to recover the planar motion of a multi-camera system: a minimal solver using a single affine correspondence and a solver using two affine correspondences. In addition, a minimal solution with two affine correspondences was proposed to solve for the relative pose of a multi-camera system with known vertical direction. The assumptions made in these solutions are commonly met in road driving scenes. We evaluated the latter two solutions on synthetic data and on real image sequences. The experimental results clearly show that the proposed methods provide better efficiency and accuracy for relative pose estimation in comparison to state-of-the-art methods.
{\small
\bibliographystyle{ieee_fullname}
}
\section{\label{sec:6DOFmotion_supp}Relative Pose Estimation under General Motion}
\subsection{Solution using Gr\"{o}bner basis method}
For an affine correspondence $({\mathbf{x}}_{ij}, {\mathbf{x}}'_{ij}, \mathbf{A})$, we get three polynomials in the six unknowns $\{q_x, q_y, q_z, t_x, t_y, t_z\}$ from Eqs.~\eqref{GECS6dof} and~\eqref{eq:E6dof_Ac6}. After separating $q_x$, $q_y$, $q_z$ from $t_x$, $t_y$, $t_z$, we arrive at the equation system
{\begin{equation}
\frac{1}{1+q_x^2+q_y^2+q_z^2}\underbrace {\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}
\end{bmatrix}}_{{\mathbf{M}}\left( {{q_x,q_y,q_z}} \right)}
\begin{bmatrix}
{{{t}_x}}\\
{{{t}_y}}\\
{{{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}},
\label{eq:euq_qxqyqz1}
\end{equation}}\\
where the elements $M_{ij}$ $(i=1,\ldots,3; j=1,\ldots,4)$ of the coefficient matrix ${\mathbf{M}(q_x,q_y,q_z)}$ are formed by the polynomial coefficients and three unknown variables $q_x,q_y,q_z$:
\begin{equation}
{\mathbf{M}(q_x,q_y,q_z)} = \begin{bmatrix}
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]
\end{bmatrix},
\label{eq:M_qxqyqz2}
\end{equation}
where $[N]$ denotes a polynomial of degree $N$ in variables $q_x,q_y,q_z$.
For the general case, Eq.~\eqref{eq:euq_qxqyqz1} imposes three independent constraints on the six unknowns $\{q_x, q_y, q_z, t_x, t_y, t_z\}$. Thus two affine correspondences are enough to recover the relative pose of a multi-camera system under 6DOF general motion. Hence, we get an equation system of 6 independent constraints from 2 affine correspondences in a form similar to Eq.~\eqref{eq:euq_qxqyqz1}. These constraints are stacked into six equations in six unknowns:
{\begin{equation}
\frac{1}{1+q_x^2+q_y^2+q_z^2}\underbrace {\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{41}}& {M_{42}}& {M_{43}}& {M_{44}}\\
{M_{51}}& {M_{52}}& {M_{53}}& {M_{54}}\\
{M_{61}}& {M_{62}}& {M_{63}}& {M_{64}}
\end{bmatrix}}_{{\mathbf{M}}_{6\times4}} \begin{bmatrix}
{{{t}_x}}\\
{{{t}_y}}\\
{{{t}_z}}\\
1
\end{bmatrix} = {\mathbf{0}}.
\label{eq:euq_qxqyqz1_sp}
\end{equation}}
Since ${{\mathbf{M}}_{6\times4}}/({1+q_x^2+q_y^2+q_z^2})$ has a non-trivial null vector, its rank must be at most three. Thus, all the $4\times4$ sub-determinants of ${{\mathbf{M}}_{6\times4}}/({1+q_x^2+q_y^2+q_z^2})$ must be zero. In this paper, three sub-matrices which give three equations in the three unknowns $q_x,q_y,q_z$ are chosen as follows:
{\begin{align}
\begin{cases}
\det(\frac{1}{1+q_x^2+q_y^2+q_z^2}{\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{41}}& {M_{42}}& {M_{43}}& {M_{44}}
\end{bmatrix}}) = 0 \\
\det(\frac{1}{1+q_x^2+q_y^2+q_z^2}{\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{51}}& {M_{52}}& {M_{53}}& {M_{54}}
\end{bmatrix}}) = 0 \\
\det(\frac{1}{1+q_x^2+q_y^2+q_z^2}{\begin{bmatrix}
{M_{11}}& {M_{12}}& {M_{13}}& {M_{14}}\\
{M_{21}}& {M_{22}}& {M_{23}}& {M_{24}}\\
{M_{31}}& {M_{32}}& {M_{33}}& {M_{34}}\\
{M_{61}}& {M_{62}}& {M_{63}}& {M_{64}}
\end{bmatrix}}) = 0 \\
\end{cases}
\label{eq:DetM3}
\end{align}}
The hidden variable resultant method~\cite{cox2013ideals} is used to solve for the unknowns of Eq.~\eqref{eq:DetM3}. By grouping the unknowns $q_x$, $q_y$, $q_z$ with the known coefficients, we obtain an equation system with 84 monomials in $q_x$, $q_y$, $q_z$, of maximum polynomial degree 6. Note that the coefficients are divided by ${1+q_x^2+q_y^2+q_z^2}$, which reduces the polynomial degree and improves the efficiency of the solution. The final solver was obtained with the automatic solver generator of~\cite{larsson2017efficient}.
\section{\label{sec:planarmotion_Supp}Relative Pose Estimation Under Planar Motion}
\subsection{Details about the coefficient matrix ${\mathbf{M}(q_y)}$}
Referring to Eq. (12) in the paper, the three constraints obtained from two affine correspondences are stacked into 3 equations in 3 unknowns. The elements $M_{ij}$ $(i=1,\ldots,3; j=1,\ldots,3)$ of the coefficient matrix ${\mathbf{M}(q_y)}$ are formed by the polynomial coefficients and the single unknown variable $q_y$, and can be described as:
\begin{equation}
{\mathbf{M}(q_y)} = \begin{bmatrix}
[2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]
\end{bmatrix},
\label{eq:M_qy3}
\end{equation}
where $[N]$ denotes a polynomial of degree $N$ in variable $q_y$.
\subsection{Degenerate Case}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{figure/proof.pdf}
\end{center}
\vspace{-0.1in}
\caption{Relative pose estimation for a multi-camera system.}
\label{fig:degenerate}
\end{figure}
\begin{proposition}
\label{theorem:nister}
Consider a multi-camera system which is under planar motion. Assume the following three conditions are satisfied. (1) The rotation axis is the $y$-axis, and the translation lies in the $xz$-plane. (2) There is one affine correspondence across camera $C_i$ in frame $k$ and camera $C_j$ in frame $k+1$ ($C_i$ and $C_j$ may or may not be the same camera). (3) The optical centers of cameras $C_i$ and $C_j$ have the same $y$-coordinate. Then this case is degenerate. Specifically, the rotation can be correctly recovered, while the translation cannot.
\end{proposition}
\begin{proof}
Figure~\ref{fig:degenerate} illustrates the case described in the proposition. Our proof is based on the following observation: whether a configuration is degenerate is independent of the particular pose solver. Based on this observation, we construct a new minimal solver, different from the solver proposed in the main text.
(i) Since the multi-camera system is rotated around the $y$-axis, the camera $C_i$ in frame $k$ and the camera $C_j$ in frame $k+1$ are under motion with a known rotation axis. Thus we can use the \texttt{1AC-method} for perspective cameras~\cite{Guan2020CVPR} to estimate the relative pose between $C_i$ and $C_j$. This is a minimal solver since one AC provides 3 independent constraints and there are three unknowns (1 for rotation, 2 for translation after excluding the scale ambiguity). Denote the recovered rotation and translation between $C_i$ and $C_j$ as $(\mathbf{R}', \mathbf{t}')$, where $\mathbf{t}'$ is a unit vector. The scale of the translation vector cannot be recovered at this stage. Denote the unknown translation scale as $\lambda$.
(ii) From Fig.~\ref{fig:degenerate}, we have
{ \begin{equation}
\begin{aligned}
&\begin{bmatrix}
{\mathbf{R}} & \mathbf{t}\\
{\mathbf{0}}&{1}\\
\end{bmatrix} = \begin{bmatrix}{\mathbf{R}_{j}}&{\mathbf{t}_{j}}\\
{\mathbf{0}}&{1}\\
\end{bmatrix}
\begin{bmatrix}{\mathbf{R}'}&{ \lambda \mathbf{t}'}\\
{\mathbf{0}}&{1}\\
\end{bmatrix}
\begin{bmatrix}{\mathbf{R}_{i}}&{\mathbf{t}_{i}}\\
{\mathbf{0}}&{1}\\
\end{bmatrix}^{-1} \\
& \qquad \ \ =\begin{bmatrix}{{\mathbf{R}_{j}}{\mathbf{R}'}{\mathbf{R}_{i}^T}}& \ \lambda \mathbf{R}_j \mathbf{t}' + \mathbf{t}_j - \mathbf{R}_j \mathbf{R}' \mathbf{R}_i^T \mathbf{t}_i\\
{\mathbf{0}}& \ {1}\\
\end{bmatrix}.
\end{aligned}
\label{eq:trans_general}
\end{equation}}\\
%
From Eq.~\eqref{eq:trans_general}, we have
\begin{align}
&\mathbf{R} = \mathbf{R}_j \mathbf{R}' \mathbf{R}_i^T, \label{eq:r_equ} \\
&\mathbf{t} = \lambda \mathbf{R}_j \mathbf{t}' + \mathbf{t}_j - \mathbf{R}_j \mathbf{R}' \mathbf{R}_i^T \mathbf{t}_i.
\label{eq:t_equ}
\end{align}
%
From Eq.~\eqref{eq:r_equ}, the rotation $\mathbf{R}$ between frame $k$ and frame $k+1$ for the multi-camera system can be recovered.
%
From Eq.~\eqref{eq:t_equ}, we have
\begin{align}
\lambda (\mathbf{R}_j \mathbf{t}') - \mathbf{t} + (\mathbf{t}_j - \mathbf{R} \mathbf{t}_i) = \mathbf{0}.
\label{eq:tran_linear}
\end{align}
In Eq.~\eqref{eq:tran_linear}, note that $\mathbf{t} = [t_x, 0, t_z]^T$ due to planar motion. Thus this linear equation system has $3$ unknowns $\{\lambda, t_x, t_z\}$ and $3$ equations. Usually the unknowns can be uniquely determined by solving this equation system. However, if the second entry of $\mathbf{R}_j \mathbf{t}'$ is zero, it can be verified that $\lambda$ becomes a free parameter. In other words, the scale cannot be determined and this is a degenerate case.
(iii) Finally, we exploit the geometric meaning of the degenerate case, i.e., the second entry of $\mathbf{R}_j \mathbf{t}'$ being zero. Denote the normalized vector originating from $C_i$ to $C_j$ as $\mathbf{v}$. Since $\mathbf{v}$ represents the normalized translation vector between $C_i$ and $C_j$, the coordinates of $\mathbf{v}$ in the reference frame of camera $C_j$ are $\mathbf{t}'$. Further, the coordinates of $\mathbf{v}$ in frame $k+1$ are $\mathbf{R}_j \mathbf{t}'$. The second entry of $\mathbf{R}_j \mathbf{t}'$ being zero means that the endpoints of $\mathbf{v}$ have the same $y$-coordinate in frame $k+1$, which is condition~(3) in the proposition.
\end{proof}
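To make step (ii) of the proof concrete, the linear system in $\{\lambda, t_x, t_z\}$ and the degeneracy check can be sketched as follows (our illustrative sketch, assuming the Eigen library):
\begin{verbatim}
// Solve  lambda*(R_j t') - t + (t_j - R t_i) = 0  with
// t = [t_x, 0, t_z]^T for {lambda, t_x, t_z}; returns
// false in the degenerate case (lambda is then free).
#include <Eigen/Dense>
#include <cmath>

bool solveScaleAndTranslation(
    const Eigen::Matrix3d& R_j, const Eigen::Vector3d& t_prime,
    const Eigen::Vector3d& t_j, const Eigen::Matrix3d& R,
    const Eigen::Vector3d& t_i, Eigen::Vector3d& lambda_tx_tz) {
  const Eigen::Vector3d u = R_j * t_prime;
  if (std::abs(u(1)) < 1e-12) return false;  // degenerate
  Eigen::Matrix3d A;
  A.col(0) = u;                          // coeff. of lambda
  A.col(1) = -Eigen::Vector3d::UnitX();  // coeff. of t_x
  A.col(2) = -Eigen::Vector3d::UnitZ();  // coeff. of t_z
  const Eigen::Vector3d b = -(t_j - R * t_i);
  lambda_tx_tz = A.colPivHouseholderQr().solve(b);
  return true;
}
\end{verbatim}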
\section{\label{sec:knownverticaldirection_Supp}Relative Pose Estimation with Known Vertical Direction}
Referring to Eq. (23) in the paper, the four constraints obtained from two affine correspondences are stacked into 4 equations in 4 unknowns. The elements $\tilde{M}_{ij}$ $(i=1,\ldots,4; j=1,\ldots,4)$ of the coefficient matrix $\tilde{\mathbf{M}}({q_y})$ are formed by the polynomial coefficients and the single unknown variable $q_y$, and can be described as:
\begin{equation}
{\tilde{\mathbf{M}}({q_y})} = \begin{bmatrix}
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]\\
[2]&\ [2]&\ [2]&\ [2]
\end{bmatrix},
\label{eq:M_qy4}
\end{equation}
where $[N]$ denotes a polynomial of degree $N$ in variable $q_y$.
\section{\label{sec:experiments_supp}Experiments}
\subsection{Efficiency comparison}
The runtimes of our solvers and the comparative solvers are evaluated on an Intel(R) Core(TM) i7-7800X 3.50GHz. All algorithms are implemented in C++. The methods \texttt{17pt-Li}, \texttt{8pt-Kneip} and \texttt{6pt-Stew{\'e}nius} are provided in the OpenGV library. We implemented the solver \texttt{4pt-Lee} ourselves. For the methods \texttt{4pt-Sweeney} and \texttt{4pt-Liu}, we used their publicly available implementations from GitHub. The processing times of the solvers, averaged over 10\,000 runs, are shown in Table~\ref{SolverTime}. The runtimes of the methods \texttt{1AC~plane}, \texttt{2AC~plane} and \texttt{4pt-Liu} are the lowest, because these methods solve a 4-degree polynomial equation. The \texttt{2AC~method}, which solves a 6-degree polynomial equation, also requires low computation time.
\begin{table*}[htbp]
\caption{Run-time comparison of motion estimation algorithms (unit:~$\mu s$).}
\begin{center}
\setlength{\tabcolsep}{0.9mm}{
\scalebox{1.0}{
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|}
\hline
\small{Methods} & \footnotesize{17pt-Li~\cite{li2008linear}} & \footnotesize{8pt-Kneip~\cite{kneip2014efficient}} & \footnotesize{6pt-}{\footnotesize{St.}}~\footnotesize{\cite{henrikstewenius2005solutions}} &\footnotesize{4pt-Lee~\cite{hee2014relative}} & \footnotesize{4pt-Sw.~\cite{sweeney2014solving}}& \footnotesize{4pt-Liu~\cite{liu2017robust}}& \footnotesize{\textbf{1AC~plane}}&\footnotesize{\textbf{2AC~plane}}& \footnotesize{\textbf{2AC method}} \\
\hline
\small{Timings}& 43.3 & 102.0& 3275.4& 26.5& 22.2& 3.7& \textbf{3.6}& \textbf{3.6} & 17.8\\
\hline
\end{tabular}}}
\end{center}
\label{SolverTime}
\end{table*}
\subsection{Numerical stability}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/NumericalStability_R_1AC3equations_add1ACMethod.eps}
}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/NumericalStability_T_1AC3equations_add1ACMethod.eps}
}
\end{center}
\caption{Probability density functions over estimation errors in the noise-free case (10 000 runs). The horizontal axis represents the log$_{10}$ errors and the vertical axis represents the density. (a) reports the rotation error. (b) reports the translation error. The proposed \texttt{1AC~plane} method, \texttt{2AC~plane} method and \texttt{2AC method} are compared against \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney} ~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}.}
\label{fig:Numerical}
\end{figure}
Figure~\ref{fig:Numerical} reports the numerical stability of the solvers in the noise-free case. The procedure is repeated 10\,000 times. The empirical probability density functions (vertical axis) are plotted as a function of the log$_{10}$ estimation errors (horizontal axis). The methods \texttt{1AC~plane}, \texttt{2AC~plane}, \texttt{2AC method}, \texttt{17pt-Li}~\cite{li2008linear}, \texttt{4pt-Lee}~\cite{hee2014relative} and \texttt{4pt-Sweeney}~\cite{sweeney2014solving} are numerically stable. It can also be seen that the \texttt{4pt-Sweeney} method has a small peak around $10^{-2}$, both in the rotation and in the translation error curves. The \texttt{8pt-Kneip} method, based on iterative optimization, is susceptible to falling into local minima. Due to the use of a first-order approximation of the relative rotation, the \texttt{4pt-Liu} method inevitably has nonzero error even in the noise-free case.
\subsection{Motion with known vertical direction}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdPix_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAx_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAz_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdPix_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAx_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/ForwardMotion_stdAz_T.eps}
}
\end{center}
\caption{Rotation and translation error under forward motion. The upper row: rotation error, the bottom row: translation error. (a)(d): vary image noise. (b)(e) and (c)(f): vary IMU angle noise and fix the standard deviation of image noise as $1.0$ pixel.}
\label{fig:RTForwardMotion_1AC}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdPix_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAx_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAz_R.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdPix_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAx_T.eps}
}
\subfigure[]
{
\includegraphics[width=0.303\linewidth]{figure/SidewaysMotion_stdAz_T.eps}
}
\end{center}
\caption{Rotation and translation error under sideways motion. The upper row: rotation error, the bottom row: translation error. (a)(d): vary image noise. (b)(e) and (c)(f): vary IMU angle noise and fix the standard deviation of image noise as $1.0$ pixel.}
\label{fig:RTSidewaysMotion_1AC}
\end{figure}
In this section we show the performance of the proposed \texttt{2AC~method} under forward and sideways motion. Figure~\ref{fig:RTForwardMotion_1AC} shows the performance of the proposed \texttt{2AC~method} under forward motion. Figure~\ref{fig:RTSidewaysMotion_1AC} shows the performance of the proposed \texttt{2AC~method} under sideways motion. The results demonstrate that the \texttt{2AC~method} performs better than all compared methods against image noise and provides comparable accuracy for increasing IMU noise.
\subsection{Cumulative errors distributions}
\begin{figure}[htbp]
\begin{center}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/CDF_Rotation.eps}
}
\subfigure[]
{
\includegraphics[width=0.47\linewidth]{figure/CDF_Translation.eps}
}
\end{center}
\caption{Empirical cumulative error distributions for KITTI sequence 00. (a) reports the rotation error. (b) reports the translation error. The proposed \texttt{1AC~plane} method, \texttt{2AC~plane} method and \texttt{2AC method} are compared against \texttt{17pt-Li}~\cite{li2008linear}, \texttt{8pt-Kneip}~\cite{kneip2014efficient}, \texttt{6pt-Stew{\'e}nius}~\cite{henrikstewenius2005solutions}, \texttt{4pt-Lee}~\cite{hee2014relative}, \texttt{4pt-Sweeney} ~\cite{sweeney2014solving} and \texttt{4pt-Liu}~\cite{liu2017robust}.}
\label{fig:RTCDF}
\end{figure}
We also show the empirical cumulative error distributions for KITTI sequence 00. These distributions are calculated from the same data used to create Table 1 in the paper. Figure~\ref{fig:RTCDF} shows that the proposed \texttt{2AC~method} offers the best overall performance in comparison to state-of-the-art methods.
\end{document}
\section{Introduction}
Mahler \cite{Mah32}, in 1932, and Koksma \cite{Ko39}, in 1939,
introduced two related measures for the
quality of approximation of a complex number $\xi$ by algebraic numbers.
For any integer $n \ge 1$, we denote by
$w_n (\xi)$ the supremum of the real numbers $w$ for which
$$
0 < |P(\xi)| < H(P)^{-w}
$$
has infinitely many solutions in integer polynomials $P(X)$ of
degree at most $n$. Here, $H(P)$ stands for the na\"\i ve height of the
polynomial $P(X)$, that is, the maximum of the absolute values of
its coefficients. Further, we set
$$
w(\xi) = \limsup_{n \to \infty} \frac{w_n(\xi)}{n}
$$
and, according to Mahler \cite{Mah32}, we say that $\xi$ is
$$
\displaylines{
\hbox{an $A$-number, if $w(\xi) = 0$;} \cr
\hbox{an $S$-number, if $0 < w(\xi) < \infty$;} \cr
\hbox{a $T$-number, if $w(\xi) = \infty $ and $w_n(\xi) < \infty$, for any
integer $n\ge 1$;} \cr
\hbox{a $U$-number, if $w(\xi) = \infty $ and $w_n(\xi) = \infty$, for some
integer $n\ge 1$.}\cr}
$$
The set of complex $A$-numbers is the set of complex algebraic numbers.
In the sense of the Lebesgue measure, almost all numbers are $S$-numbers.
Liouville numbers (which, by definition, are the real numbers $\xi$ such that
$w_1(\xi)$ is infinite) are examples of $U$-numbers, while the existence of
$T$-numbers remained an open problem during nearly forty years, until it was
confirmed by Schmidt \cite{Schm70,Schm71}.
Following Koksma \cite{Ko39}, for any integer $n \ge 1$, we denote by
$w_n^* (\xi)$ the supremum of the real numbers $w^*$ for which
$$
0 < |\xi - \alpha| < H(\alpha)^{-w^*-1}
$$
has infinitely many solutions in complex algebraic numbers $\alpha$ of
degree at most $n$. Here, $H(\alpha)$ stands for the na\"\i ve height of $\alpha$,
that is, the na\"\i ve height of its minimal defining polynomial over the integers.
Koksma \cite{Ko39} defined $A^*$-, $S^*$-, $T^*$- and $U^*$-numbers as above, using $w_n^*$
in place of $w_n$. Namely, setting
$$
w^*(\xi) = \limsup_{n \to \infty} \frac{w_n^* (\xi)}{n},
$$
we say that $\xi$ is
$$
\displaylines{
\hbox{an $A^*$-number, if $w^*(\xi) = 0$;} \cr
\hbox{an $S^*$-number, if $0 < w^*(\xi) < \infty$;} \cr
\hbox{a $T^*$-number, if $w^*(\xi) = \infty $ and $w_n^*(\xi) < \infty$, for any
integer $n\ge 1$;} \cr
\hbox{a $U^*$-number, if $w^*(\xi) = \infty $ and $w_n^* (\xi) = \infty$, for some
integer $n\ge 1$.}\cr}
$$
Koksma proved that this classification of numbers
is equivalent to the Mahler one, in the sense that the classes $A$, $S$, $T$, $U$ coincide
with the classes $A^*$, $S^*$, $T^*$, $U^*$, respectively.
For more information on the functions $w_n$ and $w_n^*$,
the reader is directed to \cite{BuLiv,BuDurham}.
Likewise, we can divide the sets of real numbers and $p$-adic numbers into classes $A$, $S$, $T$, $U$ and
$A^*$, $S^*$, $T^*$, $U^*$.
However, there is a subtle difference with the case of complex numbers,
since the field ${\mathbb R}} \def\M{{\bf M}$ of real numbers and the field ${\mathbb {Q}}_p$ of $p$-adic numbers are not algebraically closed.
This means that, in the definition of the exponent $w_n^* (\xi)$ for a real (\resp $p$-adic)
number $\xi$, we have to decide whether the algebraic approximants $\alpha$
are to be taken in ${\mathbb C}} \def\ord{{\rm ord}$ (\resp in an algebraic closure of ${\mathbb {Q}}_p$) or in ${\mathbb R}} \def\M{{\bf M}$
(\resp in ${\mathbb {Q}}_p$). Fortunately, in both cases, it makes no difference,
as shown in \cite{Bu03,BuLiv}.
For instance, it has been proved that, if there is $\alpha$ of degree $n$ in
an algebraic closure of ${\mathbb {Q}}_p$ satisfying $| \xi - \alpha| < H(\alpha)^{-1-w^*}$, then there exists $\alpha'$ in
${\mathbb {Q}}_p$, algebraic of degree at most $n$, such that $H(\alpha') \le c H(\alpha)$ and
$| \xi - \alpha'| \le c H(\alpha')^{-1-w^*}$, where $c$ depends only on $\xi$ and on $n$.
The analogous question has not yet been clarified for Diophantine approximation in the
field ${\mathbb F}_q ((T^{-1}))$ of power series over the finite field ${\mathbb F}_q$. Different authors have different
practices, some of them define $w_n^*$ by restricting to algebraic elements in ${\mathbb F}_q ((T^{-1}))$, while some
others allow algebraic elements to lie in an algebraic closure of ${\mathbb F}_q ((T^{-1}))$.
One of the aims of the present paper is precisely to clarify this point.
Our framework is the following. Let $p$ be a prime number and $q = p^f$ an integer
power of $p$. Any non-zero element $\xi$ in ${\mathbb F}_q ((T^{-1}))$ can be written
$$
\xi = \sum_{n=N}^{+ \infty} \, a_n T^{-n},
$$
where $N$ is in ${\mathbb Z}} \def{\mathbb F}{{\mathbb F}$, $a_N \not= 0$, and $a_n$ is in ${\mathbb F}_q$ for $n \ge N$. We define
a valuation $\nu$ and an absolute value $| \cdot |$
on ${\mathbb F}_q ((T^{-1}))$ by setting $\nu (\xi) = N$, $| \xi | := q^{-N}$, and $\nu (0) = + \infty$, $|0|:= 0$.
In particular, if $R(T)$ is a non-zero polynomial in ${\mathbb F}_q[T]$, then we have $|R| = q^{\deg(R)}$.
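For instance, we have $\nu(T^{2} + 1 + T^{-1}) = -2$ and $|T^{2} + 1 + T^{-1}| = q^{2}$, while $|T^{-3} + T^{-5}| = q^{-3}$.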
The field ${\mathbb F}_q ((T^{-1}))$ is the completion with respect to $\nu$ of the
quotient field ${\mathbb F}_q(T)$ of the polynomial ring ${\mathbb F}_q [T]$.
It is not algebraically closed.
Following \cite{Tha12}, we denote by $C_{\infty}$ the completion of its algebraic closure.
To describe precisely the set of algebraic elements in $C_{\infty}$ is rather complicated.
Indeed, Abhyankar \cite{Ab56} pointed out that it contains the element
$$
T^{-1/p} + T^{-1/p^2} + T^{-1/p^3} + \cdots ,
$$
which is a root of the polynomial $T X^p - T X - 1$.
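Let us briefly check this: writing $\theta = T^{-1/p} + T^{-1/p^2} + T^{-1/p^3} + \cdots$, the Frobenius gives
$$
\theta^p = T^{-1} + T^{-1/p} + T^{-1/p^2} + \cdots = T^{-1} + \theta,
$$
whence $T \theta^p - T \theta - 1 = T(\theta^p - \theta) - 1 = 0$.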
Kedlaya \cite{Ked01,Ked17} constructed an algebraic closure of $K((T^{-1}))$ for any field $K$
of positive characteristic in terms of certain generalized power series.
There should be no confusion between the variable $T$ and the notion of $T$-number.
The height $H(P)$ of a polynomial $P(X)$ over ${\mathbb F}_q[T]$ is the maximum of the absolute values of its
coefficients. A power series in $C_\infty$ is called algebraic if it is a root of a nonzero
polynomial with coefficients in ${\mathbb F}_q[T]$.
Its height is then the height of its minimal
defining polynomial over ${\mathbb F}_q[T]$.
We define the exponents of approximation $w_n$ and $w_n^*$ as follows.
\begin{definition}
\label{Def:1.1}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$. Let $n \ge 1$ be an integer.
We denote by
$w_n (\xi)$ the supremum of the real numbers $w$ for which
$$
0 < |P(\xi )| < H(P)^{-w}
$$
has infinitely many solutions in polynomials $P(X)$ over ${\mathbb F}_q[T]$ of
degree at most $n$.
We denote by
$w_n^* (\xi)$ the supremum of the real numbers $w^*$ for which
$$
0 < |\xi - \alpha| < H(\alpha)^{-w^*-1}
$$
has infinitely many solutions in algebraic power series $\alpha$ in ${\mathbb F}_q ((T^{-1}))$ of
degree at most $n$.
\end{definition}
An important point in the definition of $w_n^*$ is that we require that the approximants $\alpha$ lie
in ${\mathbb F}_q ((T^{-1}))$. In the existing literature, it is not always clearly specified
whether the algebraic approximants are
taken in $C_{\infty}$ or in ${\mathbb F}_q ((T^{-1}))$. To take this into account, we
introduce the following exponents of approximation, where we use the superscript ${}^@$
to refer to the field $C_{\infty}$.
\begin{definition}
\label{Def:1.3}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$. Let $n \ge 1$ be an integer.
We denote by $w_n^@ (\xi)$ the supremum of the real numbers $w^@$ for which
$$
0 < |\xi - \alpha| < H(\alpha)^{-w^@-1}
$$
has infinitely many solutions in algebraic power series $\alpha$ in $C_{\infty}$ of
degree at most~$n$.
\end{definition}
Clearly, we have $w_n^@ (\xi) \ge w_n^* (\xi)$ for every $n \ge 1$ and every $\xi$ in ${\mathbb F}_q ((T^{-1}))$.
The first aim of this paper is to establish that the functions $w_n^*$ and $w_n^@$ coincide.
\begin{theorem}
\label{Th:2.0}
For any $\xi$ in ${\mathbb F}_q ((T^{-1}))$ and any integer $n \ge 1$, we have
$$
w_n^* (\xi) = w_n^@ (\xi).
$$
\end{theorem}
Theorem \ref{Th:2.0} is not surprising, since it seems
to be very unlikely that a power series in ${\mathbb F}_q ((T^{-1}))$
could be better approximated by
algebraic power series in $C_{\infty} \setminus {\mathbb F}_q ((T^{-1}))$ than by
algebraic power series in ${\mathbb F}_q ((T^{-1}))$. Difficulties arise because of the existence
of polynomials over ${\mathbb F}_q[T]$ which are
not separable and of the lack of a Rolle Lemma, which is a key ingredient
for the proof of the analogous result for
the classifications of real and $p$-adic numbers.
Exactly as Mahler and Koksma did, we divide the set of power series in ${\mathbb F}_q ((T^{-1}))$ into
classes $A$, $S$, $T$, $U$, $A^*$, $S^*$, $T^*$, and $U^*$, by using the exponents
of approximation $w_n$ and $w_n^*$.
It is convenient to keep
the same terminology and to use $S$-numbers, etc., although we are concerned with power series
and not with `numbers'. This has been done by Bundschuh \cite{Bund78}, who gave some explicit
examples of $U$-numbers.
Ooto \cite[p. 145]{Oo17} observed that, by the
currently known results (with $w_n^@$ used instead of $w_n^*$ in the definitions
of the classes), the sets of $A$-numbers and of $A^*$-numbers
coincide, as do the sets of $U$-numbers and of $U^*$-numbers. Furthermore, an $S$-number is an $S^*$-number,
while a $T^*$-number is a $T$-number. However, it is not known whether the sets
of $S$-numbers (\resp $T$-numbers) and of $S^*$-numbers (\resp $T^*$-numbers) coincide.
The second aim of this paper is to establish that these sets coincide, thereby
answering \cite[Problem 5.9]{Oo17}.
\begin{theorem}
\label{Th:Tnbs}
In the field ${\mathbb F}_q ((T^{-1}))$ the classes $A$, $S$, $T$, $U$ coincide
with the classes $A^*$, $S^*$, $T^*$, $U^*$, respectively.
\end{theorem}
In 2019 Ooto \cite{Oo19} proved the existence of $T^*$-numbers and, consequently, that of
$T$-numbers.
His proof is fundamentally different from that of the existence of real
$T$-numbers by Schmidt \cite{Schm70,Schm71}, whose
complicated construction rests on a result of Wirsing \cite{Wir71}
(alternatively, one can use a consequence of
Schmidt Subspace Theorem) on the approximation to real algebraic numbers by algebraic numbers of lower degree.
In the power series setting, no analogue of Schmidt Subspace Theorem, or even to Roth Theorem, holds:
Liouville's result is best possible, as was shown by Mahler \cite{Mah49}.
Theorem \ref{Th:Tnbs} is an immediate consequence of the following statement.
\begin{theorem}
\label{Th:wineq}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$ and $n$ be a positive integer. Then, we have
$$
w_n (\xi) - n + 1 \le w_n^* (\xi) \le w_n (\xi).
$$
\end{theorem}
Theorem \ref{Th:wineq} answers \cite[Problem 5.8]{Oo17}
and improves \cite[Proposition 5.6]{Oo17}, which asserts that,
for any positive integer $n$ and any $\xi$ in ${\mathbb F}_q ((T^{-1}))$, we have
$$
\frac{w_n (\xi)}{p^k} - n + \frac{2}{p^k} - 1 \le w_n^@ (\xi) \le w_n (\xi),
$$
where $k$ is the integer defined by $p^k \le n < p^{k+1}$.
Our next result is, in part, a metric statement. It provides a power series
analogue to classical statements already established in the real and in the $p$-adic settings.
Throughout this paper, `almost all'
always refer to the Haar measure on ${\mathbb F}_q ((T^{-1}))$.
\begin{theorem}
\label{Th:Metric}
For any positive integer $n$ and any $\xi$ in ${\mathbb F}_q ((T^{-1}))$ not algebraic of degree $\le n$, the equality
$w_n^* (\xi) = n$ holds as soon as $w_n (\xi) = n$.
Almost all power series $\xi$ in ${\mathbb F}_q ((T^{-1}))$ satisfy
$w_n^* (\xi) = n$ for every $n \ge 1$.
\end{theorem}
The first assertion of Theorem \ref{Th:Metric} was stated without proof, and with $w_n^@$ in place
of $w_n^*$, at the end of \cite{Gu96}.
It follows immediately from Theorem \ref{Th:wineq} combined with \eqref{eqWir} below, which
implies that $w_n^* (\xi) \ge n$ holds as soon as we have $w_n (\xi) = n$.
Combined with a metric result of Sprind\v zuk \cite{Spr69}, stating that almost all
power series $\xi$ in ${\mathbb F}_q ((T^{-1}))$ satisfy
$w_n (\xi) = n$ for every $n \ge 1$, this gives the second assertion.
Chen \cite{Chen18} established that, for any $n \ge 1$ and any real number $w \ge n$, the set
of power series $\xi$ in ${\mathbb F}_q((T^{-1}))$ such that $w_n (\xi) = w$ (\resp $w_n^@ (\xi) = w$) has Hausdorff
dimension $(n+1)/(w+1)$. In view of Theorem \ref{Th:2.0}, her result also holds for $w_n^*$ in place of $w_n^@$.
As observed by Ooto \cite[Lemma 5.5]{Oo17},
it follows quite easily from the theory of continued fractions that $w_1 (\xi) = w_1 (\xi^p)$ for
every $\xi$ in ${\mathbb F}_q ((T^{-1}))$.
This invariance property extends to the exponents $w_n$ and $w_n^*$.
\begin{theorem}
\label{Th:powerp}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$ and $n$ be a positive integer. Then, we have
$$
w_n (\xi) = w_n (\xi^p), \quad w_n^* (\xi) = w_n^* (\xi^p).
$$
\end{theorem}
It follows from Liouville's inequality (see e.g., \cite[Theorem 5.2]{Oo17}) that, for any $n \ge 1$ and any
algebraic power series $\xi$ in ${\mathbb F}_q((T^{-1}))$ of degree $d$, we have
$$
w_n^* (\xi) \le w_n (\xi) \le d-1.
$$
Mahler's example \cite{Mah49} of the root $T^{-1} + T^{-p} + T^{-p^2} + \ldots$
of $X^p - X + T^{-1}$ shows that there are algebraic power series $\xi$ in ${\mathbb F}_p((T^{-1}))$ of degree $p$ with
$w_1 (\xi) = p-1$.
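Indeed, in characteristic $p$, the series $\xi = T^{-1} + T^{-p} + T^{-p^2} + \ldots$ satisfies
$$
\xi^p = T^{-p} + T^{-p^2} + T^{-p^3} + \ldots = \xi - T^{-1},
$$
that is, $\xi$ is a root of $X^p - X + T^{-1}$.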
For further results on Diophantine exponents of approximation of algebraic power series, the
reader is directed to \cite{Chen13,Tha11,Tha13,Fir13}
and the references given therein.
The present paper is organized as follows.
Further exponents of approximation are defined in Section 2 and (in)equalities between
them are stated. Auxiliary results are gathered in Section 3, while the next two sections are devoted
to proofs. Several open questions are listed in Section 6.
Throughout this paper, the notation $\ll$, $\gg$ means that there is an implicit, absolute,
positive constant.
\section{Uniform exponents and two inequalities between exponents}
A difficulty occurring in the proof of the metric statement of Sprind\v zuk mentioned
in the previous section is caused by the fact that the polynomials which are very small
at a given power series could be inseparable. Or, said differently, by the possible existence
of power series $\xi$ for which $w_n (\xi)$ exceeds $w_n^{{\rm sep}} (\xi)$, where
$w_n^{{\rm sep}}$ is defined exactly as $w_n$, but with the extra requirement that
the polynomials $P(X)$ have to be separable. The next result shows that such power series
do not exist. Before stating it, we define several exponents of uniform approximation.
\begin{definition}
\label{Def:3.1}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$. Let $n \ge 1$ be an integer.
We denote by
${\widehat w}_n (\xi)$ (resp., ${\widehat w}_n^{{\rm sep}} (\xi)$)
the supremum of the real numbers ${\widehat w}$ for which there exists
an integer $H_0$ such that, for every $H > H_0$, there exists a polynomial $P(X)$
(resp., a separable polynomial $P(X)$) over ${\mathbb F}_q[T]$ of
degree at most $n$ and height at most $H$ such that
$$
0 < |P(\xi )| < H^{- {\widehat w}}.
$$
We denote by
${\widehat w}_n^* (\xi)$ the supremum of the real numbers ${\widehat w}^*$ for which there exists
an integer $H_0$ such that, for every $H > H_0$, there exists an
algebraic power series $\alpha$ in ${\mathbb F}_q ((T^{-1}))$ of
degree at most $n$ and height at most $H$ such that
$$
0 < |\xi - \alpha| < H(\alpha)^{-1} \, H^{-{\widehat w}^*}.
$$
We denote by
${\widehat w}_n^@ (\xi)$ the supremum of the real numbers ${\widehat w}^@$ for which there exists
an integer $H_0$ such that, for every $H > H_0$, there exists an
algebraic power series $\alpha$ in ${\mathbb F}_q ((T^{-1}))$ of
degree at most $n$ and height at most $H$ such that
$$
0 < |\xi - \alpha| < H(\alpha)^{-1} \, H^{-{\widehat w}^@}.
$$
\end{definition}
For any power series $\xi$ and any $n \ge 1$, we have clearly
$$
{\widehat w}_n^{{\rm sep}} (\xi) \le {\widehat w}_n (\xi)
\quad \hbox{and} \quad
{\widehat w}_n^* (\xi) \le {\widehat w}_n^@ (\xi).
$$
The first of these is an equality.
\begin{theorem}
\label{invsep}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$ and $n$ a positive integer. Then, we have
$$
w_n (\xi) = w_n^{{\rm sep}} (\xi), \quad {\widehat w}_n (\xi) = {\widehat w}_n^{{\rm sep}} (\xi),
$$
and
$$
w_n (\xi) = w_n(\xi^p), \quad {\widehat w}_n (\xi) = {\widehat w}_n(\xi^p).
$$
\end{theorem}
To prove Theorem \ref{Th:Metric},
we establish the following inequalities.
\begin{theorem}
\label{WirsUnif}
Let $n \ge 1$ be an integer. The lower bounds
\begin{equation} \label{eqWir}
w_n^* (\xi) \ge {\widehat w}_n^@ (\xi) \ge \frac{w_n (\xi)}{w_n (\xi) -n +1}
\end{equation}
and
$$
{w}_n^* (\xi) \ge \frac{{\widehat w}_n (\xi)}{{\widehat w}_n (\xi) -n +1}
$$
hold for any power series $\xi$ which is not
algebraic of degree $\le n$.
\end{theorem}
For completeness, we define two exponents of simultaneous approximation and
establish that they are invariant under the map $\xi \mapsto \xi^p$.
Below, the `fractional part' $\Vert \cdot \Vert$ is defined by
$$
\Bigl\Vert \sum_{n=N}^{+ \infty} \, a_n T^{-n} \Bigr\Vert = \Bigl| \sum_{n=1}^{+ \infty} \, a_n T^{-n} \Bigr|,
$$
for every power series $\xi = \sum_{n=N}^{+ \infty} \, a_n T^{-n}$ in ${\mathbb F}_q ((T^{-1}))$.
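For instance, for $\xi = T^{2} + 1 + T^{-3} + T^{-5}$ we have $\Vert \xi \Vert = |T^{-3} + T^{-5}| = q^{-3}$.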
\begin{definition}
\label{Def:lambda}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$. Let $n \ge 1$ be an integer.
We denote by
$\lambda_n (\xi)$ the supremum of the real numbers $\lambda$ for which
$$
0 < \max\{ \Vert R(T) \xi \Vert, \ldots , \Vert R(T) \xi^n \Vert \} < q^{-\lambda \deg(R)}
$$
has infinitely many solutions in polynomials $R(T)$ in ${\mathbb F}_q[T]$. We denote by
${\widehat \lambda}_n (\xi)$ the supremum of the real numbers ${\widehat \lambda}$ for which there exists
an integer $d_0$ such that, for every $d > d_0$, there exists a polynomial $R(T)$ in ${\mathbb F}_q[T]$ of
degree at most $d$ such that
$$
0 < \max\{ \Vert R(T) \xi \Vert, \ldots , \Vert R(T) \xi^n \Vert\} < q^{- {\widehat \lambda} d}.
$$
\end{definition}
Since ${\mathbb F}_q$ is a finite field, requiring `infinitely many solutions in polynomials $R(T)$ in ${\mathbb F}_q[T]$' is equivalent
to requiring `solutions in polynomials $R(T)$ in ${\mathbb F}_q[T]$ of arbitrarily large degree'.
\begin{proposition}
\label{lahla}
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$ and $n$ a positive integer. Then, we have
$$
\lambda_n(\xi)=\lambda_n(\xi^p), \quad {\widehat \lambda}_n(\xi)={\widehat \lambda}_n(\xi^p).
$$
\end{proposition}
\section{Auxiliary results}
\begin{lemma}[Krasner's lemma] \label{Kras}
Let $K$ be a complete field equipped with a non-Archimedean absolute value $| \cdot |$, and let $\overline{K}$ be an algebraic closure of $K$.
Let $\alpha$ in $\overline{K}$ be separable of degree $d$ at least equal to $2$ over $K$.
Let $\alpha = \alpha_1, \alpha_2, \ldots , \alpha_d$ be the conjugates of $\alpha$ over $K$.
For any $\beta$ in $\overline{K}$ satisfying
$$
|\alpha - \beta| < |\alpha_j - \beta|, \quad 2 \le j \le d,
$$
we have $K(\alpha) \subset K(\beta)$.
\end{lemma}
\begin{proof}
See e.g. \cite[Section 3.4.2]{BGR84}.
\end{proof}
\begin{lemma} \label{Gu2}
Let $P (X)$ be a polynomial in $C_{\infty} [X]$ of degree $n \ge 1$ and with leading coefficient $a_n$.
Let $\beta_1, \ldots , \beta_n$ be its roots in $C_{\infty}$.
Then, for any $\rho > 0$ and any $\xi$ in $C_{\infty}$, we have
$$
\prod_{i=1}^n \max\{|\xi - \beta_i|, \rho\} \ll \gg \frac{H(P)}{|a_n|}
$$
and
$$
\prod_{i=1}^n \min \{|\xi - \beta_i|, \rho\} \ll \gg \frac{|P(\xi)|}{H(P)}.
$$
\end{lemma}
\begin{proof}
See \cite{Gu96}.
\end{proof}
\begin{lemma}
\label{lem:estimate}
Let $P(X) = c_m(T) X^m + \ldots + c_1(T) X + c_0 (T)$ be a polynomial in ${\mathbb F}_q[T][X]$ of
positive degree. Let $\alpha_1,\ldots , \alpha_m$ be its roots in $C_\infty$.
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$.
Then, for any nonempty subset $S$ of $\{1, \ldots , m\}$, we have
$$
|c_m(T) \prod_{i\in S} (\xi-\alpha_i)| \le (\max(1,|\xi|))^m H(P).
$$
\end{lemma}
\begin{proof}
We may assume without loss of generality that $\alpha_1,\ldots ,\alpha_s$
are the zeros of $P(X)$ with $|\alpha_i|\ge 1$ and that $|\alpha_j|<1$ for $j>s$.
Let $S_0$ be the set of $i$ in $S$
with $|\alpha_i| > |\xi|$.
Then
$$
|c_m(T) \prod_{i\in S} ( \xi-\alpha_i)|
\le |c_m(T)|\prod_{i\in S_0} |\alpha_i| |\xi|^{|S|-|S_0|}\le (\max(1,|\xi|))^m |c_m(T)\alpha_1\cdots \alpha_s|.
$$
Now
by construction $|\alpha_1\cdots \alpha_s| > |\alpha_{i_1}\cdots \alpha_{i_s}|$ whenever
$i_1<i_2<\cdots <i_s$ and $i_s\neq s$. Thus,
$$
|c_m(T)\alpha_1\cdots \alpha_s| =
\Bigl|c_m(T) \sum_{i_1<\cdots <i_s} \alpha_{i_1}\cdots \alpha_{i_s} \Bigr| = |c_{m-s}(T)|,
$$
and so the result follows.
\end{proof}
\section{Proof of Theorem \ref{invsep}}
We establish, in this order, that the exponents $w_n$ and $w_n^{{\rm sep}}$ coincide, that
$w_n$ and ${\widehat w}_n$ are invariant under the map $\xi \mapsto \xi^p$, and, finally,
that the exponents ${\widehat w}_n$ and ${\widehat w}_n^{{\rm sep}}$ coincide.
A common key tool for the proofs given in this section is the notion of Cartier operator.
For a positive integer $j$, let $\Lambda_0,\ldots, \Lambda_{p^j-1}:{{\mathbb F}}_q((T^{-1}))\to {{\mathbb F}}_q((T^{-1}))$
be the operators uniquely defined by
$$
G(T^{-1}) = \sum_{i=0}^{p^j-1} T^{i} \Lambda_i(G(T^{-1}))^{p^j},
$$
for $G(T^{-1})$ in ${{\mathbb F}}_q((T^{-1}))$.
Observe that $\Lambda_i(A+B^{p^j} C) = \Lambda_i(A) + B\Lambda_i(C)$
for $A,B,C$ in ${{\mathbb F}}_q((T^{-1}))$.
Note also that, for $i=0, \ldots , p^j - 1$, we have
$$
\Lambda_i (T^{-p^j + i}) = T^{-1}, \quad \Lambda_i (T^{p^j + i}) = T.
$$
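As a toy illustration, take $p = q = 2$ and $j = 1$, so that $p^j = 2$, and let $G = T^{-1} + T^{-2}$. Then $\Lambda_0 (G) = \Lambda_1 (G) = T^{-1}$, and indeed
$$
\Lambda_0(G)^2 + T \, \Lambda_1(G)^2 = T^{-2} + T^{-1} = G.
$$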
\begin{proof}[$\bullet$ Proof of the equality $w_n = w_n^{\rm sep}$.]
The following lemma implies that the exponents $w_n$ and $w_n^{{\rm sep}}$ coincide.
\begin{lemma} \label{CartOp}
Let $n$ be a positive integer and $\xi$ in ${\mathbb F}_q ((T^{-1}))$.
Let $w \ge 1$ be a real number and $P (X)$ be in ${{\mathbb F}}_q[T] [X]$ of degree at most $n$ such that $0 < |P(\xi)| \le H(P)^{-w}$.
Then, there exists a separable polynomial $Q(X)$ in ${{\mathbb F}}_q[T] [X]$ of degree at most $n$ such that
$0 < |Q(\xi)| \le H(Q)^{-w}$.
\end{lemma}
\begin{proof}
We start with a polynomial $P(X)$ in ${{\mathbb F}}_q [T] [X]$
of degree at most $n$ with
$$
0< |P(\xi)| = H(P)^{-w}.
$$
Write $P (X) = \sum_{i=0}^n Q_{i}(T) X^i$.
Let $d$ denote the greatest common divisor of all $i$ such that $Q_{i}$ is nonzero.
Let $j$ be the nonnegative integer such that $p^j$ divides $d$ but $p^{j+1}$ does not.
If $j=0$, then $P(X)$ is separable, so we can assume $j \ge 1$.
Thus we may write
$$
P (X) = \sum_{i\le n/p^j} Q_{p^j i}(T) X^{p^j i}.
$$
Then,
$$
P (\xi) = \sum_{i\le n/p^j} Q_{ p^j i}(T) \xi^{p^j i},
$$
so, for $0\le s<p^j$, we have
$$
\Lambda_s (P(\xi)) = \sum_{i\le n/p^j} \Lambda_s (Q_{p^j i}(T)) \xi^i.
$$
Set $n'=\lfloor n/p^j \rfloor$.
Notice that
$$
G_{s}(X):=\sum_{i\le n'} \Lambda_s (Q_{p^j i}(T)) X^i
$$
is of degree at most $n'$ and $G_s(\xi) =\Lambda_s (P(\xi))$.
Thus,
$$
P(\xi) = \sum_{s=0}^{p^j-1} T^{s} G_{s}(\xi)^{p^j}.
$$
Since the $\nu(T^{s})$ are pairwise distinct mod $p^j$ for $s=0,\ldots ,p^j-1$,
we see that the $\nu( T^{s} G_{s}(\xi)^{p^j})$
are pairwise distinct as $s$ ranges from $0$ to $p^j-1$.
Let
$$
v = \min_{0 \le s \le p^j - 1} \{-s + p^j \nu(G_{s}(\xi))\}.
$$
Then
$\nu(G_{i}(\xi)) \ge v /p^j$ for $i=0,\ldots ,p^j-1$ and there exists one $s$ such that
$\nu(G_s(\xi)) = v/p^j +s/p^j$.
For this particular $s$ we must have
$$
0 < | G_{s}(\xi) | = q^{-v /p^j - s/p^j} \le q^{-v /p^j } = |P (\xi) |^{1/p^j}.
$$
Also, by construction, if $B(T)$ is a polynomial of degree $\ell$, then $\Lambda_s (B)$
has degree at most $\ell/p^j$ and so $H(G_{s}) \le H(P)^{1/p^{j}}$. Thus
$$
0 < | G_{s}(\xi) | = | P (\xi) |^{1/p^j} \le H(P)^{- w /p^j}
\le H(G_{s})^{- w}.
$$
If $G_s (X)$ is inseparable, then one repeats the argument to obtain a nonzero
polynomial of lesser degree small at $\xi$. After at most $\log_p n$ steps we will get a separable polynomial.
This proves the lemma.
\end{proof}
\end{proof}
\begin{proof}[$\bullet$ Proof that $w_n$ is invariant under the map $\xi \mapsto \xi^p$.]
Let $\xi$ and $n$ be as in the theorem.
Let $P(X)$ be a polynomial of degree at most $n$ in ${\mathbb F}_q[T] [X]$ which does not vanish at $\xi$, and define $w$ by
$|P(\xi)| = H(P)^{-w}$. Write
$$
P(X) = a_n(T) X^n + \ldots + a_1 (T) X + a_0 (T)
$$
and
$$
Q(X) = a_n(T^p) X^n + \ldots + a_1 (T^p) X + a_0 (T^p).
$$
Then, we have $P(\xi)^p = Q(\xi^p)$, $H(Q) = H(P)^p$, and
$$
|Q(\xi^p)|= H(P)^{-pw} = H(Q)^{-w}.
$$
This shows that $w_n(\xi^p) \ge w_n(\xi)$.
The reverse inequality is more difficult and rests on Lemma \ref{CartOp}.
Let $P(X)$ be a polynomial of degree at most $n$ in ${\mathbb F}_q[T] [X]$
which does not vanish at $\xi^p$, and define $w$ by
$|P(\xi^p)| = H(P)^{-w}$. Write
$$
P(X) = a_n(T) X^n + \ldots + a_1 (T) X + a_0 (T)
$$
and
$$
Q(X) = a_n(T) X^{pn} + \ldots + a_1 (T) X^p + a_0 (T).
$$
Then, we have $P(\xi^p) = Q(\xi)$ and $|Q(\xi)| = H(Q)^{-w}$.
Obviously, $Q(X)$ is not separable. It follows from Lemma \ref{CartOp} that there
exists a polynomial $R(X)$, of degree at most $n$,
such that
$$
0 < |R(\xi)| \le H(R)^{-w}.
$$
This shows that $w_n(\xi) \ge w_n(\xi^p)$ and completes the proof of this part of the theorem.
\end{proof}
In the next proofs, we make use of the following convention.
Given a nonzero polynomial $P(X)=a_0+a_1X+\cdots +a_m X^m$ in $\mathbb{F}_q[T][X]$
and $i=0, \ldots , p-1$, we let
$\Lambda_i (P)(X)$ denote the polynomial
$$
\sum_{j=0}^m \Lambda_i (a_j) X^j.
$$
\begin{proof}[$\bullet$ Proof that ${\widehat w}_n$ is invariant under the map $\xi \mapsto \xi^p$.]
Let $\varepsilon>0$.
By assumption, for any sufficiently large $H$, there is some polynomial
$P(X)=a_0+a_1 X+ \cdots +a_m X^m$ of degree $m \le n$ and height at most $H^{1/p}$ such that
$$
0<|P(\xi)|<H^{-{\widehat w}_n(\xi)/p+\varepsilon/p}.
$$
Set $Q(X)=a_0^p+a_1^p X+\cdots + a_m^p X^m$.
Then $Q(X)$ has degree at most $n$ and height at most $H$, and, by construction, it satisfies
$$
0<|Q(\xi^p)|<H^{-{\widehat w}_n(\xi)+\varepsilon}.
$$
It follows that ${\widehat w}_n(\xi^p)\ge {\widehat w}_n(\xi) - \varepsilon$ for every $\varepsilon>0$ and so we get the inequality
$$
{\widehat w}_n(\xi^p)\ge {\widehat w}_n(\xi).
$$
We now show the reverse inequality.
By assumption, for any sufficiently large $H$, there is some polynomial
$Q(X)=a_0+a_1 X +\cdots + a_m X^m$
of degree $m \le n$ and height at most $H^p$ such that
$$
0<|Q(\xi^p)| < (H^p)^{-{\widehat w}_n(\xi^p)+\varepsilon}.
$$
Then for each $i$ in $\{0,\ldots ,p-1\}$ we define
$$
Q_i(X)=\sum_{j=0}^m \Lambda_i (a_j) X^j.
$$
By construction, we have $H(Q_i) \le H$ for $i=0,\ldots , p-1$.
Also we have $Q_i(\xi) = \Lambda_i (Q)(\xi)$ for $i=0,\ldots , p-1$ and so
$Q(\xi^p) = \sum_{i=0}^{p-1} T^i Q_i(\xi)^p$. Since the valuations of these summands are pairwise distinct, we have
$$
|Q_i(\xi)|< H^{-{\widehat w}_n(\xi^p) +\varepsilon},
$$
for $i=0,\ldots , p-1$.
Since $Q(X)$ is nonzero, there is some $k$ in $\{0,\ldots ,p-1\}$ such that $Q_k(\xi)\neq 0$ and we see
$$
0<|Q_k(\xi)|<H^{-{\widehat w}_n(\xi^p) +\varepsilon}.
$$
It follows that ${\widehat w}_n(\xi)\ge {\widehat w}_n(\xi^p)$, giving us the reverse inequality.
\end{proof}
The next lemma is used in the proof that the uniform
exponents ${\widehat w}_n$ and ${\widehat w}_n^{\rm sep}$ coincide.
We let $\log_p$ denote the logarithm in base $p$.
\begin{lemma}
\label{lem:pr}
Let $\xi\in \mathbb{F}_q ((T^{-1}))$ and let $P(X)=c_0 + c_1 X+\cdots + c_n X^{n}\in \mathbb{F}_q[T][X]$
be a non-constant polynomial that is a product of irreducible inseparable polynomials such that $P(\xi)$ is nonzero.
Then, there exist an integer $r$ with $0 \le r \le \log_p(n)$ and a polynomial $P_0(X)$ such that the following hold:
\begin{enumerate}
\item $P_0(X)$ has a non-trivial separable factor;
\item $p^r {\rm deg}(P_0) \le {\rm deg}(P)$;
\item $0<|P_0(\xi)|^{p^r} < q^{p^r-1} |P(\xi)|$;
\item $H(P_0)^{p^r} \le H(P)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that the lemma is false, and let $n$ be the smallest degree for which it fails.
Since $P(X)$ is a product of irreducible inseparable polynomials, we have $n=pm$ for some $m$,
and $P(X) = Q(X^p)$ for some polynomial $Q$ of degree $m$. Observe that
$$
P(\xi) = \sum_{j=0}^{p-1} T^j (\Lambda_j (Q)(\xi))^p.
$$
For $j = 0, \ldots , p-1$ such that $\Lambda_j(Q)(\xi)$ is nonzero, write
$$
\hbox{$\Lambda_j (Q)(\xi) = c_j T^{-a_j} +$ larger powers of $T^{-1}$},
$$
with $c_j\neq 0$.
Then, we have
$$
\hbox{$T^j (\Lambda_j (Q)(\xi))^p = c_j^p T^{-pa_j+j} +$ larger powers of $T^{-1}$}.
$$
Now there must be some unique $j_0$ such that $pa_{j_0}-j_0$
is minimal among all $pa_j-j$ (and it must be finite), thus
$$
|P(\xi)| = q^{-(pa_{j_0}-j_0)}.
$$
Then, the polynomial $A(X):= \Lambda_{j_0} (Q)$ has the property that
$$
|A(\xi)| = q^{-a_{j_0}}.
$$
To summarize, we have:
\begin{enumerate}
\item[(a)] $p{\rm deg}(A) \le {\rm deg}(P)$;
\item[(b)] $0<|A(\xi)|^{p} \le |P(\xi)|$;
\item[(c)] $H(A)^{p} \le H(P)$.
\end{enumerate}
By construction, ${\rm deg}(A)<n$ and so, by minimality, there are some $r\le \log_p(m)$
and a polynomial $P_1(X)$ such that:
\begin{enumerate}
\item[(d)] $p^r {\rm deg}(P_1) \le {\rm deg}(A)$;
\item[(e)] $0<|P_1(\xi)|^{p^r} \le |A(\xi)|$;
\item[(f)] $H(P_1)^{p^r} \le H(A)$;
\item[(g)] $P_1$ has a non-trivial separable factor.
\end{enumerate}
Then by construction
$p^{r+1}{\rm deg}(P_1)\le {\rm deg}(P)$, $0<|P_1(\xi)|^{p^{r+1}} < q^{p^{r+1}-1} |P(\xi)|$
and $H(P_1)^{p^{r+1}}\le H(P)$. Furthermore, $r+1\le 1+\log_p(m)\le \log_p(n)$ and so we get the desired result.
\end{proof}
\begin{proof}[$\bullet$ Proof of the equality ${\widehat w}_n={\widehat w}_n^{\rm sep}$.]
It is clear that ${\widehat w}_n(\xi)\ge {\widehat w}_n^{\rm sep}(\xi)$.
We now show the reverse inequality. Let $\varepsilon>0$. Then there is some $H_0$
such that for every $H>H_0$ there is a polynomial $P(X)$ of degree at most $n$ and height at most $H$ such that
$$
0<|P(\xi)|<H^{-{\widehat w}_n(\xi) +\varepsilon}.
$$
Let $d$ be the smallest integer in $\{0, \ldots , n\}$ for which there is some positive constant $C$
such that for every $H>H_0$ there is a polynomial $A(X)B(X)$ of degree at most $n$ and height at most $H$, with $A(X)$ separable
and $B(X)$ a polynomial of degree at most $d$ that is a product of irreducible inseparable polynomials, satisfying
$$
0<|A(\xi)B(\xi)| < C\cdot H^{-{\widehat w}_n(\xi) +\varepsilon}.
$$
Suppose that $d$ is positive. Since the polynomial $B(X)$
is a product of inseparable irreducible polynomials, we see that $p$ divides $d$.
Let $H>H_0$. Then there is a fixed constant $C>0$ that does not depend on $H$
such that there are polynomials $A(X)$ and $B(X)$ with $A$ separable and $B$
a polynomial of degree at most $d$ that is a product of irreducible inseparable polynomials with
$$
0<|A(\xi)B(\xi)| < C\cdot H^{-{\widehat w}_n(\xi) +\varepsilon}.
$$
Then by Lemma \ref{lem:pr} there are some $r\le \log_p(n)$
and a polynomial $B_0(X)$ with a non-trivial separable factor such that
$p^r {\rm deg}(B_0) \le {\rm deg}(B)$ and
$$
0<|B_0(\xi)|^{p^r} < q^{p^r-1} |B(\xi)|
$$
and $H(B_0)^{p^r} \le H(B)$. Thus, the polynomial
$$
A(X)B_0(X)^{p^r}
$$
has degree at most $n$ and height at most $H$ and
$$
0<|A(\xi)B_0(\xi)^{p^r}| < C q^{p^r-1} H^{-{\widehat w}_n(\xi) +\varepsilon}.
$$
Since $B_0(X)$ has a non-trivial separable factor, we can write $B_0(X)=C(X)D(X)$ with $C(X)$ non-constant and separable
and $D(X)$ a product of irreducible inseparable polynomials.
Then we have
$$
{\rm deg}(D(X)^{p^r})\le {\rm deg}(B)-p^r <d.
$$
But this contradicts the minimality of $d$. Hence $d$ must be zero, and we get
${\widehat w}_n^{\rm sep}(\xi)\ge {\widehat w}_n(\xi)-\varepsilon$. Since $\varepsilon>0$ is arbitrary, we get the desired result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{lahla}]
Observe that, for $j \ge 1$,
the equality $(R(T) \xi^j)^p = R(T^p)\xi^{p j}$ immediately yields $\lambda_n (\xi^p)\ge \lambda_n(\xi)$
and ${\widehat \lambda}_n (\xi^p)\ge {\widehat \lambda}_n(\xi)$, for $n \ge 1$.
Take $\lambda$ with $0 < \lambda < \lambda_n(\xi^p)$.
Then, there is an infinite set $\mathcal{S}$ of polynomials $R(T)$ such that
$$
0 < \max\{ \Vert R(T) \xi^p \Vert, \ldots , \Vert R(T) \xi^{pn} \Vert \} < q^{-\lambda \deg(R)}.
$$
By replacing $\mathcal{S}$ with a well-chosen infinite subset,
we may assume that there is a fixed $j$ in $\{0,1,\ldots, p-1\}$
such that the degree of every polynomial in $\mathcal{S}$ is congruent to $j$ modulo $p$.
Then for $R(T)$ in $\mathcal{S}$ and $i$ in $\{1,\ldots ,n\}$, we apply the $j$-th Cartier operator $\Lambda_j $ to
$R(T)\xi^{pi}$ and we obtain $\Vert \Lambda_j (R(T))\, \xi^{i}\Vert < q^{-\lambda \deg(R)/p}$.
We let $Q(T)=\Lambda_j (R(T))$. Then the degree of $Q$ is $(\deg(R)-j)/p$ and so we see
$$
\left\Vert Q(T) \xi^i\right\Vert < q^{-\lambda (p\deg(Q) +j)/p} \le q^{-\lambda \deg(Q)},
$$
for $i=1,\ldots ,n$. Since the degrees of the elements of
$\Lambda_j (\mathcal{S})$ are arbitrarily large, we deduce that $\lambda_n(\xi) \ge \lambda$.
Consequently, we have established that $\lambda_n(\xi) \ge \lambda_n (\xi^p)$.
Take ${\widehat \lambda}$ with $0 < {\widehat \lambda} < {\widehat \lambda}_n(\xi^p)$.
For any sufficiently large integer $d$, there is a polynomial $R(T)$ of degree at most $p d$ such that
$$
0 < \max\{ \Vert R(T) \xi^p \Vert, \ldots , \Vert R(T) \xi^{pn} \Vert \} < q^{-{\widehat \lambda} p d}.
$$
Let $j$ be in $\{0,1,\ldots, p-1\}$
such that the degree of $R(T)$ is congruent to $j$ modulo $p$. Apply the $j$-th Cartier operator to
$R(T)\xi^{pi}$ and let $Q(T)=\Lambda_j (R(T))$. Then the degree of $Q$ is at most equal to $d$ and so we see
$$
\left\Vert Q(T) \xi^i\right\Vert < q^{- {\widehat \lambda} p d /p} \le q^{- {\widehat \lambda} d},
$$
for $i=1,\ldots ,n$. This shows that ${\widehat \lambda}_n(\xi) \ge {\widehat \lambda}$. Thus, we obtain ${\widehat \lambda}_n (\xi) \ge {\widehat \lambda}_n (\xi^p)$.
\end{proof}
\section{Proofs of Theorems \ref{Th:2.0}, \ref{Th:wineq}, \ref{Th:powerp}, and \ref{WirsUnif}}
By adapting the proof of Wirsing \cite{Wir61} to the
power series setting, Guntermann \cite[Satz 1]{Gu96} established that, for every $n \ge 1$
and every $\xi$ in ${\mathbb F}_q ((T^{-1}))$ not algebraic of degree $\le n$, we have
$$
w_n^@ (\xi) \ge \frac{n+1}{2}.
$$
Actually, it is easily seen that instead of starting her proof with polynomials given by Mahler's analogue \cite{Mah41,Spr69}
of Minkowski's theorem, she could have, like Wirsing, started with polynomials $P(X)$ satisfying
$$
0 < |P(\xi)| < H(P)^{-w_n (\xi) + \eps},
$$
where $\eps$ is an arbitrarily small positive real number. By doing this, one gets the stronger assertion
\begin{equation} \label{GuW}
w_n^@ (\xi) \ge \frac{w_n (\xi) +1}{2},
\end{equation}
which is crucial for proving Theorem \ref{Th:2.0}.
Note that Guntermann \cite{Gu96} did not obtain any lower bound for $w_n^* (\xi)$, except when $n=2$.
\begin{proof}[Proof of Theorem \ref{Th:2.0}]
Set $w = w_n (\xi)$, $w^@ = w_n^@ (\xi)$, and $w^* = w_n^* (\xi)$.
Suppose that $w^@ > w^*$ and pick $\varepsilon$ in $(0, 1/3)$ such that $w^@ > w^* + 2\varepsilon$.
Then, there are infinitely many $\alpha$ in $C_{\infty}$ algebraic of degree at most $n$ such that
$$
|\xi - \alpha| < H(\alpha)^{-1-w^@ + \varepsilon}.
$$
Let $P_\alpha (X)$ denote the minimal polynomial of $\alpha$ over ${\mathbb F}_q[T]$. Then $H(P_\alpha) = H(\alpha)$.
We let $\alpha=\alpha_1,\ldots ,\alpha_m$ denote the roots of $P_\alpha(X)$
(with multiplicities), where $m={\rm deg}(P_\alpha)\le n$.
We may assume that $|\xi-\alpha_1|\le \cdots \le |\xi-\alpha_m|$. Let $r$ be the largest integer such that
$$
|\xi-\alpha_1|=\cdots = |\xi-\alpha_r|.
$$
If $r=1$ for infinitely many $\alpha$ as above, then, the roots being counted with multiplicities, $\alpha_1$ is a simple root of $P_\alpha(X)$, so that $P_\alpha(X)$ is separable,
and we conclude from Krasner's Lemma \ref{Kras} that $\alpha_1$ lies in $\mathbb{F}_q((T^{-1}))$.
For $H(\alpha)$ large enough, we then get
$$
H(\alpha)^{-1-w^*-\varepsilon} < |\xi - \alpha| < H(\alpha)^{-1-w^@ + \varepsilon},
$$
thus $w^@ \le w^* + 2\varepsilon$, a contradiction.
Thus, we have $r \ge 2$.
Observe that
$|P_\alpha(\xi)| > H(\alpha)^{-w -\varepsilon}$ if $H(P_\alpha)$ is large enough.
On the other hand, with $c_\alpha (T)$ being the leading coefficient of $P_\alpha (X)$, we get
\begin{align*}
|P_\alpha(\xi)| &= \Bigl| c_\alpha(T) (\xi-\alpha_1)\cdots (\xi -\alpha_r) \prod_{i=r+1}^m (\xi-\alpha_i) \Bigr| \\
& = | \xi-\alpha|^r \cdot \Bigl|c_\alpha(T) \prod_{i=r+1}^{m} (\xi-\alpha_i) \Bigr| \\
&< (\max\{1,|\xi|\})^n \cdot H(\alpha)^{1-r (1+w^@-\varepsilon)},
\end{align*}
where the last step follows from Lemma \ref{lem:estimate}.
By \eqref{GuW} we have $w^@ \ge (w+1)/2$, thus we get
$$
H(\alpha)^{-w -\varepsilon} \ll |P_\alpha(\xi)| \ll H(\alpha)^{1-r (1+w^@-\varepsilon)}
\ll H(\alpha)^{1-r(1+(w+1)/2) + r\varepsilon}.
$$
This then gives
$$
w +\varepsilon \ge -1 + r+ \frac{r(w+1)}{2} - r\varepsilon,
$$
and since $r\ge 2$ we deduce
$$
w +\varepsilon \ge - 1+w +1 +r - r\varepsilon,
$$
which is absurd. Since $\varepsilon$ can be taken arbitrarily small, we deduce that $w_n^@ (\xi) \le w_n^* (\xi)$.
As the reverse inequality immediately follows from the definitions of $w_n^@$ and $w_n^*$, the proof
is complete.
\end{proof}
We are ready to complete the proof of Theorem \ref{Th:wineq}.
\begin{proof}[Proof of Theorem \ref{Th:wineq}]
Let $\xi$ be in ${\mathbb F}_q ((T^{-1}))$ and $n$ be a positive integer. The
inequality $w_n^* (\xi) \le w_n (\xi)$ is clear.
Let $\eps$ be a positive real number. By Lemma \ref{CartOp}, there exist separable
polynomials $P(X)$ in ${\mathbb F}_q[T] [X]$ of degree at most $n$ and of arbitrarily large height such that
$$
0 < |P(\xi)| < H(P)^{- w_n (\xi) +\eps}.
$$
Then, the (classical) argument given at the beginning
of the proof of \cite[Lemma 5.4]{Oo17} yields the existence of a root $\alpha$ of $P(X)$ such that
$$
0 < |\xi - \alpha| \le |P(\xi)| \, H(P)^{n - 2}.
$$
Thus, we get the inequality
$$
w_n^@ (\xi) \ge w_n (\xi) - n + 1,
$$
and we conclude by applying Theorem \ref{Th:2.0} which asserts that $w_n^@ (\xi) = w_n^* (\xi)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Th:powerp}]
Let $\xi$ and $n$ be as in the theorem.
In view of Theorem \ref{invsep}, it only remains for us to prove that $w_n^* (\xi) = w_n^* (\xi^p)$.
Let $\alpha$ be in ${\mathbb F}_q((T^{-1}))$ algebraic of degree at most $n$ and define $w$
by $|\xi - \alpha| = H(\alpha)^{-w-1}$. Then, we have
$$
|\xi^p - \alpha^p| = H(\alpha)^{-p(w+1)} \quad \hbox{and} \quad H(\alpha^p) \le H(\alpha).
$$
Consequently, we get $w_n^* (\xi^p) \ge w_n^* (\xi)$.
Now, we prove the reverse inequality. Set $w = w_n^* (\xi^p)$.
Let $\eps > 0$ and $\alpha$ be in ${\mathbb F}_q((T^{-1}))$ algebraic of degree at most $n$
such that $|\xi^p - \alpha| < H(\alpha)^{-w-1 + \eps}$.
We look at $\xi^p$ as an element of the field
${\mathbb F}_q((U^{-1}))$, where $U = T^p$. Note that $\alpha$ is in the algebraic closure of ${\mathbb F}_q((U^{-1}))$.
Consequently, in the field ${\mathbb F}_q((U^{-1}))$, we have $w_n^@ (\xi^p) = w$.
By Theorem \ref{Th:2.0}, we obtain that, in the field ${\mathbb F}_q((U^{-1}))$, we have $w_n^* (\xi^p) = w$, thus
there are $\beta$ in ${\mathbb F}_q((U^{-1}))$ algebraic of degree at most $n$ of arbitrarily large
height such that $|\xi^p - \beta| < H(\beta)^{-w-1 + \eps}$. But these $\beta$ are of the form $\beta = \gamma^p$,
with $\gamma$ in ${\mathbb F}_q((T^{-1}))$, so we get
$$
|\xi - \gamma| < H(\gamma)^{-w-1 + \eps},
$$
and we deduce that $w_n^* (\xi) \ge w_n^* (\xi^p)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{WirsUnif}]
We obtain \eqref{eqWir} by adapting Wirsing's
argument \cite{Wir61}.
Let $n \ge 2$ be an integer and let $\xi$ be
a power series which is either transcendental,
or algebraic of degree $> n$.
Let $\eps > 0$ and set $w = w_n(\xi) (1 + \eps)^2$.
Let $i_1, \ldots, i_n$ be distinct integers in $\{0, \ldots , n\}$ such that
$\nu (\xi) \not= i_j$ for $j = 1, \ldots , n$.
By Mahler's analog \cite{Mah41,Spr69} of Minkowski's theorem,
there exist a
constant $c$ and, for any positive real number $H$,
a nonzero polynomial
$P(X)$ of degree at most $n$ such that
$$
|P(\xi)| \le H^{-w}, \quad |P(T^{i_1})|, \ldots, |P(T^{i_{n-1}})| \le H, \quad
{\rm and} \quad |P(T^{i_n})| \le c H^{w-n+1}.
$$
The definitions of $w_n(\xi)$ and $w$ show
that $H(P) \gg H^{1 + \eps}$.
It follows from Lemma \ref{Gu2}
that $P(X)$ has some root in a small neighbourhood of each of the points
$\xi$, $T^{i_1}, \ldots, T^{i_{n-1}}$. Denoting by $\alpha$
the closest root to $\xi$, we get
$$
|\xi - \alpha| \gg \ll \frac{|P(\xi)|}{H(P)} \ll H(P)^{-1} \,
(H^{w-n+1})^{-w/(w-n+1)} \quad
$$
and
$$
H(P) \ll H^{w-n+1}.
$$
Since all of this holds for any sufficiently large $H$, we deduce that
${\widehat w}_n^@ (\xi) \ge
w/(w-n+1)$. Selecting now $\eps$ arbitrarily close to $0$, we obtain
the first assertion.
Now, we establish the second assertion.
Since ${\widehat w}_n (\xi) \ge n$,
there is nothing to prove if $w_n^*(\xi) \ge n$.
Otherwise, let $A >2$ be a real number with $w_n^*(\xi) < A - 1 < n$.
Thus, we have $|\xi - \alpha| \ge H(\alpha)^{-A}$ for all algebraic power
series $\alpha$ of degree $\le n$ and sufficiently large height.
We make use of an idea of Bernik and Tishchenko; see also
\cite[Section 3.4]{BuLiv}. Let $\eps > 0$ be given.
We may assume that $|\xi| \le 1$.
Again, by Mahler's analog \cite{Mah41,Spr69} of Minkowski's theorem, there exist a
constant $c$ and, for any positive real number $H$,
a nonzero polynomial
$P(X) = a_n X^n + \ldots + a_1 X + a_0$ of degree at most $n$ such that
$$
|a_1|\le H^{1 + \eps}, \quad |a_2|, \ldots , |a_n| \le H, \quad
|P(\xi)| \le c H^{-n-\eps}.
$$
If $P(X)$ is a product of irreducible inseparable factors, then $a_1 = 0$ and $H(P) \ll H$.
Assume now that $P(X)$ has a separable factor.
Let $\alpha$ in $C_\infty$ be the closest root of $P(X)$ to $\xi$.
If $|a_1| > H$, then we deduce from
$|n a_n \xi^{n-1} + \ldots + 2 a_2 \xi| \le H$ that
$|P'(\xi)| = |a_1| \gg H(P)$. Thus, we get
$$
|P(\xi)| \ge |\xi - \alpha| \cdot |P'(\xi)| \gg H(\alpha)^{1 - A}
$$
and
$$
|P(\xi)| \le H(P)^{-(n+ \eps)/(1 + \eps)},
$$
which also holds if $a_1 = 0$.
Consequently, if $A- 1 \le (n+ \eps)/(1 + \eps)$, that is, if
$$
\eps < \frac{n+1 - A}{A - 2},
$$
then we get a contradiction if $H$ is large enough.
We conclude that, for any $\eps < (n+1 - A)/(A-2)$ and any sufficiently large $H$,
there exists a polynomial $P(X)$ of height $\le H$
and degree $\le n$ satisfying
$|P(\xi)| \le H^{-n-\eps}$.
Consequently, we have
${\widehat w}_n (\xi) \ge n + \eps$, and thus
${\widehat w}_n (\xi) \ge n + (n+1 - A)/(A-2)$.
We obtain the desired inequality by letting $A$ tend to $1 + w_n^*(\xi)$.
\end{proof}
\section{Further problems}
Despite some effort, we did not succeed in solving the following problem.
\begin{problem}
Let $n$ be a positive integer and $\xi$ in ${\mathbb F}_q ((T^{-1}))$. Prove that
$$
{\widehat w}_n^* (\xi) = {\widehat w}_n^@(\xi) = {\widehat w}_n^* (\xi^p).
$$
\end{problem}
For $n \ge 2$, Ooto \cite{Oo17} proved the existence of $\xi$ in ${\mathbb F}_q ((T^{-1}))$ for which $w_n^* (\xi) < w_n (\xi)$.
His strategy, inspired by \cite{Bu12},
was to use continued fractions to construct power series $\xi$ with $w_2^* (\xi) < w_2 (\xi)$
and $w_2^* (\xi)$ sufficiently large to ensure that, for small (in terms of $w_2^* (\xi)$) values of $n$, we have
$$
w_2^* (\xi) = w_3^* (\xi) = \ldots = w_n^* (\xi), \quad
w_2 (\xi) = w_3 (\xi) = \ldots = w_n (\xi).
$$
Very recently, Ayadi and Ooto \cite{AyOo20} answered a question of Ooto \cite[Problem 2.2]{Oo18} by
proving, for given $n \ge 2$ and $q \ge 4$, the existence of algebraic power series
$\xi$ in ${\mathbb F}_q ((T^{-1}))$ for which $w_n^* (\xi) < w_n (\xi)$.
\begin{problem}
Do there exist power series $\xi$ in ${\mathbb F}_q ((T^{-1}))$ such that
$$
w_n^* (\xi) < w_n (\xi), \quad \hbox{for infinitely many $n$?}
$$
\end{problem}
The formulation of the next problem is close to that of \cite[Problem 2.4]{Oo18}.
\begin{problem}
Let $\xi$ be an algebraic power series in ${\mathbb F}_q ((T^{-1}))$ and $n$ a positive integer.
Is $w_1 (\xi)$ always rational? Are $w_n(\xi), w_n^* (\xi),$ and
$\lambda_n (\xi)$ always algebraic numbers?
\end{problem}
No results are known on uniform exponents of algebraic power series in ${\mathbb F}_q ((T^{-1}))$.
\begin{problem}
Let $\xi$ be an algebraic power series in ${\mathbb F}_q ((T^{-1}))$ and $n$ a positive integer. Do we have
$$
{\widehat w}_n (\xi) = {\widehat w}_n^* (\xi) = n ?
$$
\end{problem}
In the real case, there are many relations between the six exponents
$w_n$, $w_n^*$, $\lambda_n$, ${\widehat w}_n$, ${\widehat w}_n^*$, ${\widehat \lambda}_n$, see e.g.
the survey \cite{BuDurham}.
We believe that most of the proofs
can be adapted to the power series setting.
\section{Introduction}
In recent years, advances in GPU technology and machine learning libraries have enabled the trend towards deeper neural networks in Automatic Speech Recognition (ASR) systems.
End-to-end ASR systems transcribe speech features to letters or tokens without any intermediate representations.
There are two major techniques:
\begin{inparaenum}[1)]
\item Connectionist Temporal Classification (CTC~\cite{graves2006connectionist}) carries the concept of hidden Markov states over to end-to-end neural networks as training loss for sequence classification networks.
Neural networks trained with CTC loss calculate the posterior probability of each letter at a given time step in the input sequence.
\item Attention-based encoder-decoder architectures such as~\cite{chan2016listen}, are trained as auto-regressive sequence-generative models. The encoder transforms the input sequence into a latent representation; from this, the decoder generates the sentence transcription.
\end{inparaenum}
The hybrid CTC/attention architecture combines these two approaches in one single neural network~\cite{watanabe2017hybrid}.
Our work is motivated by the observation that adding a small amount of specially crafted noise to a sample given to a neural network can cause the neural network to wrongly classify its input~\cite{szegedy2013intriguing}.
From the standpoint of system security, those algorithms have implications on possible attack scenarios.
A news program or sound that was augmented with a barely noticeable noise can give hidden voice commands, e.g. to open the door, to the ASR system of a personal assistant~\cite{carlini2016hidden,carlini2018audio}.
From the perspective of ASR research, a network should be robust against such small perturbations that can change the transcription of an utterance;
its speech recognition capability shall relate more closely to what humans understand.
In speech recognition domain, working Audio Adversarial Examples (AAEs) were already demonstrated for CTC-based~\cite{carlini2018audio}, as well as for attention-based ASR systems~\cite{sun2019adversarial}.
The contribution of this work is a method for the generation of untargeted adversarial examples in the feature domain for the hybrid CTC/attention ASR system.
For this, we propose two novel algorithms that can be used to generate AAE for attention-based encoder-decoder architectures.
We then combine these with CTC-based AAEs to introduce an algorithm for joint CTC/attention AAE generation.
To further evaluate our methods and exploit the information within AAEs, the ASR network training is then augmented with generated AAEs.
Results indicate improved robustness of the model against adversarial examples, as well as a generally improved speech recognition performance by a moderate $10\%$ relative to the baseline model.
\section{Related Work}
\paragraph{Automatic Speech Recognition (ASR) Architecture.}
Our work builds on the hybrid CTC/attention ASR architecture as proposed and described in~\cite{watanabe2018espnet,watanabe2017hybrid}, using the location-aware attention mechanism~\cite{ChorowskiEtAl15}.
This framework combines the most two popular techniques in end-to-end ASR:
Connectionist Temporal Classification (CTC), as proposed in~\cite{graves2006connectionist}, and attention-based encoder-decoder architectures.
Attention-based sequence transcription was proposed in the field of machine language translation in~\cite{BahdanauEtAl14} and later applied to speech recognition in Listen-Attend-Spell~\cite{chan2016listen}.
Sentence transcription is performed with the help of an RNN language model (RNNLM) integrated into the decoding process using {shallow fusion}~\cite{GulcehreEtAl15}.
\paragraph{Audio Adversarial Examples (AAEs).}
Adversarial examples were originally proposed and developed in the image recognition field and have since been amply investigated~\cite{szegedy2013intriguing,kurakin2016adversarialphysical,kurakin2016adversarial}.
The best-known method for their generation is the Fast Gradient Sign Method (FGSM)~\cite{goodfellow2014explaining}.
Adversarial examples can be prone to label leaking~\cite{kurakin2016adversarial}, that is, the model has no difficulty finding the original class of the disguised sample, as the transformation from the original is ``simple and predictable''.
The implementation of AAEs in ASR systems has been proven to be more difficult than in image processing~\cite{cisse2017houdini}.
Some of them work irrespective of the architecture \cite{neekhara2019universal,vadillo2019universal,abdoli2019universal}.
However, these examples are crafted and tested using simplified architectures, either RNN or CNN.
They lack an attention mechanism, which is a relevant component of the framework used in our work.
Other works focus on making AAEs remain undetected by human subjects, e.g., by {psychoacoustic hiding}~\cite{schonherr2018adversarial,qin2019imperceptible}.
Carlini et al.~\cite{carlini2016hidden} demonstrated how to extract AAEs for the CTC-based DeepSpeech architecture~\cite{Hannun2014DeepSS} by applying the FGSM to CTC loss.
Hu et al. gives a general overview over adversarial attacks on ASR systems and possible defense mechanisms in~\cite{hu2019adversarial}.
In it, they observe that by treating the features matrix of the audio input as the AAE seed, it is possible to generate AAE with algorithms developed in the image processing field.
However, this leads to the incapacity of the AAE to be transformed back to audio format, as the feature extraction of log-mel f-bank features is lossy.
Some have proposed ways to overcome this problem~\cite{Andronic2020}.
AAEs for the sequence-to-sequence attention-based LAS model~\cite{chan2016listen}, obtained by extending the FGSM to attention, are presented in~\cite{sun2019adversarial}.
In the same work, Sun et al. also propose adversarial regulation to improve model robustness by feeding back AAEs into the training loop.
\section{Audio Adversarial Example (AAE) Generation}
The following paragraphs describe the proposed algorithms to generate AAEs
(a) from two attention-based gradient methods, either using a static or a moving window adversarial loss;
(b) from a CTC-based FGSM, and
(c) combining both previous approaches in a joint CTC/attention approach.
In general, those methods apply the single-step FGSM~\cite{goodfellow2014explaining} on audio data and generate an additive adversarial noise $\bm{\delta}(\bm{x}_t)$ from a given audio feature sequence $\bm{X}=\bm{x}_{1:T}$, i.e.,
\begin{equation}
\hat{\bm{x}}_t = \bm{x}_t + \bm{\delta}(\bm{x}_t),\hspace{0.02\textwidth} \forall t\in[1,T].
\end{equation}
We assume a \textit{whitebox} model, i.e., model parameters are known, to perform backpropagation through the neural network.
For any AAE algorithm, its reference sentence $y_{1:L}^*$ is derived from the network by decoding $\bm{x}_{1:T}$, instead of the ground truth sequence, to avoid label leaking~\cite{kurakin2016adversarial}.
\subsection{Attention-based Static Window AAEs}
For attention-based AAEs, the cross-entropy loss $\text{J}(\bm{X}, y_{l}; \bm{\theta} )$ w.r.t. $\bm{x}_{1:T}$ is extracted by iterating over {sequential token posteriors $p(y^*_{l}|y^*_{1:(l-1)})$} obtained from the attention decoder.
Sequence-to-sequence FGSM, as proposed in~\cite{sun2019adversarial}, then calculates $\bm{\delta}(\bm{x}_t)$ from the \emph{total} sequence as
\begin{align}\label{eq:seq-2-seq-fsgm}
\bm{\delta}(\bm{x}_t) &= \epsilon \cdot \sgn(\nabla_{\bm{x}_t} \sum_{l = 1}^{L} J(\bm{X}, y_l^*; \bm{\theta} )), \quad l\in [1;L].
\end{align}
As opposed to this algorithm, our approach does not focus on the total token sequence, but only a portion of certain sequential steps.
This is motivated by the observation that attention-based decoding is auto-regressive;
interruption of the attention mechanism targeted at one single step in the sequence can change the corresponding portion of the transcription as well as throw off further decoding up to a certain degree.
A sum over all sequence parts as in Eq.~\ref{eq:seq-2-seq-fsgm} may dissipate localized adversarial noise.
From this, we derive a first attention-based method that takes a single portion of the output sequence.
In the following, we term this algorithm the \emph{static window} method.
Gradients in the sentence are summed up from the start token $ \gamma $ on to the following $ l_w $ tokens, such that
\begin{equation}\label{eq:window-static}
\bm{\delta}_{\text{SW}}(\bm{x}_t) = \epsilon\cdot \sgn(\nabla_{\bm{x}_t} \sum_{l = \gamma}^{\gamma + l_w} J(\bm{X}, y_l^*; \bm{\theta} )), \quad l\in [1;L].
\end{equation}
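For concreteness, a minimal PyTorch-style sketch of this step is given below. The callable \texttt{token\_losses}, assumed to return the list of per-token cross-entropy losses $J(\bm{X}, y_l^*; \bm{\theta})$ for the decoded reference, is a hypothetical wrapper around the attention decoder and not part of any toolkit API.
\begin{verbatim}
import torch

def static_window_aae(x, token_losses, gamma, l_w, eps=0.3):
    # x: (T, F) feature matrix of one utterance.
    x_adv = x.clone().detach().requires_grad_(True)
    losses = token_losses(x_adv)           # list of scalar losses, length L
    window = losses[gamma:gamma + l_w]     # static window of l_w tokens
    torch.stack(window).sum().backward()   # gradient of the windowed loss
    return (x_adv + eps * x_adv.grad.sign()).detach()
\end{verbatim}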
\subsection{Attention-based Moving Window AAEs}
As observed in our experiments, the effectiveness of the static window method strongly varies depending on the segment position.
Adversarial loss from some segments has a higher impact than from others.
Some perturbations only impact local parts of the transcription.
Therefore, as an extension to the static window gradient derived from Eq.~\ref{eq:window-static}, multiple segments of the sequence can be selected to generate $\bm{\delta}_{MW}(\bm{x}_t)$.
We term this the {\emph{moving window}} method.
For this, gradients from a sliding window with a fixed length $l_w$ and {stride} $ \nu $ are accumulated to $ \nabla_{\text{MW}}(\bm{x}_t) $.
The optimal values of length and stride are specific to each sentence.
Similar to the iterative FGSM based on momentum~\cite{dong2018boosting}, gradient normalization is applied in order to accumulate gradient directions.
\begin{align}
\label{eq:window-moving}
\nabla_{\text{MW}}(\bm{x}_t) &= \sum_{i = 0}^{\lceil L/\nu \rceil} \left(
\frac{\nabla_{\bm{x}_t} \sum\limits_{l = i\cdot\nu}^{i\cdot\nu + l_w} J(\bm{X}, y_l^*; \bm{\theta} ) }
{||\nabla_{\bm{x}_t} \sum\limits_{l = i\cdot\nu}^{i\cdot\nu + l_w} J(\bm{X}, y_l^*; \bm{\theta} )||_1}
\right), \quad l\in [1;L]\\
\bm{\delta}_{MW}(\bm{x}_t) &= \epsilon\cdot \sgn( \nabla_{\text{MW}}(\bm{x}_t) )
\end{align}
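Under the same assumptions as in the previous sketch, the moving window accumulation could be implemented as follows; each window gradient is $L^1$-normalized before accumulation.
\begin{verbatim}
import torch

def moving_window_aae(x, token_losses, l_w, stride, eps=0.3):
    x_adv = x.clone().detach().requires_grad_(True)
    losses = token_losses(x_adv)
    grad_acc = torch.zeros_like(x_adv)
    for start in range(0, len(losses), stride):
        if x_adv.grad is not None:
            x_adv.grad.zero_()             # fresh gradient per window
        torch.stack(losses[start:start + l_w]).sum().backward(
            retain_graph=True)             # keep graph for later windows
        norm = x_adv.grad.abs().sum()      # L1 norm of window gradient
        if norm > 0:
            grad_acc += x_adv.grad / norm  # normalized accumulation
    return (x_adv + eps * grad_acc.sign()).detach()
\end{verbatim}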
\subsection{AAEs from Connectionist Temporal Classification}
From regular CTC loss $\mathcal{L}_{\text{CTC}}$ over the total reconstructed label sentence $\bm{y}^*$, the adversarial noise is derived as
\begin{align}
\label{eq:ctc-fsgm}
\bm{\delta}_{\text{CTC}}(\bm{x}_t) &= \epsilon\cdot \sgn(\nabla_{\bm{x}_t} \mathcal{L}_{\text{CTC}}(\bm{X}, \bm{y}^*; \bm{\theta} )).
\end{align}
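In code, this is plain FGSM on the CTC objective; the callable \texttt{ctc\_loss}, assumed to return $\mathcal{L}_{\text{CTC}}(\bm{X}, \bm{y}^*; \bm{\theta})$ for the decoded reference, is again a hypothetical wrapper:
\begin{verbatim}
import torch

def ctc_fgsm_noise(x, ctc_loss, eps=0.3):
    x_adv = x.clone().detach().requires_grad_(True)
    ctc_loss(x_adv).backward()
    return eps * x_adv.grad.sign()
\end{verbatim}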
\subsection{Hybrid CTC/Attention Adversarial Examples}
A multi-objective optimization function~\cite{lu2017multitask} is then applied to combine CTC and attention adversarial noise $\bm{\delta}_{\text{att}}$, that was either generated from $\bm{\delta}_{\text{SW}}$ or from $\bm{\delta}_{\text{MW}}$, by introducing the factor $\xi \in [0;1]$.
\begin{align}
\label{eq:hybrid-aae}
\hat{\bm{x}}_t = \bm{x}_t + (1-\xi)\cdot\bm{\delta}_{\text{att}}(\bm{x}_t) + \xi\cdot\bm{\delta}_{\text{CTC}}(\bm{x}_t), \hspace{0.02\textwidth} \forall t\in[1,T]
\end{align}
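Given the two noise terms from the sketches above, the combination itself is a one-liner:
\begin{verbatim}
def hybrid_aae(x, delta_att, delta_ctc, xi=0.5):
    # Weighted combination of attention- and CTC-based adversarial noise.
    return x + (1.0 - xi) * delta_att + xi * delta_ctc
\end{verbatim}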
The full process to generate hybrid CTC/attention AAEs is shown in Fig.~\ref{fig:advexgeneration}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\linewidth]{fig/AdvEx.eps}
\caption{Generation of AAEs.
The unmodified speech sentence $\bm{x}_{1:T}$ and the reference sentence $\bm{y}^*_{1:L}$ are given as input.
Then, using the hybrid CTC/attention model, the adversarial loss by the CTC-layer as well as the windowed attention sequence parts are calculated.
Those are then combined by the {weighting parameter $\xi$} and noise factor $\epsilon$ to obtain the adversarial example.
}
\label{fig:advexgeneration}
\end{figure}
\subsection{Adversarial Training}
Similar to data augmentation, Adversarial Training (AT) augments samples of a minibatch with adversarial noise $\bm{\delta}_{\text{AAE}}(\bm{x}_t)$.
The samples for which we create the AAEs are chosen randomly with a probability of $p_a$, as proposed by Sun et al.~\cite{sun2019adversarial}.
Because of its successive backpropagation in a single step, this method is also termed \textit{adversarial regularization} and is applied not from the beginning of the neural network training but after the $N$th epoch.
Sun et al. additionally included a weighting factor $\alpha$ for distinct sequence components, which we omit, i.e., set to $1$; instead, the gradient is calculated for the minibatch as a whole.
Furthermore, our AT algorithm also samples randomly the perturbation step size $\epsilon$ to avoid overfitting as originally described in~\cite{kurakin2016adversarial}.
The expanded gradient calculation for the {sequence-based training} is then written as
\begin{equation}\label{eq:adv_training_seq2seq}
\hat{J}(\bm{X}, y; \bm{\theta}) = \sum_i (J(\bm{X}, y_i; \bm{\theta}) + J(\hat{\bm{X}}, y_i; \bm{\theta})).
\end{equation}
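A sketch of one training step with this regularization is shown below; \texttt{model.loss} (the joint CTC/attention training loss) and \texttt{make\_aae} (one of the AAE generators above) are assumed wrappers, not actual toolkit functions.
\begin{verbatim}
import random

def adversarial_training_step(model, batch, make_aae,
                              p_a=0.05, eps_max=0.3):
    x, y = batch
    loss = model.loss(x, y)
    if random.random() < p_a:
        eps = random.uniform(0.0, eps_max)  # sampled perturbation size
        loss = loss + model.loss(make_aae(x, eps), y)
    loss.backward()
    return loss
\end{verbatim}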
\section{Evaluation}
Throughout the experiments, the hybrid CTC/attention architecture is used, with an LSTM encoder with projection neurons and a location-aware attention mechanism, classifying from log-mel f-bank features~\cite{watanabe2017hybrid}.
As we evaluate model performance, and not human perception on AAEs, we limit our investigation to the feature space.
Evaluation is done on the TEDlium v2 \cite{rousseau2014enhancing} speech recognition task consisting of over $200$h of speech.
The baseline model we compare our results to is provided by the ESPnet toolkit~\cite{watanabe2018espnet}.
It has four encoder layers and one attention decoder layer, with $1024$ units per layer, and in total $50$m parameters.
We use a BLSTM architecture for our experiments, with two layers each in the encoder and the location-aware attention decoder;
the number of units in the encoder, decoder, and attention layers was reduced to $512$~\cite{Chavez2020}.
This results in a model that is only one quarter of the size of the baseline model, i.e., $14$m parameters.
For both models, $500$ unigram units serve as text tokens, as extracted from the corpus with SentencePiece~\cite{kudo2018subword}.
In all experiments, the reference token sequence $\bm{y^*}$ is previously decoded using the attention mechanism, as this is faster than hybrid decoding and also can be done without the RNNLM.
We also set $\epsilon = 0.3$, as done in AAE generation for attention-based ASR networks~\cite{sun2019adversarial}.
Throughout the experiments, we use the decoded transcription $\bm{y^*}$ as reference, to avoid label leaking.
The dataset used in the evaluation, TEDlium v2, consists of recordings from presentations in front of an audience and therefore is already noisy and contains reverberations.
To better evaluate the impact of adversarial noise generated by our algorithms, two noise-free sample sentences are used for evaluation.
Both sample sentences are created artificially using Text-to-Speech (TTS) toolkits so that they remain noise-free.
\subsection{Generation of AAEs: Two Case Studies}
The first noise-free sentence \emph{Peter} is generated from the TTS algorithm developed by Google named Tacotron 2~\cite{shen2018natural}.
It was generated using the pre-trained model by Google\footnote{https://google.github.io/tacotron/publications/tacotron2/index.html} and reads ``\emph{Peter Piper picked a peck of pickled peppers. How many pickled peppers did Peter Piper pick?}''
The second sentence \emph{Anie} was generated from the ESPNet TTS\footnote{The \texttt{ljspeech.tacotron2.v2} model.} and reads ``\emph{Anie gave Christina a present and it was beautiful.}''
We first test the CTC-based algorithm.
For \emph{Peter}, the algorithm outputs an AAE that has $41.3\%$ CER w.r.t. the ground truth; for \emph{Anie}, the error rate is $36.4\%$.
For our experiments with the static window algorithm, we observe that it intrinsically runs the risk of changing only local tokens.
We take, for example, the sentence \textit{Anie} and set the parameters to $l_w = 3$ and $\gamma = 4$.
This gives us the following segment, as marked in bold font, out of the previously decoded sequence $\bm{y^*}$
\vspace{0.1cm}
\centerline{\emph{any gave ch{\textbf{ristin}}a a present and it was beautiful.}}
\vspace{0.1cm}
After we compute the AAE, the ASR system transcribes
\vspace{0.1cm}
\centerline{\emph{any game christian out priasant and it was beautiful}}
\vspace{0.1cm}
as the decoded sentence.
We obtain a sequence that strongly resembles the original, in which most words remain intact while some change slightly.
Translated to CER and WER w.r.t. the original sequence, we have $50.0\%$ and $55.6\%$, respectively.
We also test its hybrid version given $\xi = 0.5$, which is analogous to the decoding configuration of the baseline model.
It outputs a sequence with $31.8\%$ CER, lower than its non-hybrid version.
The moving window method overcomes this problem, as it calculates a non-localized AAE.
For example, a configuration with the parameters $\nu = 4 $ and $l_w = 2$ applied to \emph{Peter} generates the pattern
\vspace{0.1cm}
\centerline{\emph{\textbf{pe}ter pip\textbf{er p}icked a p\textbf{eck} of pickle\textbf{ pe}ppers.}}
\centerline{\emph{\textbf{ many p}ickle pe\textbf{pp}ers did pe\textbf{ter p}iper pa\textbf{ck}}}
\vspace{0.1cm}
for which we obtain the decoded sentence
\vspace{0.1cm}
\centerline{\emph{huter reperber picked a pick of piggle pebpers. }}
\centerline{\emph{how many tickle taper state plea piper pick.}}
\vspace{0.1cm}
This transcribed sentence then exhibits a CER of $54.3\%$ w.r.t. the ground truth.
The same parameter configuration applied in the hybrid version with $\xi = 0.5$ achieves error rates of $34.8\%$ CER.
Throughout the experiments, higher error rates were observed with the moving window than with the static window or CTC-based AAE generation.
\subsection{Evaluation of Adversarial Training}\label{subsec:at}
Throughout the experiments, we configured the {moving window method} with $\nu = 2$ and $l_w = 4 $ as arbitrary constant parameters,
motivated by the observation that those parameters performed well on both sentences \emph{Peter} and \emph{Anie}.
By inspection, this configuration is also suitable for sentences of the TEDlium v2 dataset.
Especially for its shorter utterances, a small window size and overlapping segments are effective.
Each model is trained for $10$ epochs, of which $N=5$ epochs are done in a regular fashion from regular training data;
then, the regular minibatch is augmented with its adversarial counterpart with a probability $p_a=0.05$.
Adversarial training examples are either attention-only, i.e. $\xi=0$, or hybrid, i.e. $\xi=0.5$.
Finally, the noise factor $\epsilon$ is sampled uniformly from a range of $[0.0;0.3]$ to cover a wide range of possible perturbations.
The trained model is compared with the baseline model as reported in~\cite{watanabe2018espnet}.
We use the moving window and its hybrid in the AT algorithm, because we hypothesize that both can benefit the training process of the hybrid model.
The RNNLM language model that we use is provided by the ESPnet toolkit~\cite{watanabe2018espnet};
it has $2$ layers with $650$ units each, and its weight in decoding was set to $\beta=1.0$ in all experiments.
We did \emph{not} use data augmentation, such as SpecAugment, or language model rescoring;
both are known to improve ASR results, but we omit them for better comparability of the effects of adversarial training.
Results are collected by decoding four datasets:
(1) the regular test set of the TEDlium v2 corpus;
(2) AAEs from the test set, made with the {attention}-based {moving window} algorithm;
(3) the test set augmented with regular white noise at 30 dB SNR; and
(4) the test set with clearly noticeable 5 dB white noise.
\begin{table}[tb!]
\centering
\setlength{\tabcolsep}{4pt}
\caption{Decoding results for all models.
The first value in each cell corresponds to the CER and the second to the WER.
The parameter $\lambda$ determines the weight of the CTC model during the decoding.
Trained models with attention-only AAEs are marked with $\xi=0$; with hybrid AAEs with $\xi=0.5$.}
\label{tab:decode_results_at}
\begin{tabular}{c c c c c c c c}
& & & & \multicolumn{4}{c}{Dataset} \\
\cmidrule(lr){5-8}
\textbf{CER/WER}& $\xi$ & $\lambda$ & LM & test & \begin{tabular}{@{}c@{}}test \\ AAE\end{tabular} & \begin{tabular}{@{}c@{}}noise \\ 30dB \end{tabular} & \begin{tabular}{@{}c@{}}noise \\ 5dB \end{tabular} \\
\cmidrule(lr){2-8}
\multirow{3}{*}[-1pt]{baseline~\cite{watanabe2018espnet}} & - & 0.0 & \textbf{-} & 20.7/22.8 & 90.7/89.1 & 23.6/25.8 & 78.8/78.8 \\
& - & 0.5 & \textbf{-} & 15.7/18.6 & 86.1/89.9 & 18.1/21.3 & 66.1/68.3 \\
& -& 0.5 & \checkmark & 16.3/18.3 & \textbf{98.5/92.2} & 19.2/20.8 & 73.2/72.7\\
\midrule
\multirow{3}{*}[-1pt]{\parbox{2.5cm}{adv. trained with att.-only AAE}} & 0.0 & 0.0 & \textbf{-} & 17.7/19.6 & 63.6/63.3 & 21.0/22.8 & 74.7/74.4 \\
& 0.0 & 0.5 & \textbf{-} & 14.3/16.9 & \textbf{53.5/56.8} & 16.5/18.9 & 62.6/65.0 \\
& 0.0 & 0.5 & \checkmark & 15.1/16.9 & 60.3/58.3 & 17.5/18.9 & 69.0/68.0\\
\midrule
\multirow{3}{*}[-1pt]{\parbox{2.5cm}{adv. trained with hybrid AAE}} & 0.5 & 0.0 & \textbf{-} & 17.9/19.8 & 65.2/65.0 & 20.4/22.3 & 74.9/75.0 \\
& 0.5 & 0.5 & \textbf{-} & \textbf{14.0/16.5} & \textbf{54.8/58.6} & \textbf{16.2/18.7} & \textbf{63.5/65.8} \\
& 0.5 & 0.5 & \checkmark & {14.8/16.6} & 61.8/59.9 & 17.0/18.5 & 70.0/69.2\\
\bottomrule
\end{tabular}
\end{table}
\paragraph{General trend.}
Some general trends during evaluation are manifested in Tab.~\ref{tab:decode_results_at}.
Comparing decoding performances on the regular test set and the AAE test set, all models perform worse on the latter.
In other words, the moving window technique used for creating the AAEs performs well against different model configurations.
Using a non-zero CTC weight $\lambda$ in decoding lowers error rates in general.
The higher error rates in combination with a LM are explained by the relatively high weight $\beta=1.0$.
Rescoring leads to improved performance; however, the listed results are more comparable to each other when $\beta$ is kept constant in all decoding runs.
\paragraph{Successful AAEs.}
Notably, the baseline model performed worst on this dataset, with almost $100\%$ error rates, even worse than when decoding noisy data.
This manifests in wrong transcriptions of around the same length as the ground truth, with on average $90\%$ substitution errors but only $20\%$ insertion or deletion errors.
Word loops or dropped sentence parts, two architectural behaviors that occur when the attention decoder loses its alignment, were observed only rarely.
We report CERs as well as WERs, as a relative mismatch between them indicates certain error patterns for CTC and attention decoding~\cite{kurzinger2019exploring};
however, the ratio of CER to WER of transcribed sentences was observed to stay roughly at the same level in the experiments with the AAE test set.
\paragraph{Adv. trained models are more robust.}
Both models obtained from adversarial training perform better in general, especially in the presence of adversarial noise, than the baseline model;
the model trained with hybrid AAEs achieves a WER of $16.5\%$ on the test set, an improvement of absolute $1.8\%$ over the baseline model.
At the same time, the robustness against regular noise and especially against adversarial noise was improved.
For the latter, we have an improvement of $ 24-33\% $ absolute WER.
The most notable difference is in decoding in combination with CTC and LM, where the regularly trained model had a WER of $92.2\%$, while the corresponding adv. trained model had roughly $60\%$ WER.
The att.-only adv. trained model with $\xi=0$ seems to be slightly more robust.
On the one hand, that might be a side effect of the AAEs being generated in an attention-only manner;
on the other hand, this model also performed slightly better on regular noise.
\section{Conclusion}
In this work, we demonstrated audio adversarial examples against hybrid attention/CTC speech recognition networks.
The first method we introduced was to select a \emph{static window} over a selected segment of the attention-decoded output sequence to calculate the adversarial example.
This method was then extended to a \emph{moving window} that slides over the output sequence to better distribute perturbations over the transcription.
In a third step, we applied the fast gradient sign method to the CTC network.
AAEs constructed with this method induced a word error rate of up to $90\%$ on a regularly trained speech recognition model.
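The underlying gradient sign update can be sketched as follows (a minimal PyTorch-style sketch, not our exact implementation; \texttt{ctc\_loss} is assumed to be a differentiable function returning the model's CTC loss for a waveform and a target transcription, and the name \texttt{fgsm\_audio} and the value of \texttt{epsilon} are purely illustrative):
\begin{verbatim}
# Minimal sketch of the fast gradient sign method (FGSM) on raw audio.
# Assumes `ctc_loss(x, target)` returns the differentiable CTC loss of
# the model for waveform `x` and target transcription `target`.
import torch

def fgsm_audio(ctc_loss, waveform, target, epsilon=1e-3):
    x = waveform.clone().detach().requires_grad_(True)
    loss = ctc_loss(x, target)
    loss.backward()
    # Step in the direction that increases the loss, i.e., degrades
    # the transcription produced by the attacked model.
    return (x + epsilon * x.grad.sign()).detach()
\end{verbatim}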
We then employed these AAEs for adversarial training of a hybrid CTC/attention ASR network.
This process improved the model's robustness against audio adversarial examples, reducing the WER on them to $55\%$, and also slightly improved robustness against regular white noise.
Most notably, the speech recognition performance on regular data improved by an absolute $1.8\%$ from the baseline result of $18.3\%$ WER.
\bibliographystyle{splncs04}
\subsection{Data Collection}
We focus our data collection efforts on the Twitter platform. To retrieve a set of COVID-19 related tweets, we first define a set of case-sensitive keywords that are frequently used in COVID-19 news and discussions. These keywords include: ``covid-19'', ``COVID-19'', ``COVID'', ``Coronavirus'', ``coronavirus'', ``CoronaVirus'', and ``corona''. Next, we utilize Twitter's Streaming API to retrieve a set of COVID-19 related tweets using the keywords defined earlier as queries. The comments on these retrieved tweets are also collected. For this study, tweets from 17 March 2020 to 28 April 2020 are collected. We further filter and remove the non-English tweets in our collected dataset, resulting in a total of 40,385,257 retrieved tweets. We term the final collected dataset the \textit{COVID-19 dataset}.
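As an illustration, the keyword filtering step can be sketched in Python as follows (a minimal sketch; \texttt{stream\_of\_tweets} is a hypothetical iterable standing in for the tweet texts returned by the Streaming API):
\begin{verbatim}
# Case-sensitive keyword filter for retaining COVID-19 related tweets.
KEYWORDS = ["covid-19", "COVID-19", "COVID", "Coronavirus",
            "coronavirus", "CoronaVirus", "corona"]

def is_covid_related(text):
    return any(keyword in text for keyword in KEYWORDS)

# `stream_of_tweets` is assumed to be an iterable of tweet texts
# obtained from Twitter's Streaming API.
covid_tweets = [t for t in stream_of_tweets if is_covid_related(t)]
\end{verbatim}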
\subsection{Antisocial Behavior Annotation Framework}
The COVID-19 dataset is considerably larger than existing publicly available antisocial online behavior datasets~\cite{waseem-hovy:2016:N16-2,DBLP:conf/acl-alw/ParkF17,DBLP:conf/icwsm/DavidsonWMW17,DBLP:conf/icwsm/FountaDCLBSVSK18}. Therefore, it is impractical to annotate the COVID-19 dataset manually. We propose an annotation framework to annotate the antisocial behavior in the COVID-19 dataset automatically. While we agree with existing studies that there could be many sub-categories of antisocial behaviors \cite{waseem-hovy:2016:N16-2,DBLP:conf/icwsm/DavidsonWMW17,DBLP:conf/icwsm/FountaDCLBSVSK18}, it is particularly challenging to annotate antisocial behaviors automatically at a fine-grained level. Instead, we simplify the annotation process by annotating the tweets with binary labels: ``normal'' and ``antisocial''. Our proposed antisocial behavior annotation framework mainly comprises two annotation techniques: the lexicon-based method and the \textit{Perspective} API.
\textbf{Lexicon-Based Method.} We first compile a word corpus of antisocial keywords from various open-source antisocial behavior and online toxic content lexicons: HateBase\footnote{The largest structured hate speech repository, available at https://hatebase.org}, RSDB\footnote{http://www.rsdb.org/}, and Wikipedia\footnote{https://en.wikipedia.org/wiki/List\_of\_ethnic\_slurs}. Next, we manually remove ambiguous words such as ``pancake'', ``yellow'', etc., as these words could also be used in normal conversational settings. As the word corpus may contain rare slurs that are obsolete or no longer relevant, we further filter the word corpus by checking its relevance to social media. To perform this operation, we first construct a combined annotated antisocial behavior dataset by aggregating antisocial content from three publicly available datasets: WZ-LS~\cite{DBLP:conf/acl-alw/ParkF17}, DT~\cite{DBLP:conf/icwsm/DavidsonWMW17} and FOUNTA~\cite{DBLP:conf/icwsm/FountaDCLBSVSK18}. Subsequently, we perform a frequency count for each keyword in the antisocial word corpus by computing the number of times it occurs in the combined annotated antisocial behavior dataset. Finally, infrequent antisocial keywords that occur fewer than five times are removed from the antisocial word corpus. We term this final antisocial word corpus from open-source lexicons the \textit{basic lexicon set}.
We also note that there might be words that share similar antisocial semantics with the keywords in the \textit{basic lexicon set} but are not included in the lexicon itself. To this end, we construct an \textit{extended lexicon set}, which includes keywords with similar antisocial semantics. We first train a Word2Vec model \cite{mikolov2013distributed} over the COVID-19 dataset. Next, we search for keywords that share similar semantics with the keywords in the \textit{basic lexicon set}. More specifically, keywords that have a similarity score of more than 0.7 with any keyword in the \textit{basic lexicon set} are included in the \textit{extended lexicon set}.
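This step can be sketched with the \texttt{gensim} library as follows (a minimal sketch with illustrative hyperparameters; \texttt{tokenized\_tweets} and \texttt{basic\_lexicon} are hypothetical names for the tokenised COVID-19 dataset and the \textit{basic lexicon set}):
\begin{verbatim}
# Sketch: build the extended lexicon set from Word2Vec neighbours.
from gensim.models import Word2Vec

# Illustrative hyperparameters; `tokenized_tweets` is a list of
# token lists obtained from the COVID-19 dataset.
model = Word2Vec(sentences=tokenized_tweets, vector_size=100,
                 window=5, min_count=5)

extended_lexicon = set()
for word in basic_lexicon:
    if word in model.wv:
        for neighbour, score in model.wv.most_similar(word, topn=50):
            if score > 0.7:  # similarity threshold used in the text
                extended_lexicon.add(neighbour)
\end{verbatim}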
Finally, we annotate the COVID-19 dataset by performing keyword matching against the \textit{basic lexicon set} and \textit{extended lexicon set}. Specifically, tweets that contain any keywords in the \textit{basic lexicon set} and \textit{extended lexicon set} are labeled as ``antisocial'', while the rest of the tweets are deemed as ``normal''.
\textbf{\textit{Perspective} API.\footnote{https://www.perspectiveapi.com/}} While the lexicon-based method is simple and able to identify a substantial amount of antisocial behavior online, it still has limitations. For instance, there might be new antisocial keywords that are not included in the open-source antisocial word corpus. To address this limitation, we add another automatic annotation approach. The \textit{Perspective} API is an open-source program developed by Google's Counter Abuse Technology team and Jigsaw in order to improve online discussions. The API scores a given text on several categories, such as toxicity, profanity, insult, etc. For each of these categories, the \textit{Perspective} API trains classifiers and outputs the probability score of a given text with respect to the specific category. Among the categories, the \textit{toxicity} score aligns most closely with our annotation goal. Therefore, we use the toxicity score of the \textit{Perspective} API to annotate the COVID-19 dataset. Specifically, we label a tweet as ``antisocial'' when it is given a toxicity score $>0.5$ by the \textit{Perspective} API; otherwise, the tweet is labeled as ``normal''.
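This annotation step can be sketched as follows (the request and response formats follow the publicly documented \textit{Perspective} API; \texttt{API\_KEY} is a placeholder):
\begin{verbatim}
# Sketch: annotate a tweet using the Perspective API toxicity score.
import requests

API_KEY = "..."  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def annotate(text):
    payload = {"comment": {"text": text},
               "requestedAttributes": {"TOXICITY": {}}}
    response = requests.post(URL, json=payload).json()
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return "antisocial" if score > 0.5 else "normal"
\end{verbatim}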
\begin{table}[t!]
\caption{Result for the automatic annotation of the COVID-19 dataset.}
\label{tab:anno-result}
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Method} & \textbf{Antisocial tweets} & \textbf{Normal tweets} \\\hline\hline
Lexicon-based & 1,169,755 & 39,215,502 \\\hline
Perspective API & 2,383,316 & 38,001,941\\\hline
Combined & 2,659,585 & 37,725,672\\\hline
\end{tabular}
\end{table}
As each annotation method has its strengths, we combine the lexicon-based method and the Perspective API to annotate our COVID-19 dataset. We annotate a tweet as ``antisocial'' if it is annotated as such by either of the two methods. Otherwise, the tweet is labeled as ``normal''. The subsequent empirical analysis will be based on our annotated COVID-19 dataset\footnote{Due to the double blind policy, the link to the dataset will be released after paper acceptance}. Table \ref{tab:anno-result} shows our final annotation results, where about 7\% of the tweets are annotated as containing antisocial content.
\subsection{Annotation Case Study}
Before performing the empirical analysis, we first conduct some case studies to examine the quality of our antisocial behavior annotation. Table~\ref{tbl:lex-API-comparison} presents several examples randomly sampled from our annotated COVID-19 dataset.
\begin{table}[t!]
\caption{Ten sampled tweets and their annotated labels based on the lexicon-based method and the \textit{Perspective} API.}
\label{tbl:lex-API-comparison}
\centering
\begin{tabular}{|c|p{6.3cm}|c|c|}
\hline
\textbf{ID} & \textbf{Tweet text} & \textbf{Lexicon-based} & \textbf{Perspective API} \\ \hline\hline
1 & Her coochie probably got the cure for corona in it. & Antisocial & Normal\\\hline
2 & RT @USER Had selfish \#China notified the world in time, the \#ChineseVirus would have died by now. But don't worry world will as always come out of this Chinese Problem as well. \#ChineseVirus \#coronavirus \#COVID-19 \#IndiaFightsCorona & Antisocial & Normal\\\hline
3 & Donald John Trump, the greatest narcissist in the history of humanity \#donaldtrump \#narcissist \#WHO & Antisocial& Normal\\\hline
4 & Hoping heat kills \#coronavirus!! & Normal & Antisocial \\\hline
5 & RT @USER: Earlier: Islam has nothing to do with terrorism Now: \#China has nothing to do with Corona Virus. \#ChineseBioterrorism \#COVID2019 & Normal & Antisocial \\\hline
6 & On this \#AprilFoolsDay don't go out... It's a lockdown you Indians! You can't fool \#Corona. & Normal & Antisocial\\\hline
7 & Support Social Distancing; in fact as a photographer, I have turned down jobs for my safety and for all! Let us all support \#SocialDistancing \#StayHomeSaveLives Let us join hands together and fight this monster called \#COVID-19 Yes \#WeWillWin & Antisocial & Normal \\\hline
8 & RT @USER: These online assignments will kill me way before the corona. & Normal & Antisocial \\\hline
9 & Can this corona virus get done so my man can see his pet rats again and stay the night with me so I can be held and love all over him? & Antisocial & Normal \\\hline
10 & \#COVID is that childhood loser who refuses to grow up and seeks revenge in old age. & Antisocial & Antisocial \\\hline
\end{tabular}
\end{table}
We can only compare the labels annotated by the two methods, as we do not have ground truth labels for the tweets. From Table \ref{tbl:lex-API-comparison}, we observe that there are scenarios where the lexicon-based method provides a more reasonable annotation than the \textit{Perspective} API and vice versa. For instance, the annotations for tweets (1)-(4) by the lexicon-based method seem more reasonable than those of the \textit{Perspective} API. We hypothesize that the \textit{Perspective} API's inappropriate annotation of these tweets stems from insufficient training examples containing rarely occurring words (e.g., the word ``coochie'' in tweet (1)). Conversely, the \textit{Perspective} API might treat normal tweets containing potentially inappropriate keywords as antisocial content, even though the keyword may be used correctly and appropriately in the specific context (e.g., the word ``kill'' in tweet (4)).
There are also situations where the \textit{Perspective} API is able to provide more reasonable annotation. For example, for tweets (5)-(7), the \textit{Perspective} API seems to give more suitable labels. Specifically, for tweets (5) and (6), the lexicon-based method cannot detect the antisocial content in the tweet as there are no matching antisocial keywords found in the tweets. Nevertheless, the \textit{Perspective} API is able to overcome this limitation to provide a more appropriate label.
While the lexicon-based method and the \textit{Perspective} API collectively provide reasonable antisocial behavior annotations on our COVID-19 dataset, some false positives can still be observed in the annotated dataset. For instance, tweets (8)-(10) seem to be normal tweets. However, due to our annotation strategy, these tweets are falsely annotated as ``antisocial'' because one of the two methods wrongly labeled them as such. In particular, tweet (8) is annotated as ``antisocial'' by the \textit{Perspective} API. A possible reason for this annotation could be the negative sentiment in the tweet. In tweet (9), the lexicon-based method annotated it as antisocial based on the matching keyword ``rats'' in the \textit{extended lexicon}. Similarly, tweet (10) contains the matching keyword ``loser''; hence the lexicon-based method labeled it as ``antisocial''. These examples highlight the limitations of our automatic annotation framework. For future work, we will explore more annotation methods to improve the quality of the annotation in our COVID-19 dataset.
\subsection{New Antisocial Lexicon Amid COVID-19}
\begin{figure*}[t!]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{images/wordcloud_unigrams.png}
\caption{Unigram keywords.}
\label{fig:AS_keywords-uni}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{images/wordcloud_moregrams.png}
\caption{Bi-, tri- and four-gram keywords.}
\label{fig:AS_keywords-more}
\end{subfigure}
\caption{High-frequency antisocial keywords.}
\label{fig:AS_keywords}
\end{figure*}
To answer the research question \textbf{R1}, we first perform a word frequency count on the tweets annotated as ``antisocial''. The goal is to examine the popular keywords used in antisocial tweets. Figs.~\ref{fig:AS_keywords-uni} and \ref{fig:AS_keywords-more} show the high-frequency unigram and multi-gram antisocial keywords found in our COVID-19 dataset, respectively. We observe a number of China-related antisocial keywords such as ``wuflu'' (a combination of ``Wuhan'' and ``flu''), ``kungflu'', ``chinazi'', ``ChineseBioterrorism'', ``BoycottChina'', ``HoldChinaAccountable'', etc. This supports an earlier study \cite{schild2020go}, which suggests a strong presence of sinophobic behavior amid the COVID-19 pandemic. Interestingly, we also observe antisocial keywords targeting other individuals and groups. For example, we notice several keywords such as ``TrumpVirus'', ``Trumpdemic'', ``TrumpOwnsEveryDeath'', ``TrumpGenocide'', ``TrumpIsAnIdiot'', ``TrumpPandemic'', etc., which are targeted at United States President Donald Trump. Similar observations are also made for antisocial keywords targeting British Prime Minister Boris Johnson. Additionally, some antisocial keywords reflect people's perceptions of the pandemic (e.g., ``Coronapocalypse'', ``scamdemic'', ``plandemic'', etc.). The observations made on the high-frequency antisocial words suggest that there might be other antisocial behaviors amid the COVID-19 pandemic besides the sinophobic behavior presented in \cite{schild2020go}. We will further examine the potential antisocial content targets in the next subsection.
Interestingly, we also observed that many of the high-frequency antisocial keywords are new terms that are not found in the open-source traditional antisocial content lexicon. This suggests that new antisocial keywords are created amid the COVID-19 pandemic. This observation also further highlights the limitations of applying the lexicon-based annotation method on fast-evolving social media datasets. Therefore, more research will need to be done to improve the antisocial behavior annotation and detection methods.
\subsection{Antisocial Target Individuals and Groups}
From the antisocial lexicon analysis, we notice the introduction of new antisocial keywords that target specific individuals and groups. To further verify the targets of antisocial behaviors amid the COVID-19 pandemic (\textbf{R2}), we first train a word2vec model \cite{mikolov2013distributed} on our annotated antisocial behavior COVID-19 dataset. Next, we query the trained word2vec model with keywords for potential target individuals and groups, and search for their neighboring words in the word vector space. The intuition is that words that are close to the target keywords are either semantically close to the target or frequently used together with the target. Finally, we examine these neighboring words and identify the keywords that are more frequently used in antisocial tweets.
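This query step can be sketched as follows (a minimal sketch reusing a trained \texttt{gensim} word2vec model as above; the dictionary \texttt{antisocial\_counts} of keyword frequencies over antisocial tweets is assumed to be precomputed, and the threshold values are illustrative):
\begin{verbatim}
# Sketch: neighbouring keywords of a target term, filtered by their
# frequency in tweets annotated as antisocial.
def antisocial_neighbours(model, target, antisocial_counts,
                          topn=30, min_count=10):
    return [(word, score)
            for word, score in model.wv.most_similar(target, topn=topn)
            if antisocial_counts.get(word, 0) >= min_count]

# Example query for the target group "China":
# neighbours = antisocial_neighbours(model, "China", antisocial_counts)
\end{verbatim}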
\begin{figure}[t!]
\centering
\includegraphics[width=0.85\textwidth]{images/china.pdf}
\caption{Neighboring antisocial keywords for the target word ``China''.}
\label{fig:China_graph}
\end{figure}
Motivated by the earlier study on sinophobia amid the COVID-19 pandemic \cite{schild2020go}, we first check the neighboring keywords of the target group ``China''. Fig. \ref{fig:China_graph} illustrates the graph of neighboring antisocial keywords for the target group ``China''. The target word is represented with a red square. The first-order neighbor keywords are marked with diamond symbols and second-order ones with circles. The distance of the edges corresponds to the proximity (or similarity) of the terms in the vector space. Our study supports the finding in \cite{schild2020go}; we observe newly created antisocial keywords that discriminate against China and the Chinese community. As the Coronavirus was assumed to have originated from the city of Wuhan in China, we observe that the virus was not only being referred to as ``China virus'' or ``Chinese virus'', but a new blame-attributing and conspiratorial lexicon was used against China and the Chinese community. For example, ``\#ChinaLiedPeopleDied'', ``\#ChinaMustPay'', ``\#ChineseBioterrorism'',
``\#BanChina'', etc.
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{images/trump.pdf}
\caption{Neighboring antisocial keywords for the target word ``DonaldJTrump''.}
\label{fig:Trump_graph}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{images/boris.pdf}
\caption{Neighboring antisocial keywords for the target word ``Boris''.}
\label{fig:Boris_graph}
\end{figure}
While we observe intensive antisocial behaviors against China and the Chinese community, the antisocial behaviors generated amid the COVID-19 pandemic extend beyond sinophobia. Prominent politicians and global organizations, such as the World Health Organization (WHO), are also targets of antisocial behaviors. Fig. \ref{fig:Trump_graph} and \ref{fig:Boris_graph} show the graphs of neighboring antisocial keywords for two prominent politicians, United States President Donald J Trump (``DonaldJTrump'') and British Prime Minister Boris Johnson (``Boris''), respectively. We observe that abusive terms such as ``TrumpVirus2020'', ``TrumpPademic'', ``DumpTrump'', etc., are frequently used against Donald Trump. Similar abusive keywords are also used against Boris Johnson (e.g., ``BorisTheButcher'', ``ToryShambles'', etc.). Regardless of political affiliations and agendas, we believe that no individual or group should be subjected to antisocial behaviors in online social media.
Many other prominent individuals are targets of baseless conspiracies and abusive tweets amid the COVID-19 pandemic, for example, prominent businessman Bill Gates (``\#BillGatesIsEvil'', ``\#GatesOfHell'', ``\#arrestbillgates'', ``\#VaccineAgenda'', etc.), immunologist Dr. Anthony Fauci (``\#FauciFraud'', etc.), and Dr. Tedros Adhanom from the World Health Organization (``\#TedrosLiedPeopleDied'', ``\#WHOLiedPeopleDied'', etc.). Communities that had previously been subjected to intensive discrimination and racism \cite{schmidt2017survey,fortuna2018survey} were also targeted during the COVID-19 pandemic; for example, racist and abusive terms such as ``\#MuslimsSpreadingCorona'', ``\#IslamicCoronaJehad'', ``\#NizzamudinIdiots'', ``\#MuslimVirus'', ``\#CoronaJihaad'', ``\#JihadiVirus'', etc., are used against the Muslim community.
To summarize, we observe antisocial behaviors targeting a wide range of individuals and groups. Some of the targets are unique to the COVID-19 pandemic, while some groups that were previously subjected to discrimination and racism are also targeted in the COVID-19 context. Such toxic behaviors harm social cohesion and deepen the divides among communities and social groups. Therefore, it is important to develop solutions to detect, curb, and monitor such undesirable online behaviors.
\subsection{Factors Influencing the Spread of Antisocial Content}
The question of which factors influence the spread of antisocial content (\textbf{R3}) is a difficult one, as there could be many factors that affect the diffusion of content over social media \cite{li2017survey}. We attempt to provide a preliminary analysis of this problem by examining the temporal distribution of the antisocial content generated in our COVID-19 dataset. Specifically, we compute the proportion of antisocial tweets on a given day over the observed period in our dataset.
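Concretely, this statistic can be computed as in the following sketch (\texttt{pandas}; the DataFrame \texttt{df} is assumed to hold one row per tweet with a \texttt{date} timestamp and the binary \texttt{label} from our annotation framework):
\begin{verbatim}
# Sketch: daily proportion of antisocial tweets.
import pandas as pd

# `df` has one row per tweet, with columns `date` (datetime64)
# and `label` ("antisocial" or "normal").
daily_proportion = (df.assign(is_antisocial=df["label"] == "antisocial")
                      .groupby(df["date"].dt.date)["is_antisocial"]
                      .mean())
\end{verbatim}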
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{images/temporal_distribution.pdf}
\caption{Temporal distribution of antisocial tweets amid the COVID-19 pandemic.}
\label{fig:temporal}
\end{figure}
Fig. \ref{fig:temporal} shows the temporal distribution of antisocial tweets. We observe that the proportion of antisocial tweets generated per day ranges from 4\% to 9\%. There are also certain days on which we notice a spike in antisocial behaviors. To understand what causes these sudden sharp increases in antisocial behavior, we dig deeper and examine the tweets on the days where we observe spikes in antisocial content. Interestingly, the increase of antisocial tweets on 26th March largely consists of tweets criticizing Donald Trump's press conference on the COVID-19 pandemic. The sharpest increase of antisocial behavior is observed on 6th April. Examining the tweets, we found that most of the abusive tweets protest against Donald Trump's claims that the medicine hydroxychloroquine cures the coronavirus\footnote{https://www.theguardian.com/global/video/2020/apr/06/trump-grilled-over-continued-promotion-of-hydroxychloroquine-to-treat-coronavirus-video}. Not all antisocial tweets are from or about the COVID-19 situation in the United States. The spike of antisocial tweets on 1st April is attributed to criticism of a religious gathering held by the Tablighi Jamaat, which was claimed to have increased the spread of coronavirus in India\footnote{https://www.bbc.com/news/world-asia-india-52131338}.
Nevertheless, it is challenging to explicitly attribute every spike of antisocial content to a specific event. There could be many factors affecting the spread of antisocial content, and the relationships between these factors may also have an impact on the diffusion of antisocial content. For instance, we notice that the retweet function in Twitter plays a profound role in the diffusion of antisocial content. For example, when examining the spike of antisocial tweets on 11th April, we observed that many of the antisocial tweets are retweets of Bill Maher's discriminatory tweet: ``\textit{China is a dictatorship that, for decades, enforced a one child per family policy under penalty of forced sterilization. But they can't close down the farmer's market from hell? \#CoronaVirus \#WetMarkets}''.
Our preliminary analysis of antisocial tweets exposes the complexity of antisocial content diffusion in social media. More in-depth research will have to be conducted to analyze the multiple factors that affect the spread of online antisocial behaviors and to curb the spread of these toxic behaviors.
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{Related Work}
\label{sec:related}
\input{related.tex}
\section{Antisocial Behavior Annotation Framework}
\label{sec:annotation}
\input{annotation.tex}
\section{Empirical Analysis}
\label{sec:empirical}
\input{empirical.tex}
\section{Conclusion and Future Works}
\label{sec:conclusion}
\input{conclusion.tex}
\bibliographystyle{splncs04}
\section{Introduction}
A group $G$ is called \emph{representation rigid} if the number $r_n(G)$ of isomorphism classes of
(continuous, if $G$ is topological) complex irreducible $n$-dimensional
representations of $G$ is finite, for each $n$. If $G$ is finitely generated profinite and FAb (i.e., $H/[H,H]$
is finite for every finite index subgroup $H$ of $G$), then $G$ is representation rigid (see \cite[Proposition~2]{BassLubotzkyMagidMozes}). For any representation rigid group $G$ we have the formal Dirichlet series
\[
Z_G(s)=\sum_{n=1}^{\infty}r_n(G)n^{-s}=\sum_{\rho\in \Irr(G)}\rho(1)^{-s},
\]
where $\Irr(G)$ denotes the set of irreducible characters of $G$. If the sequence $R_N(G)=\sum_{i=1}^{N}r_i(G)$ grows at most polynomially, $Z_G(s)$ defines a holomorphic function $\zeta_{G}(s)$ on some right half-plane of $\ensuremath{\mathbb{C}}$, which is called the representation zeta function of~$G$.
Suppose that $G$ is a FAb compact $p$-adic analytic group. In this case, as we will see, the representation zeta function is defined and extends to a meromorphic function on $\ensuremath{\mathbb{C}}$. It is not true in general that $\zeta_{G}(s)$ is a rational function
in $p^{-s}$, but this is not too far from the truth. Any formal Dirichlet series with coefficients in $\ensuremath{\mathbb{Z}}$ is an element of the ring $\ensuremath{\mathbb{Z}}[\mkern-2mu [ p_1^{-s},p_2^{-s},\dots]\mkern-2mu ]$, where $p_1,p_2,\dots$ are the primes in $\ensuremath{\mathbb{N}}$, via the map $\sum_{n}a_n n^{-s} \mapsto \sum_n a_n p_1^{-se_1}p_2^{-se_2}\cdots$, where $p_1^{e_1}p_2^{e_2}\cdots$ is the prime factorisation of $n$.
We say that a Dirichlet series (with integer coefficients) is \emph{virtually rational in
$p^{-s}$} if, as an element of $\ensuremath{\mathbb{Z}}[\mkern-2mu [ p_1^{-s},p_2^{-s},\dots]\mkern-2mu ]$, it is of the form
\begin{equation}\label{eqn:virtually-rational}
\sum_{i=1}^{k}n_{i}^{-s}f_{i}(p^{-s}),
\end{equation}
for some natural numbers $k$ and $n_{i}$ and rational functions $f_{i}(t)\in\ensuremath{\mathbb{Q}}(t)$.
If $Z_G(s)$ defines a zeta function $\zeta_{G}(s)$, we say that $\zeta_{G}(s)$ is virtually rational in $p^{-s}$ if $Z_G(s)$ is, or equivalently, if $\zeta_{G}(s)$ is of the form \eqref{eqn:virtually-rational}, for all $s$ in some right half-plane of $\ensuremath{\mathbb{C}}$.
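As a simple illustration of the definition, consider the formal series
\[
\sum_{m=0}^{\infty} p^{m}\,(2p^{m})^{-s} = 2^{-s}\sum_{m=0}^{\infty} p^{m(1-s)} = \frac{2^{-s}}{1-p^{1-s}},
\]
which is virtually rational in $p^{-s}$ in the sense of \eqref{eqn:virtually-rational}, with $k=1$, $n_{1}=2$ and $f_{1}(t)=1/(1-pt)$.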
In \cite{Jaikin-zeta}, Jaikin-Zapirain proved one of the first and
most fundamental results on representation zeta functions, namely,
that when $p>2$ the zeta function $\zeta_{G}(s)$ is virtually rational in $p^{-s}$
and if $G$ is pro-$p$, then $\zeta_{G}(s)$ is rational in $p^{-s}$.
Moreover, he conjectured that the results hold also for $p=2$, and proved this in the special case when $G$ is uniform pro-$2$. Recall that a pro-$p$ group is called \emph{uniform} (or \emph{uniformly powerful}) if it is finitely generated, torsion free and $[G,G]\leq G^p$ if $p>2$ and $[G,G]\leq G^4$ if $p=2$.
These results may be compared with analogous (virtual) rationality results proved
earlier by du~Sautoy for subgroup zeta functions (see \cite{duSautoy-rationality}).
Both Jaikin-Zapirain and du~Sautoy rely on a rationality result for
definable integrals, due to Denef and van~den~Dries \cite{Denef-vandenDries}.
In \cite{Avni-abscissa}, Avni used Jaikin-Zapirain's virtual rationality theorem as an ingredient to prove that the abscissa of convergence of representation zeta functions of certain arithmetic groups is rational. Jaikin-Zapirain's result has also been fundamental for
(albeit not always a logical prerequisite of)
work of Larsen and Lubotzky \cite{Larsen-Lubotzky}, Aizenbud and Avni \cite{Aizenbud-Avni-2016} and of Avni, Klopsch, Onn and Voll, e.g., \cite{AKOV-Duke, AKOV-GAFA}.
For certain significant classes of groups the representation zeta function is not defined because $r_1(G)$ is not finite, although it turns out that the number $\widetilde{r}_n(G)$ of irreducible representations of dimension $n$ up to one-dimensional twists is finite for all $n$. We call such groups \emph{twist rigid} (see Section~\ref{sec:Twist-iso-Clifford} for the definition of twists and twist isoclasses). Examples of twist rigid groups include finitely generated nilpotent groups and (at least certain) reductive compact $p$-adic groups like $\GL_n(\ensuremath{\mathbb{Z}}_p)$. For a twist rigid group $G$, one can define the Dirichlet series
\[
\widetilde{Z}_G(s)=\sum_{n=1}^{\infty}\widetilde{r}_n(G)n^{-s}
\]
and its meromorphic continuation $\widetilde{\zeta }_G(s)$ (where it exists), called the \emph{twist (representation) zeta series/function}, respectively.
In \cite{Stasinski-Voll-Tgrps} the first author and Voll proved rationality of the $p$-local factors of twist zeta functions of torsion-free finitely generated nilpotent groups associated with certain group schemes
when a suitable Kirillov orbit method can be applied, and in
\cite{hrumar2015definable} Hrushovski, Martin and Rideau
proved (among other things) rationality of local factors of twist representation zeta functions for all finitely generated nilpotent groups.
In addition to representation and group theoretic results, the results in \cite{hrumar2015definable} were based on
an elimination of imaginaries result, allowing the use of a definable
equivalence relation. An alternative
model theoretic rationality result (for fixed $p$) that generalizes to the analytic setting was given by Cluckers in the appendix to \cite{hrumar2015definable}. As we will describe below, Cluckers's result will play a crucial role in the present paper. The model theoretic rationality results in \cite{hrumar2015definable} differ from those used by Jaikin-Zapirain and du~Sautoy insofar as they, in addition to definable sets, also allow the use of a definable family of equivalence relations (see Sections~\ref{subsubsec:anlang} and \ref{subsubsec:Cluckers-rationality}).
The study of twist representation zeta functions of compact $p$-adic groups was initiated by the first author and H\"as\"a in \cite{Hasa-Stasinski}, who proved in particular that $\GL_n(\ensuremath{\mathcal{O}})$ is twist rigid (where $\ensuremath{\mathcal{O}}$ is any compact discrete valuation ring) and explicitly computed the twist zeta function of $\GL_2(\ensuremath{\mathcal{O}})$ when $2$ is a unit in $\ensuremath{\mathcal{O}}$.
\subsection{Main results and consequences}
The goals of the present paper are, firstly, to give a new proof of the (virtual) rationality of $\zeta_G(s)$ for $G$ FAb compact $p$-adic analytic and in particular to prove Jaikin-Zapirain's conjecture mentioned above. Secondly, we prove (virtual) rationality of the twist zeta function $\widetilde{\zeta}_G(s)$ for $G$ twist rigid compact $p$-adic analytic.
Our first main result is:
\begin{thmABC}
\label{thm:Main}Let $G$ be a FAb compact $p$-adic analytic group.
Then $\zeta_{G}(s)$ is virtually rational in $p^{-s}$. If in addition
$G$ is pro-$p$, then $\zeta_{G}(s)$ is rational in $p^{-s}$.
\end{thmABC}
This theorem has the following consequences.
\begin{cor}\label{cor:Main}
Let $G$ be a FAb compact $p$-adic analytic group. Then the following holds regarding $\zeta_{G}(s)$:
\begin{enumerate}
\item it extends meromorphically to the whole complex plane,
\item it has an abscissa of convergence which is a rational number,
\item it has no poles at negative integers and $\zeta_{G}(-2)=0$.
\end{enumerate}
\end{cor}
Here \emph{i)} and \emph{iii)} were previously known consequences of Jaikin-Zapirain's results when $p\neq 2$ or $G$ is uniform pro-$2$, while \emph{ii)} follows from Jaikin-Zapirain's results for all $p$ since for $p=2$ the abscissa of $G$ is the same as for any finite index uniform pro-$2$ subgroup.
In general, \emph{i)} follows immediately from Theorem~\ref{thm:Main}, because any virtually rational function in $p^{-s}$ is clearly meromorphic in all of $\ensuremath{\mathbb{C}}$. Statement \emph{ii)} follows from the model theoretic rationality result we use (Theorem~\ref{thm:rational_series}), which implies that each rational function appearing in the expression for $\zeta_{G}(s)$ has denominator that is a product of factors of the form $1-p^{i-sj}$, for integers $i,j$ with $j>0$. Moreover, the series $Z_G(s)$ diverges at $s=0$ (since $G$ is an infinite profinite group, hence possesses infinitely many non-equivalent irreducible representations). Thus the abscissa of $\zeta_{G}(s)$ is finite and equals $i/j$, for some $i,j$ as above (note that it does not necessarily equal $\max\{i/j\}$, because some denominators may cancel).
Part \emph{iii)} of Corollary~\ref{cor:Main} was proved in \cite[Theorem~1]{JKGS-zero-at-2}
for all $G$ for which virtual rationality of $\zeta_{G}(s)$ holds. Theorem~\ref{thm:Main} therefore implies that it holds for all $p$.
Our second main result is a direct analogue of Theorem~\ref{thm:Main} for twist zeta functions:
\begin{thmABC}
\label{thm:Main-twist}Let $G$ be a twist rigid compact $p$-adic analytic group.
Then $\widetilde{\zeta}_{G}(s)$ is virtually rational in $p^{-s}$. If in addition
$G$ is pro-$p$, then $\widetilde{\zeta}_{G}(s)$ is rational in $p^{-s}$.
\end{thmABC}
By the same argument as above, this theorem has the following consequences.
\begin{cor}\label{cor:Main-twist}
Let $G$ be a twist rigid compact $p$-adic analytic group. Then the following holds regarding $\widetilde{\zeta}_{G}(s)$:
\begin{enumerate}
\item it extends meromorphically to the whole complex plane,
\item it has an abscissa of convergence which is a rational number.
\end{enumerate}
\end{cor}
\subsection{Outline of the paper and proofs}
In Section~\ref{sec:Basics from model theory} we give the basic definitions and results from model theory that we will use in this paper. This is intended for readers with a limited (or no) background in model theory. Similarly, Section~\ref{sec:Prel on proj reps} provides the definitions and results from the theory of projective representations and projective characters, as well as related Clifford theory and group cohomology that we need in later parts of the paper. Some of this material does not seem to be
well known or is not stated in the literature in a form that is useful for us.
We now briefly summarise the remaining sections, all of which are devoted to proving the two theorems above.
\subsubsection{Theorem~\ref{thm:Main}}
The most noteworthy aspect here is the new proof, which has the following main features:
\begin{itemize}
\item[--] a new argument (i.e., different from the one in \cite[Section~5]{Jaikin-zeta}) for the main part of the proof, namely the rationality of the `partial' zeta series (see below), making systematic use of projective representations and associated cohomology classes, avoiding Lie algebras;
\item[--] a unification of the approach in \cite{hrumar2015definable} for pro-$p$ completions of finitely generated
nilpotent groups with the case of $p$-adic analytic pro-$p$ groups, avoiding the Kirillov orbit method.
\end{itemize}
We now describe the main ideas of the proof in more detail, and point out how it relates to and differs from Jaikin-Zapirain's proof for $p>2$ and the approach in \cite{hrumar2015definable}. The first step is to reduce the (virtual) rationality of $\zeta_{G}(s)$ to the rationality in $p^{-s}$ of the partial zeta series
\[
Z_{N;K}^{c}(s)=\sum_{\theta\in\Irr_{K}^{c}(N)}\theta(1)^{-s},
\]
where $N$ is a fixed open normal uniform subgroup of $G$, $K$ is a subgroup of $G$ containing $N$, and $\Irr_{K}^{c}(N)$ denotes the set of irreducible characters of $N$ with stabiliser $K$ which determine the cohomology class $c$ in the Schur multiplier $\coho{2}(K_p/N)$, where $K_p$ is a pro-$p$ Sylow subgroup of $K$. This reduction step follows \cite[Sections~5-6]{Jaikin-zeta} and uses Clifford theory, together with a result of Jaikin-Zapirain which shows that we can replace $\coho{2}(K/N)$ by $\coho{2}(K_p/N)$ (see Section~\ref{sec:red_partial}).
In Sections~\ref{sec:red_deg_one}-\ref{sec:proof_main} we prove the rationality of the partial
zeta series and hence Theorem~\ref{thm:Main}. To do this, we show that enumerating characters in
$\Irr_{K}^{c}(N)$ of given degrees is equivalent to enumerating the classes of a family
of equivalence relations that is definable with respect
to an analytic language $\lan_{\mathrm{an}}$ of $p$-adic numbers (see Section~\ref{subsubsec:anlang}).
The rationality then follows from a result of Cluckers (see Theorem~\ref{thm:rational_series}). The possibility of using a definable equivalence relation,
as in \cite{hrumar2015definable},
gives an added flexibility not present in the definability results in \cite{Jaikin-zeta}. We note that in contrast to \cite{hrumar2015definable}, which works with an extended language of rings, we need a $p$-adic analytic language because of the analytic structure of $G$ and du~Sautoy's parametrisation of subgroups via bases, which is one of our key ingredients.
We now describe the contents of Sections~\ref{sec:red_deg_one}-\ref{sec:proof_main} in more detail. In Section~\ref{sec:red_deg_one}, we use some of the theory of projective representations to show that the cohomology class in $\coho{2}(K_p/N)$ associated with a character triple $(K_p,N,\theta)$, for $K_p$ a pro-$p$ \mbox{Sylow} subgroup of $K$, can be obtained from a character triple $(N,N\cap H,\chi)$, where $H$ is an open subgroup of $G$ such that $K_p=HN$, and $\chi$ is of degree one (see Proposition~\ref{prop:Linearisation}). This is a key step, because, just like in \cite{hrumar2015definable}, we can only talk about degree one characters in definability statements. We also introduce a set $X_K$ of pairs $(H,\chi)$, which, modulo a suitable equivalence relation, parametrises the elements in $\Irr(N)$ whose stabiliser is $K$, and a function $\ensuremath{\mathcal{C}}:X_K\rightarrow \coho{2}(K_p/N)$ whose fibres parametrise the sets $\Irr_{K}^{c}(N)$, modulo the relation. We then show that these fibres are expressible by a first order formula involving the values of $\chi$, $2$-cocycles and $2$-coboundaries (see Lemma~\ref{lem:first_o_formula_cohomology}).
The approach in Section~\ref{sec:red_deg_one} is new, compared to \cite{Jaikin-zeta}, and avoids Lie algebra methods by exploiting the monomiality, for projective representations, of $K_p$.
In Section~\ref{sec:proof_main} we use the results of Section~\ref{sec:red_deg_one} to prove that the fibres of $\ensuremath{\mathcal{C}}$ and the required equivalence relation are definable in the structure $\struc_{\mathrm{an}}$ of $p$-adic numbers, with respect to the language $\lan_{\mathrm{an}}$.
Among other things, we exploit the known fact about Schur multipliers that every element in $\coho{2}(K_p/N)$ has a cocycle representative of $p$-power order and that we can also choose our coboundaries to have $p$-power order. This implies that we can consider our cocycles and coboundaries as functions with values in $\ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$, and hence ultimately as elements of definable sets.
Once the definability of the fibres of $\ensuremath{\mathcal{C}}$, modulo the equivalence relation, is established, an application of Cluckers's rationality result mentioned above finishes the proof of Theorem~\ref{thm:Main}.
In the proof of the definability of the fibres of $\ensuremath{\mathcal{C}}$ and the equivalence relation, we adapt some ideas in \cite{hrumar2015definable} to the setting of $p$-adic analytic pro-$p$ groups. The main idea here is that the irreducible characters of $N$ are induced from degree one characters of finite index subgroups, and thus that $\Irr(N)$ can be parametrised (in a many-to-one way) by certain pairs $(H,\chi)$ where $H\leq N$ and $\chi\in\Irr(H)$. Modulo a suitable definable equivalence relation, the parametrisation is bijective and this approach is the reason why the Kirillov orbit method can be avoided. The main new contribution in Section~\ref{sec:proof_main} compared to \cite{hrumar2015definable}, is the definability of the condition for a representation corresponding to a pair $(H,\chi)$ to map to a given $c\in \coho{2}(K_p/N)$ under $\ensuremath{\mathcal{C}}$ (note that all of Section~\ref{sec:red_deg_one} is needed for this purpose).
\subsubsection{Theorem~\ref{thm:Main-twist}}
From Section~\ref{sec:Twist-iso-Clifford} onwards, the paper is devoted to rationality of twist zeta functions. In order to adapt the strategy employed in the preceding sections, Section~\ref{sec:Twist-iso-Clifford} studies restriction and induction of what we call $G$-twist classes of characters in the presence of a normal subgroup. Here we let $G$ be an arbitrary profinite group and $N$ a normal subgroup of finite index. For any subgroup $H$ of $G$, we say that $\lambda,\delta\in\Irr(H)$
are \emph{$G$-twist equivalent} if $\lambda=\delta\psi|_{H}$, for some character $\psi$ of $G$ of degree one (see Definition~\ref{def:G-twist}).
Let now $H$ and $H'$ be subgroups of $G$ such that $H\leq H'$ and such that $H$ contains the stabiliser in $G$ of some $\theta \in \Irr(N)$. Then the usual Clifford correspondence says that induction gives a bijection between irreducible characters of $H$ lying over $\theta$ and irreducible characters of $H'$ lying over $\theta$. This immediately implies that induction of $G$-twist classes is a surjective map. However, in contrast, induction of $G$-twist classes is not necessarily injective. It is for this reason that our proof of Theorem~\ref{thm:Main-twist} requires new methods in addition to those used in the proof of Theorem~\ref{thm:Main}.
The main new ingredient needed is an invariant $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})$ attached to the $G$-twist class $\tic{\theta}$ of a $\theta \in \Irr(N)$, which controls precisely when two $G$-twist classes induce to the same $G$-twist class. This invariant is an element in $\coho{1}(L/N,F_K/\Gamma)$, where $L$ is the stabiliser of $\tic{\theta}$ in $G$, $K$ is the stabiliser of $\theta$ in $G$ (so that $K\trianglelefteq L$), $F_K$ is the set of functions $K/N\rightarrow \ensuremath{\mathbb{C}}^{\times}$,
$\Gamma$ is a certain subgroup of $\Hom(K/N,\ensuremath{\mathbb{C}}^{\times})$ and the action of $L/N$ on $F_K/\Gamma$ is the co-adjoint action (see Section~\ref{subsec:The-function-bar-mu} for the definitions).
We give a quick idea of how $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})$ is defined. By definition of $L$, any $g\in L$ fixes $\theta$ up to $G$-twist, that is, for any $g\in L$ there is a character $\psi_g$ of $G$ of degree one such that $\leftexp{g}{\theta}=\theta\psi_{g}|_{N}$. Now let $\hat{\theta}$ be a projective representation of $K$ strongly extending $\theta$ (see Section~\ref{sec:Prel on proj reps}). Then both $\leftexp{g}{\hat{\theta}}$ and $\hat{\theta}\psi_{g}|_{K}$ strongly extend $\leftexp{g}{\theta}$, so there exists a function $\mu(g):K/N \rightarrow \ensuremath{\mathbb{C}}^{\times}$ such that
\[
\leftexp{g}{\hat{\theta}}=\hat{\theta}\psi_{g}|_{K}\cdot\mu(g).
\]
The goal of Section~\ref{sec:Twist-iso-Clifford} is then to prove that the function $g \mapsto \mu(g)$ gives rise to a unique element in $\coho{1}(L/N,F_K/\Gamma)$, where the ambiguity in the choice of strong extension $\hat{\theta}$ has been accounted for by quotienting out by $1$-coboundaries, and the ambiguity in the choice of $\psi_g$ has been accounted for by quotienting out by $\Gamma$. At the same time, it is shown that the resulting cohomology class only depends on the class $\tic{\theta}$, and not on the choice of representative $\theta$.
The next step, carried out in Section~\ref{sec:Reduction-to-pro-p_twisted_case}, is to show that the invariant $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})$ is determined by
$\ensuremath{\mathcal{C}}_{K_p}(\tic{\theta})$ (it is easy to see that $\ensuremath{\mathcal{C}}_{K_p}$ induces a function on $G$-twist classes) together with
$\ensuremath{\mathcal{T}}_{L_p,K_p,\Gamma_p}(\tic{\theta})$, where $L_p$ and $K_p$ are pro-$p$ Sylow subgroups of $L$ and $K$, respectively, and $\Gamma_p$ is the image of $\Gamma$ under restriction of homomorphisms to $K_p/N$. Here it is assumed that $N$ is a pro-$p$ subgroup of $G$ (eventually $N$ will be a uniform subgroup). The reasons for reducing to pro-$p$ Sylow subgroups is the same as for the reduction of $\coho{2}(K/N)$ to $\coho{2}(K_p/N)$ in the proof of Theorem~\ref{thm:Main}, but in the twist zeta setting, the reduction is more complicated and uses very different arguments.
In Section~\ref{sec:Reduction partial twist} we use the main result of the previous section (Proposition~\ref{prop:red_coeff_to_Sylow}) to prove that Theorem~\ref{thm:Main-twist} follows from the rationality of the partial twist zeta series $\tpartial{N;L, K, \Gamma}{c, c'}$. Finally, Section~\ref{sec:rationality_partial_tw} proves rationality of the partial twist zeta series by, among other things, showing that the condition $\ensuremath{\mathcal{T}}_{L_p,K_p,\Gamma_p}(\tic{\theta})=c'$, for $c'\in \coho{1}(L/N,F_K/\Gamma)$, can be expressed as a definable condition on a suitable definable set.
This section is an analogue of Section~\ref{sec:proof_main} but again is more complicated and requires new arguments (an insight into the differences between the sections can be gleaned from comparing Lemma~\ref{lem:first_o_formula_cohomology} to its counterpart Proposition~\ref{prop:Linearisation_twist}).
\subsection{Remarks on the positive characteristic case}
\label{sec:pos_char}
It is a natural question to ask whether FAb finitely generated compact $\ensuremath{\mathbb{F}}_{p}[\mkern-2mu [ t ]\mkern-2mu ]$-analytic groups have virtually rational representation zeta functions.
This is known for $\mathrm{SL}_2(\ensuremath{\mathbb{F}}_{q}[\mkern-2mu [ t ]\mkern-2mu ])$ with $q$ a power of an odd prime (see \cite[Theorem~7.5]{Jaikin-zeta}) and was asked more generally by Larsen and Lubotzky for groups which are $\ensuremath{\mathbb{F}}_{p}[\mkern-2mu [ t ]\mkern-2mu ]$-points of certain group schemes (see \cite[Problem~6.2]{Larsen-Lubotzky}).
Our proof of Theorem~\ref{thm:Main} can be seen as the first step towards a possible approach to this problem, as it avoids the Kirillov orbit method (which is unavailable in characteristic $p$) and Lie algebras (which are often less effective or inadequate in characteristic $p$).
Moreover, the model theoretic rationality result of Cluckers which we use has a version for uniformly definable families of equivalence relations over local fields of characteristic $p$, for large enough $p$ (see Nguyen \cite{ngu2016uniform}).
On the other hand, an essential ingredient in our proof of Theorem~\ref{thm:Main} is du~Sautoy's parametrisation of finite index subgroups of $G$, which only works in characteristic $0$. To go further, it seems that one will have to narrow down the set of those subgroups of a pro-$p$ group from which all irreducible representations can be obtained by induction of one-dimensional representations.
\section{Basics from model theory}
\label{sec:Basics from model theory}
We give a brief introduction to the notation and basic concepts we need from model theory, aimed at non-experts.
\subsection{Languages, structures and definability}
We start by introducing the key concepts of language, structure, and definability in the
classical setting and in the more complex context of many-sorted languages.
We refer the interested reader to the first chapters of \cite{mar2006model} and \cite{tenzie2012model}
for a more exhaustive exposition of the subject.
\subsubsection{Languages and structures}
\begin{defn}[{\cite[Definition~1.1.1]{mar2006model}}]
\label{def:language}
A {\em language} $\mathcal{L}$ is given by specifying the following data:
\begin{enumerate}
\item a set of function symbols $\mathcal{F}$ and positive integers $n_f$ for each
$f\in\mathcal{F}$;
\item a set of relation symbols $\mathcal{R}$ and positive integers $n_R$ for each
$R\in\mathcal{R}$;
\item a set of constant symbols $\mathcal{C}$.
\end{enumerate}
The positive integers $n_f$ and $n_R$ are called the {\em arity} of $f$ and $R$ respectively;
$f$ and $R$ are said to be $n_f$-ary and $n_R$-ary respectively.
\end{defn}
\begin{exmp}
The language $\mathcal{L}_{\mathrm{ring}} = \lbrace +,-, \cdot, 0, 1 \rbrace$ where $+$ and $\cdot$ are binary function
symbols, $-$ a unary function symbol and $0, 1$ are constants, is called the {\em ring language}.\par
The language $\mathcal{L}_{\mathrm{oag}} = \lbrace +, <, 0 \rbrace$ where $+$ is a binary
function symbol, $<$ is a binary relation symbol and $0$ is a constant, is called the {\em language of ordered abelian groups}.
\end{exmp}
As we shall see below, the choice of a language $\mathcal{L}$ determines the syntax of the logical statements
we are allowed to build. The process of constructing these statements is purely
formal and happens before the constituents of $\mathcal{L}$ are given any meaning.
If one wants to establish the truth of a statement constructed this way, one first needs to fix an interpretation
for the symbols in $\mathcal{L}$. This is how the concept of structure arises.
\begin{defn}[{\cite[Definition~1.1.2]{mar2006model}}]
\label{def:structure}
An $\mathcal{L}$-structure $\mathcal{M}$ is given by the following data:
\begin{enumerate}
\item a non-empty set $M$ called the {\em universe}, {\em domain} or {\em underlying
set} of $\mathcal{M}$;
\item a function $f^{\mathcal{M}}:M^{n_f}\rightarrow M$ for each $f\in \mathcal{F}$;
\item a set $R^{\mathcal{M}} \subseteq M^{n_R}$ for each $R\in \mathcal{R}$;
\item an element $c^{\mathcal{M}} \in M$ for each $c \in \mathcal{C}$.
\end{enumerate}
\end{defn}
\begin{exmp}
\label{exmp:two_structures}
Let $A$ be a ring. We define an $\mathcal{L}_{\mathrm{ring}}$-structure $\mathcal{M}$ by setting $ M = A$,
$0^{\mathcal{M}} = 0_A$, $1^{\mathcal{M}} = 1_A$, $+^{\mathcal{M}} = +_A$, $-^{\mathcal{M}}$ is the function associating
each element with its additive inverse, and $\cdot^{\mathcal{M}} = \cdot_A$.\par
Let $<_{\ensuremath{\mathbb{Z}}}$ be the order on $\ensuremath{\mathbb{Z}} \cup \lbrace -\infty \rbrace$. We construct an $\mathcal{L}_{\mathrm{oag}}$-structure
$\mathcal{M}$ with underlying set $\ensuremath{\mathbb{Z}} \cup \lbrace - \infty \rbrace$ by setting $0^{\mathcal{M}} = 0_\ensuremath{\mathbb{Z}}$, $<^{\mathcal{M}} = <_{\ensuremath{\mathbb{Z}}}$, and
\[
x +^{\mathcal{M}} y = \begin{cases}
x +_\ensuremath{\mathbb{Z}} y &\text{if } x,y\in \ensuremath{\mathbb{Z}}\\
-\infty &\text{otherwise}.
\end{cases}
\]
Note that this $\mathcal{L}_{\mathrm{oag}}$-structure is not a group.
\end{exmp}
\subsubsection{Definability}
Fixing a language $\mathcal{L}$ and a set of variables $\lbrace v_1, v_2,\dots \rbrace$ allows us to construct
formulas from them. In essence, an $\mathcal{L}$-formula is a string of quantifiers,
logical connectives, variables, and symbols from $\mathcal{L}$ which is formed according to some prescribed rules.
One says that a variable $v$ is {\em free} in an $\mathcal{L}$-formula if it is not inside the scope of a quantifier.
An $\mathcal{L}$-formula whose free variables are exactly $x_1, \dots, x_n$ is called an {\em $\mathcal{L}$-condition} on $x_1, \dots, x_n$.
If $\mathcal{M}$ is an $\mathcal{L}$-structure, free variables may be evaluated in the underlying set $M$ to establish if
a formula is true or false in $\mathcal{M}$. If $\varphi$ is an $\mathcal{L}$-formula with free variables
$(v_1,\dots,v_m)$ and $\tuple{a}\in M^m$ is such that
$\varphi(\tuple{a})$ is true in $\mathcal{M}$, one writes $\mathcal{M} \vDash\varphi(\tuple{a})$. See
\cite{mar2006model} for precise definitions.
\begin{defn}[{\cite[Definition~1.3.1]{mar2006model}}]
Let $\mathcal{L}$ be a language and $\mathcal{M} = (M, \dots)$ be an $\mathcal{L}$-structure. Let $\ell \in \ensuremath{\mathbb{N}}$.
We say that a set $A \subseteq M^\ell$ is {\em definable} (in $\mathcal{M}$) if there is an $\mathcal{L}$-formula
$\varphi$ with $\ell + m$ free variables
and $\tuple{b} \in M^m$ such that
\[
A = \lbrace \tuple{a} \in M^\ell \mid \mathcal{M} \vDash \varphi(\tuple{a},\tuple{b})\rbrace.
\]
A function $f^{\mathcal{M}}:M^{n_f}\rightarrow M$ is said to be {\em definable} if its graph is a definable subset of $M^{n_f+1}$.
A relation $R^{\mathcal{M}}$ is said to be {\em definable} if $R^{\mathcal{M}}$ is a definable subset of $M^{n_R}$.
\end{defn}
Notice that since projections are definable functions, the domain and the image of a definable function
are definable sets.\par
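The following classical example, stated for odd $p$, illustrates definability in a structure that will be central in this paper.
\begin{exmp}
Let $\mathcal{M}$ be the $\mathcal{L}_{\mathrm{ring}}$-structure with universe $\ensuremath{\mathbb{Q}}_p$ and the standard interpretations, and suppose that $p$ is odd. Then the valuation ring $\ensuremath{\mathbb{Z}}_p$ is a definable subset of $\ensuremath{\mathbb{Q}}_p$, since
\[
\ensuremath{\mathbb{Z}}_p = \lbrace a \in \ensuremath{\mathbb{Q}}_p \mid \mathcal{M} \vDash \exists y\, (y^{2} = 1 + p\cdot a^{2})\rbrace,
\]
where $p$ abbreviates the term $1 + \cdots + 1$ ($p$ times). Indeed, if $a \in \ensuremath{\mathbb{Z}}_p$, then $1 + pa^{2} \equiv 1 \bmod p$ is a square in $\ensuremath{\mathbb{Z}}_p$ by Hensel's lemma, while if $a \notin \ensuremath{\mathbb{Z}}_p$, then $1 + pa^{2}$ has odd valuation and is therefore not a square.
\end{exmp}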
%
It will sometimes be more convenient to talk about predicates rather than sets.
A predicate on a set $M$ is a Boolean valued function $P : M^{n_P} \to \{ 0, 1\}$
for some $n_P \in \ensuremath{\mathbb{N}}_0$. If $\mathcal{L}$ is a language and
$\mathcal{M} = (M, \dots)$ is an $\mathcal{L}$-structure we say that a predicate $P$ on $M$ is definable in
$\mathcal{M}$ if there is an $n$-tuple $\tuple{b} \in M^n$ and an $\mathcal{L}$-condition
$\Phi$ on $n_P + n$ variables such that
\[
P(\tuple{a}) = 1 \iff \mathcal{M} \vDash \varphi(\tuple{a},\tuple{b}).
\]
Clearly a set $A$ is definable if and only if the predicate $x \in A$ is definable.
\subsubsection{Many-sorted languages and structures}
There will be occasions in which we would like structures to have several sorts of underlying
sets. For instance the properties of valued fields will be more closely reflected by a language taking into
account that constant, functions, relations (as well as variables) may have values in the field or in the value
group or even in the residue field. This is made rigorous by allowing languages to have many sorts.
The following definitions are a reformulation of those at the end of \cite[Section~1.1]{tenzie2012model}.
\begin{defn}
Let $S$ be a set whose elements we call \emph{sorts}. An $\# S$-sorted language
with sorts $S$ is given by specifying the following data:
\begin{enumerate}
\item a set of function symbols $\ensuremath{\mathcal{F}}$, positive integers $n_f$
and tuples of sorts $\tuple{s}_f\in S^{n_f + 1}$ for each $f \in \ensuremath{\mathcal{F}}$;
\item a set of relation symbols $\ensuremath{\mathcal{R}}$, positive integers $n_R$
and tuples of sorts $\tuple{s}_R\in S^{n_R}$ for each $R \in \ensuremath{\mathcal{R}}$;
\item a set of constant symbols $\mathcal{C}_s$ for each sort in $S$.
\end{enumerate}
The tuples $\tuple{s}_f$ and $\tuple{s}_R$ are called the \emph{types} of $f$ and $R$, respectively.
\end{defn}
Note that a one-sorted language is a language in the sense of Definition~\ref{def:language}. If $f$ is a
function symbol in a language $\mathcal{L}$ with sorts $S$, the first $n_f$ elements of $\tuple{s}_f$ specify the
{\em source type} of $f$ and the last element specifies the {\em target sort} of $f$.
\begin{defn}
An $\mathcal{L}$-structure $\mathcal{M}$ (for a many-sorted language $\mathcal{L}$) is given by
\begin{enumerate}
\item a family of underlying sets $(M_s : s\in \mathcal{S})$;
\item a function $f^{\mathcal{M}}: M_{s_1} \times \cdots \times M_{s_{n}} \rightarrow M_{s}$
for each $f \in \mathcal{F}$ of type $(s_1, \dots, s_{n}, s)$;
\item a relation $R^{\mathcal{M}}\subseteq M_{s_1} \times \cdots \times M_{s_{n}}$
for each $R \in \mathcal{R}$ of type $(s_1, \dots, s_{n})$;
\item an element $c^\mathcal{M} \in M_{s}$ for each $c \in \mathcal{C}_s$ of sort $s$.
\end{enumerate}
\end{defn}
If $\mathcal{L}$ is a many-sorted language, the formation of formulas is analogous to the procedure for
one-sorted languages, the only difference being that one has to use a set of variables
for each sort and ensure that
variables of the correct sort are used in equalities, functions and relations. In analogy with the one-sorted case,
if $\mathcal{M}$ is an $\mathcal{L}$-structure and $(s_1, \dots, s_{n_P})$ is an $n_P$-tuple of sorts, a {\em predicate} on
$M_{s_1} \times \cdots \times M_{s_{n_P}}$ is a Boolean valued function on $M_{s_1} \times \cdots \times M_{s_{n_P}}$.
The predicate $P$ is said to be definable in $\mathcal{M}$ if there is an $n$-tuple of sorts
$(t_1, \dots, t_n)$, $\tuple{b} \in M_{t_1} \times \cdots \times M_{t_n}$, and an $\mathcal{L}$-condition $\Phi$ on
variables of sorts $(s_1, \dots, s_{n_P}, t_1, \dots, t_n)$ such that
$P(\tuple{a}) = 1 \iff \mathcal{M} \vDash \varphi(\tuple{a},\tuple{b})$. We say that a set $A$ is {\em definable} in
$\mathcal{M}$ if the predicate $x \in A$ is definable in $\mathcal{M}$. As before, functions and relations are definable
when their graph is definable.
We shall have occasion to consider two structures relative to
different languages. In such a situation we shall need the concept of definable interpretation in order to be able to
compare definable sets between the two structures. The following definition is a special case of
\cite[Section~5.3]{hod1993model}.
\begin{defn}
\label{def:interpretation}
Let $\mathcal{L}, \mathcal{L}'$ be two many-sorted languages and let $S'$ be the set of sorts of $\mathcal{L}'$.
Let also $\mathcal{M}$ be an $\mathcal{L}$-structure and
$\mathcal{M}'$ be an $\mathcal{L}'$-structure with underlying sets $M'_s$ for $s \in S'$.
A {\em definable interpretation} of $\mathcal{M}'$ in $\mathcal{M}$ consists of an $\mathcal{L}'$-structure $\mathcal{M}''$
such that
\begin{enumerate}
\item its underlying sets $M_s''$ for $s \in S'$ are definable in $\mathcal{M}$;
\item the interpretation of function (resp. relation) symbols in $\mathcal{L}'$ is by
functions (resp. relations) that are definable in $\mathcal{M}$;
\item \label{def:interpretation_iii}
and such that there are maps $h_s: M_s' \rightarrow M_s''$ ($s\in S'$) forming an isomorphism of
$\mathcal{L}'$-structures between $\mathcal{M}'$ and $\mathcal{M}''$.
\end{enumerate}
\end{defn}
See \cite[Definition~1.1.3]{tenzie2012model}
for the definition of isomorphism between one-sorted structures and \cite[Section~1.1]{tenzie2012model}
for the extension of the definition to many-sorted structures. In particular, a consequence of imposing \ref{def:interpretation_iii}
is that we are given an explicit identification of each $M'_s$ with a definable subset of
$M^{n_s}$ for some $n_s$, and that if $f'$ is a function symbol whose interpretation has domain
${M'_s}^{n'_s}$ and codomain $M'_t$, then the graph of this interpretation is a definable subset of
$(M^{n_s})^{n'_s} \times M^{n_t}$.
In the notation of the definition above, let $\mathcal{M}'$ be definably interpreted in $\mathcal{M}$ with maps $h_s: M'_s \rightarrow M_s''$ ($s \in S'$).
Let $\varphi$ be an $\mathcal{L}'$-formula with free variables of sorts $s_1, \dots, s_n$. Then there is an $\mathcal{L}$-formula $\psi$ such that
for all $(a_1,\dots, a_n) \in M'_{s_1}\times \cdots \times M'_{s_n}$
\[
\mathcal{M}' \vDash \varphi(a_1,\dots, a_n) \Longleftrightarrow \mathcal{M} \vDash \psi(h_{s_1}(a_1), \dots, h_{s_n}(a_n)).
\]
This follows directly from the definition of isomorphism between structures and from the fact that all interpretations
of the function and relation symbols of $\mathcal{L}'$ in $\mathcal{M}''$ are definable functions and relations
in $\mathcal{M}$ respectively. In particular, if $X\subseteq M'_{s_1} \times \cdots \times M'_{s_\ell}$ (for some $\ell \in \ensuremath{\mathbb{N}}$)
is a definable set in $\mathcal{M}'$ and $H = \prod_{i = 1}^{\ell} h_{s_i}$, then $H(X)$ is definable in $\mathcal{M}$.
\subsection{Structures for the $p$-adic numbers and definable integrals}
In the present paper
we shall only use languages that model the field $\ensuremath{\mathbb{Q}}_p$ and
$p$-adic analytic groups. Preceding literature has defined and used a number of
different languages for these two objects; we review and compare those that are relevant to our results.
We shall also cite model theoretic rationality results for power series arising from certain counting
problems involving definable sets and equivalence relations.
\subsubsection{The language of $p$-adic analytic groups}
We start by recalling the languages used by du Sautoy in \cite{duSautoy-rationality}. We let $\ensuremath{\mathbb{N}}_0$ denote the set of non-negative integers.
\begin{defn}[{\cite[Definition~1.6]{duSautoy-rationality}, \cite[Section~0.6]{Denef-vandenDries}}]
Let $\lan_{\mathrm{an}}^D$ be the language with
\begin{enumerate}
\item for $m \geq 0$, an $m$-ary function symbol $F$ for each {\em convergent} power series
\[
F(\tuple{X}) = \sum_{\tuple{i} \in \ensuremath{\mathbb{N}}_0^m} a_{\tuple{i}} X_1^{i_1}\cdots X_m^{i_m}
\in \ensuremath{\mathbb{Z}}_p[\mkern-2mu [ \tuple{X} ]\mkern-2mu ],
\]
that is, such that $\lvert a_\tuple{i}\rvert \to 0$ as $i_1 +\cdots +i_m \to \infty$;
\item a binary function symbol $D$;
\item a unary relation symbol $P_n$ for each $n > 0$.
\end{enumerate}
\end{defn}
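For instance, $\lan_{\mathrm{an}}^D$ contains a unary function symbol for the series $\sum_{i\in\ensuremath{\mathbb{N}}_0} p^{i} X^{i}$, whose coefficients satisfy $\lvert p^{i}\rvert = p^{-i}\to 0$; by contrast, there is no symbol for $\sum_{i\in\ensuremath{\mathbb{N}}_0} X^{i}$, whose coefficients do not tend to $0$.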
We define an $\lan_{\mathrm{an}}^D$-structure $\struc^D_{\mathrm{an}}$ with underlying set $\ensuremath{\mathbb{Z}}_p$ by the following interpretation
rules:
\begin{enumerate}
\item each function symbol $F$ is interpreted as the function $\ensuremath{\mathbb{Z}}_p^m\rightarrow\ensuremath{\mathbb{Z}}_p$ defined by the convergent power series it corresponds to in the
definition of $\lan_{\mathrm{an}}^D$.
\item We define $D^{\struc^D_{\mathrm{an}}}: \ensuremath{\mathbb{Z}}_p^{2}\rightarrow \ensuremath{\mathbb{Z}}_p$ by
\[
D^{\struc^D_{\mathrm{an}}}(x,y) = \begin{cases}
x/y &\text{if } \lvert x \rvert \leq \lvert y \rvert \text{ and } y\neq 0,\\
0 &\text{otherwise}.
\end{cases}
\]
\item For $n > 0$ we define $P_n^{\struc^D_{\mathrm{an}}}$ to be the set of non-zero $n$-th powers in $\ensuremath{\mathbb{Z}}_p$.
\end{enumerate}
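As a sample use of the symbol $D$ (not needed later), note that the relation $\lvert x\rvert\leq\lvert y\rvert$ on $\ensuremath{\mathbb{Z}}_p^{2}$ is defined by the quantifier-free condition
\[
D(x,y)\, y = x:
\]
when $\lvert x\rvert\leq\lvert y\rvert$ this holds by the definition of $D^{\struc^D_{\mathrm{an}}}$ (trivially so if $x=y=0$), while in all remaining cases $D^{\struc^D_{\mathrm{an}}}(x,y)=0$ and the equation fails because $x\neq 0$.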
It will often be necessary for us to show definability in structures whose underlying set is a pro-$p$ group.
In these situations, following \cite{duSautoy-rationality}, we shall use the following language whose constants,
functions and relations closely resemble the natural ones in a normal pro-$p$ subgroup $N$ inside a
$p$-adic analytic group $G$.
The following definition is a slightly modified version of \cite[Definition~1.13]{duSautoy-rationality}.
\begin{defn}
\label{def:L_G} Let $N$ be a normal pro-$p$ subgroup of a $p$-adic analytic group $G$.
The language $\mathcal{L}_N$ has two sorts $s_1$ (also called the {\em group sort}) and $s_2$,
constant symbols in the sort $s_1$ for each element of $N$, and a binary relation
symbol $x\mid y$ of sort $(s_1, s_1)$. We have the following function symbols, which all have
target sort $s_1$:
\begin{enumerate}
\item a binary function symbol $x.y$ of source type $(s_1, s_1)$;
\item a unary function symbol $x^{-1}$ of source type $s_1$;
\item a binary function symbol $x^{\lambda}$ of source type $(s_1, s_2)$;
\item \label{def:L_G:aut}
for each $g\in G$, a unary function symbol $\varphi_g$ of source type $s_1$.
\end{enumerate}
\end{defn}
We define an $\mathcal{L}_N$-structure $\mathcal{M}_N$ with underlying set $N$ for the first sort and $\ensuremath{\mathbb{Z}}_p$ for the second sort.
The interpretation of the symbols in $\mathcal{L}_N$ as operations in the group $N$ is immediately suggested by the
notation, with the exception of $x\mid y$ and the functions $\varphi_g$. The latter are interpreted as the conjugation
functions $N\rightarrow N$, $x\mapsto gxg^{-1}$ (which map $N$ to itself because $N$ is normal in $G$). In order to interpret $x\mid y$ we will use that $N$ is a pro-$p$ group.
Recall that the lower $p$-series of a pro-$p$ group $H$ is defined as $H_{1}\geq H_{2}\geq\cdots$, where
\[
H_{1} = H,\quad\text{and}\quad H_{i+1} = \overline{H_{i}^{p}[H_{i},H]},
\]
with $\overline{H_{i}^{p}[H_{i},H]}$ denoting the closure of $H_{i}^{p}[H_{i},H]$ as a topological subgroup of $H$.
We interpret $x\mid y$ as the relation $\omega(x) \geq \omega(y)$, where $\omega$
is defined as follows.
\begin{defn}[{\cite[Definition~1.12]{duSautoy-rationality}}] \label{def:omega}
Define $\omega: N\rightarrow \ensuremath{\mathbb{N}} \cup \lbrace \infty \rbrace$ by $\omega(g) = n$ if
$g\in N_n\setminus N_{n+1}$ and $\omega(1) = \infty$.
\end{defn}
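For example, if $N=\ensuremath{\mathbb{Z}}_p$ (written additively), then $N_{i}=p^{i-1}\ensuremath{\mathbb{Z}}_p$ for all $i$, so that $\omega(g)=v_p(g)+1$ for $g\neq 0$, where $v_p$ denotes the usual $p$-adic valuation; thus $\omega$ is essentially the valuation, shifted so that $\omega$ takes values in $\ensuremath{\mathbb{N}}\cup\{\infty\}$.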
\subsubsection{The analytic language of the $p$-adic numbers}\label{subsubsec:anlang} We recall the definition of the language
used in \cite[Appendix~A]{hrumar2015definable}.
\begin{defn}
The language $\lan_{\mathrm{an}}$ is a three-sorted language with a valued field sort VF, a
value group sort VG and a residue field sort RF. We have all constants, functions and relations of
$\mathcal{L}_{\mathrm{ring}}$ for the valued field and residue field sorts, and all constants, functions
and relations of $\mathcal{L}_{\mathrm{oag}}$ for the value group sort. In addition, we have:
\begin{enumerate}
\item for $m\geq 0$, a function symbol $f$ with source type $\mathrm{VF}^m$ and target sort VF
for each convergent power series in $m$ variables with coefficients in
$\ensuremath{\mathbb{Z}}_p$;
\item a function symbol $\mathrm{ord}$ with source type VF and target sort VG;
\item a function symbol $\overline{\mathrm{ac}}$ with source type VF and target sort RF.
\end{enumerate}
\end{defn}
We define an $\lan_{\mathrm{an}}$-structure $\struc_{\mathrm{an}}$
with underlying sets $\ensuremath{\mathbb{Q}}_p$ for the valued field sort, $\ensuremath{\mathbb{Z}} \cup \lbrace -\infty \rbrace$
for the value group, and $\ensuremath{\mathbb{F}_p}$ (the field with $p$ elements) for the residue field sort. The constants,
functions and relations of $\mathcal{L}_{\mathrm{ring}}$ and $\mathcal{L}_{\mathrm{oag}}$ are interpreted in the usual way (see
Example~\ref{exmp:two_structures}). The function symbols $f$ are interpreted as \emph{restricted} analytic functions
defined by the power series they correspond to, that is,
\begin{align*}
f^{\struc_{\mathrm{an}}}: \ensuremath{\mathbb{Q}}_p^m &\longrightarrow \ensuremath{\mathbb{Q}}_p\\
\tuple{X} &\longmapsto {\begin{cases}
\sum_{\tuple{i} \in \ensuremath{\mathbb{N}}_0^m} a_{\tuple{i}} X_1^{i_1}\cdots X_m^{i_m}
&\text{if } \tuple{X} \in \ensuremath{\mathbb{Z}}_p^m\\
0 &\text{otherwise}.
\end{cases}}
\end{align*}
The function symbol $\mathrm{ord}$ is interpreted as the valuation map on $\ensuremath{\mathbb{Q}}_p$ (the valuation of $0$ is $-\infty$). Finally
the function symbol $\overline{\mathrm{ac}}$ is interpreted as $\overline{\mathrm{ac}}^{\struc_{\mathrm{an}}}: \ensuremath{\mathbb{Q}}_p \rightarrow \ensuremath{\mathbb{F}_p}$
sending $0$ to $0$ and $x$ to
\[
x p^{- \mathrm{ord}^{\struc_{\mathrm{an}}}(x)} \mod p.
\]
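For instance, for $p=5$ and $x=75=3\cdot 5^{2}$ we have $\mathrm{ord}^{\struc_{\mathrm{an}}}(x)=2$ and $\overline{\mathrm{ac}}^{\struc_{\mathrm{an}}}(x)=75\cdot 5^{-2}\bmod 5=3$; thus $\overline{\mathrm{ac}}$ extracts the first non-zero coefficient (the angular component) of the $p$-adic expansion.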
Let $N$ be a uniform pro-$p$ group and let $n_1,\dots, n_d$ be a minimal set of topological generators for
$N$. By \cite[Proposition~3.7]{DdSMS}, $N$ is in bijection with $\ensuremath{\mathbb{Z}}_p^d$
via the map $(\lambda_1,\dots, \lambda_d)\mapsto n_1^{\lambda_1}\cdots n_d^{\lambda_d}$. If $g\in N$ is such that
$ g = n_1^{\lambda_1}\cdots n_d^{\lambda_d}$ for some $\lambda_1, \dots, \lambda_d \in \ensuremath{\mathbb{Z}}_p$ we say that
$(\lambda_1,\dots, \lambda_d)$ are its $\ensuremath{\mathbb{Z}}_p$-coordinates (with respect to $n_1, \dots, n_d$).
\begin{lem}
\label{lem:int_M_N}
Suppose that $N$ is a uniform normal pro-$p$ subgroup of a compact $p$-adic analytic group $G$.
Then $\mathcal{M}_N$ is definably interpreted in $\struc^D_{\mathrm{an}}$. Moreover $\struc^D_{\mathrm{an}}$ is definably interpreted in
$\struc_{\mathrm{an}}$ (and so $\mathcal{M}_N$ is definably interpreted in $\struc_{\mathrm{an}}$).
\end{lem}
\begin{proof}
Fix a minimal set of topological generators for $N$. Passing to $\ensuremath{\mathbb{Z}}_p$-coordinates for $N$,
Theorem~1.18 in \cite{duSautoy-rationality} gives the definable sets in $\struc^D_{\mathrm{an}}$;
the map $h_{s_1}$ in Definition~\ref{def:interpretation}; the definable interpretation of the constants
as tuples in $\ensuremath{\mathbb{Z}}_p$; and the interpretation of $x.y$, $x^{-1}$, $\varphi_g$, and $x\mid y$.
The underlying set $\ensuremath{\mathbb{Z}}_p$ is interpreted as itself (so that $h_{s_2} = \mathrm{id}_{\ensuremath{\mathbb{Z}}_p}$) and
Lemma~1.19 in \cite{duSautoy-rationality} gives the interpretation of $x^{\lambda}$.\par
It is easy to construe the function symbol $D$ of $\lan_{\mathrm{an}}^D$ as a definable function in $\struc_{\mathrm{an}}$,
so that the structure $\struc^D_{\mathrm{an}}$ is definably interpreted in $\struc_{\mathrm{an}}$.
\end{proof}
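For the reader's convenience, here is one way to carry out the last step. Up to restricting all variables to $\ensuremath{\mathbb{Z}}_p$ (a definable set, cut out by $\mathrm{ord}(x)\geq 0$), the graph of $D$ is defined by
\[
D(x,y)=z \iff \bigl(y\neq 0\land \mathrm{ord}(x)\geq\mathrm{ord}(y)\land x = z\,y\bigr)\lor\bigl((y=0\lor \mathrm{ord}(x)<\mathrm{ord}(y))\land z=0\bigr),
\]
which is an $\lan_{\mathrm{an}}$-condition since $\mathrm{ord}$ is in the language and the ordering on the value group sort belongs to $\mathcal{L}_{\mathrm{oag}}$. (With the convention $\mathrm{ord}(0)=-\infty$, the case $x=0$ correctly falls under the second disjunct.)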
\subsubsection{Rationality and definable enumerations in $\struc_{\mathrm{an}}$}\label{subsubsec:Cluckers-rationality}
We conclude this exposition with the main technical result that will allow us to prove rationality of
certain series enumerating equivalence classes of families of equivalence relations.
\begin{defn}
Let $Y\subseteq \ensuremath{\mathbb{Z}}$ be a definable set in $\struc_{\mathrm{an}}$ and let $d\in \ensuremath{\mathbb{N}}$. A family of equivalence relations
$\lbrace \mathbin{\mathcal{E}}_{n}\rbrace_{n \in Y}$ on definable sets $X_n\subseteq \ensuremath{\mathbb{Q}}_p^d$ ($n\in Y$) is said to be
{\em definable} if there is a definable set $X \subseteq \ensuremath{\mathbb{Q}}_p^d$
and a definable relation $F \subseteq (X \times X) \times Y$ such that
$\mathbin{\mathcal{E}}_n = \{(x,x')\in X\times X\mid ((x,x'),n)\in F\}$ for all $n \in Y$.
\end{defn}
\begin{thm}[{\cite[Theorem~A.2]{hrumar2015definable}}]
\label{thm:rational_series}
Let $d \in \ensuremath{\mathbb{N}}$. Let $\mathbin{\mathcal{E}}_n$ be a definable family of equivalence relations in $\struc_{\mathrm{an}}$ on (definable)
sets $X_n \subseteq \ensuremath{\mathbb{Q}}_p^d$, for $n\in \ensuremath{\mathbb{N}}_0$.
Suppose that for each $n \in \ensuremath{\mathbb{N}}_0$ the quotient $X_n/\mathbin{\mathcal{E}}_n$ is finite, say, of
size $a_n$. Then the Poincar\'e series
\[
\sum_{n \in \ensuremath{\mathbb{N}}_0} a_n t^n
\]
is a rational power series in $t$ over $\ensuremath{\mathbb{Q}}$ whose denominator is a product of factors of the
form $(1 - p^i t^j)$ for some integers $i,j$ with $j > 0$.
\end{thm}
\begin{proof}
The proof is the same as the one at the end of Appendix~A in \cite{hrumar2015definable}.
The only difference is that instead of
setting $Y$ to be the set of non-negative integers, we set
$Y = \lbrace n \in \ensuremath{\mathbb{N}}_0 \mid X_n \neq \emptyset\rbrace$. This set
is definable in $\struc_{\mathrm{an}}$ as it is the projection on the $\ensuremath{\mathbb{Z}}$-component of the relation defining the
family $\{ \mathbin{\mathcal{E}}_n\}$. The rest of the proof remains
unchanged.
\end{proof}
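To illustrate the theorem, take $X_n=\ensuremath{\mathbb{Z}}_p$ for all $n\in\ensuremath{\mathbb{N}}_0$ and let $x\mathbin{\mathcal{E}}_n y$ if and only if $x=y$ or $\mathrm{ord}(x-y)\geq n$ (the first disjunct is needed because $\mathrm{ord}(0)=-\infty$ in our convention). This family is definable in $\struc_{\mathrm{an}}$, with $n$ ranging over the value group sort, and $X_n/\mathbin{\mathcal{E}}_n\cong\ensuremath{\mathbb{Z}}_p/p^{n}\ensuremath{\mathbb{Z}}_p$ has $a_n=p^{n}$ elements, so the Poincar\'e series is
\[
\sum_{n\in\ensuremath{\mathbb{N}}_0}p^{n}t^{n}=\frac{1}{1-pt},
\]
whose denominator is of the asserted form $(1-p^{i}t^{j})$ with $i=j=1$.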
\section{Preliminaries on projective representations}
\label{sec:Prel on proj reps}
The main representation theoretic steps of our proof (Section~\ref{sec:red_deg_one})
will use projective representations and projective characters of (pro-)finite
groups. In this section, we collect the definitions and results that
we will need. We use \cite{Isaacs}, \cite{Karpilovsky2} and \cite{Karpilovsky3}
as sources for this theory (precise references for the non-trivial results are given below).
In the following, we regard any $\GL_{n}(\ensuremath{\mathbb{C}})$ as endowed with its discrete topology. All the definitions and results in this section apply to finite groups, regarded as discrete profinite groups. In fact, the results are trivial generalisations from the finite group case, because we consider only continuous representations and finite index subgroups. We will, however, need to apply the results to infinite profinite groups.
From now on, we will consider only continuous representations and their characters. Let $G$ be a profinite group and $N$ an open normal subgroup. We define $\Irr(G)$ to be the set of characters of continuous irreducible complex representations of $G$.
For any subgroup $K\leq G$ and $\theta\in\Irr(K)$, we let $\Irr(G\mid\theta)$
denote the set of irreducible characters of $G$ whose restriction
to $K$ contains $\theta$. The elements of $\Irr(G\mid\theta)$ are said to \emph{lie above} or to \emph{contain} $\theta$.
For any $K\leq G$, we write
\[
\Irr_{K}(N)=\{\theta\in\Irr(N)\mid\Stab_{G}(\theta)=K\}
\]
for the irreducible characters of $N$ whose stabiliser under the
conjugation action of $G$ is precisely $K$.
We call $(K,N,\theta)$ a \emph{character triple} if $\theta\in\Irr(N)$ and $K$ fixes $\theta$, that is, if $K\leq\Stab_G(\theta)$. Thus $\theta\in\Irr_G(N)$ if and only if $(G,N,\theta)$ is a character triple.
\begin{defn}
A \emph{projective representation} of $G$ is a continuous function
$\rho:G\rightarrow\GL_{n}(\ensuremath{\mathbb{C}})$, such that there exists a continuous
function $\alpha:G\times G\rightarrow\ensuremath{\mathbb{C}}^{\times}$ satisfying
\[
\rho(g)\rho(h)=\rho(gh)\alpha(g,h)\qquad\text{for all }g,h\in G.
\]
The function $\alpha$ is called the \emph{factor set} of $\rho$.
The \emph{projective character} of $\rho$ is the function $G\rightarrow\ensuremath{\mathbb{C}}$
given by $g\mapsto\tr(\rho(g))$.
\end{defn}
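A classical example showing that non-trivial factor sets occur already for very small groups: for the Klein four-group $(\ensuremath{\mathbb{Z}}/2)^{2}$, the assignment
\[
\rho(g_1,g_2)=\begin{pmatrix}0&1\\1&0\end{pmatrix}^{\!g_1}\begin{pmatrix}1&0\\0&-1\end{pmatrix}^{\!g_2}
\]
is a projective representation with factor set $\alpha\bigl((g_1,g_2),(h_1,h_2)\bigr)=(-1)^{g_2h_1}$. Since $\alpha$ is not symmetric, while every $2$-coboundary of an abelian group is symmetric, the class of $\alpha$ is non-trivial, and hence $\rho$ cannot be rescaled to a linear representation.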
Just like for finite groups, one shows that the factor sets on $G\times G$ are precisely the elements in the group $\cocy{2}(G):=\cocy{2}(G,\ensuremath{\mathbb{C}}^{\times})$ of continuous $2$-cocycles with values in $\ensuremath{\mathbb{C}}^{\times}$ (see \cite[(11.6)]{Isaacs}). Moreover, we have the subgroup $\cobo{2}(G):=\cobo{2}(G,\ensuremath{\mathbb{C}}^{\times})$ of $2$-coboundaries and the cohomology group $\coho{2}(G)=\cocy{2}(G)/\cobo{2}(G)$, which is also called the \emph{Schur multiplier} of $G$. It is well known that the Schur multiplier of a finite group is finite (see \cite[(11.15)]{Isaacs}).
Two projective representations $\rho$ and $\sigma$ are said to be \emph{similar} if there exists a $T\in\GL_{n}(\ensuremath{\mathbb{C}})$ such that $\rho(g)=T\sigma(g)T^{-1}$, for all $g\in G$. Two projective representations have the same projective character if and only if they are similar. Note that there exists a notion of equivalent projective representations which we will not use.
Projective representations with factor set $\alpha$ naturally correspond to modules for the twisted group algebra $\ensuremath{\mathbb{C}}[G]^{\alpha}$ (see, e.g., \cite[Section~11]{Isaacs}). It is well known that
this algebra is semisimple. A projective representation $\Theta$ with factor set $\alpha$ and the character it affords are called \emph{irreducible} if $\Theta$ corresponds to a simple $\ensuremath{\mathbb{C}}[G]^{\alpha}$-module. We let
\[
\PIrr_{\alpha}(G)
\]
denote the set of irreducible projective characters of $G$ with factor set $\alpha$.
\begin{defn}
\label{def:stron_ext}
Let $\Theta$ be an irreducible representation of $N$ fixed by $K\leq G$. We say that a projective representation $\Pi$ of $K$ \emph{strongly
extends} (or is a \emph{strong extension} of) $\Theta$ if for all $g\in K$ and $n\in N$, we have:
\begin{enumerate}
\item $\Pi(n)=\Theta(n)$,
\item $\Pi(ng)=\Pi(n)\Pi(g)$,
\item $\Pi(gn)=\Pi(g)\Pi(n)$.
\end{enumerate}
Moreover, in this situation, we say that the projective character
of $\Pi$ \emph{strongly extends} (or is a \emph{strong extension} of) the character of $\Theta$.
\end{defn}
\begin{lem}
\label{lem:factor-set-gn} Let $\Theta$ be an irreducible representation
of $N$ fixed by $K\leq G$ and let $\Pi$ be a projective representation
of $K$ with factor set $\alpha$ such that $\Pi(n)=\Theta(n)$, for
all $n\in N$. Then $\Pi$ strongly extends $\Theta$ if and only
if for all $g\in K$ and $n\in N$,
\[
\alpha(g,n)=\alpha(n,g)=1.
\]
\end{lem}
\begin{proof}
We have
\[
\Pi(ng)\alpha(n,g)=\Pi(n)\Pi(g),
\]
so $\Pi(ng)=\Pi(n)\Pi(g)$ is equivalent to $\alpha(n,g)=1$. Similarly
$\Pi(gn)=\Pi(g)\Pi(n)$ is equivalent to $\alpha(g,n)=1$.
\end{proof}
\begin{thm}
\label{thm:Clifford-map}Let $\Theta$ be an irreducible representation
of $N$ fixed by $K\leq G$. There exists a projective representation $\Pi$
of $K$ which strongly extends $\Theta$. Let $\hat{\alpha}$ be the
factor set of $\Pi$. Then $\hat{\alpha}$ is constant on pairs of cosets of
$N$ in $K$, so we have a well-defined element $\alpha\in \cocy{2}(K/N)$
given by
\[
\alpha(gN,hN)=\hat{\alpha}(g,h).
\]
Moreover, we have a well-defined function
\[
\ensuremath{\mathcal{C}}_{K}:\{\theta\in\Irr(N)\mid K\leq \Stab_G(\theta)\}\longrightarrow \coho{2}(K/N),\qquad\ensuremath{\mathcal{C}}_{K}(\theta)=[\alpha].
\]
\end{thm}
\begin{proof}
Since $N$ is open in $K$ and every representation of $N$ factors
through a finite quotient, we can reduce to the case of finite groups.
Now, the statements are contained in (the proofs of) \cite[(11.2) and (11.7)]{Isaacs}.
\end{proof}
\begin{lem}
\label{lem:same_fs}
Let $\theta$ be an irreducible character of $N$ fixed by $K\leq G$, let
$\alpha\in\cocy{2}(K/N)$ be a representative of the cohomology class $\ensuremath{\mathcal{C}}_K(\theta)$ and let $\hat{\alpha}$ be the pull-back given by $\hat{\alpha}(g,h)=\alpha(gN,hN)$, for $g,h\in K$.
Assume that $\hat{\alpha}$ is trivial on $N \times N$ (i.e., not merely constant but identically equal to $1$). Then there exists a strong extension of $\theta$ to $K$ with factor set $\hat{\alpha}$.
\end{lem}
\begin{proof}
Let $\hat{\theta}$ be a strong extension of $\theta$. Let $\hat{\beta}$ be the factor set of $\hat{\theta}$ and $\beta\in\cocy{2}(K/N)$ such that $\beta(gN,hN)=\hat{\beta}(g,h)$.
Then (by Theorem~\ref{thm:Clifford-map}) there is a $\delta\in \cobo{2}(K/N)$ such that $\alpha = \beta \delta$. Pulling back to $K\times K$, we get $\hat{\alpha}=\hat{\beta}\hat{\delta}$, where $\hat{\delta}(g,h)=\delta(gN,hN)$. By definition of
coboundary, there is a function $\gamma: K/N \to \ensuremath{\mathbb{C}}^{\times}$ such that for all
$g,h \in K$, $\delta(gN, hN) = \gamma(ghN)^{-1} \gamma(gN) \gamma(hN)$.
Thus
\[
\hat{\delta}(g,h)= \hat{\gamma}(gh)^{-1} \hat{\gamma}(g) \hat{\gamma}(h),
\]
where $\hat{\gamma}$ is the pull-back of $\gamma$.
Since both $\hat{\alpha}$ (by assumption) and $\hat{\beta}$ (by Lemma~\ref{lem:factor-set-gn}) are trivial on $N \times N$, we have that $\hat{\delta}$ is also trivial on $N\times N$. It follows that the restriction of $\hat{\gamma}$ to $N$ is a homomorphism. But $\hat{\gamma}$ is constant on $N$ (being pulled back from $K/N$), and the only constant homomorphism is the trivial one, so $\hat{\gamma}$ is identically $1$ on $N$. We conclude
that $\hat{\gamma} \hat{\theta}$ is a strong extension of $\theta$ with factor set $\hat{\alpha}$.
\end{proof}
For any $H\leq G$ and factor set $\alpha\in \cocy{2}(G)$,
we denote the restriction of $\alpha$ to $H\times H$ by $\alpha_{H}$.
Suppose that $H$ is open in $G$, and let $\alpha\in \cocy{2}(G)$.
If $\chi$ is a projective character of $H$ with factor set $\alpha_{H}$, we define the \emph{induced projective character}
$\Ind_{H,\alpha}^{G}\chi$ as the character of the induced projective representation given by tensoring
with the twisted group algebra $\ensuremath{\mathbb{C}}[G]^{\alpha}$ (see \cite[I, Section~9]{Karpilovsky3}). Then
$\Ind_{H,\alpha}^{G}\chi$ is a projective character of $G$ with factor set $\alpha$. A projective
character with trivial factor set is the character of a linear representation and in this case we omit
the factor set, so that our notation coincides with the standard notation for induced characters of linear representations.
In Section~\ref{sec:red_deg_one} we will freely use
basic facts about projective characters which are direct analogues
of well known results for ordinary characters. For example, we have
Frobenius reciprocity \cite[Ch.~1, Lemma~9.18]{Karpilovsky3}, Mackey's
intertwining number formula \cite[Ch.~1, Theorem~8.6]{Karpilovsky3},
and the fact that the inner product $\langle\chi,\chi'\rangle$ of
two projective characters, with $\chi$ irreducible, equals the multiplicity
of $\chi$ as an irreducible constituent of $\chi'$ \cite[Ch.~1, Lemma~8.10]{Karpilovsky3}.
\begin{lem}
\label{lem:projective-monomial}Let $P$ be a pro-$p$ group. Then
any projective representation of $P$ is induced from a one-dimensional
projective representation of an open subgroup of $P$.
\end{lem}
\begin{proof}
By definition, every projective representation of $P$ factors through
a finite quotient. Since a finite $p$-group is supersolvable, the
result now follows from \cite[Ch.~3, Theorem~11.2]{Karpilovsky2}.
\end{proof}
\subsection{Projective representations and Clifford theory}
If two projective representations of a group $G$ have factor sets $\alpha$ and $\beta$, respectively, then their tensor product has factor set $\alpha \beta$. This is an immediate consequence of the definitions, but is a fact that we will use repeatedly throughout the paper.
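Indeed, if $\rho$ and $\rho'$ have factor sets $\alpha$ and $\beta$ respectively, then
\[
(\rho\otimes\rho')(g)\,(\rho\otimes\rho')(h)=\rho(g)\rho(h)\otimes\rho'(g)\rho'(h)=\alpha(g,h)\beta(g,h)\,(\rho\otimes\rho')(gh).
\]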
The following two lemmas are due to Clifford \cite[Theorems~3--5]{Clifford-1937}.
\begin{lem}
\label{lem:Clifford-extensions}Let $(K,N,\theta)$ be a character
triple. Let $\hat{\theta}\in\PIrr_{\hat{\alpha}}(K)$ be a strong
extension of $\theta$, so that $\ensuremath{\mathcal{C}}_{K}(\theta)=[\alpha]$. For any
$\bar{\pi}\in\PIrr_{\alpha^{-1}}(K/N)$, let $\pi\in\PIrr_{\hat{\alpha}^{-1}}(K)$
denote the pull-back of $\bar{\pi}$ along the map $K\rightarrow K/N$.
Then the function
\[
\PIrr_{\alpha^{-1}}(K/N)\longrightarrow\Irr(K\mid\theta),\qquad\bar{\pi}\longmapsto
\hat{\theta}\pi
\]
is a bijection.
\end{lem}
\begin{proof}
Since $\theta$ factors through a finite group, the statements immediately
reduce to the case where $K$ and $N$ are finite. The fact that $\bar{\pi}\longmapsto\hat{\theta}\pi$
is a function with the given domain and codomain is proved in \cite[Theorem~5.8\,(ii)]{Nagao-Tsushima} (in the context of projective representations; this immediately implies the corresponding fact for projective characters),
and the fact that it is surjective is \cite[Theorem~5.8\,(i)]{Nagao-Tsushima}.
We prove injectivity using a simplified version of the argument in
\cite[p.~545-546]{Clifford-1937}. Let $\Theta$ be a $K$-fixed irreducible
representation of $N$ and let $\widehat{\Theta}$ be
a strong extension of $\Theta$ to $K$ with factor set $\hat{\alpha}$.
Let $\overline{\Pi},\overline{\Pi}'$ be irreducible projective representations
of $K/N$ with factor set $\alpha^{-1}$, and let $\Pi,\Pi'$ be their
pull-backs to $K$. Let $d=\dim\widehat{\Theta}=\dim\Theta$, $e=\dim\Pi=\dim\overline{\Pi}$
and $e'=\dim\Pi'=\dim\overline{\Pi}'$. Assume that $\widehat{\Theta}\otimes\Pi$
is similar to $\widehat{\Theta}\otimes\Pi'$. Then $\Pi\otimes\widehat{\Theta}$
is also similar to $\Pi'\otimes\widehat{\Theta}$, that is, there exists
a $P\in\GL_{de}(\ensuremath{\mathbb{C}})$ such that for all $k\in K$, we have
\[
P^{-1}(\Pi(k)\otimes\widehat{\Theta}(k))P=\Pi'(k)\otimes\widehat{\Theta}(k).
\]
Then, for any $n\in N$, we have $P^{-1}(\hat{\alpha}(1,1)^{-1}I_{e}\otimes\Theta(n))P=\hat{\alpha}(1,1)^{-1}I_{e}\otimes\Theta(n)$,
and thus
\[
P^{-1}(I_{e}\otimes\Theta(n))P=I_{e}\otimes\Theta(n).
\]
The matrix $I_{e}\otimes\Theta(n)$ is the value at $n$ of the representation
$\Theta^{\oplus e}$, so Schur's lemma implies that $P$ is a block matrix consisting
of $e^2$ scalar blocks of size $d\times d$, that is, $P=Q\otimes I_{d}$, for some
$Q\in\GL_{e}(\ensuremath{\mathbb{C}})$. Hence, for all $k\in K$,
\[
0=P^{-1}(\Pi(k)\otimes\widehat{\Theta}(k))P-\Pi'(k)\otimes\widehat{\Theta}(k)=(Q^{-1}\Pi(k)Q-\Pi'(k))\otimes\widehat{\Theta}(k).
\]
This implies that $\widehat{\Theta}(k)\otimes(Q^{-1}\Pi(k)Q-\Pi'(k))=0$,
so since $\widehat{\Theta}(k)$ is non-zero, we must have $Q^{-1}\Pi(k)Q=\Pi'(k)$,
by the definition of Kronecker product. We have thus proved that if $\widehat{\Theta}\otimes\Pi$
has the same character as $\widehat{\Theta}\otimes\Pi'$, then $\Pi$ has the same character as $\Pi'$, and this proves the asserted injectivity.
\end{proof}
\begin{lem}
\label{lem:Clifford-degree-ratios}Let $\theta,\theta'\in\Irr(N)$ be two characters fixed by $K$ such that $\ensuremath{\mathcal{C}}_{K}(\theta)=\ensuremath{\mathcal{C}}_{K}(\theta')=[\alpha]$, for some $\alpha\in\cocy{2}(K/N)$. Let $\hat{\theta},\hat{\theta}'\in\PIrr_{\hat{\alpha}}(K)$ be strong extensions of $\theta$ and $\theta'$, respectively, where $\hat{\alpha}$ is the pull-back of $\alpha$ to $K$ (such $\hat{\theta}$ and $\hat{\theta}'$ exist thanks to Lemma~\ref{lem:same_fs}).
Then we have a bijection
$\sigma:\Irr(K\mid\theta)\rightarrow \Irr(K\mid\theta')$,
$\hat{\theta}\pi\mapsto\hat{\theta}'\pi$,
where $\pi$ is the pull-back of $\bar{\pi}\in\PIrr_{\alpha^{-1}}(K/N)$,
such that
\[
\frac{(\hat{\theta}\pi)(1)}{\theta(1)}=\frac{\sigma(\hat{\theta}\pi)(1)}{\theta'(1)}.
\]
\end{lem}
\begin{proof}
Lemma~\ref{lem:Clifford-extensions} implies that $\sigma$ is a bijection. For the statement regarding ratios of degrees, it remains to note that
\[
(\hat{\theta}\pi)(1)=\hat{\theta}(1)\pi(1) \quad\text{and}\quad(\hat{\theta}'\pi)(1)=\hat{\theta}'(1)\pi(1).
\]
\end{proof}
The following is a well known result from the cohomology of finite
groups. Note that we write the abelian group structure of cohomology groups multiplicatively as this will be more natural for the cohomology groups we will consider.
\begin{lem}
\label{lem:basic-group-cohomology}Let $G$ be a finite group of order
$m$ and let $A$ be a $G$-module. For any integer $i\geq1$, the
following holds:
\begin{enumerate}
\item \label{lem:basic-group-cohomology-1} For any $x\in\coho{i}(G,A)$, we have $x^m=1$. Thus, if $\coho{i}(G,A)$
is finite and if a prime $p$ divides $|\coho{i}(G,A)|$, then $p$
divides $m$.
\item If $P$ is a Sylow $p$-subgroup of $G$, then the restriction homomorphism
$\res_{G,P}:\coho{i}(G,A)\rightarrow\coho{i}(P,A)$ restricts to an injection
\[
\res_{p}:\coho{i}(G,A)_{(p)}\mathbin{\lhook\joinrel\longrightarrow}\coho{i}(P,A),
\]
where $\coho{i}(G,A)_{(p)}$ is the $p$-torsion subgroup of $\coho{i}(G,A)$.
Thus, if $\coho{i}(P,A)\allowbreak = 1$ for all Sylow $p$-subgroups and all
primes $p\mid m$, then $\coho{i}(G,A)=1$.
\end{enumerate}
\end{lem}
\begin{proof}
See, for example, Corollaries~2 and 3 of \cite[Theorem~7.26]{Suzuki}.
\end{proof}
Since any torsion abelian group (not necessarily finite) is a direct
sum of its $p$-torsion subgroups, where $p$ runs through all primes (see \cite[Theorem~5.5]{Suzuki}), Lemma~\ref{lem:basic-group-cohomology}\,\ref{lem:basic-group-cohomology-1} implies that $\coho{i}(G,A)_{(p)}$ is the $p$-primary component of $\coho{i}(G,A)$. In general, for any torsion abelian group $M$ we will denote its $p$-primary component (possibly trivial) by $M_{(p)}$. Similarly, we will write $m_{(q)}$ for the $q$-part of an element $m\in M$.
\begin{lem}\label{lem:Z-BU_H-finite}
Let $G$ be a finite group of order $m$ and $M$ be an abelian group
(written multiplicatively) on which $G$ acts. Assume that $M$ is
finitely divisible in the sense that for any $n\in\ensuremath{\mathbb{N}}$ and $a\in M$,
there is a finite but non-zero number of elements $x\in M$ such that
$x^{n}=a$. Then, for any integer $i\geq1$, we have
\[
Z^{i}(G,M)=B^{i}(G,M)U^{i},
\]
where $U^{i}=\{\alpha\in Z^{i}(G,M)\mid\alpha^{m}=1\}$. Moreover,
$H^{i}(G,M)$ is finite.
\end{lem}
\begin{proof}
We first prove that $B^{i}(G,M)$ is divisible. A function $\beta:G^{i}\rightarrow M$
is in $B^{i}(G,M)$ if and only if it is of the form
\begin{align*}
\beta(g_{1},\dots,g_{i}) & =\leftexp{g_{1}}{f(g_{2},\dots,g_{i})}f(g_{1},\dots,g_{i-1})^{(-1)^{i}}\\
& \hphantom{{}={}} \prod_{j=1}^{i-1}f(g_{1},\dots,g_{j-1},g_{j}g_{j+1},\dots,g_{i})^{(-1)^{j}}
\end{align*}
for some function $f:G^{i-1}\rightarrow M$, where $G^{0}:=\{1\}$
(see, e.g., \cite[VII.3]{serre}). Let $\beta$ and $f$ be such that
this holds. Since $M$ is divisible (being finitely divisible), there exists, for any $n\in\ensuremath{\mathbb{N}}$,
a function $\widetilde{f}: G^{i-1}\rightarrow M$ such that $\widetilde{f}^{n}=f$.
Thus
\begin{multline*}
\beta(g_{1},\dots,g_{i})=\leftexp{g_{1}}{\widetilde{f}(g_{2},\dots,g_{i})^{n}}\widetilde{f}(g_{1},\dots,g_{i-1})^{(-1)^{i}n}\\
\prod_{j=1}^{i-1}\widetilde{f}(g_{1},\dots,g_{j-1},g_{j}g_{j+1},\dots,g_{i})^{(-1)^{j}n}\\
=\Big(\leftexp{g_{1}}{\widetilde{f}(g_{2},\dots,g_{i})}\widetilde{f}(g_{1},\dots,g_{i-1})^{(-1)^{i}}\prod_{j=1}^{i-1}\widetilde{f}(g_{1},\dots,g_{j-1},g_{j}g_{j+1},\dots,g_{i})^{(-1)^{j}}\Big)^{n},
\end{multline*}
so $\beta=\gamma^{n}$ for some $\gamma\in B^{i}(G,M)$.
Let $\alpha\in Z^{i}(G,M)$. By Lemma~\ref{lem:basic-group-cohomology}\;\ref{lem:basic-group-cohomology-1},
we have $\alpha^{m}\in B^{i}(G,M)$. Since $B^{i}(G,M)$ is divisible,
there is a $\beta \in B^{i}(G,M)$ such that $\alpha^{m}=\beta^{m}$, and hence
$\alpha \beta^{-1}\in U^{i}$. We thus have $\alpha\in B^{i}(G,M)U^{i}$
and since $\alpha$ was arbitrary, $Z^{i}(G,M)=B^{i}(G,M)U^{i}$.
Now, every element in $U^{i}$ is a function $G^{i}\rightarrow\{a\in M\mid a^{m}=1\}$.
The codomain is a finite set since $M$ is finitely divisible, so
$U^{i}$ is finite and hence $H^{i}(G,M)$ is finite, since $H^{i}(G,M)=B^{i}(G,M)U^{i}/B^{i}(G,M)\cong U^{i}/(U^{i}\cap B^{i}(G,M))$ is a quotient of $U^{i}$.
\end{proof}
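For example, $M=\ensuremath{\mathbb{C}}^{\times}$ with the trivial $G$-action is finitely divisible, since $x^{n}=a$ has exactly $n$ solutions in $\ensuremath{\mathbb{C}}^{\times}$ for every $a$; applying the lemma with $i=2$ recovers the finiteness of the Schur multiplier of a finite group, mentioned in Section~\ref{sec:Prel on proj reps}.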
\section{Reduction to the partial zeta series}
\label{sec:red_partial}
Let $G$ be a representation rigid profinite group such that there exists a finite index normal pro-$p$ subgroup $N\leq G$. For example, one can take $G$ to be FAb and compact $p$-adic analytic (see \cite[Corollary~8.34]{DdSMS}). For any $K\leq G$ such that
$N\leq K$, let $K_p$ be a pro-$p$ Sylow subgroup of $K$. Since
$N$ is normal and pro-$p$ we necessarily have $N\leq K_p$. For any
$c\in \coho{2}(K_p/N)$, define
\[
\Irr_{K}^{c}(N)=\{\theta\in\Irr_{K}(N)\mid\ensuremath{\mathcal{C}}_{K_p}(\theta)=c\},
\]
where $\ensuremath{\mathcal{C}}_{K_p}$ is the function defined in Theorem~\ref{thm:Clifford-map}.
Note that any two choices of $K_p$ are $G$-conjugate, so up to the natural identification of the groups $\coho{2}(K_p/N)$, for different $K_p$, the set $\Irr_{K}^{c}(N)$ is independent of $K_p$.
We call
\[
Z_{N;K}^{c}(s)=\sum_{\theta\in\Irr_{K}^{c}(N)}\theta(1)^{-s}
\]
a \emph{partial zeta series}. Note that for $G$ fixed there are only finitely
many partial zeta series and that
\[
Z_N(s)=\sum_{N\leq K\leq G}\sum_{c\in\coho{2}(K_p/N)} Z_{N;K}^{c}(s).
\]
Following Jaikin-Zapirain \cite[Section~5]{Jaikin-zeta}, we show how the (virtual)
rationality of $Z_G(s)$, and thus of $\zeta_{G}(s)$, is reduced to the rationality in $p^{-s}$ of the partial zeta series.
Let $(K,N,\theta)$ be a character triple. By Clifford's theorem, $\lambda(1)/\theta(1)$ is an integer for any $\lambda\in\Irr(K\mid\theta)$, so we may define the finite Dirichlet series
\[
f_{(K,N,\theta)}(s)=\sum_{\lambda\in\Irr(K\mid\theta)}\left(\frac{\lambda(1)}{\theta(1)}\right)^{-s}.
\]
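For orientation (this special case is not needed below): if $\theta$ extends to some $\hat{\theta}\in\Irr(K)$, then Gallagher's theorem \cite[(6.17)]{Isaacs} says that $\beta\mapsto\beta\hat{\theta}$ is a bijection from $\Irr(K/N)$ onto $\Irr(K\mid\theta)$, whence
\[
f_{(K,N,\theta)}(s)=\sum_{\beta\in\Irr(K/N)}\beta(1)^{-s}.
\]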
The following result is contained in \cite[Proposition~5.1]{Jaikin-zeta}.
We give a new proof, which adds several steps involving Schur multipliers.
\begin{lem}
\label{lem:Jaikins-prop}Let $N$ be a finite index pro-$p$ group
in $K$ and let $(K,N,\theta)$ and $(K,N,\theta')$ be two character
triples. If $\ensuremath{\mathcal{C}}_{K_p}(\theta)=\ensuremath{\mathcal{C}}_{K_p}(\theta')$, then
\[
\ensuremath{\mathcal{C}}_{K}(\theta)=\ensuremath{\mathcal{C}}_{K}(\theta')\qquad\text{and}\qquad f_{(K,N,\theta)}(s)=f_{(K,N,\theta')}(s).
\]
\end{lem}
\begin{proof}
By the remark after Lemma~\ref{lem:basic-group-cohomology}, any $c\in\coho{2}(K/N)$ can be written as $c=\prod_{q}c_{(q)}$, where
$q$ runs through the primes dividing $|K/N|$ and $c_{(q)}\in \coho{2}(K/N)_{(q)}$ is the $q$-primary component of $c$.
Let $q$ be a prime
dividing $|K/N|$ and let $K_{q}\leq K$ be such that $K_{q}/N$
is a Sylow $q$-subgroup of $K/N$ (note that this agrees with our notation $K_p$ for $q=p$).
By Lemma~\ref{lem:basic-group-cohomology}, $\res_{q}:\coho{2}(K/N)_{(q)}\rightarrow\coho{2}(K_{q}/N)$
is injective. We claim that
\begin{equation}
\res_{q}(\ensuremath{\mathcal{C}}_{K}(\theta)_{(q)})=\ensuremath{\mathcal{C}}_{K_{q}}(\theta)
\label{eq:res(C_K(theta))}
\end{equation}
(and similarly for $\theta'$). Indeed, if $\hat{\theta}\in\PIrr_{\alpha}(K)$
is a strong extension of $\theta$, then $\Res_{K_{q}}^{K}\hat{\theta}$
is a strong extension of $\theta$ with factor set $\alpha_{K_{q}}$
(the restriction of $\alpha$ to $K_{q}$), and since $\res_{K/N,K_{q}/N}(\ensuremath{\mathcal{C}}_{K}(\theta))$
is the element in $\coho{2}(K_{q}/N)$ determined by the cocycle $\alpha_{K_{q}}$,
we have
\[
\res_{K/N,K_{q}/N}(\ensuremath{\mathcal{C}}_{K}(\theta))=\ensuremath{\mathcal{C}}_{K_{q}}(\theta).
\]
Furthermore, since $\coho{2}(K_{q}/N)$ is a $q$-group, the homomorphism
$\res_{K/N,K_{q}/N}$ is trivial on $\coho{2}(K/N)_{(\ell)}$, for any prime
$\ell\neq q$. Hence, for any $c\in\coho{2}(K/N)$,
\[
\res_{K/N,K_{q}/N}(c)=\res_{K/N,K_{q}/N}(c_{(q)})=\res_{q}(c_{(q)}),
\]
proving \eqref{eq:res(C_K(theta))}.
Now, if $q\neq p$, then $p\nmid|K_{q}/N|$, so by \cite[(8.16)]{Isaacs},
$\theta$ extends to $K_{q}$, and thus $\ensuremath{\mathcal{C}}_{K_{q}}(\theta)=1$.
By \eqref{eq:res(C_K(theta))} we obtain $\res_{q}(\ensuremath{\mathcal{C}}_{K}(\theta)_{(q)})=1$,
whence $\ensuremath{\mathcal{C}}_{K}(\theta)_{(q)}=1$ (by the injectivity of $\res_{q}$). We
must therefore have $\ensuremath{\mathcal{C}}_{K}(\theta)=\ensuremath{\mathcal{C}}_{K}(\theta)_{(p)}$, and since
$\theta$ was arbitrary, we similarly have $\ensuremath{\mathcal{C}}_{K}(\theta')=\ensuremath{\mathcal{C}}_{K}(\theta')_{(p)}$.
Applying \eqref{eq:res(C_K(theta))} for $q=p$, we get
\[
\res_{p}(\ensuremath{\mathcal{C}}_{K}(\theta)_{(p)})=\ensuremath{\mathcal{C}}_{K_p}(\theta)=\ensuremath{\mathcal{C}}_{K_p}(\theta')=\res_{p}(\ensuremath{\mathcal{C}}_{K}(\theta')_{(p)}),
\]
and we conclude that $\ensuremath{\mathcal{C}}_{K}(\theta)_{(p)}=\ensuremath{\mathcal{C}}_{K}(\theta')_{(p)}$,
and thus $\ensuremath{\mathcal{C}}_{K}(\theta)=\ensuremath{\mathcal{C}}_{K}(\theta')$.
We now prove the second assertion. By the first part together
with Lemma~\ref{lem:Clifford-degree-ratios}, there exists a bijection
$\sigma:\Irr(K\mid\theta)\rightarrow\Irr(K\mid\theta')$ such that
$\lambda(1)/\theta(1)=\sigma(\lambda)(1)/\theta'(1)$. Thus
\[
f_{(K,N,\theta)}(s) =
\sum_{\lambda\in\Irr(K\mid\theta)}\left(\frac{\lambda(1)}{\theta(1)}\right)^{-s}
= \sum_{\sigma(\lambda)\in\Irr(K\mid\theta')}\left(\frac{\sigma(\lambda)(1)}{\theta'(1)}\right)^{-s}
= f_{(K,N,\theta')}(s).
\]
\end{proof}
Let $\ensuremath{\mathcal{S}}$ denote the set of subgroups $K\leq G$ such that $N\leq K$
and $\Stab_{G}(\theta)=K$, for some $\theta\in\Irr(N)$.
\begin{prop}\label{prop:partial-Main}
Suppose that $Z_{N;K}^{c}(s)$ is rational in $p^{-s}$, for every $K\in \ensuremath{\mathcal{S}}$ and every $c\in \coho{2}(K_p/N)$. Then Theorem~\ref{thm:Main} holds.
\end{prop}
\begin{proof}
By Clifford's theorem, for every $\rho\in\Irr(G)$, there are exactly $|G:\Stab_{G}(\theta)|$
distinct characters $\theta\in\Irr(N)$ such that $\rho\in\Irr(G\mid\theta)$.
Thus
\begin{align*}
Z_{G}(s) & =\sum_{\rho\in\Irr(G)}\rho(1)^{-s}=\sum_{\theta\in\Irr(N)}\frac{1}{|G:\Stab_{G}(\theta)|}\sum_{\rho\in\Irr(G\mid\theta)}\rho(1)^{-s}.
\end{align*}
By standard Clifford theory (see \cite[(6.11)]{Isaacs}), induction yields a bijection between
$\Irr(\Stab_{G}(\theta)\mid\theta)$ and $\Irr(G\mid\theta)$, for
every $\theta\in\Irr(N)$, so
\[
\sum_{\rho\in\Irr(G\mid\theta)}\rho(1)^{-s}=\sum_{\lambda\in\Irr(\Stab_{G}(\theta)\mid\theta)}(\lambda(1)\cdot|G:\Stab_{G}(\theta)|)^{-s}.
\]
This implies that
\begin{align*}
Z_{G}(s) & =\sum_{\theta\in\Irr(N)}|G:\Stab_{G}(\theta)|^{-s-1}\sum_{\lambda\in\Irr(\Stab_{G}(\theta)\mid\theta)}\theta(1)^{-s}\left(\frac{\lambda(1)}{\theta(1)}\right)^{-s}\\
& =\sum_{\theta\in\Irr(N)}|G:\Stab_{G}(\theta)|^{-s-1}\theta(1)^{-s}f_{(\Stab_{G}(\theta),N,\theta)}(s)\\
& =\sum_{K\in\ensuremath{\mathcal{S}}}|G:K|^{-s-1}\sum_{\theta\in\Irr_{K}(N)}\theta(1)^{-s}f_{(K,N,\theta)}(s).
\end{align*}
By Lemma~\ref{lem:Jaikins-prop}, we have $f_{(K,N,\theta)}(s)=f_{(K,N,\theta')}(s)$,
for $\theta,\theta'\in\Irr(N)$ if $\ensuremath{\mathcal{C}}_{K_p}(\theta)=\ensuremath{\mathcal{C}}_{K_p}(\theta')$.
By the above, we can therefore write
\begin{equation*}
Z_{G}(s)=\sum_{K\in\ensuremath{\mathcal{S}}}|G:K|^{-s-1}\sum_{c\in \coho{2}(K_p/N)}f_{K}^{c}(s)Z_{N;K}^{c}(s)
\end{equation*}
where $f_{K}^{c} (s) := f_{(K, N, \theta)}(s)$ for some (equivalently, any) character triple $(K, N, \theta)$ such that $\ensuremath{\mathcal{C}}_{K_p}(\theta) = c$.
The set $\ensuremath{\mathcal{S}}$ is finite and the group $\coho{2}(K_p/N)$ is also finite by Lemma~\ref{lem:Z-BU_H-finite}.
From the assumption that $Z_{N;K}^{c}(s)$ is rational in $p^{-s}$, it now follows
that $Z_G(s)$, and hence $\zeta_{G}(s)$, is virtually rational. Moreover, if $G$ is pro-$p$, then $|G:K|$ is a power
of $p$ for any subgroup $K$, and likewise $\lambda(1)$ is a power of $p$ for any $\lambda\in\Irr(K)$,
so $f_{(K,N,\theta)}(s)$ is a polynomial in $p^{-s}$. Thus, when $G$ is pro-$p$, $Z_G(s)$, and hence
$\zeta_{G}(s)$, is rational in $p^{-s}$.
\end{proof}
\section{Cohomology classes and degree one characters}
\label{sec:red_deg_one}
To prove the rationality in $p^{-s}$ of the partial zeta series $Z_{N;K}^{c}(s)$ for $G$ FAb compact $p$-adic analytic,
we will prove that the set $\Irr_{K}^{c}(N)$ is in bijection with
the set of equivalence classes of a definable equivalence relation
on a definable set in $\struc_{\mathrm{an}}$. To this end, we need to show that the condition $\ensuremath{\mathcal{C}}_{K_p}(\theta)=c$ is equivalent
to a similar condition where $K_p$ is replaced by a subgroup $H$ of $K_p$ and $\theta$ is replaced by a character
$\chi$ of $N\cap H$ of degree one. In this section we will state and prove the main technical result allowing for this
reduction.
As in the previous section, let $G$ be a profinite group possessing a finite index normal pro-$p$ subgroup $N\leq G$.
All the results in the present section are really theorems about finite groups with trivial generalisations to profinite groups,
and the reader may assume that $G$ is finite with the discrete topology throughout the section (without changing any of the proofs).
We work in the profinite setting because this is what we will need to apply the results to in Section~\ref{sec:proof_main}.
For any $K\leq G$ such that $N\leq K$, define the set
\[
\ensuremath{\mathcal{H}}(K)=\{H\leq K\mid H\text{ open in }K,\,K=HN\}.
\]
From now on, and until the end of Section~\ref{sec:proof_main}, let $N\leq K\leq G$ be fixed.
\begin{lem}
\label{lem:IndRes}Let $\gamma\in \cocy{2}(K_p)$, $H\in\ensuremath{\mathcal{H}}(K_p)$
and $\eta\in\PIrr_{\gamma_{H}}(H)$ be of degree one.
If $\Ind_{N\cap H, \gamma_N}^{N}\Res_{N\cap H}^{H}\eta$ or $\Res_{N}^{K_p}\Ind_{H,\gamma}^{K_p}\eta$
is irreducible, then
\[
\Ind_{N\cap H,\gamma_N}^{N}\Res_{N\cap H}^{H}\eta=\Res_{N}^{K_p}\Ind_{H,\gamma}^{K_p}\eta.
\]
\end{lem}
\begin{proof}
By Mackey's induction-restriction formula and Frobenius reciprocity for projective representations, we have
\begin{multline*}
\left\langle \Ind_{N\cap H, \gamma_N}^{N}\Res_{N\cap H}^{H}\eta,\Res_{N}^{K_p}\Ind_{H,\gamma}^{K_p}\eta\right\rangle
=\left\langle \Res_{N\cap H}^{H}\eta,\Res_{N\cap H}^{K_p}\Ind_{H,\gamma}^{K_p}\eta\right\rangle \\
=\sum_{\bar{g}\in(N\cap H)\backslash K_p/H}\left\langle \Res_{N\cap H}^{H}\eta,
\Ind_{N\cap H\cap\leftexp{g}{H}, \gamma_{N\cap H}}^{N\cap H}
\Res_{N\cap H\cap\leftexp{g}{H}}^{\leftexp{g}{H}}\leftexp{g}{\eta}\right\rangle \\
\geq\left\langle \Res_{N\cap H}^{H}\eta,\Res_{N\cap H}^{H}\eta\right\rangle =1.
\end{multline*}
Here $g\in K_p$ denotes an arbitrary representative of $\bar{g}$. Since $K_p=HN$ and
\[
|K_p:N|\cdot|N:N\cap H|=|K_p:H|\cdot|H:N\cap H|=|K_p:H|\cdot |HN:N|,
\]
we have $|N:N\cap H| = |K_p:H|$. Hence $\Ind_{N\cap H, \gamma_N}^{N}\Res_{N\cap H}^{H}\eta$
and $\Res_{N}^{K_p}\Ind_{H,\gamma}^{K_p}\eta$ have the same degree, so if one of them is irreducible, they are equal.
\end{proof}
For $H\leq K_p$ such that $K_p=HN$, we let $\widetilde{f}_{H}:\cocy{2}(H/(N\cap H))\rightarrow \cocy{2}(K_p/N)$
be the isomorphism induced by pulling back cocycles along the isomorphism
$K_p/N\rightarrow H/(N\cap H)$. We describe this isomorphism more explicitly.
Since $K_p = HN$, every coset in $K_p/N$ contains a unique coset in $H/(N\cap H)$.
Then, for $\alpha\in \cocy{2}(H/(N\cap H))$ and $g,g' \in K_p$, we have
\begin{equation}\label{f-tilde-explicit}
\widetilde{f}_{H}(\alpha)(gN, g'N) = \alpha(h(N \cap H), h'(N \cap H))
\end{equation}
where $h,h'$ are such that $h(N \cap H)\subseteq gN$ and $h'(N \cap H)\subseteq g'N$.
Moreover, for $\beta\in \cocy{2}(K_p/N)$ and $h,h'\in H$,
we have
\[
\widetilde{f}_{H}^{-1}(\beta)(h(N \cap H), h'(N \cap H)) = \beta(hN, h'N).
\]
We denote by $f_{H}$ the corresponding induced isomorphism from $\coho{2}(H/(N\cap H))$ to
$\coho{2}(K_p/N)$.
\begin{prop}
\label{prop:Linearisation}
Let $(K,N,\theta)$ be a character triple. Then there exists an $H\in\ensuremath{\mathcal{H}}(K_p)$
and a character triple $(H,N\cap H,\chi)$ such that:
\begin{enumerate}
\item \label{enu:i} $\chi$ is of degree one,
\item \label{enu:ii} $\theta=\Ind_{N\cap H}^{N}\chi$,
\item \label{enu:iii} $\ensuremath{\mathcal{C}}_{K_p}(\theta)=f_{H}(\ensuremath{\mathcal{C}}_{H}(\chi))$.
\end{enumerate}
Moreover, let $H\in\ensuremath{\mathcal{H}}(K_p)$ and let $(H,N\cap H,\chi)$ be a
character triple with $\chi$ of degree one such that $(K,N,\theta)$
is a character triple, where $\theta=\Ind_{N\cap H}^{N}\chi$. Then
\[
\ensuremath{\mathcal{C}}_{K_p}(\theta)=f_{H}(\ensuremath{\mathcal{C}}_{H}(\chi)).
\]
\end{prop}
\begin{proof}
Assume that $(K,N,\theta)$ is a character triple. By Theorem~\ref{thm:Clifford-map},
there exists an $\alpha\in \cocy{2}(K_p/N)$ such that $[\alpha]=\ensuremath{\mathcal{C}}_{K_p}(\theta)$
and a $\hat{\theta}\in\PIrr_{\hat{\alpha}}(K_p)$ strongly extending
$\theta$.
Note that by Lemma~\ref{lem:factor-set-gn}, $\hat{\alpha}(n,x)=\hat{\alpha}(x,n)=1$, for all $n\in N$, $x\in K_p$,
so in particular, $\hat{\alpha}_{N}=1$.
By Lemma~\ref{lem:projective-monomial}, there exist an
open subgroup $H$ of $K_p$ and $\eta\in\PIrr_{\hat{\alpha}_{H}}(H)$
of degree one such that $\hat{\theta}=\Ind_{H, \hat{\alpha}}^{K_p}\eta$. Then $\theta=\Res_{N}^{K_p}\Ind_{H, \hat{\alpha}}^{K_p}\eta$
is irreducible, so
\begin{align*}
1 & =\left\langle\Res_{N}^{K_p}\Ind_{H, \hat{\alpha}}^{K_p}\eta,\Res_{N}^{K_p}\Ind_{H, \hat{\alpha}}^{K_p}\eta\right\rangle\\
& =\sum_{\bar{g}\in N\backslash K_p/H}\sum_{\bar{h}\in N\backslash K_p/H}\left\langle \Ind_{N\cap\leftexp{g}{H}}^{N}\Res_{N\cap\leftexp{g}{H}}^{\leftexp{g}{H}}\leftexp{g}{\eta},\Ind_{N\cap\leftexp{h}{H}}^{N}\Res_{N\cap\leftexp{h}{H}}^{\leftexp{h}{H}}\leftexp{h}{\eta}\right\rangle \\
& =\sum_{\bar{g}\in K_p/HN}\sum_{\bar{h}\in K_p/HN}\left\langle \Ind_{N\cap\leftexp{g}{H}}^{N}\Res_{N\cap\leftexp{g}{H}}^{\leftexp{g}{H}}\leftexp{g}{\eta},\Ind_{N\cap\leftexp{h}{H}}^{N}\Res_{N\cap\leftexp{h}{H}}^{\leftexp{h}{H}}\leftexp{h}{\eta}\right\rangle \\
& \geq\sum_{\bar{g}\in K_p/HN}\left\langle \Ind_{N\cap\leftexp{g}{H}}^{N}\Res_{N\cap\leftexp{g}{H}}^{\leftexp{g}{H}}\leftexp{g}{\eta},\Ind_{N\cap\leftexp{g}{H}}^{N}\Res_{N\cap\leftexp{g}{H}}^{\leftexp{g}{H}}\leftexp{g}{\eta}\right\rangle \\
& \geq |K_p:HN|.
\end{align*}
Thus, $|K_p:HN|=1$, and so $K_p=HN$, that is, $H\in\ensuremath{\mathcal{H}}(K_p)$.
Next, define $\chi=\Res_{N\cap H}^{H}\eta$; then $\chi$ is fixed by
$H$, and Lemma~\ref{lem:IndRes} (with $\gamma=\hat{\alpha}$) implies that
$\theta=\Ind_{N\cap H}^{N}\chi$.
Since $\hat{\alpha}_{H}$ descends to $\alpha_{H}\in \cocy{2}(H/(N\cap H))$,
where
\[
\alpha_{H}(h(N\cap H),h'(N\cap H))=\hat{\alpha}_{H}(h,h'),
\]
we have $f_{H}([\alpha_{H}])=\ensuremath{\mathcal{C}}_{K_p}(\theta)$. Since $\eta$ strongly extends $\chi$,
we obtain
\[
\ensuremath{\mathcal{C}}_{K_p}(\theta)=f_{H}(\ensuremath{\mathcal{C}}_{H}(\chi)).
\]
Assume now that $(H,N\cap H,\chi)$ and $(K,N,\theta)$ are as in the second part of the statement.
By Theorem~\ref{thm:Clifford-map},
there exists a $\beta\in \cocy{2}(H/(N\cap H))$ and a $\hat{\chi}\in\PIrr_{\hat{\beta}}(H)$
strongly extending $\chi$, such that $[\beta]=\ensuremath{\mathcal{C}}_{H}(\chi)$.
Let $\gamma\in \cocy{2}(K_p)$ be the pull-back of $\widetilde{f}_{H}(\beta)\in \cocy{2}(K_p/N)$.
Then, for any $h,h'\in H$, we have
\[
\gamma_{H}(h,h')=\gamma(h,h')=\widetilde{f}_H(\beta)(hN,h'N)=\beta(h(N\cap H),h'(N\cap H))=\hat{\beta}(h,h'),
\]
where in the second to last step we have used \eqref{f-tilde-explicit}.
Thus $\gamma_H=\hat{\beta}$, and since $\theta$ is irreducible, Lemma~\ref{lem:IndRes} (with $\eta=\hat{\chi})$ implies that
\[
\theta=\Ind_{N\cap H, \gamma_N}^{N}\Res_{N\cap H}^{H}\hat{\chi}=\Res_{N}^{K_p}\Ind_{H, \gamma}^{K_p}\hat{\chi}.
\]
Hence $\Ind_{H, \gamma}^{K_p}\hat{\chi}$ is an extension of $\theta$ and we show that it is in fact a strong extension (see Definition~\ref{def:stron_ext}).
Indeed, since $\gamma$ is constant on cosets of $N$ in $K_p$, we have
\[
\gamma(x,n)=\gamma(hn',n)=\gamma(h,1)=\gamma_{H}(h,1)=\hat{\beta}(h,1)=1,
\]
where we have written $x=hn'$, with $h\in H$, $n'\in N$ and $\hat{\beta}(h,1)=1$
by Lemma~\ref{lem:factor-set-gn}, because $\hat{\beta}$ is the
factor set of a strong extension. In a similar way, we show that $\gamma(n, x) = 1$; thus, by Lemma~\ref{lem:factor-set-gn},
we conclude that $\Ind_{H, \gamma}^{K_p}\hat{\chi}$ strongly extends $\theta$.
Since $\Ind_{H, \gamma}^{K_p}\hat{\chi}$ has factor set
$\gamma$, which descends (modulo $N$) to $\widetilde{f}_H(\beta)$,
it follows that
\[
\ensuremath{\mathcal{C}}_{K_p}(\theta)=[\widetilde{f}_{H}(\beta)]=f_{H}([\beta])=f_{H}(\ensuremath{\mathcal{C}}_H(\chi)).
\]
\end{proof}
It will be useful for us to state a consequence of Proposition~\ref{prop:Linearisation} in terms of a
commutative diagram. To this end, let $X_K$
\label{def:X_K}
be the set of pairs
$(H,\chi)$ with $H\in\ensuremath{\mathcal{H}}(K_p)$, where:
\begin{enumerate}
\item $(H,N\cap H,\chi)$ is a character triple,
\item $\chi$ is of degree one,
\item $\Ind_{N\cap H}^{N}\chi\in \Irr_K(N)$.
\end{enumerate}
Note that $\theta \in\Irr_K(N)$ means that $K=\Stab_G(\theta)$, and not merely that $K$ is contained in the stabiliser.
Define the function
\[
\ensuremath{\mathcal{C}}:X_K\longrightarrow \coho{2}(K_p/N)
\]
by $\ensuremath{\mathcal{C}}(H,\chi)=f_{H}(\ensuremath{\mathcal{C}}_{H}(\chi))$.
\begin{cor}
\label{cor:surj_coho}
The function $X_K\rightarrow\Irr_{K}(N)$, $(H,\chi)\mapsto\Ind_{N\cap H}^{N}\chi$ is surjective and the following
diagram commutes:
\[
\begin{tikzcd}[column sep=0.4cm]
X_K \arrow{r} \arrow{dr}[swap]{\ensuremath{\mathcal{C}}} & \Irr_{K}(N)\arrow{d}{\ensuremath{\mathcal{C}}_{K_p}}\\
{} & \coho{2}(K_p/N).
\end{tikzcd}
\]
\end{cor}
\begin{proof}
Every $\theta\in\Irr_K(N)$ defines a character triple $(K,N,\theta)$. Thus, the surjectivity follows from the first statement
in Proposition~\ref{prop:Linearisation}. The commutativity of the diagram follows from the second statement in
Proposition~\ref{prop:Linearisation}.
\end{proof}
\section{Rationality of the partial zeta series}
\label{sec:proof_main}
From now on, let $G$ be a FAb compact $p$-adic analytic group and let $N\leq G$ be a normal uniform subgroup. As in Section~\ref{sec:red_deg_one}, let $K\leq G$ be such that $N\leq K$ and fix a pro-$p$ Sylow subgroup $K_p$ of $K$.
In this section we show that the set of characters $\Irr_K^c(N)$, for each $c\in\coho{2}(K_p/N)$,
is in bijection with a set of equivalence classes under a definable equivalence relation in $\struc_{\mathrm{an}}$.
We deduce from this that each partial zeta series is rational in $p^{-s}$ and hence prove
Theorem~\ref{thm:Main}.
\subsection{Bases for $p$-adic analytic groups}\label{subsec:Good-bases}
Recall from Section~\ref{sec:red_deg_one} that $\ensuremath{\mathcal{H}}(K_p)=\{H\leq K_p\mid H\text{ open in }K_p,\,K_p=HN\}$.
In this section, we describe du~Sautoy's parametrisation of $\ensuremath{\mathcal{H}}(K_p)$.
One starts by parametrising open subgroups of $N$. The following definition is from
\cite[p.~259]{duSautoy-Segal-in-Horizons} and is equivalent to \cite[Definition~2.2]{duSautoy-rationality}.
Some properties characterising open subgroups of $N$ and
some notation are necessary to state it. A subgroup $H$ of $N$ is open if and only if it contains $N_{m}$
for some $m\geq 1$, where $N_m$ denotes the $m$-th term of the
lower $p$-series of $N$. Moreover, as $N$ is uniform, raising to the power of $p$ induces an
isomorphism $N_i/ N_{i + 1}\rightarrow N_{i + 1}/ N_{i + 2}$ and $N_{i+1}$ is the Frattini subgroup
of $N_i$, for all $i\in \ensuremath{\mathbb{N}}$ (see \cite[Lemma~2.4, Definition~4.1\,(iii)]{DdSMS}).
Thus $N_i/N_{i+1}$ is an $\ensuremath{\mathbb{F}}_p$-vector space, and denoting by $d = \dim_{\ensuremath{\mathbb{F}}_p} N/N_2$
the minimal number of topological generators of $N$, each quotient $N_i/N_{i+1}$ is isomorphic to
$\ensuremath{\mathbb{F}}_p^d$. Recall the function $\omega$ in
Definition~\ref{def:omega}.
\begin{defn}
Let $H\leq N$ be open with $N_{m}\leq H$. A $d$-tuple $(h_{1},\dots,h_{d})$
of elements in $H$ is called a \emph{good basis} for $H$ if
\begin{enumerate}
\label{def:good_basis}
\item
$\omega(h_{i})\leq\omega(h_{j})$ whenever $i\leq j$, and
\item
\label{def:good_basis_2}
for each $n\leq m$, the set
\[
\left\{ h_{i}^{p^{n-\omega(h_{i})}}N_{n+1}\bigm|i\in\{1,\dots,d\},\,\omega(h_{i})\leq n\right\}
\]
is a basis for the $\ensuremath{\mathbb{F}}_{p}$-vector space $(N_{n}\cap H)N_{n+1}/N_{n+1}$.
\end{enumerate}
\end{defn}
Notice that a good basis for an open subgroup of $N$ always exists, as the conditions in the definition can be met by an inductive construction.
Notice also that a good basis for $N$ is just an ordered minimal set of topological generators of $N$ and that, by
\cite[Lemma~2.4~(i)]{duSautoy-rationality}, if $H$ is an open subgroup of $N$ and
$(h_1, \dots, h_d)$ is a good basis for $H$, then for every $h \in H$ there are
$\lambda_1, \dots, \lambda_d \in \ensuremath{\mathbb{Z}}_p$ such that
\[
h = h_1^{\lambda_1}\cdots h_d^{\lambda_d}.
\]
The recursive construction in the proof of \cite[Lemma~2.4~(i)]{duSautoy-rationality}
implies that $\lambda_1, \dots, \lambda_d$ are unique with the property above.
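A toy example: let $N=\ensuremath{\mathbb{Z}}_p^{2}$ (written additively, so that $N_i=p^{i-1}N$ and $h^{p^{k}}$ means $p^{k}h$) and $H=p\ensuremath{\mathbb{Z}}_p\oplus\ensuremath{\mathbb{Z}}_p$, which contains $N_2=pN$. Then $\omega(0,1)=1$, $\omega(p,0)=2$, and $(h_1,h_2)=\bigl((0,1),(p,0)\bigr)$ is a good basis for $H$: for $n=1$ the image of $h_1$ spans $(N_1\cap H)N_{2}/N_{2}\cong\ensuremath{\mathbb{F}}_p$, and for $n=2$ the images of $p\,h_1=(0,p)$ and $h_2=(p,0)$ form a basis of $(N_2\cap H)N_{3}/N_{3}=pN/p^{2}N\cong\ensuremath{\mathbb{F}}_p^{2}$.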
\begin{rem}
Good bases give a many-to-one parametrisation of the set of finite
index subgroups of $N$ in terms of $p$-adic analytic coordinates. Indeed the set of
good bases is definable in $\mathcal{M}_N$ by \cite[Lemma~2.8]{duSautoy-rationality}.
By Lemma~\ref{lem:int_M_N}, using $\ensuremath{\mathbb{Z}}_p$-coordinates for $N$, the set of good bases
is interpreted as a definable set in $\struc_{\mathrm{an}}$.
\end{rem}
The parametrisation of $\ensuremath{\mathcal{H}}(K_p)$ is obtained by extending the parametrisation given
by good bases. Let $r=|K_p:N|$. Fix a left transversal $(y_{1},\dots,y_{r})$ for
$N$ in $K_p$ with $y_1=1$.
Let $H\in\ensuremath{\mathcal{H}}(K_p)$. Since $K_p=HN$, every coset $y_iN$ contains a unique coset $x(N\cap H)$, with $x\in H$. Thus, $x=y_it_i$ for some
$t_i\in N$, and we conclude that there exist elements $t_1,\dots,t_{r}\in N$ such that
$(y_1t_1,\dots,y_{r}t_{r})$ is a left transversal for $N\cap H$ in $H$.
The following definition is from \cite[Definition~2.10]{duSautoy-rationality};
see also \cite[p.~261]{duSautoy-Segal-in-Horizons} (note that we use left cosets instead of du~Sautoy's right coset convention).
\begin{defn}
Let $H\in\ensuremath{\mathcal{H}}(K_p)$. A $(d+r)$-tuple $(h_{1},\dots,h_{d},t_{1},\dots,t_{r})$
of elements in $N$ is called a \emph{basis} for $H$ if
\begin{enumerate}
\item $(h_{1},\dots,h_{d})$ is a good basis for $N\cap H$, and
\item $(y_{1}t_{1},\dots,y_{r}t_{r})$ is a (left) transversal for $N\cap H$
in $H$.
\end{enumerate}
\end{defn}
If $(h_{1},\dots,h_{d},t_{1},\dots,t_{r})$
is a basis for $H\in \ensuremath{\mathcal{H}}(K_p)$, it follows from the definition that
\[
H = \overline{\langle h_1, \dots, h_d, y_1t_1, \dots, y_r t_r\rangle}.
\]
In particular, unlike a good basis for $N$, a basis for $H$ need not be a (topological) generating set for $H$: its entries $t_i$ lie in $N$, while generating $H$ requires the products $y_it_i$.
Notice moreover that a basis of $H$ always exists: it suffices to construct a good basis
$(h_{1},\dots,h_{d})$ of $N\cap H$
as described in Definition~\ref{def:good_basis} and then find $t_1,\dots, t_r$ using that
each coset of $N$ in $K_p$ contains a unique coset of $N\cap H$ in
$H$ because $K_p = HN$.
The groups, transversals and bases appearing above are illustrated
by the following diagrams:
\[
\begin{tikzcd}[column sep={3em,between origins},row sep={2.5em,between origins}]
{} & K_p\arrow[dash]{ddl}\arrow[dash]{dr} & {}\\
{} & {} & H\arrow[dash]{ddl}\\
N\arrow[dash]{dr} & {} & {}\\
{} & N\cap H & {}
\end{tikzcd}
\qquad\qquad\qquad
\begin{tikzcd}[column sep={3.5em,between origins},row sep={3em,between origins}]
{} & (y_1,\dots,y_r)\arrow[dash]{ddl}\arrow[dash]{dr} & {}\\
{} & {} & (y_1t_1,\dots, y_r t_r)\arrow[dash]{ddl}\\
(t_1,\dots,t_r)\arrow[dash]{dr} & {} & {}\\
{} & (h_1,\dots,h_d) & {}
\end{tikzcd}
\]
\begin{rem}
By \cite[Lemma~2.12]{duSautoy-rationality}, the set of bases is definable in $\mathcal{M}_N$, hence, by
Lemma~\ref{lem:int_M_N}, can be interpreted as a definable set in $\struc_{\mathrm{an}}$ by passing to $\ensuremath{\mathbb{Z}}_p$-coordinates for~$N$.
\end{rem}
\subsection{The fibres of $\ensuremath{\mathcal{C}}$ in terms of degree one characters.}
\label{subsec:Def_of_fibres}
From now on, let $c \in \coho{2}(K_p/N)$. The aim of this section is to show that the set
$\ensuremath{\mathcal{C}}^{-1}(c)$ may be characterised by a predicate involving only elements of $N$ and degree one
characters of finite index subgroups of $N$. We will at the end of the section produce an $\lan_{\mathrm{an}}$-formula
for the fibres of $\ensuremath{\mathcal{C}}$. We therefore start by reducing the range for $c$ to a cohomology group with
values in the group of roots of unity of order a power of $p$.
In order to do this, we need to set up some notation. Let $W\leq \ensuremath{\mathbb{C}}^{\times}$ be the group of roots of
unity. This is a torsion abelian group so it splits as
\begin{equation*}
W = \bigoplus_{\ell \text{ prime}} W_{(\ell)}
\end{equation*}
where $W_{(\ell)}\leq W$ is the group of roots of unity of order a power of $\ell$. It is clear that $W$
is a divisible group so by \cite[XX, Lemma~4.2]{Lang-Algebra} it is injective in the category of abelian groups,
hence it is complemented in $\ensuremath{\mathbb{C}}^{\times}$. We may therefore fix a homomorphism $\ensuremath{\mathbb{C}}^{\times}\to W$ restricting to the identity on $W$, and for each
prime $\ell$ denote by $\pi_\ell:\ensuremath{\mathbb{C}}^\times \to W_{(\ell)}$ the homomorphism obtained by composing with the
projection $W\to W_{(\ell)}$.
If $f$ is a function with image inside $\ensuremath{\mathbb{C}}^{\times}$ and $\ell$ is a prime, we define
\[
f_{(\ell)} = \pi_{\ell}\circ f.
\]
Note that if $f$ has finite order,
that is, if $f$ has image in
$W$, then $f_{(\ell)}$ coincides with the $\ell$-primary
component of $f$. Note also that for any $f,f'$ with codomain $\ensuremath{\mathbb{C}}^{\times}$, we have
\[
(ff')_{(\ell)} = f_{(\ell)}f'_{(\ell)}
\]
(since $\pi_{\ell}$ is a homomorphism).\par
We introduce the following groups:
\begin{align*}
Z_p &= \cocy{2}(K_p/ N, W_{(p)})\\
\twocobop &= \cobo{2}(K_p/ N, W_{(p)}).
\end{align*}
By Lemma~\ref{lem:Z-BU_H-finite} every class in $\coho{2}(K_p/ N)$ has a representative in
$Z_p$. Moreover, let $\delta \in \cobo{2}(K_p/ N) \cap Z_p$. Then, by definition, there is a function
$\varphi: K_p/N \to \ensuremath{\mathbb{C}}^{\times}$ such that for all $a,b \in K_p/N$ we have
\[
\delta(a,b) = \varphi(a) \varphi(b) \varphi(ab)^{-1}.
\]
Now $\delta$ has values in $W_{(p)}$ already, so, for all $a,b \in K_p/N$,
\[
\delta (a,b) = \delta_{(p)} (a,b)= \varphi_{(p)} (a) \varphi_{(p)} (b) \varphi_{(p)} (ab)^{-1}.
\]
Thus $\delta \in \twocobop$, and $\cobo{2}(K_p/ N) \cap Z_p = \twocobop$.
It follows that the inclusion of $Z_p$ in $\cocy{2}(K_p/ N)$ induces an
isomorphism $\coho{2}(K_p/ N)\cong Z_p/\twocobop$.\par
We now turn to describing the fibres of the map $\ensuremath{\mathcal{C}}$.
Define $a_{ij}\in N$ and
$\gamma:\lbrace 1, \dots, r \rbrace^{2}\rightarrow\lbrace 1,\dots,r \rbrace$ by
\begin{equation}
\label{eq:gamma}
y_{i}y_{j}=y_{\gamma(i,j)}a_{ij}.
\end{equation}
We also define the inner automorphisms $\varphi_{i}=\varphi_{y_i}:G\rightarrow G$,
$\varphi_{i}(g)=y_{i}gy_{i}^{-1}$, for $g\in G$.
The purpose of the following lemma is to show that the fibres of $\ensuremath{\mathcal{C}}$ are given by a first order statement involving only values of
degree one characters, cocycles and coboundaries.
\begin{lem}
\label{lem:first_o_formula_cohomology}
Let $(H,\chi)\in X_K$ and
let $t_1,\dots,t_{r}\in N$ be such that $(y_1t_1,\dots,y_{r}t_{r})$ is a left transversal
for $N\cap H$ in $H$. Let $\alpha\in Z_p$ be such that $[\alpha] = c$.
Then $\ensuremath{\mathcal{C}}(H,\chi)= c$ if and only if there exists $\delta\in \twocobop$
such that for all $n,n'\in N\cap H$ and all $i,j\in\{1,\dots,r\}$,
we have
\begin{equation}
\label{eq:formula_cohomology}
\chi(t_{\gamma(i,j)}^{-1}a_{ij}\varphi_{j}^{-1}(t_{i}n)t_{j}n')\alpha(y_{i}N, y_{j}N)\delta(y_{i}N, y_{j}N)=\chi(nn').
\end{equation}
\end{lem}
\begin{proof}
We have $\ensuremath{\mathcal{C}}(H,\chi)=[\alpha]$ if and only if there exists a strong
extension $\hat{\chi}\in\PIrr_{\hat{\beta}}(H)$ of $\chi$ whose factor set
$\hat{\beta}\in \cocy{2}(H)$ is inflated from some $\beta\in\cocy{2}(H/(N\cap H))$ with $f_{H}([\beta])=[\alpha]$.
Since any two strong extensions of $\chi$ to $H$ define the same
element $\ensuremath{\mathcal{C}}_{H}(\chi)\in \coho{2}(H/(N\cap H))$, we may
without loss of generality assume that $\hat{\chi}$ is given by
\begin{equation}
\label{eq:def_ext}
\hat{\chi}(y_{i}t_{i}n)=\chi(n),
\end{equation}
for all $n\in N\cap H$ and all $i\in\{1,\dots,r\}$. Thus $\ensuremath{\mathcal{C}}(H,\chi)=[\alpha]$
if and only if there exists $\beta\in \cocy{2}(H/(N\cap H))$
such that $f_{H}([\beta])=[\alpha]$ and such that for all $n,n'\in N\cap H$
and all $i,j\in\{1,\dots,r\}$, we have
\[
\hat{\chi}(y_{i}t_{i}n y_{j}t_{j}n')\hat{\beta}(y_{i} t_{i} n,y_{j}t_{j}n')=\hat{\chi}(y_{i}t_{i}n)\hat{\chi}(y_{j}t_{j}n').
\]
Notice that, by definition, $\hat{\chi}$ has values in $W_{(p)}$. Thus we may strengthen the last equivalence by assuming
that $\hat{\beta} \in \cocy{2}(H, W_{(p)})$ and consequently $\beta \in Z_p$.
The last equation is equivalent to
\[
\hat{\chi}(y_{i}t_{i}n y_{j}t_{j}n')\beta(y_{i}t_{i}(N\cap H), y_{j}t_{j}(N\cap H))=\chi(nn').
\]
Furthermore, $y_it_i(N\cap H)\subseteq y_iN$, so $f_{H}([\beta])=[\alpha]$
if and only if there exists a $\delta\in \twocobop$ such that for all $i,j\in\{1,\dots,r\}$, we have
\[
\beta(y_{i}t_{i}(N\cap H), y_{j}t_{j}(N\cap H))=\alpha(y_{i}N,y_{j}N)\delta(y_{i}N,y_{j}N).
\]
Notice that here we were able to restrict the range for $\delta$ to $\twocobop$, because we could assume
that $\beta \in Z_p$ and we chose $\alpha \in Z_p$.
Combining these two statements of equivalence we obtain that $\ensuremath{\mathcal{C}}(H,\chi)=[\alpha]$
if and only if there exists $\delta\in \twocobop$ such
that for all $n,n'\in N\cap H$ and for all $i,j\in\{1,\dots,r\}$,
we have
\[
\hat{\chi}(y_{i}t_{i}ny_{j}t_{j}n')\alpha(y_{i}N,y_{j}N)\delta(y_{i}N,y_{j}N) = \chi(nn').
\]
Hence, to finish the proof, we need to show that
\[
\hat{\chi}(y_{i}t_{i}ny_{j}t_{j}n')
= \chi(t_{\gamma(i,j)}^{-1}a_{ij}\varphi_{j}^{-1}(t_{i}n)t_{j}n').
\]
Indeed, this follows from \eqref{eq:def_ext} and the identities
\begin{align*}
y_{i}t_{i}ny_{j}t_{j}n' & =y_{i}y_{j}y_{j}^{-1}t_{i}ny_{j}t_{j}n'\\
& =y_{i}y_{j}\varphi_{j}^{-1}(t_{i}n)t_{j}n'\\
& =y_{\gamma(i,j)}a_{ij}\varphi_{j}^{-1}(t_{i}n)t_{j}n'\\
& =y_{\gamma(i,j)}t_{\gamma(i,j)}t_{\gamma(i,j)}^{-1}a_{ij}\varphi_{j}^{-1}(t_{i}n)t_{j}n',
\end{align*}
noting that $t_{\gamma(i,j)}^{-1}a_{ij}\varphi_{j}^{-1}(t_{i}n)t_{j}n'$
lies in $N$ (each of its factors does, since $N$ is normal in $G$) and in $H$ (since $y_{i}t_{i}ny_{j}t_{j}n'$ and $y_{\gamma(i,j)}t_{\gamma(i,j)}$ do), hence in $N\cap H$.
\end{proof}
\subsection{Definable sets for $Z_p$ and $\twocobop$}
We shall now introduce the definable sets that will be used to interpret predicates
quantifying over $Z_p$ and $\twocobop$.
\begin{rem}
\label{ass:prufer}
It is well-known that $\ensuremath{\mathbb{Q}}_p / \ensuremath{\mathbb{Z}}_p$ is isomorphic
to $W_{(p)}$ via the map $\iota: a/p^m+\ensuremath{\mathbb{Z}}_p \mapsto e^{2\pi i a/p^m}$
(cf.~\cite[Lemma~8.7]{hrumar2015definable}).
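For instance, $\iota$ sends $p^{-m}+\ensuremath{\mathbb{Z}}_p$ to the primitive $p^{m}$-th root of unity $e^{2\pi i/p^{m}}$; the map is well defined because, for $a\in\ensuremath{\mathbb{Z}}$, we have $a/p^{m}\in\ensuremath{\mathbb{Z}}_p$ precisely when $p^{m}\mid a$, in which case $e^{2\pi i a/p^{m}}=1$.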
\end{rem}
\begin{lem}
\label{lem:schur_def_sets}
Define $\mathcal{Z}$ and $\mathcal{B}$ to be the sets of matrices $(z_{ij}) \in {\M_{r}(\ensuremath{\mathbb{Q}}_p)}$ such that the map
\[
( y_iN, y_jN)
\longmapsto \iota(z_{ij} + \ensuremath{\mathbb{Z}}_p), \quad \text{for}\ {i,j\in \lbrace 1, \dots, r \rbrace}
\]
is in $Z_p$ and $\twocobop$ respectively. Then $\mathcal{Z}$ and $\mathcal{B}$ are definable subsets of $\ensuremath{\mathbb{Q}}_p^{r^2}$ in
$\struc_{\mathrm{an}}$.
\end{lem}
\begin{proof}
Let $(z_{ij}) \in \M_{r}(\ensuremath{\mathbb{Q}}_p)$ and let $\alpha$ be the map
$K_p/N \times K_p/N \to W_{(p)}$ defined by
\[
\alpha( y_iN, y_jN)
= \iota( z_{ij} + \ensuremath{\mathbb{Z}}_p), \qquad i,j \in \lbrace 1, \dots, r \rbrace.
\]
Imposing that $\alpha$ satisfy the $2$-cocycle identity, we obtain that $(z_{ij}) \in \mathcal{Z}$
if and only if for all $i,j,k \in \lbrace 1, \dots, r \rbrace$, we have
\[
z_{\gamma(i,j)\, k} + z_{ij} = z_{i \, \gamma(j,k)} + z_{jk} \mod \ensuremath{\mathbb{Z}}_p,
\]
where $\gamma$ is as defined in \eqref{eq:gamma}.
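This is the additive form of the $2$-cocycle identity $\alpha(xy,z)\,\alpha(x,y)=\alpha(x,yz)\,\alpha(y,z)$, evaluated at $x=y_{i}N$, $y=y_{j}N$, $z=y_{k}N$, using that $y_{i}y_{j}N=y_{\gamma(i,j)}N$ by \eqref{eq:gamma}.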
Notice that $\ensuremath{\mathbb{Z}}_p$ is definable in $\struc_{\mathrm{an}}$,
hence equivalence modulo $\ensuremath{\mathbb{Z}}_p$ is a definable relation. It follows that the set $\mathcal{Z}$
is definable in $\struc_{\mathrm{an}}$.\par
%
The set $\mathcal{B}$ is also definable in $\struc_{\mathrm{an}}$. Indeed we have that $\delta \in \twocobop$ if and only if
\[
\delta(x,y) = \varphi(x) \varphi(y) \varphi(xy)^{-1},
\]
for some function $\varphi: K_p/N \rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ (identified with a $W_{(p)}$-valued function via $\iota$).
We parametrise such functions by the $r$-tuples $(b_1,\dots,b_r)\in\ensuremath{\mathbb{Q}}_p^{r}$ of lifts
of their values on $y_1N,\dots, y_{r}N$. In these coordinates,
we obtain that $(z_{ij}) \in \mathcal{B}$ if and only if there are $b_1, \dots, \allowbreak b_r \in \ensuremath{\mathbb{Q}}_p$
with the property that for all $i,j \in \lbrace 1, \dots, r \rbrace$,
\[
z_{ij} = b_i + b_j - b_{\gamma(i,j)} \mod \ensuremath{\mathbb{Z}}_p.
\]
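Indeed, if $b_{i}+\ensuremath{\mathbb{Z}}_p$ is the value of $\varphi$ at $y_{i}N$, then the coboundary condition $\delta(y_{i}N,y_{j}N)=\varphi(y_{i}N)\varphi(y_{j}N)\varphi(y_{\gamma(i,j)}N)^{-1}$ is exactly this congruence.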
This is a definable predicate in $\struc_{\mathrm{an}}$ so $\mathcal{B}$ is definable in $\struc_{\mathrm{an}}$.
\end{proof}
\subsection{Definability of the fibres of $\ensuremath{\mathcal{C}}$}
We now find a definable parametrisation of the fibres of the map $\ensuremath{\mathcal{C}}$ of Corollary~\ref{cor:surj_coho}.
We need the following lemma to definably express $K$-stability of characters by an $\lan_{\mathrm{an}}$-formula.
\begin{lem}
\label{lem:conj_induced}
Let $M$ be a finite index subgroup of $N$ and $\chi$ be a character of $M$ of degree one.
Then, for all $g \in G$,
\[
\leftexp{g}{\big(\mathrm{Ind}_M^N \chi\big)} = \mathrm{Ind}_{\leftexp{g}{M}}^N\leftexp{g}{\chi}.
\]
Moreover if $M'$ is another finite index subgroup of $N$ and $\chi, \chi'$ are degree one characters
of $M$ and $M'$ respectively, such that $\Ind_M^N \chi$ and $\Ind_{M'}^N \chi'$ are irreducible, then
$\Ind_M^N \chi = \Ind_{M'}^N \chi'$ if and only if there exists $g \in N$ such that
$\Res^{\leftexp{g}{M}}_{\leftexp{g}{M} \cap M'} \leftexp{g}{\chi} = \Res^{M'}_{\leftexp{g}{M} \cap M'} \chi'$.
\end{lem}
\begin{proof}
The proof of the first statement is a routine check using the formula for an induced character.
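Indeed, write $\chi^{\circ}$ for the function on $N$ agreeing with $\chi$ on $M$ and vanishing off $M$; under the convention $\leftexp{g}{\chi}(x)=\chi(g^{-1}xg)$ and using that $N$ is normal in $G$, for $x\in N$ we have
\[
\leftexp{g}{\big(\mathrm{Ind}_M^N \chi\big)}(x)
=\sum_{tM\in N/M}\chi^{\circ}(t^{-1}g^{-1}xgt)
=\sum_{tM\in N/M}(\leftexp{g}{\chi})^{\circ}\big((gtg^{-1})^{-1}x(gtg^{-1})\big)
=\big(\mathrm{Ind}_{\leftexp{g}{M}}^N\leftexp{g}{\chi}\big)(x),
\]
since $t\mapsto gtg^{-1}$ carries a transversal for $M$ in $N$ to a transversal for $\leftexp{g}{M}$ in $N$.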
The second statement follows from Mackey's theorem
(cf.\ \cite[Proposition~8.6~(c)]{hrumar2015definable}).
\end{proof}
We are ready to construct the definable set parametrising $\ensuremath{\mathcal{C}}^{-1}(c)\subseteq X_K$. From now on, let $n_1, \dots, n_d \in N$ be a minimal
topological generating set for $N$.
\begin{prop}
\label{pro:X_definable}
Let $c \in \coho{2}(K_p/ N)$ and let $\ensuremath{\mathcal{D}}^c$ be the set of pairs
$(\tuple{\lambda}, \tuple{\xi})$, $\tuple{\lambda}\in \M_{d\times (d + r)}(\ensuremath{\mathbb{Z}}_p)$, $\tuple{\xi}=(\xi_1,\dots,\xi_d)\in \ensuremath{\mathbb{Q}}_p^{d}$ such that:
\begin{enumerate}
\item \label{pro:X_definable_1} the columns of $\tuple{\lambda}$ are the $\ensuremath{\mathbb{Z}}_p$-coordinates with respect to
$n_1, \dots, n_d$ of a basis $(h_1,\dots, h_d, t_1,\dots, t_r)$ for some subgroup
$H \in \ensuremath{\mathcal{H}}(K_p)$.
%
\item \label{pro:X_definable_2} The function $\{h_1,\dots, h_d\} \rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$, $h_i\mapsto \xi_i + \ensuremath{\mathbb{Z}}_p$,
extends to a (necessarily unique) continuous $H$-invariant homomorphism
\[
\chi: N \cap H\longrightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p.
\]
%
\item \label{pro:X_definable_3} $\Ind_{N\cap H}^N (\iota \circ \chi) \in
\Irr_K(N)$;
%
\item \label{pro:X_definable_4} $\ensuremath{\mathcal{C}}(H, (\iota \circ \chi)) = c$.
\end{enumerate}
Then $\ensuremath{\mathcal{D}}^c$ is a definable subset of $\ensuremath{\mathbb{Q}}_p^{d(d+r+1)}$
in $\struc_{\mathrm{an}}$.
\end{prop}
\begin{proof}
Condition \ref{pro:X_definable_1} is expressible by an $\lan_{\mathrm{an}}$-formula by
\cite[Lemma~2.12]{duSautoy-rationality}. Following the proof of
\cite[Lemma~8.8]{hrumar2015definable}, we show that if \ref{pro:X_definable_1}
holds, then \ref{pro:X_definable_2} holds if and only if:
\renewcommand{\theenumi}{\emph{\alph{enumi})}}
\begin{enumerate}
\item \label{pro:X_equivalent_1}
there exists $(\mu_{ij}) \in \M_{d}(\ensuremath{\mathbb{Z}}_p)$
whose columns are the $\ensuremath{\mathbb{Z}}_p$-coordinates of a good basis for some finite index
normal subgroup $M$ of $N \cap H$;
\item \label{pro:X_equivalent_3}
there exist $\xi \in \ensuremath{\mathbb{Q}}_p$, $r_1, \dots, r_d \in \ensuremath{\mathbb{Z}}_p$, and $h \in N\cap H$ such
that the order of $\xi + \ensuremath{\mathbb{Z}}_p$ in $\ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ is $\lvert N\cap H : M\rvert$ and for every
$i,j \in \lbrace 1,\dots, d\rbrace$ we have
\begin{align*}
h_j &=\leftexp{t_i^{-1}}{\varphi_i^{-1}(h^{r_j})} \mod M\\
r_i \xi &= \xi_i \mod \ensuremath{\mathbb{Z}}_p.
\end{align*}
\end{enumerate}
\renewcommand{\theenumi}{\emph{\roman{enumi})}}
Suppose that conditions \ref{pro:X_definable_1} and \ref{pro:X_definable_2} in the statement hold. Then
$\chi$ factors through a finite quotient of $N \cap H$. Set $M = \Ker \chi$ and choose
$(\mu_{ij}) \in \M_{d}(\ensuremath{\mathbb{Z}}_p)$ such that its
columns are the $\ensuremath{\mathbb{Z}}_p$-coordinates of a good basis of $M$.
Condition \ref{pro:X_equivalent_1} is immediately satisfied.
Moreover the group $(N\cap H)/M$ is cyclic, because it is isomorphic to a finite subgroup of $\ensuremath{\mathbb{C}}^{\times}$.
This, together with the $H$-invariance of $\chi$, implies condition \ref{pro:X_equivalent_3} for $h \in N\cap H$
such that $(N\cap H)/M = \langle h M \rangle$, any $\xi \in \ensuremath{\mathbb{Q}}_p$ with $\xi + \ensuremath{\mathbb{Z}}_p = \chi(h)$, and $r_1, \dots, r_d \in \ensuremath{\mathbb{Z}}$ such that, for
$i \in \lbrace 1, \dots, d\rbrace$, $h_i M = h^{r_i} M$.\par
%
Conversely, assume there are $(\mu_{ij})\in \M_{d}(\ensuremath{\mathbb{Z}}_p)$, $h \in N\cap H$, and $\xi \in \ensuremath{\mathbb{Q}}_p$
such that \ref{pro:X_equivalent_1} and \ref{pro:X_equivalent_3}
hold. We define a continuous homomorphism $\chi: N \cap H\rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ as follows.
By \cite[Lemma~1.19]{duSautoy-rationality} the map $\ensuremath{\mathbb{Z}}_p\rightarrow N\cap H$ defined by
$\lambda \mapsto h^{\lambda}$ is analytic in the $\ensuremath{\mathbb{Z}}_p$-coordinates of $N$
and therefore it is continuous. Since $M$ is an open subgroup, we may find a neighbourhood
$U$ of $0$ such that $h^\lambda \in M$ for all $\lambda \in U$. Now, $\ensuremath{\mathbb{Z}}$ is dense in $\ensuremath{\mathbb{Z}}_p$,
so, for all $i \in \lbrace 1,\dots, d\rbrace$, we may find $s_i \in (r_i + U) \cap \ensuremath{\mathbb{Z}}$. Clearly, since
$s_i \in r_i + U$, we have
\[
h^{s_i} M = h^{r_i} M = h_i M,
\]
showing that $(N\cap H) / M$ is cyclic with generator $h M$.\par
%
By assumption, the order of $\xi + \ensuremath{\mathbb{Z}}_p$ in $\ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ is equal to the order of $h M$ in $(N \cap H)/M$, thus
we have an injective homomorphism $\beta: (N \cap H)/M\rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ defined by
$h M \mapsto \xi + \ensuremath{\mathbb{Z}}_p$. We define $\chi: N \cap H\rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ to be the composition
of $\beta$ with the canonical projection $N \cap H \rightarrow (N \cap H)/M$. The latter is continuous
by \cite[Proposition~1.2]{DdSMS}, so $\chi$ is a continuous homomorphism. Since $y_1 = 1$ by assumption,
$t_1 \in N \cap H$. So, for all $j \in \lbrace 1, \dots, d\rbrace$,
\[
\chi(h_j) = \chi( \leftexp{t_1^{-1}}{h^{r_j}}) = r_j \xi + \ensuremath{\mathbb{Z}}_p= \xi_j + \ensuremath{\mathbb{Z}}_p.
\]
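Here the first equality uses the first congruence in \ref{pro:X_equivalent_3} with $i=1$ (where $\varphi_{1}=\mathrm{id}$ since $y_{1}=1$); the second holds because $(N\cap H)/M$ is abelian, so that $\leftexp{t_{1}^{-1}}{h^{r_j}}\equiv h^{r_j} \mod M$; and the third is the second congruence in \ref{pro:X_equivalent_3}.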
Similarly, for $i, j \in \lbrace 1, \dots, d\rbrace$, we have
$\chi(\leftexp{t_i^{-1}}{\varphi_i^{-1}(h_j)}) = \xi_j + \ensuremath{\mathbb{Z}}_p$, showing that $\chi$ is $H$-invariant.\par
%
Conditions \ref{pro:X_equivalent_1} and \ref{pro:X_equivalent_3} become $\lan_{\mathrm{an}}$-formulas
by passing to $\ensuremath{\mathbb{Z}}_p$-coordinates with respect to $n_1, \dots, n_d$ and via the interpretation
of $\mathcal{M}_N$ in $\struc_{\mathrm{an}}$ of Lemma~\ref{lem:int_M_N}. Notice that membership in $N \cap H$
can be expressed by means of the $\ensuremath{\mathbb{Z}}_p$-coordinates of $N$ because we assumed that
$h_1, \dots, h_d$ is a good basis by \ref{pro:X_definable_1}. Moreover, equivalence modulo
$M$ is definable in $\struc_{\mathrm{an}}$, as we have a good basis for $M$. Finally, the condition on the order
of $\xi$ is equivalent to
\[
\left(h^{(\xi^{-1})} \in M\right) \wedge
\left(\forall\, \eta \in \ensuremath{\mathbb{Q}}_p \, (\ord(\eta) > \ord (\xi)
\Rightarrow h^{(\eta^{-1})} \notin M)\right).
\]\par
We now show that condition \ref{pro:X_definable_3} is definable.
To simplify notation we will throughout the rest of the proof identify the group $\ensuremath{\mathbb{Q}}_p/ \ensuremath{\mathbb{Z}}_p$ with $W_{(p)}$ through $\iota$.
Under this identification, we re-define $\chi = \iota \circ \chi$.\par
%
First we show that the irreducibility
of $\mathrm{Ind}_{N\cap H}^N \chi$ is expressible as an $\lan_{\mathrm{an}}$-formula. Indeed, by Mackey's
irreducibility criterion $\Ind_{N\cap H}^N \chi$ is irreducible if and only if
\[
\forall\, g\in N\ : \Big(\big(\forall\, h\in N\cap H,\ \leftexp{g}{h}\in N\cap H
\Rightarrow \chi(\leftexp{g}{h}) = \chi(h)\big)\Longrightarrow g\in H\Big).
\]
By \ref{pro:X_definable_1} and \ref{pro:X_definable_2} we may rewrite the formula above in terms of
$\tuple{\lambda}$, $\tuple{\xi}$ and of the $\ensuremath{\mathbb{Z}}_p$-coordinates in $N$. By Lemma~\ref{lem:int_M_N}, this gives
an $\lan_{\mathrm{an}}$-formula for the irreducibility statement in condition \ref{pro:X_definable_3}.
To conclude the proof that this condition gives rise to a definable set, we show that $K$-invariance can also
be expressed by an $\lan_{\mathrm{an}}$-formula.
Indeed, let $u = \lvert K : N \rvert$ and
${m} = \lvert G : N \rvert$. Fix $y_{u + 1}, \dots, y_{{m}} \in G$ such that
$(y_1, \dots, y_u)$ and $(y_1, \dots, y_{m})$ are left transversals of $N$ in $K$ and
$G$ respectively. Recall that, for $g\in G$, we denote by $\varphi_g$ the
conjugation by $g$ on $N$. Let
\begin{align*}
C_K & = \lbrace \varphi_{y_i} \mid i \in \lbrace 1, \dots, u\rbrace \rbrace\\
C_G &= \lbrace \varphi_{y_i} \mid i \in \lbrace 1, \dots, {m} \rbrace \rbrace.
\end{align*}
Notice that $C_K \subseteq C_G$. By Lemma~\ref{lem:conj_induced},
the stabiliser of $\Ind_{N\cap H}^{N}\chi$ is equal to $K$ if and only if the following statement holds:
\begin{equation}
\label{eq:stab_is_K}
\forall\, \varphi \in C_G\ : \Big(\Ind_{N\cap H}^{N}\chi=\Ind_{\varphi (N\cap H)}^{N}\chi \circ\varphi^{-1}
\Longleftrightarrow \varphi \in C_K \Big).
\end{equation}
Fix
$i\in\lbrace 1,\dots, {m} \rbrace$. Lemma~\ref{lem:conj_induced} with
$M = N \cap H$, $M' = \leftexp{y_i}{(N \cap H)}$ and $\chi' = \leftexp{y_i}{\chi}$ implies that
$\Ind_{N\cap H}^{N}\chi=\Ind_{\varphi_{y_i}(N\cap H)}^{N}\chi \circ\varphi_{y_i}^{-1}$ if and only if
\[
\exists\, g \in N,\ \forall\, h\in N\cap H\ : \big(\leftexp{g}{h} \in \leftexp{y_i}{(N \cap H)} \Longrightarrow \chi(h)
= \leftexp{y_i}{\chi}(\leftexp{g}{h})\big).
\]
Again, by \ref{pro:X_definable_1} and \ref{pro:X_definable_2}, we may write the latter in terms of
$\tuple{\lambda}$, $\tuple{\xi}$ and of the $\ensuremath{\mathbb{Z}}_p$-coordinates in $N$. Substituting in \eqref{eq:stab_is_K} finishes the
proof that
condition \ref{pro:X_definable_3}
is definable. Notice that we are allowed to conjugate elements of
$N$ by elements of $G$ because we have corresponding function symbols $\varphi_g$ in $\mathcal{L}_N$ (and these
are interpreted as definable functions in $\struc_{\mathrm{an}}$ by Lemma~\ref{lem:int_M_N}).\par
%
Finally we show that also \ref{pro:X_definable_4} can be expressed by
an $\lan_{\mathrm{an}}$-formula. Fix $\alpha \in Z_p$ such that $[\alpha] = c$. By Lemma~\ref{lem:first_o_formula_cohomology},
condition \ref{pro:X_definable_4} is equivalent to
\begin{multline}
\label{eq:coho_is_c}
\exists\, \delta \in \twocobop\ :
\Bigg(\bigwedge_{i,j\in\{1,\dots,r\}}
\forall\, n, n' \in N \cap H\\
\Big(
\chi(t_{\gamma(i,j)}^{-1}a_{ij}\varphi_{j}^{-1}(t_{i}n)t_{j}n') \alpha(y_{i}N, y_{j}N) \delta(y_{i}N, y_{j}N)= \chi(nn')
\Big)\Bigg).
\end{multline}
We parametrise $\alpha$ by an element of $\ensuremath{\mathcal{Z}}$. Now, by Lemma~\ref{lem:schur_def_sets},
$\exists \delta \in \twocobop$ in the formula above may be interpreted as $\exists (d_{ij}) \in \ensuremath{\mathcal{B}}$.
In particular, if
$(b_{ij})$ and $(d_{ij})$ represent $\alpha$ and
$\delta$ respectively, then we have
\begin{align*}
\alpha(y_{i}N,y_{j}N) &= b_{ij} + \ensuremath{\mathbb{Z}}_p\\
\delta(y_{i}N,y_{j}N) &= d_{ij}+ \ensuremath{\mathbb{Z}}_p
\end{align*}
for all $i,j\in \lbrace 1, \dots, r\rbrace$.
Using \ref{pro:X_definable_1} and \ref{pro:X_definable_2} as before,
we may write the equalities in \eqref{eq:coho_is_c} as equalities modulo $\ensuremath{\mathbb{Z}}_p$
involving $\tuple{\lambda}$, $\tuple{\xi}$ and the $\ensuremath{\mathbb{Z}}_p$-coordinates in $N$. It follows
that \ref{pro:X_definable_4} is definable by an $\lan_{\mathrm{an}}$-formula and we conclude the proof.
\end{proof}
Proposition~\ref{pro:X_definable} shows that
there is a surjective map $\Psi: \ensuremath{\mathcal{D}}^c \rightarrow \ensuremath{\mathcal{C}}^{-1}(c)$ defined by
$(\tuple{\lambda}, \tuple{\xi})\mapsto (H, \chi)$ where $H\in\ensuremath{\mathcal{H}}(K_p)$ is the subgroup corresponding to
the basis $(h_1,\dots, h_d, t_1,\dots, t_r)$ of Proposition~\ref{pro:X_definable}~\ref{pro:X_definable_1}
and $\chi$ is as in Proposition~\ref{pro:X_definable}~\ref{pro:X_definable_2}.
\subsection{Finishing the proof of Theorem~\ref{thm:Main}}
We write the partial zeta series as a generating function enumerating the equivalence classes of a family
of definable equivalence relations. We conclude rationality of the partial zeta series
by Theorem~\ref{thm:rational_series}. Theorem~\ref{thm:Main} then follows from
Proposition~\ref{prop:partial-Main}.\par
We start by constructing a definable equivalence
relation on $\ensuremath{\mathcal{D}}^c$ whose equivalence classes will be in bijection with
$\Irr_K^{c}(N)$.
Let $(\tuple{\lambda}, \tuple{\xi}), (\tuple{\lambda}', \tuple{\xi}') \in \ensuremath{\mathcal{D}}^c$ and
let $(H, \chi) = \Psi (\tuple{\lambda}, \tuple{\xi})$ and $(H', \chi') = \Psi (\tuple{\lambda}', \tuple{\xi}')$. We define an
equivalence relation $\mathbin{\mathcal{E}}$ on $\ensuremath{\mathcal{D}}^c$ by
\[
((\tuple{\lambda}, \tuple{\xi}), (\tuple{\lambda}', \tuple{\xi}')) \in \mathbin{\mathcal{E}} \Longleftrightarrow
\mathrm{Ind}_{N\cap H}^N \chi = \mathrm{Ind}_{N\cap H'}^N \chi'.
\]
\begin{lem}
The relation $\mathbin{\mathcal{E}}$ is definable in $\struc_{\mathrm{an}}$.
\end{lem}
\begin{proof}
Let $(H, \chi), (H', \chi')$ be as above. By Lemma~\ref{lem:conj_induced},
$\Ind_{N\cap H}^N \chi = \Ind_{N\cap H'}^N \chi'$ if and only if
\[
\exists\, g \in N,\ \forall\, h\in N \cap H\ \left(\leftexp{g}{h} \in N \cap H' \Longrightarrow \chi(h) = \chi'(\leftexp{g}{h})\right).
\]
Writing this in the $\ensuremath{\mathbb{Z}}_p$-coordinates of $N$ we obtain an $\lan_{\mathrm{an}}$-formula. Restricting this formula to
the definable set $\ensuremath{\mathcal{D}}^c$ we obtain the $\lan_{\mathrm{an}}$-formula defining
$\mathbin{\mathcal{E}}$.
\end{proof}
Composing $\Psi$ with the surjective map $X_K\rightarrow \Irr_K(N)$ of Corollary~\ref{cor:surj_coho}
induces a bijection between the set of equivalence classes $\ensuremath{\mathcal{D}}^c/\mathbin{\mathcal{E}}$ and $\Irr_K^c(N)$. We now use
this bijection to produce a definable family of equivalence relations giving the partial zeta series.
For $(\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^c$,
write $(h_1(\tuple{\lambda}), \dots, h_d(\tuple{\lambda}))$ for the good basis associated
with $\tuple{\lambda}$ by Proposition~\ref{pro:X_definable}~\ref{pro:X_definable_1}. The function $f:\ensuremath{\mathcal{D}}^c\rightarrow \ensuremath{\mathbb{Z}}$ given by
\[
(\tuple{\lambda}, \tuple{\xi}) \longmapsto \sum_{i = 1}^{d} \big(\omega(h_i(\tuple{\lambda})) - 1\big)
\]
is definable in $\struc_{\mathrm{an}}$ because $\mathcal{M}_N$ is definably interpreted in $\struc_{\mathrm{an}}$ and,
under this interpretation, $\omega$ becomes a definable function
by \cite[Theorem~1.18~(iv)]{duSautoy-rationality}. Notice that, if $\Psi(\tuple{\lambda}, \tuple{\xi}) = (H, \chi)$,
then, by the discussion preceding \cite[Lemma~2.8]{duSautoy-rationality}, $p^{f(\tuple{\lambda}, \tuple{\xi})}$
is the index of $N\cap H$ in $N$, which is equal to the degree of $\Ind_{N\cap H}^{N} \chi$, since $\chi$ has degree one.\par
Let
\[
F: \mathbin{\mathcal{E}} \longrightarrow \ensuremath{\mathbb{Z}}
\]
be the function defined by
$((\tuple{\lambda}, \tuple{\xi}), (\tuple{\lambda}', \tuple{\xi}'))\mapsto f(\tuple{\lambda}, \tuple{\xi})$.
The function $f$ is definable, hence $F$ is definable. It follows that,
for $n \in \ensuremath{\mathbb{N}}_0$, the fibre of $F$ at $n$ gives a definable subset of $\mathbin{\mathcal{E}}$. Let
$\mathbin{\mathcal{E}}_n = F^{-1}(n)$. The projection onto the
first component of a product is a definable function so the sets
\[
\ensuremath{\mathcal{D}}^c_n = \lbrace (\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^c \mid f(\tuple{\lambda}, \tuple{\xi}) = n\rbrace
\]
are definable for all $n \in \ensuremath{\mathbb{N}}_0$. Furthermore, if
$((\tuple{\lambda}, \tuple{\xi}), (\tuple{\lambda}', \tuple{\xi}')) \in \mathbin{\mathcal{E}}$, then the degrees of
the associated induced characters are equal, and so $f(\tuple{\lambda}, \tuple{\xi})= f(\tuple{\lambda}', \tuple{\xi}')$.
This implies that $\mathbin{\mathcal{E}}_n = \mathbin{\mathcal{E}} \cap (\ensuremath{\mathcal{D}}^c_n \times \ensuremath{\mathcal{D}}^c_n)$, so
each $\mathbin{\mathcal{E}}_n$ is an equivalence relation on $\ensuremath{\mathcal{D}}^c_n$ and $\lbrace \mathbin{\mathcal{E}}_n \rbrace_{n \in \ensuremath{\mathbb{N}}_0}$ is a definable
family of equivalence relations.\par
Since, for all $n\in \ensuremath{\mathbb{N}}_0$,
the set $\ensuremath{\mathcal{D}}^c_n/\mathbin{\mathcal{E}}_n$ is in bijection with the subset of characters of degree $p^n$ in $\Irr_K^c(N)$,
it follows that
\[
Z_{N;K}^{c}(s) = \sum_{n \in \ensuremath{\mathbb{N}}_0} \#(\ensuremath{\mathcal{D}}^c_n/\mathbin{\mathcal{E}}_n) p^{-ns}.
\]
Applying Theorem~\ref{thm:rational_series} to the series above
we deduce that $Z_{N;K}^{c}(s)$ is a rational function in $p^{-s}$. This concludes the proof.
\section{Twist classes and Clifford theory\label{sec:Twist-iso-Clifford}}
Throughout the rest of this paper, we develop
results leading up to the proof of Theorem~\ref{thm:Main-twist}.
The main goal of the present section is to define a cohomology class $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})$
attached to a twist class $\tic{\theta}$ of $N$. In the following section,
we will show that $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})$ controls the number of
$G$-twist classes of $L$ lying above a given $\tic{\theta}$. In this sense,
the function $\ensuremath{\mathcal{T}}_{L,K,\Gamma}$ can be thought of as an analogue of the
function $\ensuremath{\mathcal{C}}_K$ used earlier. Note, however, that we will need to use
\emph{both} of these functions to establish Theorem~\ref{thm:Main-twist}.
Throughout the current section, we let $G$ be an arbitrary profinite group.
We say that two irreducible continuous complex representations $\rho,\sigma$ of $G$ are \emph{twist equivalent} if there exists
a one-dimensional representation $\psi$ of $G$ such that $\rho\otimes\psi\cong\sigma$.
This equivalence relation partitions the set of irreducible representations
of $G$ into \emph{twist isoclasses}. Let $\Lin(G)$ denote the set of characters in $\Irr(G)$ of degree one, that is, the linear continuous characters of $G$. We say that $\lambda,\delta\in\Irr(G)$ are \emph{twist equivalent} or lie in the same \emph{twist class} if $\lambda=\delta\psi$,
for some $\psi\in\Lin(G)$. Of course two representations are twist equivalent if and only if the characters they afford are. Note that twist equivalence preserves the dimension of representations, so we can speak of the dimension (degree) of a twist isoclass (twist class).
If $H\leq G$ and $\psi:G\rightarrow\ensuremath{\mathbb{C}}^{\times}$ is a function (e.g.,
a degree one character), we will write $\psi|_{H}$ for $\Res_{H}^{G}(\psi)$.
We now define a twist equivalence relation for representations of
a subgroup of $G$, where the twisting is by degree one characters
which extend to $G$.
\begin{defn}
\label{def:G-twist}
Let $H$ be a subgroup of $G$ and let $\rho$ and $\sigma$ be two
irreducible representations of $H$. We say that $\rho$ and $\sigma$
are \emph{$G$-twist equivalent,} and write $\rho\twist_{G}\sigma$,
if there is a $\psi\in\Lin(G)$ such that
\[
\rho\otimes\psi|_{H}\cong\sigma.
\]
Similarly, two irreducible characters $\lambda,\delta\in\Irr(H)$
are \emph{$G$-twist equivalent}, written $\lambda\twist_{G}\delta$,
if $\lambda=\delta\psi|_{H}$, for some $\psi\in\Lin(G)$.
\end{defn}
For a character $\theta\in\Irr(H)$, we write $\tic{\theta}$ for the \emph{$G$-twist class} of $\theta$, that is,
\[
\tic{\theta}=\{\rho\in\Irr(H)\mid\rho\twist_{G}\theta\}
\]
and we denote the set of such $G$-twist classes by $\widetilde{\Irr}(H)$.
In particular, when $H=G$, $\widetilde{\Irr}(G)$ is in bijection with the set of twist
isoclasses of $G$.
From now on, let $N$ be a normal subgroup of $G$ of finite index.
The conjugation action of $G$ on $\Irr(N)$ induces an action on
$\widetilde{\Irr}(N)$. Indeed, $g\cdot\tic{\theta}:=\tic{\leftexp{g}{\theta}\,}$
is well-defined because for any $\psi\in\Lin(G)$ and any $n\in N$,
we have
\[
\leftexp{g}{(\psi|_{N}\theta)}(n)=\leftexp{g}{\psi}(n)\leftexp{g}{\theta}(n)=\psi(n)\leftexp{g}{\theta}(n),
\]
and hence $\tic{\leftexp{g}{(\psi|_{N}\theta})}=\tic{\leftexp{g}{\theta}\,}$.
For any $\theta\in\Irr(N)$, define the stabiliser subgroups
\[
K_{\tic{\theta}}=\Stab_{G}(\theta),\qquad L_{\tic{\theta}}=\Stab_{G}(\tic{\theta}).
\]
Note that $K_{\tic{\theta}}$ only depends on the class $\tic{\theta}$
because $\Stab_{G}(\theta)=\Stab_{G}(\theta')$ for any $\theta'\in\tic{\theta}$.
It is clear that $K_{\tic{\theta}}\leq L_{\tic{\theta}}$, but in
fact we also have:
\begin{lem}
The group $K_{\tic{\theta}}$ is normal in $L_{\tic{\theta}}$.
\end{lem}
\begin{proof}
Indeed, if $k\in K_{\tic{\theta}}$, $g\in L_{\tic{\theta}}$ and
$x\in N$, then there exist some $\psi_{g}$, $\psi_{g^{-1}}\in\Lin(G)$
such that
\[
\leftexp{g}{\theta}(y)=\theta(y)\psi_{g}(y),\quad\leftexp{g^{-1}}{\theta}(y)=\theta(y)\psi_{g^{-1}}(y),\qquad\text{for all }y\in N.
\]
Thus
\begin{align*}
\leftexp{gkg^{-1}}{\theta}(x) & =\theta(gk^{-1}g^{-1}xgkg^{-1})=\theta(k^{-1}g^{-1}xgk)\psi_{g^{-1}}(x)\\
 & =\theta(g^{-1}xg)\psi_{g^{-1}}(x)=\theta(x)\psi_{g}(x)\psi_{g^{-1}}(x),
\end{align*}
where we use that $\psi_{g^{-1}}$, being a character of $G$, is constant on conjugacy classes, that $k$ fixes $\theta$, and that $\leftexp{g}{\theta}=\theta\psi_{g}|_{N}$.
But on the other hand,
\begin{align*}
\theta(x) & =\leftexp{gg^{-1}}{\theta}(x)=\leftexp{g}{(\leftexp{g^{-1}}{\theta})}(x)=(\leftexp{g^{-1}}{\theta})(g^{-1}xg)\\
& =\theta(g^{-1}xg)\psi_{g^{-1}}(g^{-1}xg)=\theta(x)\psi_{g}(x)\psi_{g^{-1}}(x),
\end{align*}
Comparing the two computations yields $\leftexp{gkg^{-1}}{\theta}=\theta$, that is, $gkg^{-1}\in K_{\tic{\theta}}$.
\end{proof}
\subsection{Restriction and induction of twist classes}
Let $H$ be a group such that $N\leq H\leq G$. Let $\widetilde{\Irr}(H\mid\tic{\theta})$
be the set of those $G$-twist classes $\tic{\lambda}\in\widetilde{\Irr}(H)$
such that $\lambda\in\Irr(H\mid\psi|_{N}\theta)$, for some $\psi\in\Lin(G)$.
This is well-defined because $\lambda\in\Irr(H\mid\psi|_{N}\theta)$
if and only if $\psi'|_{H}\lambda\in\Irr(H\mid\psi'|_{N}\psi|_{N}\theta)$,
for all $\psi'\in\Lin(G)$.
The following is an immediate consequence of Clifford's theorem (see
\cite[(6.5)]{Isaacs}). Informally, it says that the $G$-twist classes
contained in the restriction to $N$ of $\tic{\rho}\in\widetilde{\Irr}(H\mid\tic{\theta})$
are precisely the $H$-conjugates of $\tic{\theta}$.
\begin{lem}
\label{lem:Cliffords-thm-twists}Let $\tic{\rho}\in\widetilde{\Irr}(H\mid\tic{\theta})$.
Then $\tic{\rho}\in\widetilde{\Irr}(H\mid\leftexp{h}{\tic{\theta}})$, for any
$h\in H$. Moreover, if $\tic{\rho}\in\widetilde{\Irr}(H\mid\tic{\theta'})$,
for some $\theta'\in\Irr(N)$, then there exists an $h\in H$ such
that $\leftexp{h}{\tic{\theta}}=\tic{\theta'}$.
\end{lem}
We now consider induction of twist classes.
Let $H$ and $H'$ be groups such that $K_{\tic{\theta}}\leq H\leq H'\leq G$.
Induction gives rise to a function
\begin{align*}
\twind_{H}^{H'}:\widetilde{\Irr}(H\mid\tic{\theta}) & \longrightarrow\widetilde{\Irr}(H'\mid\tic{\theta})\\
\tic{\lambda} & \longmapsto\tic{\Ind_{H}^{H'}\lambda},
\end{align*}
which is well-defined thanks to the formula $\Ind_{H}^{H'}(\psi|_{H}\lambda)=\psi|_{H'}\Ind_{H}^{H'}(\lambda)$,
for $\psi\in\Lin(G)$, and surjective thanks to standard Clifford
theory (see \cite[(6.11)(b)]{Isaacs}). However, unlike the classical
Clifford correspondence, where induction gives a bijection $\Irr(H\mid\theta)\rightarrow\Irr(H'\mid\theta)$,
the map $\twind_{H}^{H'}$ is not necessarily injective. Nevertheless, once we get up to the group $L_{\tic{\theta}}$, induction of twist classes behaves as in the classical Clifford correspondence, namely:
\begin{lem}
\label{lem:Ind-is-bijective} The map $\twind_{L_{\tic{\theta}}}^{G}$
is bijective.
\end{lem}
\begin{proof}
Let $\tic{\lambda},\tic{\lambda'}\in\widetilde{\Irr}(L_{\tic{\theta}}\mid\tic{\theta})$
be such that $\Ind_{L_{\tic{\theta}}}^{G}\lambda\twist_{G}\Ind_{L_{\tic{\theta}}}^{G}\lambda'$.
After multiplying by suitable degree one characters of $G$ restricted
to $L_{\tic{\theta}}$ we may assume that both $\lambda$ and $\lambda'$
lie above $\theta$. By hypothesis, there is a $\psi\in\Lin(G)$ such
that
\[
\Ind_{L_{\tic{\theta}}}^{G}\lambda=\psi\Ind_{L_{\tic{\theta}}}^{G}\lambda',
\]
so by Clifford's theorem there is a $g\in G$ such that $\leftexp{g}{\theta}=\psi|_{N}\theta$.
Thus $g\in L_{\tic{\theta}}$ and $\lambda=\leftexp{g}{\lambda}$
lies above $\psi|_{N}\theta$. Standard Clifford theory now implies
that $\lambda=\psi|_{L_{\tic{\theta}}}\lambda'$ because $\lambda$ and $\psi|_{L_{\tic{\theta}}}\lambda'$ induce
to the same irreducible character of $G$, both lie above $\psi|_{N}\theta$,
and $\Stab_{G}(\psi|_{N}\theta)=K_{\tic{\theta}}\leq L_{\tic{\theta}}$.
\end{proof}
\subsection{\label{subsec:The-function-bar-mu}The function $\bar{\mu}$ attached
to a strong extension}
From now on, let $\theta\in\Irr(N)$ be fixed and let $L$ and $K$ be groups
such that
\[
N\leq L\leq L_{\tic{\theta}}\qquad\text{and}\qquad N\leq K\leq K_{\tic{\theta}}.
\]
We assume that $L$ normalises $K$.
The situations we will consider
where this is satisfied are when either $L=L_{\tic{\theta}}$ and
$K=L\cap K_{\tic{\theta}}$ or $L\leq L_{\tic{\theta}}$ and $K=K_{\tic{\theta}}$.
From now on, let $\hat{\theta}\in\PIrr_{\alpha}(K)$ be a strong extension
of $\theta$ to $K$ with factor set $\alpha$. For any $g\in L$,
we have
\begin{equation}
\leftexp{g}{\theta}=\theta\psi_{g}|_{N},\label{eq:g-theta-chi}
\end{equation}
for some $\psi_{g}\in\Lin(G)$. The conjugate projective character
$\leftexp{g}{\hat{\theta}}$ defined by $\leftexp{g}{\hat{\theta}}(x)=\hat{\theta}(g^{-1}xg)$
has factor set $\leftexp{g}{\alpha}$, where
\[
\leftexp{g}{\alpha}(x,y)=\alpha(g^{-1}xg,g^{-1}yg)\qquad\text{for all }x,y\in K.
\]
Since both $\leftexp{g}{\hat{\theta}}$ and $\hat{\theta}\psi_{g}|_{K}$
are strong extensions of $\leftexp{g}{\theta}$, there exists a function
\[
\mu(g):K/N\rightarrow\ensuremath{\mathbb{C}}^{\times}
\]
(i.e., a function on $K$ constant on cosets of $N$) such that
\begin{equation}
\leftexp{g}{\hat{\theta}}=\hat{\theta}\psi_{g}|_{K}\cdot\mu(g).\label{eq:g-hat-theta-mu}
\end{equation}
Note that we may take $\mu(g)=\mu(gn)$, for any $n\in N$ because
$N$ fixes $\hat{\theta}$. Indeed, if $\Theta$ is a representation
affording $\theta$ and $\widehat{\Theta}$ is a projective representation
of $K$ affording $\hat{\theta}$, then, for any $n\in N$ and $x\in K$,
we have
\begin{align}
\leftexp{n}{\hat{\theta}}(x) & =\Tr(\widehat{\Theta}(n^{-1}xn))=\Tr(\Theta(n^{-1})\widehat{\Theta}(x)\Theta(n))=\Tr(\widehat{\Theta}(x))=\hat{\theta}(x).\label{eq:N-fixes-theta-hat}
\end{align}
We will therefore henceforth write $\mu(gN)$ instead of $\mu(g)$.
Using \eqref{eq:g-hat-theta-mu} and the fact that factor sets multiply
under tensor products of projective representations, we deduce that
the factor set of $\mu(gN)$ is $\leftexp{g}{\alpha}\alpha^{-1}$,
that is,
\[
\mu(gN)\in\PIrr_{\leftexp{g}{\alpha}\alpha^{-1}}(K/N).
\]
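Explicitly, comparing factor sets in \eqref{eq:g-hat-theta-mu}: the factor set of $\leftexp{g}{\hat{\theta}}$ is $\leftexp{g}{\alpha}$, while $\hat{\theta}\psi_{g}|_{K}$ has factor set $\alpha$ because the degree one character $\psi_{g}|_{K}$ has trivial factor set; the factor set of $\mu(gN)$ must therefore account for the discrepancy $\leftexp{g}{\alpha}\alpha^{-1}$.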
\begin{lem}
\label{lem:theta-hat-non-trivial-on-coset-Burnside}
For every $x\in K$ there exists an $n\in N$ such that $\hat{\theta}(xn)\neq0$.
Thus, for fixed $\theta$, the function $\mu(gN)$ is uniquely determined
by $gN$, $\hat{\theta}$ and $\psi_{g}|_{K}$.
\end{lem}
\begin{proof}
Let $\Theta$ be a representation affording $\theta$ and $\widehat{\Theta}$
a projective representation of $K$ affording $\hat{\theta}$, so
that $\hat{\theta}(xn)=\Tr(\widehat{\Theta}(xn))=\Tr(\widehat{\Theta}(x)\Theta(n))$.
Assume that $\hat{\theta}(xn)=0$ for all $n\in N$. Then $\Tr(\widehat{\Theta}(x)\Theta(n))=0$
for all $n\in N$, and by a theorem of Burnside (see \cite[(27.4)]{Curtis_Reiner})
the values of $\Theta$ span the whole algebra $\M_{\theta(1)}(\ensuremath{\mathbb{C}})$
of matrices of size $\theta(1)$, so we have $\Tr(\widehat{\Theta}(x)A)=0$,
for all $A\in\M_{\theta(1)}(\ensuremath{\mathbb{C}})$. Since the trace form on $\M_{\theta(1)}(\ensuremath{\mathbb{C}})$
is non-degenerate, this implies that $\widehat{\Theta}(x)=0$, which is
a contradiction. Thus $\hat{\theta}(xn)\neq0$ for some $n\in N$
and
\[
\mu(gN)(xN)=\mu(gN)(xnN)=\leftexp{g}{\hat{\theta}}(xn)\hat{\theta}(xn)^{-1}\psi_{g}(xn)^{-1},
\]
which proves the second assertion.
\end{proof}
We now consider how $\mu(gN)$ depends on $\psi_{g}|_{K}$. By \eqref{eq:g-theta-chi},
we have $\leftexp{g}{\theta}=\theta\psi_{g}|_{N}$. Let $\psi_{g}'\in\Lin(G)$
be such that $\leftexp{g}{\theta}=\theta\psi_{g}'|_{N}$. Then $\theta\, (\psi_{g}\psi_{g}'^{-1})|_{N} = \theta$
and since both $\hat{\theta}\,(\psi_{g}\psi_{g}'^{-1})|_{K}$ and $\hat{\theta}$
are strong extensions of $\theta$, we have
\begin{equation}
\hat{\theta}\,(\psi_{g}\psi_{g}'^{-1})|_{K}=\hat{\theta}\cdot\nu_{g},\label{eq:theta-psi-nu_g}
\end{equation}
for some function $\nu_{g}:K/N\rightarrow\ensuremath{\mathbb{C}}^{\times}$. In fact, since
$\hat{\theta}\,(\psi_{g}\psi_{g}'^{-1})|_{K}$ and $\hat{\theta}$ have
the same factor set, $\nu_{g}$ has trivial factor set, that is, $\nu_{g}$
is a homomorphism.
Thus \eqref{eq:g-hat-theta-mu} can be written
\begin{equation}
\leftexp{g}{\hat{\theta}}=\hat{\theta}\psi_{g}|_{K}\cdot\mu(gN)=\hat{\theta}\psi_{g}'|_{K}\cdot\mu(gN)\nu_{g}.\label{eq:mu-nu_g-ambiguity}
\end{equation}
\begin{defn}\label{def:Gamma}
Define the following subgroup of $\Lin(K/N)$.
\[
\Gamma_{K,\tic{\theta}}=\{\nu\in\Lin(K/N)\mid\hat{\theta}\varepsilon|_{K}=\hat{\theta}\nu,\ \text{for some}\ \varepsilon\in\Lin(G)\}.
\]
(As usual, we denote by $\Lin(K/N)$ the subgroup of $\Lin(K)$ of characters which are trivial on $N$.)
\end{defn}
In the present section, $K$ and $\tic{\theta}$ are fixed and we will simply write $\Gamma$ for $\Gamma_{K,\tic{\theta}}$.
Note that $\Gamma$ is independent of the choice of representative
$\theta$ of $\tic{\theta}$ and of the choice of strong extension
$\hat{\theta}$ of $\theta$. Indeed, if $\psi\in\Lin(G)$ and $\hat{\theta}'$
is a strong extension of $\theta\psi|_{N}$, then there exists a function
$\omega:K/N\rightarrow\ensuremath{\mathbb{C}}^{\times}$ such that
\[
\hat{\theta}\psi|_{K}\omega=\hat{\theta}'.
\]
Clearly $\hat{\theta}\varepsilon|_{K}=\hat{\theta}\nu$ holds for some $\varepsilon\in\Lin(G)$
if and only if $\hat{\theta}\varepsilon|_{K}\psi|_{K}\omega = \hat{\theta}\nu\psi|_{K}\omega$,
that is, by the equation above, if and only if $\hat{\theta}'\varepsilon|_{K}=\hat{\theta}'\nu$.
Moreover, for every $\nu\in\Gamma$ and $\psi_{g}$ as in \eqref{eq:g-hat-theta-mu}, if we
let $\varepsilon\in\Lin(G)$ be such that $\hat{\theta}\varepsilon|_{K}=\hat{\theta}\nu$,
we have that \eqref{eq:theta-psi-nu_g} holds with $\psi'_{g}=\varepsilon^{-1}\psi_{g}$
and $\nu_{g}=\nu$. Thus \eqref{eq:mu-nu_g-ambiguity} implies the
following.
\begin{lem}
For any $g\in L$, the coset $\bar{\mu(gN)}:=\mu(gN)\Gamma$ is independent
of the choice of $\psi_{g}|_{K}$ in \eqref{eq:g-hat-theta-mu}.
\end{lem}
In what follows, for a set $A$, we use the notation $\Func(A,\ensuremath{\mathbb{C}}^{\times})$
to denote the group of functions
$A\rightarrow\ensuremath{\mathbb{C}}^{\times}$ under pointwise multiplication.
The last lemma implies that, when $\theta$ is fixed, $gN$ and $\hat{\theta}$ uniquely
determine $\bar{\mu(gN)}$ and hence $\hat{\theta}$ uniquely determines
the function
\[
\bar{\mu}:L/N\longrightarrow F_{K}/\Gamma,\qquad gN\longmapsto\bar{\mu(gN)},
\]
where
\[
F_{K}:=\Func(K/N,\ensuremath{\mathbb{C}}^{\times}).
\]
We endow
the abelian group $F_{K}$ with the structure of $L/N$-module via
$gN\cdot f=\leftexp{g}{f}$, that is, $(gN\cdot f)(xN)=f(g^{-1}xgN)$
(this is well-defined because $K$ is normalised by $L$).
Since $\hat{\theta}\varepsilon|_{K}=\hat{\theta}\nu$
implies, upon conjugating both sides by $g$ and cancelling the nowhere-vanishing scalar factor $\psi_{g}|_{K}\,\mu(gN)$ from \eqref{eq:g-hat-theta-mu}, that $\hat{\theta}\varepsilon|_{K}=\hat{\theta}\,\leftexp{g}{\nu}$,
$\Gamma$ is a submodule of $F_{K}$. Thus the quotient $F_{K}/\Gamma$
carries the corresponding $L/N$-module structure.
\subsection{\label{subsec:The-cohom-class-bar-mu}The cohomology class determined
by $\bar{\mu}$}
We now consider how $\bar{\mu}$ depends on the choice of strong
extension $\hat{\theta}$ and the choice of representative $\theta\in\tic{\theta}$.
\begin{prop}
\label{prop:function-mu-cohomology}Let $\theta\in\Irr(N)$ and let
$\hat{\theta}$ be a strong extension of $\theta$ to $K$. The function
$\bar{\mu}$ associated with $\hat{\theta}$ is an element of $\cocy{1}(L/N,F_{K}/\Gamma)$.
The image $[\bar{\mu}]$ of $\bar{\mu}$ in $\coho{1}(L/N,F_{K}/\Gamma)$
is uniquely determined by $\tic{\theta}$, that is, independent of
the choice of strong extension $\hat{\theta}$ and independent of
the choice of representative $\theta\in\tic{\theta}$.
\end{prop}
\begin{proof}
For the first statement, we need to show that $\bar{\mu}$ is a crossed
homomorphism, that is, that for all $g,g'\in L$, $\bar{\mu}(gg'N)=\bar{\mu}(gN)\leftexp{g}{\bar{\mu}(g'N)}$,
or equivalently,
\[
\mu(gN)\leftexp{g}{\mu(g'N)}\Gamma=\mu(gg'N)\Gamma.
\]
By \eqref{eq:g-hat-theta-mu}, there exist some $\psi_{g},\psi_{g'}\in\Lin(G)$
such that
\begin{align*}
\leftexp{gg'}{\hat{\theta}} & =\leftexp{g}{(\hat{\theta}\psi_{g'}|_{K}\cdot\mu(g'N))}=\leftexp{g}{\hat{\theta}}\psi_{g'}|_{K}\cdot\leftexp{g}{\mu(g'N)}\\
& =\hat{\theta}\psi_{g}|_{K}\cdot\mu(gN)\psi_{g'}|_{K}\cdot\leftexp{g}{\mu(g'N)}\\
& =\hat{\theta}\,(\psi_{g}\psi_{g'})|_{K}\cdot\mu(gN)\leftexp{g}{\mu(g'N)}.
\end{align*}
On the other hand, for some $\psi_{gg'}\in\Lin(G)$ we have
$\leftexp{gg'}{\hat{\theta}}=\hat{\theta}\psi_{gg'}|_{K}\cdot\mu(gg'N)$
and hence
\[
\hat{\theta}\psi_{gg'}|_{K}\cdot\mu(gg'N)=\hat{\theta}\,(\psi_{g}\psi_{g'})|_{K}\cdot\mu(gN)\leftexp{g}{\mu(g'N)}.
\]
This is equivalent to
\begin{equation}
\hat{\theta}\,(\psi_{g}\psi_{g'})^{-1}|_{K}\psi_{gg'}|_{K}=\hat{\theta}\cdot\mu(gN)\leftexp{g}{\mu(g'N)}\mu(gg'N)^{-1},
\end{equation}
which implies that $\mu(gN)\leftexp{g}{\mu(g'N)}\mu(gg'N)^{-1}\in\Gamma$. Thus $\bar{\mu}$ is a crossed homomorphism.
For the second statement, let $\hat{\theta}'$
be another strong extension of $\theta$ to $K$. Then there exists
a function $\omega\in F_{K}$, such that $\hat{\theta}'=\hat{\theta}\omega$
and hence, for any $g\in L$ there is a $\psi_{g}\in\Lin(G)$ such
that
\begin{equation}
\leftexp{g}{\hat{\theta}'}
\label{eq:another-strong-extn}
= \hat{\theta}\psi_{g}|_{K}\cdot\mu(gN)\leftexp{g}{\omega}
= \hat{\theta}' \psi_{g}|_{K} \cdot \mu(gN) \leftexp{g}{\omega} \omega^{-1}.
\end{equation}
The function
\[
f:gN\mapsto\leftexp{g}{\omega}\omega^{-1}\Gamma=\leftexp{g}{\omega\Gamma}(\omega\Gamma)^{-1}
\]
lies in $\cobo{1}(L/N,F_{K}/\Gamma)$ and \eqref{eq:another-strong-extn}
implies that $[\bar{\mu}]=[\bar{\mu}f]$. Hence both $\hat{\theta}$
and $\hat{\theta}'$ determine the same element $[\bar{\mu}]\in \cocy{1}(L/N,F_{K}/\Gamma)/\cobo{1}(L/N,F_{K}/\Gamma)=\coho{1}(L/N,F_{K}/\Gamma)$.
Finally, we need to show that $[\bar{\mu}]$ is independent of the
choice of representative $\theta\in\tic{\theta}$. Let $\theta'\in\tic{\theta}$
and let $\psi\in\Lin(G)$ be such that $\theta'=\theta\psi|_{N}$.
Then $\hat{\theta}\psi|_{K}$ is a strong extension of $\theta'$.
We want to compute $\bar{\mu}$ of $\theta'$ with respect to $\hat{\theta}\psi|_{K}$.
For any $g\in L$, \eqref{eq:g-hat-theta-mu} yields
\begin{align*}
\leftexp{g}{(\hat{\theta}\psi|_{K})} & =\leftexp{g}{\hat{\theta}}\psi|_{K}=\hat{\theta}\psi_{g}|_{K}\psi|_{K}\cdot\mu(gN)\\
& =(\hat{\theta}\psi|_{K})\psi_{g}|_{K}\cdot\mu(gN).
\end{align*}
Thus $\theta$ and $\theta'$ give rise to the same element
$\bar{\mu}$, with respect to the strong extensions $\hat{\theta}$ and $\hat{\theta}\psi|_{K}$, respectively.
By the independence of $[\bar{\mu}]$ on the choice of strong extension
proved above, we conclude that $\theta$ and $\theta'$ give rise to the same element $[\bar{\mu}]$.
\end{proof}
\subsection{The function $\ensuremath{\mathcal{T}}_{L,K,\Gamma}$}
\label{sec:The_function_cT_L_K_Gamma}
So far we have associated $[\bar{\mu}]\in \coho{1}(L/N,F_{K}/\Gamma)$
with a fixed class $\tic{\theta}\in\widetilde{\Irr}(N)$. We now consider the
situation when $\tic{\theta}$ varies, but with $K$
and $L$ fixed.
Let $L$ and $K$ be as in the beginning of Section~\ref{subsec:The-function-bar-mu}
and let $\Gamma$ be any subgroup of $\Hom(K/N,\ensuremath{\mathbb{C}}^{\times})=\Lin(K/N)$.
Define
\begin{align*}
\widetilde{\Irr}_{L,K,\Gamma}(N) &= \{\tic{\theta}\in\widetilde{\Irr}(N)\mid L=L_{\tic{\theta}},\, K=K_{\tic{\theta}},\, \Gamma=\Gamma_{K,\tic{\theta}}\},\\
\widetilde{\Irr}^{\leq}_{L,K,\Gamma}(N) &= \{\tic{\theta}\in\widetilde{\Irr}(N)\mid L\leq L_{\tic{\theta}},\,K\leq K_{\tic{\theta}},\,\Gamma=\Gamma_{K,\tic{\theta}}\},
\end{align*}
where $\Gamma_{K,\tic{\theta}}$ is as in Definition~\ref{def:Gamma}.
Note that $\widetilde{\Irr}_{L,K,\Gamma}(N)$ may well be empty for some $L,K,\Gamma$.
Proposition~\ref{prop:function-mu-cohomology} implies that we may
define the following function
\[
\ensuremath{\mathcal{T}}_{L,K,\Gamma}:\widetilde{\Irr}^{\leq}_{L,K,\Gamma}(N) \longrightarrow \coho{1}(L/N,F_{K}/\Gamma),\qquad \ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})=[\bar{\mu}].
\]
\section{Reduction to pro-$p$ Sylow subgroups}\label{sec:Reduction-to-pro-p_twisted_case}
Throughout the present section, $G$ will denote a profinite group and $N$ a
finite index normal pro-$p$ subgroup of $G$. Let $N\leq K\trianglelefteq L\leq G$
and $\Gamma$ be an arbitrary subgroup of $\Hom(K/N,\ensuremath{\mathbb{C}}^{\times})$.
For any prime $q$ dividing $|L/N|$, let $L_q$ be a subgroup of $L$ such that $L_q/N$ is a Sylow $q$-subgroup of $L/N$.
Similarly, let $K_q$ be a subgroup of $K$ such that $K_q/N$ is a Sylow $q$-subgroup of $K/N$. We may and will assume that
\[
K_p = K \cap L_p.
\]
Let $\theta\in\Irr(N)$ and let $H\leq G$ be a group that fixes $\theta$. We note that the function $\ensuremath{\mathcal{C}}_H$ defined in Theorem~\ref{thm:Clifford-map}
induces a function on twist classes. Let $\theta'\in\tic{\theta}$ so that $\theta'=\theta\psi|_{N}$ for
some $\psi\in\Lin(G)$. Let $\hat{\theta}$ be a strong extension of $\theta$. Then $\hat{\theta}\psi|_{H}$ is a strong
extension of $\theta'$ with the same factor set as that of $\hat{\theta}$, and thus
\[
\ensuremath{\mathcal{C}}_{H}(\theta)=\ensuremath{\mathcal{C}}_{H}(\theta').
\]
This shows that the function $\ensuremath{\mathcal{C}}_H$ is constant on the twist class $\tic{\theta}$, so $\ensuremath{\mathcal{C}}_{H}(\tic{\theta})$ is well-defined.
The goal of this section (Proposition~\ref{prop:red_coeff_to_Sylow}) is to show that the invariant
$\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})$ attached to a twist class $\tic{\theta} \in \widetilde{\Irr}^{\leq}_{L,K,\Gamma}(N)$
is determined by $\ensuremath{\mathcal{C}}_{K_p}(\tic{\theta})$ together with $\ensuremath{\mathcal{T}}_{L_p,K_p,\Gamma_p}(\tic{\theta})$, where $\Gamma_p$ is
the image of $\Gamma$ under the map defined by restricting homomorphisms of $K/N$ to $K_{p}/N$.
Let $q$ be a prime. As mentioned after Lemma~\ref{lem:basic-group-cohomology}, we denote the $q$-primary component of a torsion abelian group $M$ by $M_{(q)}$ and write $m_{(q)}$ for the $q$-part of an element $m\in M$.
\subsection{Reduction of the parameter $L$}
In this section we will prove that for $\tic{\theta}, \tic{\theta'} \in \widetilde{\Irr}_{L,K,\Gamma}(N)$, we have
\[
\ensuremath{\mathcal{T}}_{L_p,K,\Gamma}(\tic{\theta}) = \ensuremath{\mathcal{T}}_{L_p,K,\Gamma}(\tic{\theta'})
\Longrightarrow
\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta}) = \ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta'}).
\]
(see Proposition~\ref{prop:red_L_to_Sylow}).
In order to prove this we need two lemmas.
Let $\ell$ be a prime. Recall the definition of $W$ and $W_{(\ell)}$ in Section~\ref{subsec:Def_of_fibres}. Let
$f \in \Func(K, \ensuremath{\mathbb{C}}^\times)$. Then for any $g\in G$ normalising $K$, we have
$\leftexp{g}{(f_{(\ell)})} = (\leftexp{g}{f})_{(\ell)}$. Recall also the notation $F_K=\Func(K/N,\ensuremath{\mathbb{C}}^{\times})$ and that
throughout Section~\ref{sec:Reduction-to-pro-p_twisted_case}, $\Gamma$ is an arbitrary subgroup of $F_K$.
\begin{lem}
\label{lem:pi_ell_mu-cobound}
Let $q$ be a prime dividing $\lvert L/N \rvert$ and $\mu \in \cocy{1}(L_q/N, F_K)$.
Suppose that there exists a function $\omega_q: K/N \to W_{(q)}$ such that for all $g \in L_q$,
\[
\mu(gN)_{(q)} = \leftexp{g}{\omega_q} \omega_q^{-1}\cdot\nu_g,
\]
for some $\nu_g\in\Gamma$.
Then $\bar{\mu} \in \cobo{1}(L_q/N, F_K/\Gamma)$.
\end{lem}
\begin{proof}
Let $U = \lbrace \alpha \in \cocy{1}(L_q/N, F_K) \mid \alpha^{\lvert L_q/N \rvert} = 1 \rbrace$.
Then, as $\ensuremath{\mathbb{C}}^{\times}$, and hence $F_K$, is divisible, Lemma~\ref{lem:Z-BU_H-finite} implies that
\[
\cocy{1}(L_q/N, F_K) = U \cobo{1}(L_q/N, F_K).
\]
Thus there is a $\tau \in U$ and a function $\omega: K /N \to \ensuremath{\mathbb{C}}^{\times}$
such that, for $g \in L_q$, $\mu(gN) = \tau(gN) \leftexp{g}{\omega}\omega^{-1}$.
This implies that $ \mu(gN)_{(q)} = \tau(gN)_{(q)}\leftexp{g}{(\omega_{(q)})} {\omega}^{-1}_{(q)}$ and hence
\[
\tau(gN)_{(q)} = \mu(gN)_{(q)} (\leftexp{g}{(\omega_{(q)})} {\omega}^{-1}_{(q)})^{-1}.
\]
Combined with the equation
$\mu(gN)_{(q)} = \leftexp{g}{\omega_q} \omega_q^{-1}\nu_g$ this implies that
\[
\tau(gN)_{(q)} = \leftexp{g}{\omega_q} \omega_q^{-1} (\leftexp{g}{(\omega_{(q)})} {\omega}^{-1}_{(q)})^{-1} \nu_g
= \leftexp{g}{(\omega_q\omega_{(q)}^{-1})} (\omega_q\omega_{(q)}^{-1})^{-1} \nu_g.
\]
Since $\tau(gN)$ has values in $W_{(q)}$, we have $\tau(gN)=\tau(gN)_{(q)}$ and thus, for all $g\in L_q$,
\begin{align*}
\mu(gN) & = \tau(gN) \leftexp{g}{\omega}\omega^{-1} =
\leftexp{g}{(\omega_q\omega_{(q)}^{-1})} (\omega_q\omega_{(q)}^{-1})^{-1} \leftexp{g}{\omega}\omega^{-1} \nu_g \\
& = \leftexp{g}{(\omega_q\omega_{(q)}^{-1}\omega)} (\omega_q\omega_{(q)}^{-1}\omega)^{-1}\nu_g.
\end{align*}
Hence the function $\bar{\mu} : L_q/N\rightarrow F_K/\Gamma$ is an element in $\cobo{1}(L_q/N, F_K/\Gamma)$.
\end{proof}
\begin{lem}\label{lem:triv_l_not_q}
Let $\tic{\theta}\in \widetilde{\Irr}_{L,K,\Gamma}(N)$. For every prime $q$ dividing $\lvert L/N \rvert$ such that
$q\neq p$, we have $\ensuremath{\mathcal{T}}_{L_q,K,\Gamma}(\tic{\theta}) = 1$.
\end{lem}
\begin{proof}
Let $\Theta$ be a representation of $N$ affording the character $\theta$
and let $\widehat{\Theta}$ be a strong extension of $\Theta$ to $K$.
Then, for $g \in L_q$, there are $\psi_g\in \Lin(G)$,
$P_g \in \GL_{\theta(1)}(\ensuremath{\mathbb{C}})$ and $\mu \in \cocy{1}(L_q/N, F_K)$ such that
\begin{equation}
\label{eq:mu_for_Theta}
\leftexp{g}{\widehat{\Theta}} = \,P_g^{-1} \widehat{\Theta} P_g \cdot \psi_g\lvert_K\cdot\mu(gN).
\end{equation}
By definition we have $\ensuremath{\mathcal{T}}_{L_q,K,\Gamma}(\tic{\theta}) = [\bar{\mu}]$, so by Lemma~\ref{lem:pi_ell_mu-cobound}
it suffices to show that there is a function $\omega_q:\nolinebreak K/N \to W_{(q)}$ such that for all $g \in L_q$,
\begin{equation}
\label{eq:triviality_q}
\mu(gN)_{(q)} = \leftexp{g}{\omega_q}\omega_q^{-1} \nu_g,
\end{equation}
for some $\nu_g\in \Gamma$.
To prove this, let $\xi = \det \circ \,\widehat{\Theta}$ so that $\xi \in \Func(K,\ensuremath{\mathbb{C}}^{\times})$
(note that the use of this function is the reason we cannot work only with projective characters in this proof).
Then, by equation \eqref{eq:mu_for_Theta},
\[
\leftexp{g}{\xi}\,{\xi}^{-1} = \mu(gN)^{\theta(1)} (\psi_g \lvert_K)^{\theta(1)},
\]
and hence
\begin{equation}
\label{eq:xi}
\leftexp{g}{(\xi_{(q)})}\,\xi_{(q)}^{-1} =
\mu(gN)_{(q)}^{\theta(1)} (\psi_g\lvert_K)_{(q)}^{\theta(1)}.
\end{equation}
Now, $\theta(1)$ is a power of $p$ so it is coprime to $q$. This means that raising
to the power of $\theta(1)$ is an automorphism of $W_{(q)}$. Therefore there exists a unique function
$\omega_q: K\rightarrow W_{(q)}$ such that $\omega_q^{\theta(1)} = \xi_{(q)}$ and \eqref{eq:xi} implies that
\[
\leftexp{g}{\omega_q}\omega_q^{-1} = \mu(gN)_{(q)} (\psi_g\lvert_K)_{(q)}.
\]
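Here both sides are $W_{(q)}$-valued and, by \eqref{eq:xi}, have the same $\theta(1)$-th power, so they coincide because raising to the power $\theta(1)$ is injective on $W_{(q)}$.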
We finish the proof by showing that the last equality implies equation \eqref{eq:triviality_q}, that is,
that $(\psi_g\lvert_K)_{(q)}\in \Gamma$ and that $\omega_q$ is constant on cosets of $N$.
First observe that $(\psi_g\lvert_N)_{(q)}$
is a homomorphism from a pro-$p$ group
to the $q$-group $W_{(q)}$, so it must be trivial.
By the definition of $\Gamma_{K, \tic{\theta}}$ (Definition~\ref{def:Gamma}), it therefore follows that
$(\psi_g\lvert_K)_{(q)} \in \Gamma_{K, \tic{\theta}} = \Gamma$.
It remains to show that $\omega_q$ is constant on the cosets of $N$ in $K$. Indeed, let $t \in K$
and $n \in N$. Then $\widehat{\Theta}(tn) = \widehat{\Theta}(t) \Theta(n)$, so
$\xi(tn) = \xi(t) \xi(n)$ and hence $\xi_{(q)}(tn) = \xi_{(q)}(t) \xi_{(q)}(n)$.
Since $\xi \lvert_N = \det \circ \Theta$ is a homomorphism from the pro-$p$ group $N$ to $\ensuremath{\mathbb{C}}^{\times}$,
$\xi_{(q)}$ is trivial on $N$. It follows that $\xi_{(q)}(tn) = \xi_{(q)}(t)$, so
$\omega_q(tn)^{\theta(1)}=\xi_{(q)}(tn)=\xi_{(q)}(t)=\omega_q(t)^{\theta(1)}$ and
thus $\omega_q(tn) = \omega_q(t)$.
\end{proof}
\begin{prop}
\label{prop:red_L_to_Sylow}
Let $\tic{\theta}, \tic{\theta'} \in \widetilde{\Irr}_{L,K,\Gamma}(N)$. Then
\[
\ensuremath{\mathcal{T}}_{L_p,K,\Gamma}(\tic{\theta}) = \ensuremath{\mathcal{T}}_{L_p,K,\Gamma}(\tic{\theta'})
\Longrightarrow
\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta}) = \ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta'}).
\]
\end{prop}
\begin{proof}
By Lemma~\ref{lem:basic-group-cohomology}, $\coho{1}(L/N,F_{K}/\Gamma)$ is a torsion abelian group so we can write
\[
\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta}) =
\prod_q
\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(q)},
\]
where $q$ runs through the primes dividing $\lvert L/N \rvert$.
Let $q$ be a prime dividing $|L/N|$. By Lemma~\ref{lem:basic-group-cohomology},
\[
\res_{q}:\coho{1}(L/N,F_K/\Gamma)_{(q)}\longrightarrow\coho{1}(L_{q}/N,F_K/\Gamma)
\]
is injective. We claim that
\begin{equation}
\res_{q}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(q)})=\ensuremath{\mathcal{T}}_{L_{q},K,\Gamma}(\tic{\theta})
\label{eq:res(T_L-K-Gamma(theta))}
\end{equation}
(and similarly for $\theta'$). Indeed, letting $\mu$ be such that $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta}) = [\bar{\mu}]$,
we have
\[
\res_{L/N,L_{q}/N}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})) = [\bar{\mu}\lvert_{L_q}] = \ensuremath{\mathcal{T}}_{L_{q},K,\Gamma}(\tic{\theta}),
\]
where the second equality holds by the definition of $\ensuremath{\mathcal{T}}_{L_q,K,\Gamma}(\tic{\theta})$.
Furthermore, since $\coho{1}(L_{q}/N,F_K/\Gamma)$ is a $q$-group, the homomorphism
$\res_{L/N,L_{q}/N}$ is trivial on $\coho{1}(L/N,F_K/\Gamma)_{(\ell)}$, for any prime
$\ell\neq q$. Thus,
\[
\res_{L/N,L_{q}/N}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta}))
=\res_{L/N,L_{q}/N}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(q)})=\res_{q}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(q)}),
\]
proving \eqref{eq:res(T_L-K-Gamma(theta))}.
Now, if $q\neq p$, Lemma~\ref{lem:triv_l_not_q} implies that $\ensuremath{\mathcal{T}}_{L_{q},K,\Gamma}(\tic{\theta})=1$ and
by \eqref{eq:res(T_L-K-Gamma(theta))} we obtain $\res_{q}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(q)})=1$,
whence $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(q)}=1$ (by the injectivity of $\res_{q}$). We must therefore have
$\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})=\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(p)}$, and since
$\theta$ was arbitrary, we similarly have $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta'})=\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta'})_{(p)}$.
Applying \eqref{eq:res(T_L-K-Gamma(theta))} for $q=p$, we get
\[
\res_{p}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(p)})=\ensuremath{\mathcal{T}}_{L_p,K,\Gamma}(\tic{\theta})
= \ensuremath{\mathcal{T}}_{L_p,K,\Gamma}(\tic{\theta'})=\res_{p}(\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta'})_{(p)}),
\]
and we conclude that $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})_{(p)}=\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta'})_{(p)}$,
and thus $\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta})=\ensuremath{\mathcal{T}}_{L,K,\Gamma}(\tic{\theta'})$.
\end{proof}
\subsection{Reduction of the coefficient module}
We have shown that $\ensuremath{\mathcal{T}}_{L, K, \Gamma}(\tic{\theta})$ is determined by
$\ensuremath{\mathcal{T}}_{L_{p}, K, \Gamma}(\tic{\theta})\in \coho{1}(L_p/N,F_{K}/\Gamma)$. We will now further show that the latter
is determined by an element in $\coho{1}(L_p/N,F_{K_p}/\Gamma_p)$ where
\[
\Gamma_p = \{ \nu\lvert_{K_p} \mid \nu \in \Gamma \}.
\]
\subsubsection{Reduction of the parameter $\Gamma$}
We start by investigating the structure of $\Gamma_{K, \tic{\theta}}$.
\begin{defn}
We define
\[
\Gamma_{K}^{0} = \{ \nu \in \Lin(K/N) \mid \nu = \varepsilon\lvert_K, \text{ for some } \varepsilon\in \Lin(G),\, \nu_{(p)} = 1\}.
\]
\end{defn}
\begin{lem}
\label{lem:struct_Gamma}
Let $\theta \in \Irr(N)$ be such that $K\leq \Stab_G(\theta)$.
\begin{enumerate}
\item \label{lem:struct_Gamma_i}
Then $\Gamma_{K, \tic{\theta}}$ splits as the
(internal) direct product
\[
\Gamma_{K, \tic{\theta}} = \Gamma_{K}^{0}\, \big(\Gamma_{K, \tic{\theta}}\big)_{(p)}
\]
where $\big(\Gamma_{K, \tic{\theta}}\big)_{(p)} = \{ \nu_{(p)} \mid \nu \in \Gamma_{K, \tic{\theta}} \}$.
\item \label{lem:struct_Gamma_ii}
Moreover, let $\rho: \Lin(K) \to \Lin({K_p})$ be the
homomorphism of abelian groups induced by restricting maps
on $K$ to maps on ${K_p}$. Then $\Gamma_K^0 \leq \Ker \rho$ and $\rho$ restricted to
$\big(\Gamma_{K, \tic{\theta}}\big)_{(p)}$ is injective with image $\Gamma_{{K_p}, \tic{\theta}}$.
\end{enumerate}
\end{lem}
\begin{proof}
We prove the first statement. Let $\hat{\theta}$ be a strong extension of $\theta$.
First, $\Gamma_{K}^{0} \leq \Gamma_{K, \tic{\theta}}$. Indeed, if $\nu \in \Gamma_{K}^{0}$, then $\nu\lvert_N = 1$ and
$\nu = \varepsilon\lvert_K$ for some $\varepsilon \in \Lin(G)$. We have $\varepsilon\lvert_N = 1$, so
$\theta \varepsilon\lvert_N = \theta$. Thus $\hat{\theta} \nu = \hat{\theta} \varepsilon\lvert_{K}$, so
$\nu \in \Gamma_{K, \tic{\theta}}$.
Second,
$\Gamma_{K}^{0} \cap \big(\Gamma_{K, \tic{\theta}}\big)_{(p)} = 1$ because, by definition, $\nu_{(p)} = 1$ for
all $\nu \in \Gamma_{K}^{0}$.
Let now $\nu \in \Gamma_{K, \tic{\theta}}$, so that
\[
\nu = \prod_{q} \nu_{(q)}
\]
where the product runs over primes $q \mid \lvert K : N \rvert$. We prove that for
$q \neq p$, $\nu_{(q)} \in \Gamma_{K}^{0}$. Fix $q \neq p$. Since $(\nu_{(q)})_{(p)} = 1$,
it suffices to show that $\nu_{(q)}$ is the restriction of a character in $\Lin(G)$.
In order to do so, let $\Theta$ be a representation
affording $\theta$ and let $\widehat{\Theta}$ be a strong extension of $\Theta$ to $K$. By
definition of $\Gamma_{K, \tic{\theta}}$, there are $P \in \GL_{\theta(1)}(\ensuremath{\mathbb{C}})$ and $\varepsilon \in \Lin(G)$
such that
\[
\varepsilon\lvert_{K} \cdot\, \widehat{\Theta} = \nu \cdot\, P^{-1} \widehat{\Theta} P.
\]
This implies that
\[
\varepsilon\lvert_{K}^{\theta(1)} \cdot\, (\det \circ \, \widehat{\Theta})= \nu^{\theta(1)} \cdot\, (\det \circ\, \widehat{\Theta}).
\]
Hence $(\varepsilon\lvert_{K})^{\theta(1)} = \nu^{\theta(1)}$ and so
$((\varepsilon\lvert_{K})^{\theta(1)})_{(q)} = (\nu^{\theta(1)})_{(q)}$. The decomposition of
a root of unity into roots of unity of prime power order is multiplicative, hence
$(\varepsilon_{(q)}\lvert_{K})^{\theta(1)} = \nu_{(q)}^{\theta(1)}$.
Since $q \neq p$ and $\theta(1)$ is a power of $p$, it follows that
$\varepsilon_{(q)}\lvert_{K}= \nu_{(q)}$.\par
We prove the second part. Clearly $\Gamma_{K}^{0} \leq \Ker \rho$ because ${K_p}$ is a pro-$p$ group.
Moreover, every
homomorphism $K/N \rightarrow W_{(p)}$ factors through
\[
\frac{K}{[K,K]N} = \prod_{q} \frac{K_q [K,K]N}{[K,K]N},
\]
where the product runs over primes $q \mid \lvert K : N \rvert$.
For $q \neq p$, there are no non-trivial homomorphisms
$K_q[K, K]N/[K,K]N \rightarrow W_{(p)}$. Thus $\rho$ is injective on $\big(\Gamma_{K, \tic{\theta}}\big)_{(p)}$ and
we need only prove the statement about its image. To this end, let $\widehat{\Theta}_p =
\widehat{\Theta}\lvert_{{K_p}}$ and let $\nu_p \in \Gamma_{{K_p}, \tic{\theta}}$. Then
there are $P \in \GL_{\theta(1)}(\ensuremath{\mathbb{C}})$ and $\varepsilon \in \Lin(G)$ such that
\begin{equation*}
\varepsilon\lvert_{{K_p}} \cdot\, \widehat{\Theta}_p = \nu_p P^{-1} \widehat{\Theta}_p P.
\end{equation*}
Restricting both sides of the last equality to $N$ we have that
$\varepsilon\lvert_{N} \cdot\, \Theta = P^{-1} \Theta P$, so $\varepsilon\lvert_{K} \cdot\, \widehat{\Theta}$ and
$P^{-1} \widehat{\Theta} P$ are both strong extensions of $\varepsilon\lvert_{N} \cdot\, \Theta$. Thus there
is a scalar function $\nu:K/N \to \ensuremath{\mathbb{C}}^{\times}$ such that
\[
\nu \cdot I_{\theta(1)}= \varepsilon\lvert_{K} \cdot\, \widehat{\Theta} P^{-1} \widehat{\Theta}^{-1} P.
\]
By its very definition, $\nu \in \Gamma_{K, \tic{\theta}}$ and so $\nu_{(p)} \in \big(\Gamma_{K, \tic{\theta}}\big)_{(p)}$.
This is enough to conclude, because ${K_p}$ is a pro-$p$ group and hence
we have $\nu_{(p)}\lvert_{{K_p}} = \nu\lvert_{{K_p}} = \nu_p$.
\end{proof}
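To fix ideas, here is a toy numerical illustration of the splitting in part \ref{lem:struct_Gamma_i} (not needed in the sequel): if $\nu \in \Gamma_{K, \tic{\theta}}$ has order $12$ and $p = 2$, then $\nu = \nu_{(2)}\nu_{(3)}$ with $\nu_{(2)} = \nu^{9}$ of order $4$ and $\nu_{(3)} = \nu^{4}$ of order $3$, and the lemma asserts that the prime-to-$p$ factor $\nu_{(3)}$ necessarily lies in $\Gamma_{K}^{0}$.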
The following consequence of the structure of $\Gamma_{K, \tic{\theta}}$ will achieve the
goal of this subsection and will also be key to producing an $\lan_{\mathrm{an}}$-formula
for the predicate $\Gamma_{K, \tic{\theta}} = \Gamma$ in Section~\ref{sec:rationality_partial_tw}.
\begin{prop}
\label{prop:red_Gamma}
Let $\tic{\theta} \in \widetilde{\Irr}(N)$ be such that $K \leq K_{\tic{\theta}}$.
Assume there exists $\tic{\theta}' \in \widetilde{\Irr}(N)$ such that $\Gamma_{K, \tic{\theta}'} = \Gamma$.
Then $\Gamma_{K,\tic{\theta}} = \Gamma$ if and only if $\Gamma_{{K_p}, \tic{\theta}} = \Gamma_p$.
\end{prop}
\begin{proof}
Part \ref{lem:struct_Gamma_ii}
of Lemma~\ref{lem:struct_Gamma} gives that
\[
\Gamma_{{K_p}, \tic{\theta}} = \Gamma_p \iff \big(\Gamma_{K, \tic{\theta}}\big)_{(p)} = \Gamma_{(p)}.
\]
By part \ref{lem:struct_Gamma_i} of Lemma~\ref{lem:struct_Gamma} (applied to both $\tic{\theta}$ and $\tic{\theta}'$),
the latter is equivalent to $\Gamma_{K, \tic{\theta}} = \Gamma_{K}^{0} \big(\Gamma_{K, \tic{\theta}}\big)_{(p)}
= \Gamma_{K}^{0}\Gamma_{(p)} = \Gamma$.
\end{proof}
\subsubsection{Reduction of the parameter $K$}
\label{subsec:red_coeff_mod}
In what follows, we let $\tic{\theta} \in \widetilde{\Irr}_{L,K,\Gamma}(N)$. Proposition~\ref{prop:red_Gamma} implies that
\[
\widetilde{\Irr}_{L,K,\Gamma}(N) \subseteq \widetilde{\Irr}_{L_p, K_p, \Gamma_p}^{\leq}(N)
\]
(see Section~\ref{sec:The_function_cT_L_K_Gamma} for the definitions of these sets)
and therefore, for $\tic{\theta} \in \widetilde{\Irr}_{L,K,\Gamma}(N)$,
the element $\ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p} (\tic{\theta})\in \coho{1}(L_p/N,F_{K_p}/\Gamma_p)$ is well-defined.
The following proposition shows that $\ensuremath{\mathcal{T}}_{L, K, \Gamma}(\tic{\theta})$
is determined by $\ensuremath{\mathcal{T}}_{L_{p}, K_p, \Gamma_p}(\tic{\theta})$.
\begin{prop}
\label{prop:red_coeff_to_Sylow}
Let $\tic{\theta}, \tic{\theta'} \in \widetilde{\Irr}_{L,K,\Gamma}(N)$ and assume that
$\ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta}) = \ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta'})$. Then
\[
\ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p} (\tic{\theta})= \ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p}(\tic{\theta'})
\Longrightarrow
\ensuremath{\mathcal{T}}_{L, K, \Gamma} (\tic{\theta})= \ensuremath{\mathcal{T}}_{L, K, \Gamma}(\tic{\theta'}).
\]
\end{prop}
\begin{proof}
By Proposition~\ref{prop:red_L_to_Sylow} it suffices to prove that
\begin{equation*}
\ensuremath{\mathcal{T}}_{L_p, K_p, \Gamma_p} (\tic{\theta})= \ensuremath{\mathcal{T}}_{L_p, K_p, \Gamma_p}(\tic{\theta'})
\Longrightarrow
\ensuremath{\mathcal{T}}_{L_p, K, \Gamma} (\tic{\theta}) =
\ensuremath{\mathcal{T}}_{L_p, K, \Gamma}(\tic{\theta'}).
\end{equation*}
By Lemma~\ref{lem:Jaikins-prop} our hypothesis $\ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta}) = \ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta'})$ implies that
$\ensuremath{\mathcal{C}}_{K}(\tic{\theta}) = \ensuremath{\mathcal{C}}_{K}(\tic{\theta'})$. Therefore, by Lemma~\ref{lem:same_fs},
there exist strong extensions $\hat{\theta}$ and $\hat{\theta}'$ with the same factor set.
Thus there exist $\mu, \mu' \in \cocy{1}(L_{p}/N, F_K)$, such that for all $g \in L_{p}$ there are $\psi_g, \psi'_g\in \Lin(G)$
with
\[
\leftexp{g}{\hat{\theta}} = \hat{\theta} \psi_g\lvert_K \cdot \mu(gN) \qquad \text{and} \qquad
\leftexp{g}{\hat{\theta}'} = \hat{\theta}' \psi'_g\lvert_K \cdot \mu'(gN).
\]
Since $\hat{\theta}$ and $\hat{\theta}'$ have the same factor set, we have
\begin{equation}
\label{eq:mu-mu_prime-hom}
\mu(gN)^{-1} \mu'(gN) \in \Lin(K/N)=\Hom(K/N, \ensuremath{\mathbb{C}}^{\times}).
\end{equation}
Assume now that
$\ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p} (\tic{\theta})= \ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p}(\tic{\theta'})$.
Then there is a function $\eta: K_{p}/N \rightarrow \ensuremath{\mathbb{C}}^{\times}$ such that, for any $g\in L_{p}$,
\[
(\mu(gN)^{-1}\lvert_{K_{p}}) (\mu'(gN)\lvert_{K_{p}}) \Gamma_p
= \leftexp{g}{\eta}\eta^{-1} \Gamma_p.
\]
Changing $\psi_g$ and $\psi'_g$ if necessary we may without loss of generality assume that
\begin{equation}
\label{eq:triv_K_p}
\mu(gN)^{-1}\lvert_{K_{p}} \mu'(gN)\lvert_{K_{p}} = \leftexp{g}{\eta}\eta^{-1}.
\end{equation}
By \eqref{eq:mu-mu_prime-hom} and \eqref{eq:triv_K_p},
$\leftexp{g}{\eta}\eta^{-1}$ is the restriction of an element in $\Lin(K/N)$, hence
it is trivial on $(K_{p}\cap [K,K])N$.
By the second isomorphism theorem, $\eta$ defines a function on $K_{p}[K,K]N$, which is constant on cosets of $[K,K]N$.
By abuse of notation, we denote this function by $\eta$ as well.
The finite abelian group $K/([K,K]N)$ factors as
\begin{equation}
\label{eq:splitting_K_KKN}
\frac{K}{[K,K]N} = \frac{K_{p}[K,K]N}{[K,K]N} \prod_q
\frac{K_q[K,K]N}{[K,K]N},
\end{equation}
where $q$ runs through the primes dividing $|K: K_{p}|$.
Extend $\eta$ to the function $\hat{\eta}: K/([K,K]N) \rightarrow \ensuremath{\mathbb{C}}^{\times}$ such that $\hat{\eta} = 1$ on $\frac{K_{q}[K,K]N}{[K,K]N}$ for every $q\neq p$.
By \eqref{eq:mu-mu_prime-hom} and \eqref{eq:triv_K_p}, the function $\leftexp{g}{\hat{\eta}}\hat{\eta}^{-1}$ is
a homomorphism on $\frac{K_{p}[K,K]N}{[K,K]N}$
and it is trivial on each of the other factors on the
right-hand side of \eqref{eq:splitting_K_KKN}. Thus $\leftexp{g}{\hat{\eta}}\hat{\eta}^{-1}$ is a homomorphism $K/N \rightarrow \ensuremath{\mathbb{C}}^{\times}$.
Therefore
\[
(\leftexp{g}{\hat{\eta}}\hat{\eta}^{-1})_{(p)} = \leftexp{g}{\hat{\eta}_{(p)}}\hat{\eta}_{(p)}^{-1}
\]
is a homomorphism $K/N \rightarrow W_{(p)}$ and, by \eqref{eq:triv_K_p},
\[
\mu(gN)_{(p)}^{-1}\lvert_{K_{p}} \mu'(gN)_{(p)}\lvert_{K_{p}}
= \leftexp{g}{\eta_{(p)}} \eta_{(p)}^{-1}
=\leftexp{g}{\hat{\eta}_{(p)}} \hat{\eta}_{(p)}^{-1}|_{K_p}.
\]
Since $\mu(gN)_{(p)}^{-1} \mu'(gN)_{(p)}$ is a homomorphism from $K/([K,K]N)$ to the $p$-group $W_{(p)}$, it is trivial on every factor $\frac{K_{q}[K,K]N}{[K,K]N}$, $q\neq p$, in \eqref{eq:splitting_K_KKN}. We therefore have
\[
\mu(gN)_{(p)}^{-1} \mu'(gN)_{(p)} = \leftexp{g}{\hat{\eta}_{(p)}}\hat{\eta}_{(p)}^{-1},
\]
for any $g\in L_p$, so by Lemma~\ref{lem:pi_ell_mu-cobound},
\[
\bar{\mu^{-1} \mu'} \in \cobo{1}(L_p/N,F_K/\Gamma),
\]
that is,
\[
\ensuremath{\mathcal{T}}_{L_p, K, \Gamma} (\tic{\theta}) =
\ensuremath{\mathcal{T}}_{L_p, K, \Gamma}(\tic{\theta'}).
\]
\end{proof}
\section{Reduction to the partial twist zeta series}
\label{sec:Reduction partial twist}
\subsection{Finite Dirichlet series for twist character triples}
If $\tic{\theta}\in \widetilde{\Irr}(N)$ and $N \leq L\leq L_{\tic{\theta}}$ we will call $(L,N,\tic{\theta})$ a \emph{$G$-twist character triple}. Given such a triple, define the finite Dirichlet series
\[
\widetilde{f}_{(L,N,\tic{\theta})}(s)=
\sum_{\tic{\lambda}\in\widetilde{\Irr}(L \mid\tic{\theta})}
\left(\frac{\lambda(1)}{\theta(1)}\right)^{-s}.
\]
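For instance, in the degenerate case $L = N$ the only $G$-twist class in $\widetilde{\Irr}(N \mid \tic{\theta})$ is $\tic{\theta}$ itself, so $\widetilde{f}_{(N,N,\tic{\theta})}(s) = 1$; in general $\widetilde{f}_{(L,N,\tic{\theta})}(s)$ is a finite sum recording the degree ratios $\lambda(1)/\theta(1)$ of the twist classes lying above $\tic{\theta}$.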
The goal of this subsection is to prove
Proposition~\ref{prop:Analogue-Jaikin}, that is, that the invariants $\ensuremath{\mathcal{C}}_{K_p}(\tic{\theta})$ and $\ensuremath{\mathcal{T}}_{L_p,K_p,\Gamma_p}(\tic{\theta})$ associated with $\tic{\theta}$ determine $\widetilde{f}_{(L,N,\tic{\theta})}(s)$. This is an analogue of Lemma~\ref{lem:Jaikins-prop} and will be used in the proof of Proposition~\ref{prop:partial-Main-twist} in the following subsection.
We start with some straightforward generalisations to projective characters
of some of the notation and formalism regarding induction and restriction.
Let $H$ be a profinite group and let $\alpha \in \cocy{2}(H)$.
If $M$ is a normal subgroup of $H$ of finite index and $\theta \in \PIrr_{\alpha_M}(M)$ we define
\[
\PIrr_{\alpha}(H \mid \theta) =
\{ \pi \in \PIrr_{\alpha}(H) \mid
\bigl< \Res^H_M \pi, \theta \bigr> \neq 0 \}.
\]
From now on, let $\alpha\in \cocy{2}(L/N)$.
\begin{lem}
\label{lem:isaacs_6_11_proj}
Let $\theta \in \PIrr_{\alpha_N}(N)$. Assume that $L\cap K_{\theta}\leq K$ (i.e., $\Stab_L(\theta) \leq K$) and that $\alpha_K$ is $L$-invariant.
Then we have a bijection
\[
\Ind_{K,\alpha}^L : \PIrr_{\alpha_K }(K \mid \theta)
\longrightarrow \PIrr_{\alpha}(L \mid \theta).
\]
\end{lem}
\begin{proof}
The proof of \cite[Theorem 6.11]{Isaacs} transfers, mutatis mutandis, to the present
situation as Frobenius reciprocity and Clifford's theorem hold in the more general
context of projective representations (see, e.g.,
\cite[Theorem~10.1]{Karpilovsky2} for the latter).
\end{proof}
We generalise the $G$-twist equivalence relation $\Gtwist$ on $\Irr(L)$ to $\PIrr_{\alpha}(L)$ in the obvious way, that is, for $\pi_1, \pi_2 \in \PIrr_{\alpha}(L)$,
\[
\pi_1 \Gtwist \pi_2 \iff \pi_1 = \pi_2 \psi\lvert_L, \quad \text{for some}\ \psi\in \Lin(G).
\]
For a projective character $\theta$ we denote its $G$-twist class
by $\tic{\theta}$, and we denote the set of $G$-twist classes in $\PIrr_{\alpha}(L)$ by
$\widetilde{\PIrr}_{\alpha}(L)$.
Moreover, if $\theta \in \PIrr_{\alpha_N}(N)$, we define
\[
\widetilde{\PIrr}_{\alpha}(L \mid \tic{\theta})
\]
as the set of those $G$-twist classes
$\tic{\pi}\in\widetilde{\PIrr}_{\alpha}(L)$
such that $\pi\in\PIrr_{\alpha}(L\mid\theta \psi\lvert_{N})$, for some $\psi\in\Lin(G)$.
The $G$-twist equivalence relation is compatible
with $\Ind_{K,\alpha}^L$ in the sense that for any $\lambda \in \PIrr_{\alpha_K}(K)$ and $\psi\in \Lin(G)$, we have $\big(\Ind_{K,\alpha}^L\lambda\big)\psi\lvert_L = \Ind_{K,\alpha}^L(\lambda\, \psi\lvert_K)$. This follows immediately from the character formula for induced projective characters; see \cite[Chapter~1, Proposition~9.1\,(i)]{Karpilovsky3}.
Thus, if $\alpha_K$ is $L$-invariant, there is a function
\begin{equation}
\label{eq:twist_iso_ind}
\twind_{K,\alpha}^L : \widetilde{\PIrr}_{\alpha_K}(K \mid \tic{\theta})
\longrightarrow \widetilde{\PIrr}_{\alpha}(L \mid \tic{\theta})
\end{equation}
sending $\tic{\pi} \in \widetilde{\PIrr}_{\alpha_K}(K \mid \tic{\theta})$ to the $G$-twist
class of $\Ind_{K,\alpha}^L \pi$.
The following lemma is a straightforward
application of Mackey's intertwining number formula for projective characters \cite[Ch.~1, Theorem~8.6]{Karpilovsky3}.
\begin{lem}
\label{lem:same_ind}
Let $\pi_1, \pi_2 \in \PIrr_{\alpha_K}(K)$ and assume that $\alpha_K$ is $L$-invariant.
Then
\[
\twind_{K,\alpha}^L \tic{\pi}_1 = \twind_{K,\alpha}^L \tic{\pi}_2
\iff \exists\,\, g \in L\ (\pi_1 \Gtwist \leftexp{g}{\pi_2}).
\]
\end{lem}
\begin{prop}
\label{prop:Analogue-Jaikin}
Let $\tic{\theta}, \tic{\theta'} \in \widetilde{\Irr}_{L,K,\Gamma}(N)$ and assume that
$\ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta}) = \ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta'})$ and $\ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p} (\tic{\theta})= \ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p}(\tic{\theta'})$. Then
\[
\widetilde{f}_{(L,N,\tic{\theta})}(s)=\widetilde{f}_{(L,N,\tic{\theta'})}(s).
\]
\end{prop}
\begin{proof}
\newcommand{\mathrm{\sigma}}{\mathrm{\sigma}}
\newcommand{\mathrm{\sigma_0}}{\mathrm{\sigma_0}}
We prove this by constructing a bijection
\[
\mathrm{\sigma} : \widetilde{\Irr}(L \mid\tic{\theta}) \longrightarrow \widetilde{\Irr}(L \mid\tic{\theta'})
\]
such that for all $\tic{\lambda} \in \widetilde{\Irr}(L \mid\tic{\theta})$ we have
\[
\frac{\lambda(1)}{\theta(1)} = \frac{\lambda'(1)}{\theta'(1)}
\]
for $\lambda' \in \mathrm{\sigma}(\tic{\lambda})$.\par
We have
$\ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta}) = \ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta'})$, so by Lemma~\ref{lem:Jaikins-prop},
there are strong extensions $\hat{\theta}$ and $\hat{\theta}'$ of
$\theta$ and $\theta'$, respectively, with the same factor set, say $\alpha$.
Suppose that
$\ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p} (\tic{\theta})= \ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p}(\tic{\theta'})$.
Then by Proposition~\ref{prop:red_coeff_to_Sylow}, there are cocycles $\mu, \mu' \in \cocy{1}(L/N, F_{K})$,
such that for any $g \in L$ there exist $\psi_g, \psi'_g \in \Lin(G)$ with
\begin{align*}
\leftexp{g}{\hat{\theta}} &= \hat{\theta} \psi_g\lvert_K \cdot \mu(gN),\\
\leftexp{g}{\hat{\theta}'} &= \hat{\theta}' \psi'_g\lvert_K \cdot \mu'(gN),\\
\mu(gN) \Gamma &= \mu'(gN) \leftexp{g}{\eta}\eta^{-1}\Gamma,
\end{align*}
for some function $\eta: K/N \to \ensuremath{\mathbb{C}}^{\times}$. By changing $\psi_g$ and $\psi'_g$ if necessary,
we may assume without loss of generality that $\mu(gN) = \mu'(gN) \leftexp{g}{\eta}\eta^{-1}$.
This gives, in particular, that
\begin{equation}
\label{eq:mod_mu}
\leftexp{g}{(\eta\hat{\theta}')}
= \big( \eta\hat{\theta}'\big) \psi'_g\lvert_K \cdot \mu(gN).
\end{equation}
Let $\omega: L/N \to \ensuremath{\mathbb{C}}^{\times}$ be a function extending $\eta$ and let
$\delta \in \cobo{2}(L/N)$
be its factor set. Clearly, the restriction
$\delta_K$ equals the factor set of $\eta$.
Let $\tic{\lambda} \in \widetilde{\Irr}(L \mid \tic{\theta})$ and let
$\rho \in \Irr(K \mid \theta)$ be such that
\[
\tic{\lambda} = \tic{\Ind}_K^L \tic{\rho}.
\]
By Lemma~\ref{lem:Clifford-extensions} we have $\rho = \hat{\theta} \cdot \pi$ for a unique $\pi\in \PIrr_{\alpha^{-1}}(K/N)$.
We define
\[
\mathrm{\sigma_0}(\tic{\lambda}) =
\twind_{K,\delta}^L \big(\tic{\eta\hat{\theta}' \cdot \pi}\big).
\]
Note that $\hat{\theta}' \cdot \pi \in \Irr(K)$, so $\eta\hat{\theta}' \cdot \pi \in \PIrr_{\delta_K}(K)$
and since $\eta$ has constant value $\eta(1)$ on $N$, we have
$\mathrm{\sigma_0}(\tic{\lambda})\in \widetilde{\PIrr}_{\delta}(L\mid \eta(1)\tic{\theta}')$.
We will show that
\[
\mathrm{\sigma_0}: \widetilde{\Irr}(L \mid \tic{\theta}) \longrightarrow \widetilde{\PIrr}_{\delta}(L\mid \eta(1)\tic{\theta}')
\]
is a well-defined bijection.\par
First, to prove that $\mathrm{\sigma_0}$ is well-defined we need to prove that
$\mathrm{\sigma_0}(\tic{\lambda})$ is independent of the choice of the $G$-twist class $\tic{\rho}$ inducing
to $\tic{\lambda}$. To this end, suppose that $\rho^* \in \Irr(K \mid \theta)$ is another character
such that $\tic{\lambda} = \tic{\Ind}_K^L \tic{\rho^*}$ and let
$\rho^* = \hat{\theta} \cdot \pi^*$ with $\pi^*\in \PIrr_{\alpha^{-1}}(K/N)$.
The relation $\tic{\Ind}_K^L \tic{\rho} = \tic{\Ind}_K^L \tic{\rho^*}$ implies
(by Mackey's induction-restriction theorem for ordinary characters) that there
is a $g \in L$ such that $\leftexp{g}{(\hat{\theta}\cdot \pi)} \Gtwist \hat{\theta}\cdot \pi^*$.
Moreover, we have
\begin{equation}
\label{eq:pre-mod_mu}
\begin{split}
\leftexp{g}{(\hat{\theta}\cdot\pi)}
& =\leftexp{g}{\hat{\theta}} \cdot \leftexp{g}{\pi}
=\big( \hat{\theta}\psi_{g}|_{K}\cdot\mu(gN)\big)\cdot\leftexp{g}{\pi}\\
&=\hat{\theta}\cdot \big(\leftexp{g}{\pi}\psi_{g}|_{K}\cdot\mu(gN)\big),
\end{split}
\end{equation}
and thus
$ \pi^* \Gtwist \leftexp{g}{\pi}\mu(gN)$.
On the other hand, by equation \eqref{eq:mod_mu},
\begin{equation}
\label{eq:mod_mu-pi}
\leftexp{g}{(\eta \hat{\theta}'\cdot\pi)}
= \eta \hat{\theta}' \cdot \big(\leftexp{g}{\pi} \psi_{g}'|_{K}\cdot\mu(gN)\big),
\end{equation}
so
\[
\leftexp{g}{(\eta\hat{\theta}'\cdot \pi)} \Gtwist \eta\hat{\theta}'\cdot \pi^*.
\]
As $\hat{\theta}$ and $\hat{\theta'}$ have the same factor set,
$\mu(gN)$ and $\mu'(gN)$ have the same factor set, so
$\mu(gN)^{-1}\mu'(gN)$, and thus $\leftexp{g}{\eta}\eta^{-1}$, is a homomorphism for all $g \in L$. Hence the factor set $\delta_K$ of $\eta$ is $L$-invariant.
We can thus apply Lemma~\ref{lem:same_ind} to obtain that
\[
\twind_{K,\delta}^L \big(\tic{\eta\hat{\theta}' \cdot \pi}\big)
= \twind_{K,\delta}^L \big(\tic{\eta\hat{\theta}' \cdot \pi^*}\big),
\]
that is, $\mathrm{\sigma_0}$ is well-defined.
Similarly, we can prove that $\mathrm{\sigma_0}$ is injective. Indeed, if $\mathrm{\sigma_0}(\tic{\lambda})=\mathrm{\sigma_0}(\tic{\lambda}^*)$, with
$\tic{\lambda} = \tic{\Ind}_K^L \tic{\rho}$, $\rho=\hat{\theta}\cdot \pi$ and $\tic{\lambda}^* = \tic{\Ind}_K^L \tic{\rho^*}$, $\rho^*=\hat{\theta}\cdot \pi^*$,
then Lemma~\ref{lem:same_ind} implies that there is a $g\in L$ such that
$\leftexp{g}{(\eta\hat{\theta}'\cdot \pi)} \Gtwist \eta\hat{\theta}'\cdot \pi^*$, so by \eqref{eq:mod_mu-pi}, $\pi^* \Gtwist \leftexp{g}{\pi}\mu(gN)$, hence by
\eqref{eq:pre-mod_mu} we get $\leftexp{g}{(\hat{\theta}\cdot \pi)} \Gtwist \hat{\theta}\cdot \pi^*$, which by Lemma~\ref{lem:same_ind} implies that $\tic{\lambda}=\tic{\lambda}^*$.
The surjectivity part of Lemma~\ref{lem:isaacs_6_11_proj} implies that
the function in equation~\eqref{eq:twist_iso_ind}
is surjective. Thus the function $\mathrm{\sigma_0}$ is surjective and hence bijective.\par
%
We now define
\[
\mathrm{\sigma}(\tic{\lambda}) = \omega^{-1} \cdot\mathrm{\sigma_0}(\tic{\lambda}),
\qquad \text{ for } \tic{\lambda} \in \widetilde{\Irr}(L \mid \tic{\theta}).
\]
Multiplying by $\omega^{-1}$ is clearly a bijection $\widetilde{\PIrr}_{\delta}(L\mid \eta(1)\tic{\theta'})\rightarrow \widetilde{\Irr}(L\mid\tic{\theta'})$ so $\mathrm{\sigma}$ is a bijection
$\widetilde{\Irr}(L \mid \nolinebreak \tic{\theta}) \to \widetilde{\Irr}(L \mid\tic{\theta'})$.
Moreover, for all $\tic{\lambda} \in \widetilde{\Irr}(L \mid \tic{\theta})$ with $\tic{\lambda} = \tic{\Ind}_K^L \tic{\rho}$, $\rho=\hat{\theta}\cdot \pi$
and $\lambda' \in \mathrm{\sigma}(\tic{\lambda})$, we have
\begin{align*}
\lambda(1) & = \lvert L : K \rvert \theta(1)\pi(1),\\
\lambda'(1) &= \omega(1)^{-1} \lvert L : K \rvert \,\omega(1)\theta'(1)\pi(1) = \lvert L : K \rvert\, \theta'(1)\pi(1).
\end{align*}
This concludes the proof.
\end{proof}
\subsection{Reduction of Theorem~\ref{thm:Main-twist} to the partial twist zeta series}
From now on, let $G$ be a twist-rigid compact $p$-adic analytic
group. Note that $G$ is allowed to be FAb here, in which case we may well have $Z_G(s) \neq \widetilde{Z}_G(s)$. Let $N$ be a normal
open pro-$p$ subgroup of $G$.
As in Sections~\ref{sec:Twist-iso-Clifford} and \ref{sec:Reduction-to-pro-p_twisted_case} we write
\[
K_{\tic{\theta}}=\Stab_{G}(\theta),\qquad L_{\tic{\theta}}=\Stab_{G}(\tic{\theta}),
\]
for any $\theta\in\Irr(N)$.
For $K$, $L$, $\Gamma$, $K_p$ and $L_p$ as in Section~\ref{sec:Reduction-to-pro-p_twisted_case} and
for any $c\in\coho{2}(K_{p}/N)$ and $c'\in \coho{1}(L_{p}/N,F_{K_{p}}/\Gamma_p)$,
define
\begin{align*}
\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N) = \{\tic{\theta}\in\widetilde{\Irr}_{L,K,\Gamma}(N)\mid
\ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta})=c,\ \ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p}(\tic{\theta})=c'\}.
\end{align*}
In analogy with the partial representation zeta series defined earlier,
we introduce the \emph{partial twist zeta series}
\[
\tpartial{N;L, K, \Gamma}{c, c'} =\sum_{\tic{\theta}\in\widetilde{\Irr}_{L, K, \Gamma}^{c,c'}(N)}\theta(1)^{-s}.
\]
Note that $\tpartial{N;L, K, \Gamma}{c, c'}=0$ unless there is a $\theta \in \Irr(N)$ such that
$K=K_{\tic{\theta}}$, $L=L_{\tic{\theta}}$ and $\Gamma_{K,\tic{\theta}} = \Gamma$. Note also that
\[
\widetilde{Z}_{N}(s)=
\sum_{\substack{ N\leq K\leq L\leq G \\
\Gamma \leq \Lin(K/N)}}
\sum_{\substack{ c\in\coho{2}(K_{p}/N)\\
c'\in \coho{1}(L_{p}/N,F_{K_{p}}/\Gamma_p)}}
\tpartial{N;L, K, \Gamma}{c, c'}.
\]
As in Section~\ref{sec:red_partial}, let $\ensuremath{\mathcal{S}}$
denote the set of subgroups $K\leq G$ such that $N\leq K$ and $K=K_{\tic{\theta}}$
for some $\theta\in\Irr(N)$.
Similarly, let $\widetilde{\ensuremath{\mathcal{S}}}$ denote the set of subgroups $L\leq G$ such that $N\leq L$ and $L=L_{\tic{\theta}}$, for some $\tic{\theta}\in\widetilde{\Irr}(N)$.
For $K \in \ensuremath{\mathcal{S}}$, let $\ensuremath{\mathcal{G}}(K)$ be the set of
subgroups $\Gamma \leq \Lin(K/N)$ such that $\Gamma=\Gamma_{K,\tic{\theta}}$,
for some $\tic{\theta}\in\widetilde{\Irr}(N)$ such that $K\leq K_{\tic{\theta}}$.
\begin{prop}\label{prop:partial-Main-twist}
Suppose that $\tpartial{N;L, K, \Gamma}{c, c'}$ is rational in $p^{-s}$,
for every $L\in\widetilde{\ensuremath{\mathcal{S}}}$, $K\in\ensuremath{\mathcal{S}}$, $\Gamma \in \ensuremath{\mathcal{G}}(K)$,
$c\in\coho{2}(K_{p}/N)$ and $c'\in \coho{1}(L_{p}/N,F_{K_p}/\Gamma_p)$. Then Theorem~\ref{thm:Main-twist} holds.
\end{prop}
\begin{proof}
By Lemma~\ref{lem:Cliffords-thm-twists}, for every $\tic{\rho}\in\widetilde{\Irr}(G)$, there are exactly $|G:L_{\tic{\theta}}|$
distinct $G$-twist classes $\tic{\theta}\in\widetilde{\Irr}(N)$ such that
$\tic{\rho}\in\widetilde{\Irr}(G\mid\tic{\theta})$. Thus
\[
\widetilde{Z}_{G}(s) = \sum_{\tic{\rho}\in\widetilde{\Irr}(G)}\rho(1)^{-s} =
\sum_{\tic{\theta} \in \widetilde{\Irr}(N)} |G:L_{\tic{\theta}}|^{-1}\sum_{\tic{\rho}\in\widetilde{\Irr}(G\mid\tic{\theta})}\rho(1)^{-s}.
\]
By Lemma~\ref{lem:Ind-is-bijective},
induction of $G$-twist classes from $\widetilde{\Irr}(L_{\tic{\theta}}\mid\tic{\theta})$
to $\widetilde{\Irr}(G\mid\tic{\theta})$ is a bijective map. Therefore,
\[
\sum_{\tic{\rho}\in\widetilde{\Irr}(G\mid\tic{\theta})} \rho(1)^{-s} =
\sum_{\tic{\lambda}\in\widetilde{\Irr}(L_{\tic{\theta}}\mid\tic{\theta})} (\lambda(1)\cdot|G:L_{\tic{\theta}}|)^{-s},
\]
and so
\begin{align*}
\widetilde{Z}_{G}(s) & = \sum_{\tic{\theta}\in\widetilde{\Irr}(N)}|G:L_{\tic{\theta}}|^{-s-1}
\sum_{\tic{\lambda} \in \widetilde{\Irr}(L_{\tic{\theta}}\mid\tic{\theta})}
\theta(1)^{-s} \left(\frac{\lambda(1)}{\theta(1)}\right)^{-s}\\
& = \sum_{\tic{\theta}\in\widetilde{\Irr}(N)}
|G:L_{\tic{\theta}}|^{-s-1} \theta(1)^{-s} \widetilde{f}_{(L_{\tic{\theta}},N,\tic{\theta})}(s)\\
& = \sum_{\substack{L\in\widetilde{\ensuremath{\mathcal{S}}}}} |G:L|^{-s-1}
\sum_{\substack{\tic{\theta}\in\widetilde{\Irr}(N)\\L_{\tic{\theta}}=L}}
\theta(1)^{-s} \widetilde{f}_{(L,N,\tic{\theta})}(s).
\end{align*}
If $\tic{\theta},\tic{\theta'}\in\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$, then
$\ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta})=\ensuremath{\mathcal{C}}_{K_{p}}(\tic{\theta'})$
and $\ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p}(\tic{\theta})=\ensuremath{\mathcal{T}}_{L_{p}, K_{p}, \Gamma_p}(\tic{\theta'})$.
Thus, by Proposition~\ref{prop:Analogue-Jaikin}, we have
$\widetilde{f}_{(L,N,\tic{\theta})}(s)=\widetilde{f}_{(L,N,\tic{\theta'})}(s)$.
By the above expression for $\widetilde{Z}_{G}(s)$,
we can therefore write
\[
\widetilde{Z}_{G}(s)=\sum_{\substack{ L\in\widetilde{\ensuremath{\mathcal{S}}}\\
K\in\ensuremath{\mathcal{S}}\\
\Gamma \in \ensuremath{\mathcal{G}}(K)}}
|G:L|^{-s-1}\sum_{\substack{ c\in\coho{2}(K_{p}/N)\\
c'\in \coho{1}(L_{p}/N,F_{K_{p}}/\Gamma_p)}}
\widetilde{f}_{L,K,\Gamma}^{c,c'}(s)\tpartial{N;L, K, \Gamma}{c, c'}
\]
where
$\widetilde{f}_{L,K,\Gamma}^{c,c'}(s) := \widetilde{f}_{(L,N,\tic{\theta})}(s)$ for some
(equivalently, any) $G$-twist character triple $(L,N,\tic{\theta})$ such that $\tic{\theta} \in \twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$.
From the assumption that $\tpartial{N;L, K, \Gamma}{c, c'}$ is rational
in $p^{-s}$, it now follows that $\widetilde{Z}_{G}(s)$, and hence $\widetilde{\zeta}_{G}(s)$,
is virtually rational. Moreover, if $G$ is pro-$p$, then $|G:L|$
is a power of $p$ for any subgroup $L$, and likewise $\lambda(1)$
is a power of $p$, so $\widetilde{f}_{(L,N,\tic{\theta})}(s)$ is a polynomial
in $p^{-s}$. Thus, when $G$ is pro-$p$, $\widetilde{Z}_{G}(s)$, and
hence $\widetilde{\zeta}_{G}(s)$, is rational in $p^{-s}$.
\end{proof}
\section{Rationality of the partial twist zeta series}
\label{sec:rationality_partial_tw}
This final section is an analogue of Section~\ref{sec:proof_main} for partial twist zeta series.
The groups $G$, $N$, $K$, $L$, $\Gamma$, $K_p$ and $L_p$ are as in the previous two sections.
We will show that, for each $c\in\coho{2}(K_p/N)$ and $c'\in \coho{1}(L_{p}/N,F_{K_{p}}/\Gamma_p)$, the set of twist classes $\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$
is in bijection with a set of equivalence classes under a definable equivalence relation in $\struc_{\mathrm{an}}$.
We deduce from this that each partial twist zeta series is rational in $p^{-s}$ and hence prove
Theorem~\ref{thm:Main-twist}. Fix $c\in\coho{2}(K_p/N)$ and $c'\in \coho{1}(L_{p}/N,F_{K_{p}}/\Gamma_p)$ throughout the
section. In order to use Proposition~\ref{prop:red_Gamma} we assume throughout that
$\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N) \neq \emptyset$.
\subsection{Reduction of the predicate $\ensuremath{\mathcal{T}}_{{L_p},{K_p},\Gamma_p}(\tic{\theta}) = c'$ to degree one characters}
In this section we reduce the computation of $\ensuremath{\mathcal{T}}_{{L_p},{K_p},\Gamma_p}(\tic{\theta})$ to a statement
on a degree one character. Namely, if $c'\in \coho{1}{({L_p}/N, F_{{K_p}}/\Gamma_p)}$, our goal is to
express $\ensuremath{\mathcal{T}}_{{L_p},{K_p},\Gamma_p}(\tic{\theta}) = c'$ with a statement involving only
elements of $N$, conjugation by elements of $G$ and a degree one character of a subgroup of $N$
of finite index. To this end, let $\theta$ be a representative of $\tic{\theta}$. We fix a pair $(H, \chi) \in X_K$
(where $X_K$ is as defined on page~\pageref{def:X_K}) such that $\theta = \Ind_{N \cap H}^{N}(\chi)$. \par
We set ${\indexPN} = \lvert {K_p} : N \rvert$ as in Section~\ref{sec:red_deg_one} and, in addition, we now define
${\indexPN'} = \lvert {L_p} : N \rvert$ and ${m} = \lvert G : N \rvert$. Let
\[
( y_1, \dots, y_{m})
\]
be a left transversal for $N$ in $G$ with $y _1 = 1$ and such that
\begin{gather*}
(y_1, \dots, y_{\indexPN} )\\
(y_1, \dots, y_{{\indexPN}} , y_{{\indexPN} + 1},\dots, y_{{\indexPN'}} )
\end{gather*}
are
left transversals for $N$ in ${K_p}$ and ${L_p}$ respectively. Notice that, as before, ${K_p} = H N$ implies that
there exist elements $t_1,\dots,t_{{\indexPN}}\in N$ such that $(y_1t_1,\dots,y_{{\indexPN}}t_{{\indexPN}})$
is a left transversal for $N\cap H$ in $H$.\par
%
In view of proving definability in $\struc_{\mathrm{an}}$ we shall express conjugation by elements in $G$ in
terms of the associated automorphism of $N$. In order to do so, we extend the notation from
Section~\ref{sec:red_deg_one}, that is, for $i \in \{ 1, \dots {m}\}$,
we define $\varphi_i: N\rightarrow N$
to be the automorphism of $N$ sending $n$ to $y_i n y_i^{-1}$ and set
\begin{align*}
C_{K_p} &= \{ \varphi_i \mid i = 1, \dots, {\indexPN} \} \\
C_{{L_p}} &= \{ \varphi_i \mid i = 1, \dots, {\indexPN'} \} \\
C_G &= \{ \varphi_i \mid i = 1, \dots, {m} \}.
\end{align*}
Finally we need to express how conjugating by a coset representative acts on the
other coset representatives. We therefore define
$d_{ij}\in N$ and $\kappa: \{ 1,\dots, {m}\} \times \lbrace 1, \dots, {m} \rbrace \rightarrow\lbrace 1,\dots,{m} \rbrace$
by
\begin{equation}
\label{eq:delta}
y_i^{-1}y_j y_i = y_{\kappa(i,j)}d_{ij},
\end{equation}
for all $i\in \{ 1,\dots, {\indexPN'}\}$ and $j \in \{ 1,\dots, {\indexPN} \}$. The choice of using right conjugation in the
definition of $\kappa$
is more natural here, as it will only be needed to simplify expressions in the argument of $\chi$.\par
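As a sanity check on \eqref{eq:delta}: if $G$ is abelian (a degenerate toy case), then $\kappa(i,j) = j$ and $d_{ij} = 1$ for all $i, j$; if only $G/N$ is abelian, we still have $\kappa(i,j) = j$, but $d_{ij} = y_j^{-1}y_i^{-1}y_j y_i$ is the corresponding commutator in $N$.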
We need to choose a strong extension of $\theta$. By Proposition~\ref{prop:Linearisation}, this may be
done by inducing a strong extension $\hat{\chi}$ of $\chi$ from $H$ to ${K_p}$. Since the definition of
$\ensuremath{\mathcal{T}}_{{L_p},{K_p},\Gamma_p}(\tic{\theta})$ is independent of the choice of strong extension (of $\theta$), we may
assume without loss of generality that $\hat{\chi}$ is given by
\[
\hat{\chi}(y_{i}t_{i}n)=\chi(n),
\]
for all $n\in N\cap H$ and $i \in \{ 1, \dots, {\indexPN}\}$.\par
The first step is to obtain an expression of the conjugate of $\hat{\chi}$ by an element of ${L_p}$. This is done in the
following lemma.
\begin{lem}
\label{lem:conjugating_chi^}
Let $z \in {L_p}$ and let $n \in N$ be such that $z = y_i n$ for some $i \in \{ 1, \dots, {\indexPN'}\}$. Let moreover $n' \in N$ and
$j \in \{ 1, \dots, {\indexPN'}\}$
be such that $y_j n' \in \leftexp{z}{H}$. Then
\[
\leftexp{z}{\hat{\chi}}(y_jn') = \chi ( t_{\kappa(i,j)}^{-1} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n).
\]
\end{lem}
\begin{proof}
We have $\leftexp{z^{-1}}{(y_j n')} = n^{-1}y_i^{-1}y_j n' y_i n$. Moreover
\begin{align*}
n^{-1}y_i^{-1}y_j n' y_i n & = n^{-1}y_i^{-1}y_j y_i \varphi_i^{-1}( n' ) n \\
& = n^{-1}y_{\kappa(i,j)} d_{ij} \varphi_i^{-1}( n' ) n\\
& = y_{\kappa(i,j)} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n\\
& = y_{\kappa(i,j)}t_{\kappa(i,j)} t_{\kappa(i,j)}^{-1}\varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n.
\end{align*}
The element $y_{\kappa(i,j)} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n$ is in $H$ by assumption so
\[
t_{\kappa(i,j)}^{-1} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n \in N\cap H.
\]
Therefore $\hat{\chi}(n^{-1}y_i^{-1}y_j n' y_i n) = \chi(t_{\kappa(i,j)}^{-1} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n)$.
\end{proof}
Next we need to be able to express when an element of ${K_p}$ is in a conjugate of $H$ in terms of conditions that we will eventually
be able to translate into a formula in $\lan_{\mathrm{an}}$.
\begin{defn}
Let $i,j \in \{ 1, \dots, {\indexPN}\}$ and $n, n' \in N$. We define $\mathbf{A}_{ij}(H, \chi, n, n')$ to be the predicate
\begin{multline*}
\Big( \forall n'' \in \varphi_i( \leftexp{n}{(N\cap H)}) : {\varphi_{j}(\leftexp{n'}{n''})} \in \varphi_i (\leftexp{n}{(N\cap H)})\\
\wedge\; \chi (\leftexp{n^{-1}}{\varphi_i}^{-1}(\varphi_{j}(\leftexp{n'}{n''}))) = \chi (\leftexp{n^{-1}}{\varphi_i}^{-1}(n''))\Big).
\end{multline*}
\end{defn}
\begin{lem}
\label{lem:cond_H}
Let $i,j \in \{ 1, \dots, {\indexPN}\}$ and $n, n' \in N$. Then $y_j n' \in \leftexp{y_i n}{H}$ if and only if
$\mathbf{A}_{ij}(H, \chi, n, n')$ holds.
\end{lem}
\begin{proof}
Let $M$ be the normaliser of $N\cap \leftexp{y_i n}{H}$ in ${K_p}$. We prove that
\[
\leftexp{y_i n}{H} = \Stab_{M}(\leftexp{y_i n}{\chi}).
\]
This will be enough to conclude. Indeed $\mathbf{A}_{ij}$ is the conjunction of two predicates: the first
expresses exactly that $y_j n' \in M$ and the second means that $(y_j n')^{-1}$ fixes $\leftexp{y_i n}{\chi}$. Let therefore
\[
A = \Stab_{M}(\leftexp{y_i n}{\chi}).
\]
Clearly $N\cap \leftexp{y_i n}{H}$ is normal in $ \leftexp{y_i n}{H}$ and $\leftexp{y_i n}{H}$ fixes $ \leftexp{y_i n}{\chi}$,
so $ \leftexp{y_i n}{H} \subseteq A$. This inclusion gives that ${K_p} = \leftexp{y_i n}{H} N = A N $.
Hence, by the second isomorphism theorem, we have that $\lvert {K_p} : N \rvert = \lvert A : N\cap A \rvert
= \lvert \leftexp{y_i n}{H} : N\cap \leftexp{y_i n}{H} \rvert$. Moreover,
by Mackey's formula, $N\cap \leftexp{y_i n}{H} = N\cap A$ as $ \leftexp{y_i n}{\chi}$ induces irreducibly to $N$.
Thus
$A = \leftexp{y_i n}{H}$ because $ \leftexp{y_i n}{H} \subseteq A$ and the subgroup $N\cap \leftexp{y_i n}{H} = N \cap A$ has the same
(finite) index in both.
\end{proof}
We are now able to express $\ensuremath{\mathcal{T}}_{{L_p},{K_p},\Gamma_p}(\tic{\theta}) = c'$ in terms
of the pair $(H, \chi)$. Define
\begin{align*}
\widetilde{Z}_p &= \cocy{1}(L_p/N,\Func(K_p/N,W_{(p)}))\\
\widetilde{B}_p &= \cobo{1}(L_p/N,\Func(K_p/N,W_{(p)})).
\end{align*}
By Lemma~\ref{lem:Z-BU_H-finite}, every class in $\coho{1}(L_{p}/N,F_{K_{p}})$ has a representative in
$\widetilde{Z}_p$. Moreover, let $\delta \in
\widetilde{Z}_p \cap \cobo{1}(L_p/N, F_{{K_p}})$. Then there is an $\omega\in F_{K_p}$ such that for any $g\in L_p$,
$\delta(gN) = \leftexp{g}{\omega}\omega^{-1}$.
Since $\leftexp{g}{\omega}\omega^{-1}$ has values in $W_{(p)}$ we have $\leftexp{g}{\omega}\omega^{-1} =
(\leftexp{g}{\omega}\omega^{-1})_{(p)}$, so by the properties of $f_{(\ell)}$ just before Lemma~\ref{lem:pi_ell_mu-cobound},
\[
\delta(gN) = \leftexp{g}{\omega_{(p)}}\omega^{-1}_{(p)},
\]
hence $\delta \in \cobo{1}(L_p/N,\Func(K_p/N,W_{(p)}))$. Thus
\[
\widetilde{Z}_p \cap \cobo{1}(L_p/N, F_{{K_p}})= \cobo{1}(L_p/N,\Func(K_p/N,W_{(p)})).
\]
It follows that the inclusion of $\widetilde{Z}_p$
in $ \cocy{1}(L_p/N, F_{{K_p}})$ induces an isomorphism
\[
\coho{1}(L_{p}/N,F_{K_{p}})\cong \widetilde{Z}_p/\widetilde{B}_p.
\]
We need the following definition for ease of notation.
\begin{defn}
Let $i,j \in \{1, \dots, {\indexPN}\}$ and $k \in \{ 1, \dots, {\indexPN'}\}$. Let also $\psi \in \Lin(G)$, $\nu \in \Gamma_p$,
and $n, n' \in N$. We define $\mathbf{B}_{ijk}(H, \chi, \psi, \nu, n, n')$ to be the predicate
\begin{multline*}
\chi (t_{\kappa(k,j)}^{-1} d_{kj} \varphi_k^{-1} \big( n' \big) ) =\\
\mu (y_k N)(y_{\kappa(i,j)} N)\, \cdot\, \delta(y_k N)(y_{\kappa(i,j)} N)\, \cdot \, \nu(y_{\kappa(i,j)} N)\,\cdot\, \psi(y_j) \psi(n')\\
\cdot \chi \big( t_{\kappa(i,j)}^{-1} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n \big).
\end{multline*}
\end{defn}
\begin{prop}
\label{prop:Linearisation_twist}
Let $(H,\chi) \in X_K$ be a pair corresponding to $\theta \in \Irr_{K}(N)$. Let $c' \in \coho{1}{({L_p}/N, F_{{K_p}}/\Gamma_p)}$. Fix
$\mu\in \widetilde{Z}_p$ such that the $1$-cocycle in $\cocy{1}{({L_p}/N, F_{{K_p}}/\Gamma_p)}$
defined by $g \mapsto \mu(g N)\Gamma_p$
is in the class $c'$. Then $\ensuremath{\mathcal{T}}_{{L_p},{K_p},\Gamma_p}(\tic{\theta}) = c'$ if and only if there is a
coboundary $\delta \in \widetilde{B}_p$ such that, for all $k = 1, \dots, {\indexPN'}$, there are
\begin{enumerate}
\renewcommand{\theenumi}{\em\alph{enumi})}
\item $i \in \{ 1, \dots, {\indexPN} \} $ and $n \in N$,
\item
a character $\psi \in \Lin(G)$,
\item a homomorphism $\nu \in \Gamma_p$,
\end{enumerate}
such that, for all $ j \in \{ 1, \dots, {\indexPN}\}$ and $n' \in N$
\[
\mathbf{A}_{ij}(H, \chi, n, n') \wedge \mathbf{A}_{kj}(H, \chi, 1, n') \Longrightarrow \mathbf{B}_{ijk}(H, \chi, \psi, \nu, n, n').
\]
\end{prop}
\begin{proof}
Recall that we defined $\widetilde{f}_{H}:\cocy{2}(H/(N\cap H))\rightarrow \cocy{2}({K_p}/N)$
to be the isomorphism induced by pulling back cocycles along the isomorphism
${K_p}/N\rightarrow H/(N\cap H)$. Let $\alpha$ be the factor set of $\hat{\chi}$. Let
$\hat{\alpha}$ be the (unique) cocycle in $\coho{2}({K_p})$ descending to
$\widetilde{f}_{H}(\alpha)$.
The projective character
\begin{equation}
\label{eq:theta^_as_ind}
\hat{\theta} = \Ind_{H, \hat{\alpha}}^{{K_p}} \hat{\chi}
\end{equation}
is a strong extension of $\theta$. We therefore have $\ensuremath{\mathcal{T}}_{{L_p},{K_p},\Gamma_p}(\tic{\theta}) = c'$ if and only if
there is a coboundary $\delta \in \cobo{1}{({L_p}/N, F_{{K_p}})}$ such that, for all $\varphi_k \in C_{{L_p}}$, there are
a degree one character $\psi \in \Lin(G)$ and a homomorphism $\nu \in \Gamma_p$ such that
\[
\leftexp{y_k}{\hat{\theta}} = \hat{\theta} \cdot \mu (y_k N) \delta(y_k N) \nu \psi\lvert_{{K_p}}.
\]
Substituting \eqref{eq:theta^_as_ind} in the last equation we obtain
\begin{equation}
\label{eq:defining_mu_with_ind}
\leftexp{y_k}{\big( \Ind_{H, \hat{\alpha}}^{{K_p}} \hat{\chi} \big)} =
\big( \Ind_{H, \hat{\alpha}}^{{K_p}} \hat{\chi} \big)
\cdot \mu (y_k N) \delta(y_k N) \nu \psi\lvert_{{K_p}}.
\end{equation}
Let $\hat{\beta}$ be the factor set of $\leftexp{y_k}{\hat{\theta}}$. The left-hand side is equal to
\[
\Ind_{\varphi_k(H), \hat{\beta}}^{{K_p}} \leftexp{y_k}{\hat{\chi}}.
\]
The right-hand side is equal to
\[
\Ind_{H, \hat{\beta}}^{{K_p}} \big( \hat{\chi} \cdot (\mu (y_k N) \delta(y_k N) \nu \psi)\lvert_{H} \big).
\]
Note that $\mu (y_k N) \delta(y_k N) \nu$ has factor set $\hat{\beta}\hat{\alpha}^{-1}$ because of \eqref{eq:defining_mu_with_ind}.
By Mackey's intertwining formula we therefore have that equation~\eqref{eq:defining_mu_with_ind} holds if and only if
there are $n \in N$ and $i \in \{ 1, \dots, {\indexPN}\}$ such that
\begin{multline*}
\leftexp{y_k}{\hat{\chi}}\lvert_{(\varphi_k(H) \cap {\varphi_i(\leftexp{n}{H})})}
= \leftexp{y_i n}{\hat{\chi}}\lvert_{(\varphi_k(H) \cap{\varphi_i( \leftexp{n}{H)}})}
\, \leftexp{y_i n}{(\mu (y_k N) \delta(y_k N) \nu \psi)}
\lvert_{(\varphi_k(H) \cap {\varphi_i(\leftexp{n}{H})})}.
\end{multline*}
In other words, since $\nu$ and $\psi$ are fixed by ${K_p}$,
\begin{multline}
\label{eq:pred_before_B}
\leftexp{y_k}{\hat{\chi}}(y_j n') = \\
\leftexp{y_i n}{\hat{\chi}}(y_j n')
\cdot \mu (y_k N)(y_{\kappa(i,j)} N) \cdot \delta(y_k N) (y_{\kappa(i,j)} N) \cdot \nu(y_j N) \cdot \psi(y_j) \psi(n'),
\end{multline}
for all $n' \in N$ and $j \in \{ 1, \dots, {\indexPN}\}$ such that $y_j n' \in \varphi_k(H) \cap {\varphi_i(\leftexp{n}{H})}$.
By Lemma~\ref{lem:conjugating_chi^} we have
\begin{align*}
\leftexp{y_k}{\hat{\chi}}(y_j n') &= \chi (t_{\kappa(k,j)}^{-1} d_{kj} \varphi_k^{-1} \big( n' \big) )\\
\leftexp{y_i n}{\hat{\chi}}(y_j n') &= \chi \big( t_{\kappa(i,j)}^{-1} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij}
\varphi_i^{-1}( n' ) n \big).
\end{align*}
Substituting in \eqref{eq:pred_before_B} gives $\mathbf{B}_{ijk}(H, \chi, \psi, \nu, n, n')$.
Moreover, by Lemma~\ref{lem:cond_H}, $y_j n' \in \varphi_k(H)$ if and only if
${\varphi_{j}(\leftexp{n'}{n''})} \in \varphi_k(N\cap H)$ and
\[
\chi (\varphi_k^{-1}(\varphi_{j}(\leftexp{n'}{n''}))) = \chi (\varphi_k^{-1}(n''))
\]
for all $n'' \in \varphi_k(N\cap H)$. These two conditions form $\mathbf{A}_{kj}(H, \chi, 1, n')$. Similarly,
$y_j n' \in \varphi_i(\leftexp{n}{H})$ if and only if $\mathbf{A}_{ij}(H, \chi, n, n')$ holds.\par
We finish the proof by noticing that $\chi, \psi, \mu(y_k N)$ and $\nu$ all have values in $W_{(p)}$.
Thus if there is $\delta \in \cobo{1}{({L_p}/N, F_{{K_p}})}$ satisfying the conditions above, then necessarily
$\delta \in \widetilde{B}_p$. We may therefore restrict to $\delta \in \widetilde{B}_p$ in the
equivalence statement.
\end{proof}
\subsection{Definable sets for degree one characters of subgroups of $G$.}
We shall now show how to interpret predicates that involve quantifying on $\Lin(G)$ and other groups
of characters.
\subsubsection{Definable set for twisting characters.}
We show
that characters $\tau\in\Lin(N)$, such that $\tau = \psi\lvert_N$
for some $\psi \in \Lin(G)$, may be definably parametrised in $\struc_{\mathrm{an}}$, in a way that keeps track of the values
of $\psi(y_i)$ for $i =1, \dots, {\indexPN}$. Notice that, since ${K_p}$ is a pro-$p$ group, every $\psi\in \Lin(G)$
is such that $\psi(y_i) \in W_{(p)}$ for all $i =1, \dots, {\indexPN}$.
\begin{lem}
\label{lem:ext_N_G}
Let $\tau \in \Lin(N)$ and let $\sigma_1, \dots, \sigma_{\indexPN} \in W_{(p)}$.
Then $\tau = \psi\lvert_N$ for some $\psi \in \Lin(G)$ such that $\psi(y_i) = \sigma_i $
for $i =1, \dots, {\indexPN}$, if and only if there are $\sigma_{{\indexPN} + 1}, \dots, \sigma_{{m}} \in W_{(p)}$
such that for $i, j \in \{1, \dots, {m}\}$ and all $n, n'\in N$,
\[
\sigma_{\gamma(i,j)} \tau(a_{ij} \varphi_j^{-1}(n) n') = \sigma_i \sigma_j \tau(n) \tau (n'),
\]
where $\gamma$ and $a_{ij}$ are as in \eqref{eq:gamma_ext}.
\end{lem}
\begin{proof}
We have
\begin{equation}
\label{eq:multiplying_ys}
y_i n y_j n'= y_i y_j y_j^{-1} n y_j n' = y_i y_j \varphi_j^{-1}( n ) n' = y_{\gamma(i,j)} a_{ij} \varphi_j^{-1}( n ) n' .
\end{equation}
Assume that $\psi \in \Lin(G)$ restricts to $\tau$. Then, since ${K_p}$ is a pro-$p$
group, $\psi$ and $\psi_{(p)}$ restrict to the same character of ${K_p}$ (hence also $\psi_{(p)}$ restricts to $\tau$).
Set $\sigma_i = \psi_{(p)}(y_i)$ (for $i = 1, \dots, {m}$).
On the one hand, $\psi_{(p)}(y_i n y_j n') = \sigma_i \sigma_j \tau(n) \tau (n') $
and, on the other hand,
\[
\psi_{(p)}(y_i n y_j n') = \sigma_{\gamma(i,j)} \tau(a_{ij} \varphi_j^{-1}( n ) n' )
\]
by \eqref{eq:multiplying_ys}.\par
%
Conversely, assume there exist $\sigma_1, \dots, \sigma_{{m}} \in W_{(p)}$ such that
for $i, j \in \{1, \dots, {m}\}$, and all $n, n'\in N$,
$ \sigma_{\gamma(i,j)} \tau(a_{ij} \varphi_j^{-1}(n) n') = \sigma_i \sigma_j \tau(n) \tau (n')$. Then
\[
\psi (y_i n) = \sigma_i \tau(n) \qquad n \in N,\, i = 1, \dots, {m}
\]
defines a homomorphism $G \to W_{(p)}$. Indeed, we have
\[
\sigma_{\gamma(i,j)} \tau(a_{ij} \varphi_j^{-1}(n) n') = \sigma_i \sigma_j \tau(n) \tau (n') = \psi(y_i n)\psi(y_jn').
\]
Moreover, $\psi(y_i n y_j n') = \sigma_{\gamma(i,j)} \tau(a_{ij} \varphi_j^{-1}(n) n')$
by \eqref{eq:multiplying_ys}. Thus we get $\psi(y_i n y_j n') = \psi(y_i n)\psi(y_j n')$. Clearly $\psi\lvert_N = \tau$ and
we conclude.
\end{proof}
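To illustrate the criterion in a minimal (hypothetical) example, take $G = \ensuremath{\mathbb{Z}}_p$ and $N = p\ensuremath{\mathbb{Z}}_p$, with $y_i = i-1$ for $i = 1, \dots, p$, so that ${m} = p$ and every $\varphi_j$ is the identity. Then $\gamma(i,j) - 1 \equiv (i-1)+(j-1) \bmod p$ and $a_{ij} = (i-1)+(j-1)-(\gamma(i,j)-1) \in p\ensuremath{\mathbb{Z}}_p$, and the condition of the lemma reduces to $\sigma_{\gamma(i,j)}\tau(a_{ij}) = \sigma_i\sigma_j$, which says precisely that the values $\sigma_i$ patch together with $\tau$ to a character of $\ensuremath{\mathbb{Z}}_p$.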
Recall that we have an isomorphism $\iota : \ensuremath{\mathbb{Q}}_p/ \ensuremath{\mathbb{Z}}_p \to W_{(p)}$.
Let $\tuple{\lambda}_0 \in \M_{(d + r) \times d}(\ensuremath{\mathbb{Q}}_p)$ be a matrix whose rows correspond to the $\ensuremath{\mathbb{Z}}_p$-coordinates of a
basis of $K_p \in \ensuremath{\mathcal{H}}(K_p)$. Let $\ensuremath{\mathcal{D}}^G$ be the projection on the $\tuple{\xi}$-component of the set
\[
\{ (\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^1 \mid \tuple{\lambda} = \tuple{\lambda}_0 \}
\]
where $\ensuremath{\mathcal{D}}^1$ is as in
Proposition~\ref{pro:X_definable} for $K = G$.
Clearly $\ensuremath{\mathcal{D}}^G$ is a definable subset of
$\ensuremath{\mathbb{Q}}_p^d$ in $\struc_{\mathrm{an}}$. By definition, the first $d$ rows of $\tuple{\lambda}_0$ are the $\ensuremath{\mathbb{Z}}_p$-coordinates
of a good basis of $N$. Thus we have that $\ensuremath{\mathcal{D}}^G$ is precisely the set of $d$-tuples $\tuple{\xi}$
such that the function
\[
\{n_1,\dots, n_d\} \longrightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p, \qquad n_i\longmapsto \xi_i + \ensuremath{\mathbb{Z}}_p,
\]
extends to a (necessarily unique) continuous homomorphism
$\tau : N \longrightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ such that $\iota \circ \tau = \psi\lvert_N$ for some
$\psi \in \Lin(G)$.
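For instance (a toy case to fix ideas), if $N \cong \ensuremath{\mathbb{Z}}_p$ with good basis $\{n_1\}$, then every continuous homomorphism $N \to \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ has the form $n_1^a \longmapsto a\xi_1 + \ensuremath{\mathbb{Z}}_p$ ($a \in \ensuremath{\mathbb{Z}}_p$) for some $\xi_1 \in \ensuremath{\mathbb{Q}}_p$, determined up to an element of $\ensuremath{\mathbb{Z}}_p$; the set $\ensuremath{\mathcal{D}}^G$ then carves out those $\xi_1$ whose associated character extends to $G$.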
\begin{prop}
\label{pro:X_definable-lin}
Let $\ensuremath{\mathcal{D}}^G_{K_p}$ be the set of tuples of the form $(\xi_1, \dots, \xi_d, \sigma_1, \dots, \sigma_{\indexPN})
\in \ensuremath{\mathbb{Q}}_p^{d + {\indexPN}}$ such that the function
$\{y_i n_j \mid i = 1, \dots, {\indexPN},\, j = 1, \dots, d \}\rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$
\[
y_i n_j \longmapsto \sigma_i + \xi_j + \ensuremath{\mathbb{Z}}_p,
\]
extends to a (necessarily unique) continuous homomorphism $\sigma: G \to \ensuremath{\mathbb{Q}}_p / \ensuremath{\mathbb{Z}}_p$.
Then $\ensuremath{\mathcal{D}}^G_{K_p}$ is a definable subset of $\ensuremath{\mathbb{Q}}_p^{d + {\indexPN}}$ in $\struc_{\mathrm{an}}$.
\end{prop}
\begin{proof}
By Lemma~\ref{lem:ext_N_G}, a tuple $(\xi_1, \dots, \xi_d, \sigma_1, \dots, \sigma_{\indexPN})$ is in $\ensuremath{\mathcal{D}}^G_{K_p}$ if and only
if
\begin{enumerate}
\item \label{pro:X_definable-lin_pf_1}
$\tuple{\xi} = (\xi_1, \dots, \xi_d) \in \ensuremath{\mathcal{D}}^G$,
\item \label{pro:X_definable-lin_pf_2}
and, denoting by $\tau: N \to \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ the homomorphism defined by $\tuple{\xi}$, there are
$\sigma_{{\indexPN} + 1}, \dots, \sigma_{m} \in \ensuremath{\mathbb{Q}}_p$ such that for
$i, j \in \{1, \dots, {m}\}$ and all $n, n'\in N$,
\[
(\sigma_{\gamma(i,j)} - \sigma_i - \sigma_j) + \ensuremath{\mathbb{Z}}_p =
\tau(n) + \tau (n') - \tau(a_{ij} \varphi_j^{-1}(n) n'),
\]
where $\gamma$ and $a_{ij}$ are as in \eqref{eq:gamma_ext}.
\end{enumerate}
Clearly, we have that \ref{pro:X_definable-lin_pf_1} is a definable condition because $\ensuremath{\mathcal{D}}^G$ is definable.
By the definable interpretation of $\mathcal{M}_N$ in $\struc_{\mathrm{an}}$ and using $\tuple{\xi}$ to express the values of $\tau$,
we see that also \ref{pro:X_definable-lin_pf_2} is a definable $\lan_{\mathrm{an}}$-condition on
$ (\xi_1, \dots, \xi_d, \sigma_1, \dots, \sigma_{\indexPN})$.
\end{proof}
\subsubsection{Definable set for $\Gamma_{{K_p}, \tic{\theta}}$.}
Let $(\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^c$ and let
$(H, \chi) = \Psi (\tuple{\lambda}, \tuple{\xi})$. Set $\theta = \Ind_{N \cap H}^N \chi$. We shall now
produce a definable set that will be used to interpret predicates quantifying over $\Gamma_{{K_p}, \tic{\theta}}$.
\begin{defn}
We define $\ensuremath{\mathcal{D}}_{{K_p}/N}$ as the set of tuples $(\sigma_1, \dots, \sigma_{\indexPN}) \in \ensuremath{\mathbb{Q}}_p^{{\indexPN}}$
such that the function ${K_p}/N \rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ defined by
\[
y_i N \longmapsto \sigma_i + \ensuremath{\mathbb{Z}}_p \qquad \text{for } i \in \{ 1, \dots, {\indexPN}\},
\]
is a homomorphism.
\end{defn}
Clearly $\tuple{\sigma} \in \ensuremath{\mathcal{D}}_{{K_p}/N}$ if and only if, for all $i, j \in \{1, \dots, {\indexPN}\}$, $\sigma_{\gamma(i,j)} =
\sigma_i + \sigma_j \mod \ensuremath{\mathbb{Z}}_p$. Thus $\ensuremath{\mathcal{D}}_{{K_p}/N}$ is a definable set.
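For example, if ${K_p}/N$ is cyclic of order $p$ with $y_iN = (y_2N)^{i-1}$ for $i = 1, \dots, {\indexPN}$ (a hypothetical labelling, purely to fix ideas), these conditions amount to $p\sigma_2 \in \ensuremath{\mathbb{Z}}_p$ and $\sigma_i \equiv (i-1)\sigma_2 \bmod \ensuremath{\mathbb{Z}}_p$ for all $i$: a homomorphism to $\ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ is determined by the image of a generator, which must be killed by $p$.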
\begin{lem}
\label{lem:def_Gamma}
Let $\ensuremath{\mathcal{D}}_{{K_p}}(\tuple{\lambda}, \tuple{\xi})$ be the set of tuples of the form
$(\nu_1, \dots, \nu_{\indexPN}) \in \ensuremath{\mathbb{Q}}_p^{{\indexPN}}$ such that the function
$\bar{\nu} : {K_p}/N \rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ defined by
\[
y_i N \longmapsto \nu_i + \ensuremath{\mathbb{Z}}_p\qquad \text{for } i \in \{ 1, \dots, {\indexPN}\},
\]
is a homomorphism such that $\iota \circ \bar{\nu} \in \Gamma_{{K_p},\tic{\theta}}$.
Then $\ensuremath{\mathcal{D}}_{{K_p}}(\tuple{\lambda}, \tuple{\xi})$ is a definable subset of $\ensuremath{\mathbb{Q}}_p^{{\indexPN}}$ in $\struc_{\mathrm{an}}$.
\end{lem}
\begin{proof}
We start by expressing the definition of $\Gamma_{{K_p}, \tic{\theta}}$ in terms of $(H, \chi)$.
To do this, we need to fix a strong extension $\hat{\theta}$ of $\theta$ (all strong extensions are equally good as
the definition of $\Gamma_{{K_p}, \tic{\theta}}$ does not depend on this choice). We choose the strong extension obtained
by inducing to ${K_p}$ the projective character $\hat{\chi}$ of $H$ defined by
\[
\hat{\chi}(y_{i}t_{i}n)=\chi(n),
\]
for all $n\in N\cap H$ and $i \in \{ 1, \dots, {\indexPN}\}$. To say that
$\nu \in \Lin({K_p}/N)$ belongs to $\Gamma_{{K_p}, \tic{\theta}}$ is to say that there is $\varepsilon \in \Lin(G)$ such that
\[
\hat{\theta}\nu = \hat{\theta} \varepsilon\lvert_{{K_p}}.
\]
By Mackey's formula, this happens if and only if there exist $\varepsilon \in \Lin(G)$, $i \in \{ 1, \dots, {\indexPN} \}$ and $n \in N$ such that
\[
(\leftexp{y_i n}{\hat{\chi}}\,\nu)\lvert_{H \cap \leftexp{y_i n}{H}}
= (\hat{\chi}\, \varepsilon)\lvert_{H \cap \leftexp{y_i n}{H}}.
\]
In other words, if and only if there exist $\varepsilon \in \Lin(G)$, $i \in \{ 1, \dots, {\indexPN} \}$ and $n \in N$ such that for all $j \in \{ 1,\dots, {\indexPN}\}$
and all $n' \in N$ we have
\begin{equation}
\label{eq:chi_has_Gamma}
y_j n' \in H \cap \leftexp{y_i n}{H} \Longrightarrow \leftexp{y_i n}{\hat{\chi}} (y_j n') \nu(y_j) =
\hat{\chi}(y_j n') \varepsilon(y_j) \varepsilon(n').
\end{equation}
We now rewrite \eqref{eq:chi_has_Gamma} in a way that involves only quantifying over $N$ and $\Lin(G)$, conjugation by the chosen
coset representatives of $N$ in ${K_p}$, and values of $\chi$ on $N\cap H$. First we observe that,
by Lemma~\ref{lem:cond_H} we may replace the antecedent with the predicate
\[
\mathbf{A}_{1j}(H, \chi, 1, n') \wedge \mathbf{A}_{ij}(H, \chi, n, n').
\]
Secondly, by Lemma~\ref{lem:conjugating_chi^}, we may replace the consequent in
\eqref{eq:chi_has_Gamma} by the predicate $\mathbf{C}_{ij}(H, \chi, \nu, \varepsilon, n, n')$ defined as
\[
\chi \big( t_{\kappa(i,j)}^{-1} \varphi_{\kappa(i,j)}^{-1}(n^{-1})d_{ij} \varphi_i^{-1}( n' ) n \big) \nu(y_j) = \chi(t_j^{-1} n') \varepsilon(y_j)
\varepsilon(n').
\]
We obtain that $\nu \in \Gamma_{{K_p}, \tic{\theta}}$ if and only if the following predicate is true
\begin{multline}
\label{eq:predicate_Gamma}
\exists \varepsilon \in \Lin(G) :
\bigvee_{i \in \{1, \dots, {\indexPN}\} } \Big( \exists n \in N\\
\bigwedge_{j \in \{ 1, \dots, {\indexPN}\}}
\big( \forall n'\in N : \mathbf{A}_{1j}(H, \chi, 1, n') \wedge \mathbf{A}_{ij}(H, \chi, n, n')
\Longrightarrow \mathbf{C}_{ij}(H, \chi, \nu, \varepsilon, n, n') \big) \Big).
\end{multline}
The last predicate may be written as an $\lan_{\mathrm{an}}$-condition on $\tuple{\nu}$ and $(\tuple{\lambda} , \tuple{\xi})$:
\begin{itemize}
\item[-] We use the interpretation of $\mathcal{M}_N$ in $\struc_{\mathrm{an}}$ to express elements in $N$.
\item[-] We use tuples in $\ensuremath{\mathcal{D}}_{{K_p}/N}$ to express the values of $\nu$.
\item[-] We interpret $\exists \varepsilon \in \Lin(G)$ as
\[
\exists (\xi_1, \dots, \xi_d, \sigma_1, \dots, \sigma_{\indexPN}) \in \ensuremath{\mathcal{D}}^{G}_{K_p},
\]
using $(\xi_1, \dots, \xi_d)$ to express $\varepsilon$ on $N$ and $(\sigma_1, \dots, \sigma_{\indexPN})$
to express
\[
\varepsilon(y_1),\dots, \varepsilon(y_{\indexPN}).
\]
\item[-] We interpret multiplication in $W_{(p)}$ as addition in $\ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ and equality in $W_{(p)}$
as equality in $\ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ via $\iota$.
\end{itemize}
Writing \eqref{eq:predicate_Gamma} with these rules clearly gives an $\lan_{\mathrm{an}}$-condition and we conclude.
\end{proof}
\subsection{Definable sets for $\widetilde{Z}_p$ and $\widetilde{B}_p$}
In this subsection we describe the definable sets used to interpret predicates
quantifying over $\widetilde{Z}_p$ and $\widetilde{B}_p$ in $\struc_{\mathrm{an}}$.
To this end, and for later use, we need to extend \eqref{eq:gamma}; that is, we define a
function $\gamma: \{1,\dots,{m}\}^2\rightarrow \{1,\dots,{m}\}$ and elements $a_{ij} \in N$ such that
\begin{equation}
\label{eq:gamma_ext}
y_{i}y_{j} = y_{\gamma(i,j)}a_{ij}.
\end{equation}
\begin{lem}
\label{lem:schur_def_sets-twist}
Let $\Omega$ be the surjective map from the set of matrices $\M_{{\indexPN'} \times {\indexPN}}(\ensuremath{\mathbb{Q}}_p)$ to
the set of functions $L_p/N \rightarrow \Func(K_p/N,W_{(p)})$,
defined by
\[
\Omega((z_{ij})) = \big[ y_iN \longmapsto \iota \circ f_i\big],\quad \text{for}\ i\in \lbrace 1, \dots, {\indexPN'} \rbrace,
\]
where for each $i$, $f_i: {K_p}/N \to \ensuremath{\mathbb{Q}}_p / \ensuremath{\mathbb{Z}}_p$ is the function $y_j N \mapsto z_{ij} + \ensuremath{\mathbb{Z}}_p$, for $j\in \lbrace 1, \dots, {\indexPN} \rbrace$.
Define $\widetilde{\mathcal{Z}} = \Omega^{-1}(\widetilde{Z}_p)$ and $\widetilde{\mathcal{B}} = \Omega^{-1}(\widetilde{B}_p)$.
Then $\widetilde{\mathcal{Z}}$ and
$\widetilde{\mathcal{B}}$ are definable in $\struc_{\mathrm{an}}$.
\end{lem}
\begin{proof}
We prove that the set $\widetilde{\mathcal{Z}}$ is definable. Let $\mathbf{z} \in\M_{{\indexPN'}\times {\indexPN}}(\ensuremath{\mathbb{Q}}_p)$. Then
$\Omega(\mathbf{z}) \in \widetilde{Z}_p$ if and only if the following definable predicate in $\struc_{\mathrm{an}}$ holds:
for all $i,j \in \lbrace 1, \dots, {\indexPN'} \rbrace$ and $k \in \lbrace 1, \dots, {\indexPN} \rbrace$,
\[
z_{\gamma(i,j)\, k} = z_{ik} + z_{j\, \kappa(i,k)} \mod \ensuremath{\mathbb{Z}}_p.
\]
This is obtained by pulling back the $1$-cocycle identity through $\Omega$.
More precisely, by definition, $\Omega(\tuple{z})$ is a $1$-cocycle if and only if for all
$i,j \in \lbrace 1, \dots, {\indexPN'} \rbrace$
\[
\Omega(\tuple{z})(y_i y_jN) = \Omega(\tuple{z})(y_iN)\, \leftexp{y_i}{\Omega(\tuple{z})(y_jN)},
\]
that is, if and only if $f_{\gamma(i,j)} = f_i + \leftexp{y_i}{f_j}$. Since
$\leftexp{y_i}{f_j}(y_kN) = f_j(y_i^{-1}y_k y_iN) = f_j(y_{\kappa(i,k)}N)$, this in turn is equivalent to the condition that,
for all $k \in \{ 1, \dots, {\indexPN}\}$, we have
\[
f_{\gamma(i,j)}(y_kN) = f_i(y_kN) + f_j(y_{\kappa(i,k)}N),
\]
or equivalently, $z_{\gamma(i,j)\, k} = z_{ik} + z_{j\,\kappa(i,k)} \mod \ensuremath{\mathbb{Z}}_p$.\par
We prove that $\widetilde{\mathcal{B}}$ is definable. We need to express in coordinates the condition for being
a $1$-coboundary. To this end, we parametrise the functions $K_p/N\rightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$
by the ${\indexPN}$-tuples $(b_1,\dots,b_{\indexPN})\in \ensuremath{\mathbb{Q}}_p^{\indexPN}$
representing their values on $y_1N,\dots, y_{{\indexPN}}N$. Writing the $1$-coboundary condition
in terms of $b_1,\dots,b_{\indexPN}$ we obtain that $\Omega(\mathbf{z}) \in \widetilde{B}_p$
if and only if there is $(b_1, \dots, \allowbreak b_{\indexPN}) \in \ensuremath{\mathbb{Q}}_p^{{\indexPN}}$
such that for all $i\in \lbrace 1, \dots, {\indexPN'} \rbrace$ and $j \in \{ 1, \dots, {\indexPN}\}$,
\[
z_{ij} = b_{\kappa(i,j)} - b_j \mod \ensuremath{\mathbb{Z}}_p.
\]
This is a definable predicate in $\struc_{\mathrm{an}}$ and we conclude.
\end{proof}
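As a quick sanity check of the coboundary condition: if conjugation by each $y_i$ with $i \leq {\indexPN'}$ fixes every coset $y_jN$ of $N$ in ${K_p}$, so that $\kappa(i,j) = j$, then $z_{ij} = b_{\kappa(i,j)} - b_j \equiv 0 \bmod \ensuremath{\mathbb{Z}}_p$; that is, $\widetilde{B}_p$ is trivial, as expected for $1$-coboundaries with respect to a trivial action.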
\subsection{Definability of the predicate $\ensuremath{\mathcal{T}}_{L_p, K_p, \Gamma_p}(\tic{\theta}) = c'$}
We are now ready to give an interpretation of $\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$ in the structure $\struc_{\mathrm{an}}$. In this subsection we
will construct a definable set $\ensuremath{\mathcal{D}}^{c,c'}$ corresponding to $\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$ up to a definable equivalence
relation that we shall introduce in the next subsection. This
correspondence will be explicit and we will have a definable function (also introduced in the next
subsection) giving the degree of the corresponding character for every element in $\ensuremath{\mathcal{D}}^{c,c'}$.\par
We start by stating the analogue of Lemma~\ref{lem:conj_induced}. It is proved using the fact that
twisting by degree one characters and induction are compatible (see for instance
the proof of \cite[Lemma~8.6(b)]{hrumar2015definable}).
\begin{lem}
\label{lem:conj_induced_tw}
Let $M$ be a finite index subgroup of $N$, $\chi \in \Lin(M)$ and $\psi \in \Lin(G)$.
Then, for all $g \in G$,
\[
\leftexp{g}{\big(\mathrm{Ind}_M^N \chi\big)} \psi\lvert_{N}
= \mathrm{Ind}_{\leftexp{g}{M}}^N(\leftexp{g}{\chi} \psi\lvert_{\leftexp{g}{M}}).
\]
Moreover if $M'$ is another finite index subgroup of $N$ and $\chi, \chi'$ are degree one characters
of $M$ and $M'$ respectively, such that $\Ind_M^N \chi$ and $\Ind_{M'}^N \chi'$ are irreducible, then
$(\Ind_M^N \chi )\psi\lvert_N = \Ind_{M'}^N \chi'$ if and only if there exists $g \in N$ such that
$(\Res^{\leftexp{g}{M}}_{\leftexp{g}{M} \cap M'} \leftexp{g}{\chi})
\psi\lvert_{\leftexp{g}{M} \cap M'} = \Res^{M'}_{\leftexp{g}{M} \cap M'} \chi'$.
\end{lem}
\begin{prop}
\label{pro:X_definable_tw}
Let $\ensuremath{\mathcal{D}}^{c,c'}$ be the set of pairs
$(\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^{c}$ with the property that, for $(H, \chi) = \Psi (\tuple{\lambda}, \tuple{\xi})$,
$\chi$ induces to a character $\theta$ of $N$ such that $\tic{\theta} \in \twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$. Then $\ensuremath{\mathcal{D}}^{c,c'}$ is a definable
subset of $\ensuremath{\mathbb{Q}}_p^{d(d+r+1)}$
in $\struc_{\mathrm{an}}$.
\end{prop}
\begin{proof}
Since $(\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^{c}$, we have that $\tic{\theta} \in \twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$ if and only if
\begin{enumerate}
\item \label{pro:X_definable_tw_1}
$\Stab_G( \tic{\theta} ) = L$
%
\item \label{pro:X_definable_tw_2}
$\Gamma_{K, \tic{\theta}} = \Gamma$
%
\item \label{pro:X_definable_tw_3}
$\ensuremath{\mathcal{T}}_{L_p, K_p, \Gamma_p}(\tic{\theta}) = c'$.
\end{enumerate}
Let ${u'} = \lvert L : N \rvert$. Up to reordering $(y_1,\dots, y_{m})$, we may assume that
\[
(y_1,\dots,y_{\indexPN}, y_{{\indexPN} +1}, \dots, y_{u'})
\]
is a left transversal of $N$ in $L$. Accordingly we will then have
\[
C_L = \{ \varphi_i \mid i = 1, \dots, {u'} \}
\]
for the set of automorphisms of $N$ consisting of conjugation by $y_1, \dots, y_{u'}$.
By Lemma~\ref{lem:conj_induced_tw},
$\Stab_G(\tic{\theta}) = L $ if and only if the following statement holds:
\begin{equation}
\label{eq:stab_is_L}
\forall\, \varphi \in C_G\ : \Big(\exists \psi\in \Lin(G) \big( (\Ind_{N\cap H}^{N}\chi) \psi\lvert_N =
\Ind_{\varphi (N\cap H)}^{N}\chi \circ\varphi^{-1} \big )
\Longleftrightarrow \varphi \in C_L \Big).
\end{equation}
Fix $\varphi \in C_G$. Lemma~\ref{lem:conj_induced_tw} with
$M = N \cap H$, $M' = \varphi(N \cap H)$ and $\chi' = \chi \circ\varphi^{-1}$ implies that
$(\Ind_{N\cap H}^{N}\chi) \psi\lvert_N=\Ind_{\varphi(N\cap H)}^{N}\chi \circ\varphi^{-1}$ if and only if
\begin{multline*}
\exists\, g \in N,\ \forall\, h\in N\cap H:
\big(\leftexp{g}{h} \in \varphi(N \cap H)\Longrightarrow
\chi(h) \psi(h)
= \chi \circ \varphi^{-1}(\leftexp{g}{h})\big).
\end{multline*}
We interpret $\exists \psi \in \Lin(G)$ as
\[
\exists (\psi_1, \dots, \psi_d) \in \ensuremath{\mathcal{D}}^{G},
\]
using $(\psi_1, \dots, \psi_d)$ to express $\psi$ on $N$. We interpret multiplication in $W_{(p)}$
as addition in $\ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p$ and equality in $W_{(p)}$ as equality in $\ensuremath{\mathbb{Q}}_p / \ensuremath{\mathbb{Z}}_p$. Substituting in \eqref{eq:stab_is_L} shows that
there is an $\lan_{\mathrm{an}}$-condition on $(\tuple{\lambda}, \tuple{\xi})$ expressing $\Stab_G (\tic{\theta}) = L$ because $\ensuremath{\mathcal{D}}^{G}$
is a definable set.
\par
%
Next we show how to express $\Gamma_{K, \tic{\theta}} = \Gamma$ with an $\lan_{\mathrm{an}}$-condition on $(\tuple{\lambda}, \tuple{\xi})$.
First of all we notice that, by Proposition~\ref{prop:red_Gamma},
$\Gamma_{{K_p}, \tic{\theta}} = \Gamma_p$ if and only if $\Gamma_{K, \tic{\theta}} = \Gamma$. Thus it suffices to show that
$\Gamma_{{K_p}, \tic{\theta}} = \Gamma_p$ gives rise to an $\lan_{\mathrm{an}}$-condition on $(\tuple{\lambda}, \tuple{\xi})$.
This is done using the definable set $\ensuremath{\mathcal{D}}_{{K_p}}(\tuple{\lambda}, \tuple{\xi})$
in Lemma~\ref{lem:def_Gamma}. Indeed, $\Gamma_{{K_p}, \tic{\theta}} = \Gamma_p$ if and only if, for all $\nu \in \Lin({K_p}/N)$,
\begin{equation}
\label{eq:Gamma_Kp_is_Gamma_p}
\nu \in \Gamma_{{K_p}, \tic{\theta}} \iff \nu \in \Gamma_p.
\end{equation}
Let $\mathcal{Q}_{{K_p}/N}$ be a set of representatives of the equivalence classes $\bmod$ $\ensuremath{\mathbb{Z}}_p$ in $\ensuremath{\mathcal{D}}_{{K_p}/N}$.
The set $\mathcal{Q}_{{K_p}/N}$ is finite and therefore it is definable in $\struc_{\mathrm{an}}$. Notice that for all $\nu \in \Lin({K_p}/N)$, the
set $\mathcal{Q}_{{K_p}/N}$ contains a unique tuple $\tuple{\nu} \in \ensuremath{\mathcal{D}}_{{K_p}/N}$ such that $\iota^{-1} \circ \nu$ is the function
\[
{K_p}/N \longrightarrow \ensuremath{\mathbb{Q}}_p/\ensuremath{\mathbb{Z}}_p, \qquad y_i N \longmapsto \nu_i + \ensuremath{\mathbb{Z}}_p.
\]
Let $\tuple{\Gamma}_p$ be the subset of $\mathcal{Q}_{{K_p}/N}$ consisting of the tuples that correspond to
the homomorphisms in $\Gamma_p$.
This allows us to express \eqref{eq:Gamma_Kp_is_Gamma_p}
as an $\lan_{\mathrm{an}}$-condition on $(\tuple{\lambda}, \tuple{\xi})$, namely
\[
\forall \tuple{\nu} \in \mathcal{Q}_{{K_p}/N}: \tuple{\nu} \in \ensuremath{\mathcal{D}}_{{K_p}}(\tuple{\lambda}, \tuple{\xi}) \iff \tuple{\nu} \in \tuple{\Gamma}_p.
\]
We prove that \ref{pro:X_definable_tw_3} is given by an $\lan_{\mathrm{an}}$-condition (on $(\tuple{\lambda}, \tuple{\xi})$). Fix
$\mu\in \widetilde{Z}_p$ such that the $1$-cocycle on ${L_p}/N$ defined by $g \mapsto \mu(g N)\Gamma_p$
is in the class $c'$. By Proposition~\ref{prop:Linearisation_twist},
$\ensuremath{\mathcal{T}}_{L_p, K_p, \Gamma_p}(\tic{\theta}) = c'$ if and only if
\begin{multline*}
\exists \delta\in \widetilde{B}_p:
\bigwedge_{k \in \{ 1, \dots, {\indexPN'}\}} \exists n, n' \in N\, \exists \psi \in \Lin(G)\, \exists \nu \in \Gamma_p : \\
\bigvee_{i, j \in \{ 1, \dots, {\indexPN}\}}
\Big( \mathbf{A}_{kj}(H, \chi, 1, n') \wedge \mathbf{A}_{ij}(H, \chi, n, n') \Longrightarrow
\mathbf{B}_{ijk}(H, \chi, \psi, \nu, n, n')\Big).
\end{multline*}
Now it suffices to write the last predicate as $\lan_{\mathrm{an}}$-condition on $(\tuple{\lambda} , \tuple{\xi})$:
\begin{itemize}
\item[-] We use the interpretation of $\mathcal{M}_N$ in $\struc_{\mathrm{an}}$ to express elements and group operations in $N$.
\item[-] We use $\tuple{\xi}$ to express the values of $\chi$, as explained in Proposition~\ref{pro:X_definable}.
\item[-] We interpret the predicate $\exists \delta\in \widetilde{B}_p$
as $\exists \tuple{\delta} \in \tic{\ensuremath{\mathcal{B}}}$ and we use $\delta_{k\, {\kappa(i,j)}}$ to express the value
$\delta(y_k N)(y_{\kappa(i,j)} N)$.
\item[-] By \ref{pro:X_definable_tw_2} we replace $\exists \nu \in \Gamma_p$ with
$\exists \nu \in \Gamma_{{K_p}, \tic{\theta}}$ and we interpret the latter as $\exists \tuple{\nu} \in
\ensuremath{\mathcal{D}}_{{K_p}}(\tuple{\lambda}, \tuple{\xi})$, using $(\nu_1, \dots, \nu_{\indexPN})$ to express the
values of $\nu$.
\item[-] We interpret $\exists \psi \in \Lin(G)$ as
\[
\exists (\tau_1, \dots, \tau_d, \sigma_1, \dots, \sigma_{\indexPN}) \in \ensuremath{\mathcal{D}}_{{K_p}}^{G},
\]
using $(\tau_1, \dots, \tau_d)$ to express $\psi$ on $N$ and $(\sigma_1, \dots, \sigma_{\indexPN})$
to express $\psi(y_1),\dots,\allowbreak \psi(y_{\indexPN})$.
\item[-] We interpret multiplication and equality in $W_{(p)}$ via $\iota$.
\end{itemize}
This concludes the proof because
the sets $\tic{\ensuremath{\mathcal{B}}}$, $\ensuremath{\mathcal{D}}_{{K_p}}(\tuple{\lambda}, \tuple{\xi})$, and $\ensuremath{\mathcal{D}}_{{K_p}}^{G}$ are definable in $\struc_{\mathrm{an}}$.
\end{proof}
Proposition~\ref{pro:X_definable_tw} shows that $\Psi: (\tuple{\lambda}, \tuple{\xi})\mapsto (H, \chi)$ is a surjection from $\ensuremath{\mathcal{D}}^{c,c'}$ to the set of pairs $(H, \chi)\in X_K$ such that $\theta = \Ind_{N\cap H}^N \chi$ satisfies $\tic{\theta}\in \twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$.
\subsection{Finishing the proof of Theorem~\ref{thm:Main-twist}}
We write the partial zeta series as a generating function enumerating the equivalence classes of a family
of definable equivalence relations. We conclude rationality of the partial twist zeta series
by Theorem~\ref{thm:rational_series}. Theorem~\ref{thm:Main-twist} then follows from
Proposition~\ref{prop:partial-Main-twist}.\par
We start by constructing a definable equivalence
relation on $\ensuremath{\mathcal{D}}^{c,c'}$ whose equivalence classes will be in bijection with
$\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$.
Let $(\tuple{\lambda}, \tuple{\xi}), (\tuple{\lambda}', \tuple{\xi}') \in \ensuremath{\mathcal{D}}^{c,c'}$ and
let $(H, \chi) = \Psi (\tuple{\lambda}, \tuple{\xi})$ and $(H', \chi') = \Psi (\tuple{\lambda}', \tuple{\xi}')$. We define an
equivalence relation $\widetilde{\mathbin{\mathcal{E}}}$ on $\ensuremath{\mathcal{D}}^{c,c'}$ by
\[
((\tuple{\lambda}, \tuple{\xi}), (\tuple{\lambda}', \tuple{\xi}')) \in \widetilde{\mathbin{\mathcal{E}}} \Longleftrightarrow
\exists\, \psi\in \Lin(G),\ \mathrm{Ind}_{N\cap H}^N \chi = \mathrm{Ind}_{N\cap H'}^N (\chi'\psi|_N).
\]
\begin{lem}
The relation $\widetilde{\mathbin{\mathcal{E}}}$ is definable in $\struc_{\mathrm{an}}$.
\end{lem}
\begin{proof}
Let $(H, \chi), (H', \chi')$ be as above. By Lemma~\ref{lem:conj_induced_tw}, we have that
$\Ind_{N\cap H}^N \chi = \allowbreak \Ind_{N\cap H'}^N (\chi'\psi|_N)$ for some $\psi \in \Lin(G)$ if and only if
\[
\exists\, \psi \in \Lin(G),\ \exists\, g \in N,\ \forall\, h\in N \cap H\ \left(\leftexp{g}{h} \in N \cap H' \Longrightarrow \chi(h) = (\chi'\psi|_N)(\leftexp{g}{h})\right).
\]
Using Proposition~\ref{pro:X_definable-lin} to parametrise $\psi|_N$ for $\psi\in \Lin(G)$ by points
in $\ensuremath{\mathcal{D}}_{K_p}^G$ and writing the above formula in the $\ensuremath{\mathbb{Z}}_p$-coordinates of $N$ we obtain an $\lan_{\mathrm{an}}$-formula defining
$\widetilde{\mathbin{\mathcal{E}}}$. Note that, as done before, we interpret multiplication and equality in $W_{(p)}$
via $\iota^{-1}$.
\end{proof}
Composing $\Psi$ with the surjective map $X_K\rightarrow \Irr_K(N)$ of Corollary~\ref{cor:surj_coho}
induces a bijection between the set of equivalence classes $\ensuremath{\mathcal{D}}^{c,c'}/\widetilde{\mathbin{\mathcal{E}}}$ and $\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$. We now use
this bijection to produce a definable family of equivalence relations giving the partial zeta series.
For $(\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^{c,c'}$,
write $(h_1(\tuple{\lambda}), \dots, h_d(\tuple{\lambda}))$ for the good basis associated
with $\tuple{\lambda}$ by Proposition~\ref{pro:X_definable}~\ref{pro:X_definable_1}.
Recall that the function $f:\ensuremath{\mathcal{D}}^{c,c'}\rightarrow \ensuremath{\mathbb{Z}}$ is given by
\[
(\tuple{\lambda}, \tuple{\xi}) \longmapsto \sum_{i = 1}^{d} {\omega(h_i(\tuple{\lambda})) - 1}
\]
and is definable in $\struc_{\mathrm{an}}$. Moreover, if $\Psi(\tuple{\lambda}, \tuple{\xi}) = (H, \chi)$,
then $p^{f(\tuple{\lambda}, \tuple{\xi})}$ is the degree of $\Ind_{N\cap H}^{N} \chi$.
We extend the function $F$ from Section~\ref{sec:proof_main} as
\[
\tic{F}: \tic{\mathbin{\mathcal{E}}} \longrightarrow \ensuremath{\mathbb{Z}}
\]
defined by
$((\tuple{\lambda}, \tuple{\xi}), (\tuple{\lambda}', \tuple{\xi}'))\mapsto f(\tuple{\lambda}, \tuple{\xi})$.
Similarly to Section~\ref{sec:proof_main}, we have that $\tic{F}$ is definable.
It follows that,
for $n \in \ensuremath{\mathbb{N}}_0$, the fibre of $\tic{F}$ at $n$ gives a definable subset of $\tic{\mathbin{\mathcal{E}}}$. Let
$\tic{\mathbin{\mathcal{E}}}_n = \tic{F}^{-1}(n)$. As before, the sets
\[
\ensuremath{\mathcal{D}}^{c, c'}_n = \lbrace (\tuple{\lambda}, \tuple{\xi}) \in \ensuremath{\mathcal{D}}^{c, c'} \mid f(\tuple{\lambda}, \tuple{\xi}) = n\rbrace
\]
are definable for all $n \in \ensuremath{\mathbb{N}}_0$. Furthermore, we have that $\tic{\mathbin{\mathcal{E}}}_n = \tic{\mathbin{\mathcal{E}}} \cap (\ensuremath{\mathcal{D}}^{c, c'}_n \times \ensuremath{\mathcal{D}}^{c, c'}_n)$,
so each $\tic{\mathbin{\mathcal{E}}}_n$ is an equivalence relation on $\ensuremath{\mathcal{D}}^{c, c'}_n$ and $\lbrace \tic{\mathbin{\mathcal{E}}}_n \rbrace_{n \in \ensuremath{\mathbb{N}}_0}$ is a definable
family of equivalence relations.\par
Since, for all $n\in \ensuremath{\mathbb{N}}_0$,
the set $\ensuremath{\mathcal{D}}^{c,c'}_n/\widetilde{\mathbin{\mathcal{E}}}_n$ is in bijection with the subset of characters of degree $p^n$ in $\twirr\vphantom{\Irr}^{c,c'}_{L,K,\Gamma}(N)$,
it follows that
\[
\tpartial{N; L, K, \Gamma}{c, c'} = \sum_{n \in \ensuremath{\mathbb{N}}_0} \#(\ensuremath{\mathcal{D}}^{c,c'}_n/\widetilde{\mathbin{\mathcal{E}}}_n) p^{-ns}.
\]
Applying Theorem~\ref{thm:rational_series} to the series above
we deduce that $\tpartial{N; L, K, \Gamma}{c, c'}$ is a rational function in $p^{-s}$. This concludes the proof.
\begin{acknowledgement*}
We thank Benjamin Martin for helpful suggestions and Andrei Jaikin-Zapirain for reading a preliminary version of
this paper and giving valuable comments. We also thank Marcus du Sautoy for answering our questions about bases,
Gabriel Navarro for his comments on Proposition~\ref{prop:Linearisation}, and Raf Cluckers for answering our questions
about \cite[Theorem~A.2]{hrumar2015definable}.
The second author was financially supported by Research Project G.0792.18N of the Research Foundation - Flanders (FWO),
by the Hausdorff Research Institute for Mathematics (Universit\"at Bonn), by the University of Auckland, and by Imperial College London.
The work on this paper was supported by a Durham University Travel Grant and LMS Scheme 4 grant 41678.
\end{acknowledgement*}
\bibliographystyle{alex}
\section{Introduction}\label{sec:intro}
Rearrangements manipulate the shape of a geometric object while preserving its size \cite{Burchard2009}. Majorisation arises from rearrangements and provides an order on probability vectors from which various inequalities follow, as first established by Hardy, Littlewood and Polya \cite{Hardy1988} which, in turn, led to key work by Marshall, Olkin and Arnold \cite{Marshall1979}.
Applications of majorisation have appeared in diverse fields including economics \cite{Arnold2018, Lorenz1905}, chemistry \cite{Klein1997}, statistics \cite{Degroot1988,Giovagnoli1987,Pukelsheim1987}, and more recently quantum information \cite{Partovi2011}.
The concept of majorisation yields a \emph{partial} ordering, not a \emph{total} ordering: that is, there are pairs of vectors for which neither majorises the other, so they are not comparable. In contrast, consider the well-known measure of uncertainty, Shannon entropy, which corresponds to the resources required to send information that will eliminate the uncertainty \cite{cover1999}.
Such entropic measures of uncertainty impose a total ordering.
Majorisation expresses a form of uncertainty, as the word ``more'' in the statement ``more uncertain'' can be interpreted as a statement of relative order through the majorisation partial order.
Further, this notion of uncertainty does not have the extra requirement that it relies on a measure of information \cite{jacobs2014}, and does not make any assumptions about its functional form \cite{friedland2013}.
When vectors cannot be compared with respect to the majorisation ordering, questions of relative uncertainty are unanswerable: one would have to specify in what way one was more certain or uncertain, at which point one could select a specific order-preserving comparison function.
The partial order is weaker in the mathematical sense but, if events {\em can} be compared, the comparison is stronger: it holds simultaneously for every order-preserving comparison function, even though it can be made for fewer pairs of events.
This is not a shortcoming of majorisation, but rather a consequence of its rigorous approach to ordering uncertainty~\cite{Partovi2011}.
The challenge of defining the meaning of uncertainty
has long been recognised: in 1914 Bertrand Russell wrote ``These varying degrees of certainty attaching to different data may be regarded as themselves forming part of our data: they, along with the other data, lie within the vague, complex, inexact body of knowledge which it is the business of the philosopher to analyse'' \cite{russell1914our}.
We present two properties that we believe make majorisation a good candidate for a theory of uncertainty: (i) being dimension-free and (ii) being geometry-free. These properties create the possibility of comparing multivariate distributions with different support and different numbers of dimensions.
Majorisation is, in a well-defined sense, \emph{dimension-free}. In this paper, we show how this approach enables us to create, for a multivariate distribution, a univariate decreasing rearrangement (DR) by considering a decreasing threshold and ``squashing'' all of the multivariate mass for which the density is above the threshold to a univariate mass adjacent to the origin.
The \emph{geometry-free} property follows because majorisation is independent of the support of the distribution. This distinguishes the approach from metric based measures such as variance and various types of multidimensional generalised variances \cite{pronzato2018simplicial}.
Metric-based dispersion orderings are well known and discussed in, for example, \cite{bickel1979descriptive}, with multivariate versions in \cite{belzunce2008multivariate,giovagnoli1995multivariate}.
Our contribution is to introduce a set of operations that can be applied to study uncertainty in a range of settings. These operations include how to project from many dimensions into one, how to combine two probability distributions, and how to mix uncertainties (with different weights). We believe that the form of uncertainty captured by majorisation is close in spirit to entropy, but that it is less restrictive. We illustrate the introduced operations with examples and demonstrate that entropic measures can be used together with majorisation.
This paper is organised as follows. In the remainder of this section we introduce the concept of majorisation for discrete probabilities and in Section \ref{sec:cont_major} we present results for the continuous case. In Section \ref{sec:multivariate} we present the key concepts for multivariate distributions. In Section \ref{sec:operations}, we collect together operations for the study of uncertainty and in Section \ref{sec:algebra} we define a lattice and an algebra for uncertainty. We present empirical applications in Section \ref{sec:empirical} and concluding remarks are given in Section~\ref{sec:conclusion}.
\subsection{Discrete majorisation and related work}\label{sec:discrete}
\begin{defn} \cite{Marshall1979} Consider two discrete distributions with $n$-vectors of
probabilities $p_1=(p^{(1)}_1,\ldots, p_n^{(1)})$ and $p_2=(p^{(2)}_1,\ldots, p_n^{(2)})$, where
$\sum_{i=1}^n p_i^{(1)}=\sum_{i=1}^n p_i^{(2)}=1$.
Placing the probabilities in decreasing (i.e., nonincreasing) order:
\begin{align}
\tilde{p}^{(1)}_1 \geq \ldots \geq \tilde{p}_n^{(1)}\quad \text{ and } \quad \tilde{p}^{(2)}_1 \geq \ldots \geq \tilde{p}_n^{(2)},
\end{align}
it is then said that $p_2$ majorises $p_1$, written $p_1 \preceq p_2$, if
\begin{align}
\sum_{i=1}^k \tilde{p}_i^{(1)} \leq \sum_{i=1}^k \tilde{p}_i^{(2)}, \quad k=1, \dots, n-1.
\end{align}
\end{defn}
This means that the largest element of $p_2$ is at least the largest element of $p_1$, the sum of the two largest elements of $p_2$ is at least the sum of the two largest elements of $p_1$, and so on. This implies that the distribution of $p_1$ is more \textit{disordered} or spread out than $p_2$, which, in turn, means that the distribution of $p_2$ has less uncertainty than that of $p_1$. For example, for any $p$, $(1/n, \dots, 1/n)\preceq p\preceq (1, 0, \dots, 0)$.
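The partial-sum condition is straightforward to check numerically. The following Python sketch (purely illustrative; the vectors are arbitrary choices) verifies the two extreme comparisons above:
\begin{verbatim}
import numpy as np

def majorises(p2, p1):
    # True if p2 majorises p1, i.e. p1 <= p2 in the majorisation order,
    # checked via partial sums of the decreasingly ordered entries.
    s1 = np.sort(p1)[::-1].cumsum()
    s2 = np.sort(p2)[::-1].cumsum()
    return bool(np.all(s1 <= s2 + 1e-12))

p = np.array([0.4, 0.3, 0.2, 0.1])
n = len(p)
print(majorises(p, np.full(n, 1.0 / n)))  # True: uniform is majorised by any p
print(majorises(np.eye(n)[0], p))         # True: (1,0,...,0) majorises any p
\end{verbatim}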
Marshall \emph{et al.} \cite{Marshall1979} provided several equivalent conditions to $p_1\preceq p_2$. We present three (A1-A3) of the best known in detail below.\\
(A1) There is a doubly stochastic $n\times n$ matrix $P$, such that
\begin{align}\label{equiv:doubly}
p_1 = P p_2.
\end{align}
The intuition of this result is that a probability vector which is a mixture of the permutations of another is more disordered. The relationship between a stochastic matrix $P$ and the stochastic transformation function in the refinement concept was presented by DeGroot \cite{Degroot1988}. \\
(A2) Schur \cite{Schur1923} demonstrated that, if (A1) holds for some stochastic matrix $P$, then for all continuous convex functions $s( \cdot )$ and for all $n$,
\begin{align}\label{condition3}
\sum_{i=1}^n s(\tilde{p}_i^{(1)}) \leq \sum_{i=1}^n s(\tilde{p}_i^{(2)}).
\end{align}
The sums in Equation (\ref{condition3}) are special cases of the more general Schur-convex functions on probability vectors. Particular cases include Shannon information, for which $s(y)=y\log(y)$, and Tsallis information, for which $s(y)=\frac{y}{\gamma}(y^{\gamma}-1)$, $\gamma>0$; in the limit as $\gamma\rightarrow 0$, Shannon information is recovered.
For any entropy function $h(y) = -s(y)$ of Shannon or Tsallis type, $p_1 \preceq p_2$ implies $\sum_{i=1}^n h(\tilde{p}_i^{(1)}) \ge \sum_{i=1}^n h(\tilde{p}_i^{(2)})$, but not conversely. Further, if this relationship holds for every such function $h(y)$, then $p_1 \preceq p_2$. (A2) indicates that the ordering imposed by majorisation is stronger than the ordering by any single entropic measure and, in a sense, is equivalent to all such entropic measures taken collectively \cite{Partovi2011}.
\noindent (A3) Let $\pi(p) = (p_{\pi(1)}, \ldots, p_{\pi(n)})$ be the vector whose entries are a permutation $\pi$ of the entries of a probability vector $p$, where $\pi$ ranges over the symmetric group $S$; then
\begin{align}
p_1 \in \mbox{conv}_{\pi \in S} \{\pi(p_2)\}.
\end{align}
That is to say, $p_1$ is in the convex hull of all permutations of entries of $p_2$. Majorisation is a special case of group-majorisation (G-majorisation) \cite{Eaton1977} for the symmetric (permutation) group \cite{Giovagnoli1985}.
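As an illustration of (A1) and (A2), the following sketch (with an arbitrary probability vector and randomly generated weights) builds a doubly stochastic $P$ as a convex combination of permutation matrices and checks that $p_1 = P p_2$ is majorised by $p_2$ and has larger Shannon entropy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p2 = np.array([0.5, 0.25, 0.15, 0.1])

# Doubly stochastic P as a convex combination of permutation matrices
# (Birkhoff--von Neumann); then p1 = P p2 is majorised by p2, as in (A1).
perms = [np.eye(4)[rng.permutation(4)] for _ in range(6)]
weights = rng.dirichlet(np.ones(6))
P = sum(w * M for w, M in zip(weights, perms))
p1 = P @ p2

def majorises(q, p):
    return bool(np.all(np.sort(p)[::-1].cumsum()
                       <= np.sort(q)[::-1].cumsum() + 1e-12))

shannon = lambda p: -(p * np.log(p)).sum()
print(majorises(p2, p1))           # True, by (A1)
print(shannon(p1) >= shannon(p2))  # True, consistent with (A2)
\end{verbatim}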
\section{Continuous majorisation}\label{sec:cont_major}
In this section, we define continuous majorisation, the analog of the partial sums definition in Section \ref{sec:discrete}, following Hardy \emph{et al.} \cite{Hardy1988}.
\begin{defn}
\label{drdefn}
Let $f(x)$ be a (univariate) pdf and define $m(y)=\mu \{z: f(z) \geq y\}$, where $\mu$ is Lebesgue measure. The decreasing rearrangement of $f(x)$ is
\begin{align}
\tilde{f}(z)=\mbox{sup}\{t: m(t) >z\},\; z >0.
\end{align}
\end{defn}
\begin{defn}
Let $\tilde{f}_1(z)$ and $\tilde{f}_2(z)$ be the DRs of two pdfs $f_1(x)$ and $f_2(x)$, respectively, and $\tilde{F}_1(z)$ and $\tilde{F}_2(z)$ their corresponding cdfs. We say that $f_2(x)$ majorises $f_1(x)$, written
$f_1 \preceq f_2$, if and only if
\begin{align}
\tilde{F}_1(z) \leq \tilde{F}_2(z),\; z > 0.
\end{align}
\end{defn}
Similarly to the discrete case, we give three equivalent conditions for continuous majorisation.\\
\noindent (B1) For some non-negative doubly stochastic kernel $P(x,t)$,
\begin{align}
f_1(x) = \int P(x,t) f_2(t) dt.
\end{align}
\noindent (B2) For all continuous convex functions $s(\cdot)$,
\begin{align} \label{equiv_convex}
\int s(f_1(z)) dz \leq \int s(f_2(z))dz.
\end{align}
\noindent (B3) Slice condition:
\begin{align}
\int(f_1(x)-c)_+ dx \leq \int(f_2(x)-c)_+dx, \quad c>0. \label{eq:slice}
\end{align}
\begin{example}\label{example|_beta}
Consider the Beta$(3,2)$ distribution with pdf $p(z)=12(1-z)z^2$. We look for $z_1$ and $z_2$, where $z_1<z_2$, such that $p(z_1)=p(z_2)=y$, illustrated in
Figure~\ref{fig:newbetaplots} (left panel). Setting $z=z_2-z_1$, we have
\begin{align}\label{example_beta_system}
\begin{cases}
p(z_1)=12(1-z_1)z_1^2=y ,\\
p(z_2)=12(1-z_2)z_2^2=y,\\
z_2-z_1=z,\\
0\leq z\leq 1,
\end{cases}
\end{align}
from which the DR can be obtained by eliminating $z_1$ and $z_2$ and setting
$\tilde{f} = y$. With the elimination variety (the set of points satisfying a system of polynomial equations), here
$ 48z^6 - 96z^4 + 9y^2 + 48z^2 - 16y =0$, we obtain
\begin{align}
\tilde{f}(z) = \left\{
\begin{array}{l}
\frac{8}{9}
+ \frac{4}{9} (-27z^6 + 54z^4 - 27z^2 + 4)^{\frac{1}{2}}, \quad 0 \leq z \leq \frac{1}{\sqrt{3}}, \\ \frac{8}{9} - \frac{4}{9} (-27z^6 + 54z^4 - 27z^2 + 4)^{\frac{1}{2}}, \quad \frac{1}{\sqrt{3}} \leq z \leq 1.
\end{array}
\right.
\end{align}
The DR cdf is obtained by adjoining the equation $Y = F(z_2)-F(z_1)$ to get the second variety $3z^8-12z^6+16z^4+9Y^2-16Yz = 0$, then
\begin{equation}
\tilde{F}(z) = \frac{z}{9} \big(\sqrt{-(3z^2-4)^3} +8 \big),
\end{equation}
and is illustrated in Figure \ref{fig:newbetaplots} (right panel) alongside the pdf (central panel).
\begin{figure}[h!]
\begin{center}
\includegraphics[height=4.5cm]{DensityFunctionBeta.pdf}
\includegraphics[height=4.5cm]{Plot2.pdf}
\caption{Example \ref{example|_beta}. \emph{Left panel:} Identification of $z_{1}$ and $z_{2}$, \emph{central panel}: DR pdf $\tilde{f}(z)$, \emph{right panel}: DR cdf $\tilde{F}(z)$.}
\label{fig:newbetaplots}
\end{center}
\end{figure}
\end{example}
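The closed form above can be checked numerically: sorting the values of the pdf on a fine grid approximates the DR, and its running sum approximates the DR cdf. A minimal Python sketch (the grid size is an arbitrary choice):
\begin{verbatim}
import numpy as np

dx = 1e-5
x = np.arange(dx / 2, 1.0, dx)           # midpoint grid on (0, 1)
f = 12 * (1 - x) * x**2                  # Beta(3,2) pdf
f_dr = np.sort(f)[::-1]                  # grid version of the DR pdf
F_num = np.cumsum(f_dr) * dx             # numerical DR cdf at z = dx, 2dx, ...

z = (np.arange(f_dr.size) + 1) * dx
F_exact = (z / 9) * (np.sqrt(-(3 * z**2 - 4)**3) + 8)
print(np.max(np.abs(F_num - F_exact)))   # small; decreases as dx -> 0
\end{verbatim}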
It is hard to derive the DR, and hence to determine when $f_1\preceq f_2$, in the general case $f_i(x) \sim \mbox{Beta}(a_i,b_i)$, $i=1,2$. However, we can prove the following.
\begin{lem}
Assume $a_1, b_1, a_2, b_2 > 1$. If the pdfs $f_1(x)\sim \text{Beta}(a_1, b_1)$ and $f_2(x)\sim\text{Beta}(a_2, b_2)$ have the same mode, then $f_1 \preceq f_2$ if and only if $\max_x f_1(x) \leq \max_x f_2(x)$.
\end{lem}
\begin{proof} We first prove that, under the same mode condition, $f_1(x)$ and $f_2(x)$ intersect at two distinct $x$-values at which the values of $f_1(x)$ and $f_2(x)$ are the same. Setting the modes equal,
$$ \frac{a_1-1}{a_1+b_1-2} = \frac{a_2-1}{a_2+b_2-2},$$
without loss of generality, we may assume $a_2 > a_1$ and find
$$\frac{f_1(x)}{f_2(x)} = \left\{x (1-x)^u \right\}^v C, $$
where $u = \frac{b_1-1}{a_1-1} = \frac{b_2-1}{a_2-1}$, $v = a_1-a_2$ and $C$ is a constant. Setting this equal to 1, we have two solutions given by
$ x (1-x)^u = C^{-\frac{1}{v}}$. It is then straightforward to verify that the common value of $f_1(x)$ and $f_2(x)$ is the same at the two solutions.
The proof is completed by using the slice condition in Equation \eqref{eq:slice}.
\end{proof}
Comparison of DR cdfs may use algebraic or numerical techniques. If the DR cdfs are polynomial, then comparison involves testing whether two increasing polynomials cross or one dominates the other over the union of the support of the distribution. Whether closed form characterisations of the DR are available for non-polynomial cdfs is outside the scope of this paper.
\section{Multivariate case: matching of uncertainty}\label{sec:multivariate}
\begin{defn}\label{multitouni}
A univariate decreasing rearrangement $\tilde{f}(z)$, compatible with $f(x)$, is one satisfying, for all constants $c\geq 0$,
\begin{align}
\label{decreaseRearrangemult}
\int_{\{x:f(x)\geq c \}}f(x)dx=\int_{\{z:\tilde{f}(z)\geq c\}}\tilde{f}(z)dz.
\end{align}
\end{defn}
\begin{proof}
As
\begin{align}
\mu{\{x:f(x)\geq c \}} = \mu{\{z:\tilde{f}(z)\geq c\}},
\end{align}
the super-level sets have the same measure, so the corresponding integrals agree \cite{Burchard2009}.
\end{proof}
This result induces a one-dimensional DR from a multidimensional distribution. The following lemma is a key result and shows that the information/entropy for $X \sim f(x)$ and $Z \sim \tilde{f}(z)$ is the same.
\begin{lem}
Let $f(x)$ be a multidimensional pdf and $\tilde{f}(z)$ on $[0, \infty)$ its decreasing rearrangement. Then, given a convex function $\varphi(x)$, we have
\begin{align}
\int_S \varphi(f(x)) dx = \int_0^{\infty} \varphi(\tilde{f}(z)) dz,
\end{align}
where $S$ is the support of $f(x)$.
\begin{proof}
Matching volume to length elements in $S$ and $[0, \infty)$, for $c>0$ and small $\delta c > 0$ we have
$$ \int_{x: f(x) \geq c, x \in S} f(x)dx - \int_{x: f(x) \geq c + \delta c, x \in S} f(x) dx = \int_{z: \tilde{f}(z) \geq c, z \in [0, \infty)} \tilde{f}(z) dz -\int_{z: \tilde{f}(z) \geq c +\delta c, z \in [0, \infty)} \tilde{f}(z) dz.$$
We can then write, approximately,
$$u(c)A(c, \delta c)=u(c)L(c, \delta c),$$
with $u(c)=c$, where $A(c, \delta c)$ and $L(c, \delta c)$ are the increments in volume and length corresponding to the interval $[c,c+\delta c)$, that is, the measures of
$ f^{(-1)}([c,c+ \delta c))$ and $ \tilde{f}^{(-1)}([c,c+ \delta c))$, respectively. Cancelling $u(c)$, we can equate $A(c, \delta c)$ and $L(c, \delta c)$, and this allows us to recapture and equate the integrals of any measurable function $u(\cdot)$ of the level; in particular, we can take $u(c) = \varphi(c)$.
\end{proof}
\end{lem}
In Examples \ref{multi_norm_ex} and \ref{indep_exp_ex}, we demonstrate how to obtain a DR from a multivariate distribution. The following idea is used to carry out computations: there may be cases in which, for a given $c$, the inverse set $f^{(-1)}(c)$ is described by some useful quantity $\delta$. Moreover $\delta$, expressed as a function of $x$, then becomes a random variable with a known (univariate) distribution. Since $\tilde{F}(\tilde{f}^{-1}(c) ) =F_{\delta}(f_{{X}}^{-1}(c) )$, Definition \ref{multitouni} can be expressed as
\begin{align} \label{eq:valid_DR}
\tilde{f}(r) & = f_{\delta} \left( f_X^{(-1)}(\tilde{f}(r)) \right) \frac{\partial}{\partial r}\left(f_X^{(-1)}(\tilde{f}(r))\right).
\end{align}
\begin{example}\label{multi_norm_ex}
Let the random vector ${X}=(X_1, \dots, X_n)^T$ be an $n$-variate standard normal distribution with pdf
\begin{align}
f_{{X}}(x_1, \dots, x_n)=\frac{1}{(2\pi)^{\frac{n}{2}}}\exp\Big\{-\frac{1}{2}\sum_{i=1}^n x^2_i \Big\}.
\end{align}
We refer to ${X}$ as a spherical Gaussian random vector with ${X}\sim\text{N}_n({0}, I_n)$, where ${0}$ is an $n$-vector of zeros and $I_n$ is the $n\times n$ identity matrix. To construct the DR, we first slice the pdf at $f_{{X}}(x_1, \dots, x_n)=c$. The square of the radius of a spherical Gaussian random vector is $R^2 = \sum_{i=1}^nX_i^2$ and, defining $r^2 =\sum_{i=1}^n x_i^2$, we obtain
\begin{align}
r=\Big(-2\log\big((2\pi)^{n/2} c\big) \Big)^{1/2},
\end{align}
where the volume of the $n$-dimensional Euclidean ball of radius $r$ is
\begin{equation}
\label{eq:ball_volume}
V_n(r)=\frac{\pi^{n/2}}{\Gamma\Big(\frac{n}{2}+1 \Big)}r^n,
\end{equation}
from which we obtain
\begin{align}
c=\frac{1}{(2\pi)^{n/2}}\exp\bigg\{-\frac{1}{2}\bigg(\frac{V_n(r)\Gamma(n/2+1)}{\pi^{n/2}} \bigg)^{2/n} \bigg\},
\end{align}
noting the values of $c$ and $V_n(r)$ are dependent on each other. To generalise this expression, we replace $c$ and $V_n(r)$ with $\tilde{f}(z)$ and $z$, respectively. The resulting form of the DR is
\begin{equation}
\label{eq:DRM_mult_normal}
\tilde{f}(z) = \frac{1}{(2\pi)^{n/2}}\exp\bigg\{-\frac{1}{2}\bigg(\frac{z}{V_n} \bigg)^{2/n} \bigg\},
\end{equation}
where $V_n$ is the volume of the unit ball in $\mathbb{R}^n$. For the two-dimensional multivariate normal, we illustrate the construction of the univariate DR in Figure \ref{fig:DRM2}.
We can validate the form of the DR in Equation (\ref{eq:DRM_mult_normal}) by the construction in Equation (\ref{eq:valid_DR}), where $R^2=\sum_{i=1}^n X_i^2$ follows a Chi-squared distribution with $n$ degrees of freedom.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.2\textheight]{MultivariateNorm1b.png}\includegraphics[height=0.2\textheight]{MultivariateNorm23.pdf}
\end{center}
\caption{Example \ref{multi_norm_ex}. \textit{Left panel:} Density plot of a two-dimensional standard multivariate normal. The dashed line and blue shaded region correspond to $f(x)=c$ and $\int_{\{x:f(x)\geq c \}}f(x)dx$, respectively.
\textit{Central panel:} A plot to demonstrate the connection between ${X}=(X_1, X_2)^T$ and $R^2$. The radius $r$ of a blue circle corresponds to $x=f^{-1}(c)$.
\textit{Right panel:} The DR $\tilde{f}(z)$ obtained for the multivariate normal. The blue shaded region corresponds to $\int_{\{z: \tilde{f}(z)\geq c\}}\tilde{f}(z) dz$.}
\label{fig:DRM2}
\end{figure}
\end{example}
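Equation (\ref{eq:DRM_mult_normal}) can also be checked by simulation, using the fact that $\tilde{F}(z) = P(V_n(R) \leq z)$ with $R^2\sim\chi^2_n$. A sketch for $n=3$, assuming SciPy is available (sample size and test points are arbitrary choices):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2
from scipy.special import gamma as Gamma

def dr_cdf_normal(z, n):
    # DR cdf of the standard n-variate normal: {f >= c} is a ball, so
    # F~(z) = P(V_n(R) <= z), where R^2 ~ chi-squared with n d.o.f.
    Vn = np.pi**(n / 2) / Gamma(n / 2 + 1)    # volume of the unit n-ball
    return chi2.cdf((z / Vn)**(2 / n), df=n)

rng = np.random.default_rng(0)
n = 3
X = rng.standard_normal((10**6, n))
Vn = np.pi**(n / 2) / Gamma(n / 2 + 1)
vols = Vn * np.linalg.norm(X, axis=1)**n      # V_n(R) for each draw
for z in (0.5, 2.0, 8.0):
    print(np.mean(vols <= z), dr_cdf_normal(z, n))  # Monte Carlo vs formula
\end{verbatim}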
\begin{example}\label{indep_exp_ex}
Consider the $n$-fold independent standard exponential distribution with pdf
\begin{align}
f_X(x_1, \dots, x_n)=\exp\left\{-\sum_{i=1}^n x_i \right\}.
\end{align}
As $f_X(x_1, \dots, x_n) = f_1(x_1) f_2(x_2) \cdots f_n(x_n)$, slicing the pdf at $c=f_{{X}}(x_1, \dots, x_n)$ yields $-\log(c)=\sum_{i=1}^n x_i$.
The volume of an $n$-dimensional simplex in which all $n$ variables are greater than $0$ but with sum less than $R$ is $V_n = R^n/n!$, then $ c=\exp\left\{-(n!V_n)^{1/n} \right\}$. Replacing $c$ and $V_n$ with $\tilde{f}(z)$ and $z$, respectively, the DR can be written as
\begin{equation}
\label{eq:DRM_mult_exp}
\tilde{f}(z)=\exp\left\{-(n!z)^{1/n} \right\}.
\end{equation}
To verify the form of the DR in Equation (\ref{eq:DRM_mult_exp}), we can use Equation (\ref{eq:valid_DR}) with the relationship $\delta$ given by $R=\sum_{i=1}^n X_i$, such that $R\sim\text{Gamma}(n, 1)$.
\end{example}
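An analogous numerical check of Equation (\ref{eq:DRM_mult_exp}) uses $R = \sum_{i=1}^n X_i \sim\text{Gamma}(n,1)$ (again an illustrative sketch, assuming SciPy):
\begin{verbatim}
import numpy as np
from scipy.stats import gamma
from scipy.special import factorial

def dr_cdf_exp(z, n):
    # DR cdf of the n-fold standard exponential: the super-level sets are
    # simplices of volume R^n/n!, with R = X_1 + ... + X_n ~ Gamma(n, 1).
    return gamma.cdf((factorial(n) * z)**(1 / n), a=n)

rng = np.random.default_rng(0)
n = 2
R = rng.standard_exponential((10**6, n)).sum(axis=1)
vols = R**n / factorial(n)
for z in (0.5, 2.0, 8.0):
    print(np.mean(vols <= z), dr_cdf_exp(z, n))   # Monte Carlo vs formula
\end{verbatim}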
\begin{example}\label{example_multi}
Figure \ref{fig:Plot3} shows the DR cdfs for standard normal and exponential distributions with $n=1, 2, 3, 4$. We conclude that for $X\in\mathbb{R}^n$ and $Y\in\mathbb{R}^m$, where $X\sim f_X$ and $Y\sim f_Y$, and $f_X$ and $f_Y$ come from the same family of multivariate densities, standard normal or standard exponential, when $n>m$, $f_X\preceq f_Y$. We observe that univariate distributions majorise the remaining cdfs, which implies that adding more random variables drives the uncertainty up.
\end{example}
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 0.8\textwidth]{Plot3.pdf}
\caption{Example \ref{example_multi}. \textit{Left panel:} $\tilde{F}(z)$ for the standard normal with $n=1,2,3,4$. \textit{Right panel:} $\tilde{F}(z)$ for the independent standard exponential distribution with $n=1,2,3,4$.}\label{fig:Plot3}
\end{center}
\end{figure}
\section{Some operations with $\preceq$}\label{sec:operations}
We present operations for combining the uncertainty of two distributions and discuss the effect of dependence and the area of support of the distributions on the ordering. In Section \ref{subsec:InverseMix}, we introduce inverse mixing for discrete and continuous probability distributions. In Section \ref{subsec:dependece_order}, we discuss the use of majorisation ordering as an ordering of dependence for distributions with the same marginal densities. In Section \ref{subsec:volume}, we present a volume-contractive mapping for a random variable which leads to a reduction in uncertainty.
\subsection{Inverse Mixing}
\label{subsec:InverseMix}
\begin{defn}\label{def:inversemixing}
Inverse mixing is defined by
\begin{align}
\tilde{f}_1\; [+] \;\tilde{f}_2=
\left(\tilde{f}_1^{(-1)}(z)+\tilde{f}_2^{(-1)}(z) \right)^{(-1)},
\end{align}
and $\alpha$-inverse mixing is defined by
\begin{align}
\tilde{f}_1\; [+]_{\alpha}\; \tilde{f}_2=\left(\tilde{f}_1^{(-1)}\left(\frac{z}{1-\alpha}\right) + \tilde{f}_2^{(-1)} \left(\frac{z}{\alpha}\right)\right)^{(-1)},
\end{align}
where $0 < \alpha < 1$ is the mixing parameter.
\end{defn}
Inverse mixing is a method for combining uncertainty given two distributions over two different populations. We note that, when performing inverse mixing, the supports (atoms in the discrete case) can be different.
We demonstrate inverse mixing for the discrete and continuous distributions. For the continuous distribution, we consider cases in which the maximum values (modes) occur (i) at the same point, and (ii) at different points.
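Numerically, $\alpha$-inverse mixing only requires the generalised inverse of a decreasing function, $\tilde{f}^{(-1)}(y)=\mu\{z:\tilde{f}(z)>y\}$. The following sketch (grid sizes are arbitrary choices) mixes the standard exponential DR with itself for $\alpha=\frac{1}{2}$ and recovers the closed form $\frac{1}{2}e^{-z/2}$ obtained later in Example \ref{sec6:example_exp}:
\begin{verbatim}
import numpy as np

dz = 1e-4
z = np.arange(0.0, 60.0, dz)
f1 = np.exp(-z)                     # DR pdf of a standard exponential

def inv_dec(f_vals, y):
    # Generalised inverse of a decreasing function tabulated on z:
    # inv(y) = Lebesgue measure of {z : f(z) > y}.
    return dz * np.searchsorted(-f_vals, -y, side='left')

y = np.linspace(1e-3, 0.499, 200)   # heights at which to trace the mix
g = inv_dec(f1, 2 * y) + inv_dec(f1, 2 * y)     # [+]_{1/2} of f1 with itself
print(np.max(np.abs(y - 0.5 * np.exp(-g / 2))))  # ~1e-4: matches (1/2)e^{-z/2}
\end{verbatim}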
\begin{example}\label{exampleworkplace}
Consider two distinct groups of people in a workplace. Define the probability of the $i^\text{th}$ member of group one obtaining a promotion by $p_i$ and, correspondingly, by $q_i$ for group two. Let
$p = (0.577,\ 0.192,\ 0.128, $ $ \ 0.064, \ 0.038)$ and $q = (0.730 ,\ 0.219, \ 0.036,\ 0.007,\ 0.007)$, noting that
$ p_1\geq p_2\geq\cdots\geq p_5$, $q_1\geq q_2\geq \cdots \geq q_5$ and
$\sum_{i}p_i=1$, $\sum_{i}q_i=1$.
To perform inverse mixing with $\alpha = \frac{1}{2}$, we take the inverse of each pmf, combine them and then sort them into ascending order (Figure \ref{fig:inversetilt}, left panel). The inverse is then taken to obtain a pmf (central panel). The inverse mixing procedure combines all of the probabilities scaled by a factor $\frac{1}{2}$, i.e. $\frac{1}{2} p_i \ \cup \ \frac{1}{2} q_i,$ and sorts them in decreasing order. The direct mixing procedure is obtained by summing the ordered probabilities of the two populations and scaling them by $\alpha=\frac{1}{2}$, i.e. $\frac{1}{2}(p_i+q_i)$ (right panel). Whilst both mixings provide information about the joint population, the inverse mixing also preserves information about the individual subpopulations.
\end{example}
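In code, the two mixings of Example \ref{exampleworkplace} amount to the following (Python, purely illustrative):
\begin{verbatim}
import numpy as np

p = np.array([0.577, 0.192, 0.128, 0.064, 0.038])
q = np.array([0.730, 0.219, 0.036, 0.007, 0.007])

inverse_mix = np.sort(np.concatenate([p, q]) / 2)[::-1]  # (1/2)p u (1/2)q
direct_mix = (p + q) / 2                                 # (1/2)(p_i + q_i)
print(inverse_mix)  # ten atoms: both subpopulations remain visible
print(direct_mix)   # five atoms: the subpopulations are merged
\end{verbatim}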
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.96\textwidth]{combine_inv.pdf}
\vspace{-2em}
\end{center}
\caption{Example \ref{exampleworkplace}. \textit{Left panel:} The addition of the two inverse pmfs, \textit{central panel:} inverse mixing distribution with $\alpha=\frac{1}{2}$, \textit{right panel:} direct mixing distribution with $\alpha=\frac{1}{2}$.
}
\label{fig:inversetilt}
\end{figure}
\begin{example}\label{example:unibi_exp}
For the univariate and bivariate exponential distributions, from Equation \eqref{eq:DRM_mult_exp} we have
\begin{align}
\tilde{f}_1(z)=\exp\{-z\}, \quad \tilde{f}_2(z)=\exp\{-(2z)^{1/2}\}.
\end{align}
Observing that $0<\tilde{f}_1(z), \tilde{f}_2(z)\leq 1$, we have functional inverses,
\begin{align}
\tilde{f}_1^{(-1)}(z) = -\log(z), \quad \tilde{f}_2^{(-1)}(z)=\frac{1}{2}(\log(z))^2, \quad z\in(0, 1],
\end{align}
and we illustrate these in Figure \ref{fig:inverse_mixing_con1} (left and central panels). The inverse mixing of the two distributions is
\begin{align}
f^{(1)}(z) = \left(-\log\Big(\frac{z}{1-\alpha}\Big) +\frac{1}{2} \Big(-\log\Big(\frac{z}{\alpha}\Big) \Big)^2 \right)^{(-1)} ,
\end{align}
for $0 < \alpha < 1$, and, as the maximum values of the distribution functions occur at $z=0$, the curve is smooth. The direct averaging of $f_1(x)$ and $f_2(x)$ is
\begin{align}
f^{(2)}(z) = \left((\alpha-1)\log(z)+\frac{\alpha}{2}(-\log(z))^2 \right)^{(-1)}.
\end{align}
For $\alpha=1/2$ we have
\begin{align}
f^{(1)}(z)=\Big(-\log(2z)+\frac{1}{2}\big(\log(2z)\big)^2 \Big)^{(-1)},
\end{align}
which is a quadratic in $\log(2z)$, so we obtain the two solutions $f^{(1)}(z)=\frac{1}{2}\exp\{1+\sqrt{1+2z}\}$ and $f^{(1)}(z)=\frac{1}{2}\exp\{1-\sqrt{1+2z}\}$. As the first solution does not integrate to one, we retain the second solution. The values of the mean, variance and Shannon entropy for $f^{(1)}(z)$ are
$\left [\frac{7}{2}, \ \frac{99}{4}, \ \frac{3}{2}+\log(2) \right]$, respectively. For the direct mixing, with $\alpha=1/2$ and pdf $f^{(2)}(z)=\exp\left\{1-\sqrt{1+4z}\right\}$,
the corresponding values are
$\left [\frac{7}{4}, \ \frac{99}{16}, \ \frac{3}{2}\right]$. We observe that both the variance and the Shannon entropy are greater for $f^{(1)}(z)$ than for $f^{(2)}(z)$, which indicates that there is more uncertainty attributed to $f^{(1)}(z)$ than to $f^{(2)}(z)$. We confirm this finding with the comparison plot of inverse mixing and direct averaging in Figure \ref{fig:inverse_mixing_con1} (right panel). We have the relationship $f^{(2)}(z)=2f^{(1)}(2z)$, and can see $f^{(1)}(z)$ (red line) stretches the support of the distributions, and lowers the overall maximum, whereas $f^{(2)}(z)$ (blue line) preserves the maximum and shrinks the support.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1\textwidth]{inverse_mixing_con1}
\vspace{-2em}
\end{center}
\caption{\textit{Left panel:} DR functions $\tilde{f}_1(z)$ (solid line) and $\tilde{f}_2(z)$ (dashed line). \textit{Central panel:} functional inverses of the DR functions: $\tilde{f}_1^{(-1)}(z)$ (solid line) and $\tilde{f}_2^{(-1)}(z)$ (dashed line). \textit{Right panel:} inverse mixing and direct averaging: $f^{(1)}(z)$ (red line) and $f^{(2)}(z)$ (blue line).}
\label{fig:inverse_mixing_con1}
\end{figure}
\end{example}
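The moments and entropies quoted above can be reproduced by numerical integration; a minimal sketch, assuming SciPy:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f_inv = lambda z: 0.5 * np.exp(1 - np.sqrt(1 + 2 * z))  # inverse mixing
f_dir = lambda z: np.exp(1 - np.sqrt(1 + 4 * z))        # direct averaging

for f, lab in ((f_inv, 'inverse'), (f_dir, 'direct')):
    mass = quad(f, 0, np.inf)[0]
    mean = quad(lambda t: t * f(t), 0, np.inf)[0]
    var = quad(lambda t: t**2 * f(t), 0, np.inf)[0] - mean**2
    ent = quad(lambda t: -f(t) * np.log(f(t)), 0, np.inf)[0]
    print(lab, mass, mean, var, ent)
# inverse: 1, 7/2, 99/4, 3/2 + log 2;  direct: 1, 7/4, 99/16, 3/2
\end{verbatim}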
\begin{example} \label{example:6}
Consider exponential distributions with means 1 and 2, then the pdfs are already DRs,
\begin{align}
\tilde{f}_1(z)=\exp\{-z\}, \quad\tilde{f}_2(z)=\frac{1}{2}\exp\{-\frac{z}{2}\},
\end{align}
as illustrated in Figure \ref{fig:inverse_mixing_con2final} (left panel). Since
$0<\tilde{f}_1(z)\leq 1$ and $0<\tilde{f}_2(z)\leq 1/2$, the functional inverses have different support:
\begin{align}
\tilde{f}_1^{(-1)}(z)=-\log(z), \quad z\in (0, 1], \quad \text{and} \quad \tilde{f}_2^{(-1)}(z)=-2\log(2z), \quad z\in (0, 1/2].
\end{align}
For $\alpha=1/2$, the inverse mixing is
\begin{align}
f^{(1)}(z)=\tilde{f}_1\; [+]_{\frac{1}{2}}\; \tilde{f}_2=\left(-\log(2z)-2\log(4z) \right)^{(-1)}.
\end{align}
To avoid negative values of the expression inside the functional inverse, we propose the following modification:
\begin{align}
\tilde{f}_1^{(-1)}(2z)+\tilde{f}_2^{(-1)}(2z) =\max\{0, -\log(2z)\}+\max\{0, -2\log(4z)\},
\end{align}
illustrated in
Figure \ref{fig:inverse_mixing_con2final} (central plot) which results in a kink at $z=0.25$. We take another functional inverse to obtain the inverse mixing,
\begin{align}
f^{(1)}(z)&=\begin{cases}
\frac{1}{2}\exp\{-z\}, &\mbox{if } 0<z<\log(2) ,\\
\frac{1}{2}\exp\{ \frac{-2\log(2)-z}{3} \}, &\mbox{if } z \geq \log(2),
\end{cases}
\end{align}
and, as illustrated in Figure \ref{fig:inverse_mixing_con2final} (right panel), we observe a kink at $z=\log(2)$.
For $\alpha=1/2$, the direct averaging of these distributions is
\begin{align}
f^{(2)}(z)=\left(-\frac{1}{2}\log(z)-\log(2z) \right)^{(-1)}.
\end{align}
To avoid negative values, we can write
\begin{align}
\frac{1}{2}\tilde{f}_1^{(-1)}(z)+\frac{1}{2}\tilde{f}_2^{(-1)}(z)=\max\left\{ 0,-\frac{1}{2}\log(z)\right\}+\max\left\{0, -\log(2z) \right\},
\end{align}
illustrated by the solid line of Figure \ref{fig:inverse_mixing_con2final} (central plot), noting a kink at $z=\frac{1}{2}$. As a result, we obtain a kink in $f^{(2)}(z)$ at $z=-\frac{1}{2}\log{\frac{1}{2}}$ in Figure \ref{fig:inverse_mixing_con2final} (right panel). The direct averaging is
\begin{align}
f^{(2)}(z)
&=\begin{cases}
\exp\{-2z\}, &\mbox{if } 0<z<-\frac{1}{2}\log(\frac{1}{2}), \\
\exp\Big\{\frac{-2z-2\log(2)}{3}\Big\}, &\mbox{if } z\geq -\frac{1}{2} \log(\frac{1}{2}).
\end{cases}
\end{align}
The values of the mean, variance and Shannon entropy for $f^{(1)}(z)$ are
$\left [ 2.85, \ 8.91, \ 1 + \frac{3}{2}\log(2)\right]$, respectively,
and corresponding values for $f^{(2)}(z)$ are
$\left [1.42, \ 2.23, \ 1+\frac{1}{2}\log(2)\right]$. From the representation of inverse mixing and direct averaging in Figure \ref{fig:inverse_mixing_con2final}, we can see that $f^{(1)}(z)$ stretches the support of the distribution, whilst $f^{(2)}(z)$ shrinks it. The maximum (mode) from the direct averaging is double the maximum of the inverse mixing.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1\textwidth]{inverse_mixing_con2final.pdf}
\vspace{-2em}
\end{center}
\caption{\textit{Left panel:} DR functions $\tilde{f}_1(z)$ (solid line) and $\tilde{f}_2(z)$ (dashed line). \textit{Central panel:} functional inverses of the DR functions: $\tilde{f}_1^{(-1)}(z)$ (solid line) and $\tilde{f}_2^{(-1)}(z)$ (dashed line). \textit{Right panel:} inverse mixing and direct averaging: $f^{(1)}(z)$ (red line) and $f^{(2)}(z)$ (blue line).}
\label{fig:inverse_mixing_con2final}
\end{figure}
\end{example}
\subsection{Dependence ordering}
\label{subsec:dependece_order}
We briefly discuss how to obtain the ordering of dependence as well as measures of dependence using majorisation and entropy, on which Joe \cite{Joe1985,Joe1987} has written extensively.
\begin{defn} Let $X=(x_{ij})$ be an $m\times n$ matrix. Place the $mn$ entries of $X$ in decreasing (i.e., nonincreasing) order, to generate an $mn$-dimensional vector $X^*$. It is then said that $X$ majorises $Y$, written $Y\preceq X$, when $Y^*\preceq X^*$.
\end{defn}
\begin{defn} Let $X$ be a matrix with entries in $S\subset \mathbb{R}$. $X$ is a minimal matrix if, for any other matrix $Y$ with entries in $S$ such that $Y\preceq X$, we have $X^*=Y^*$.
\end{defn}
When the matrix entries represent probabilities of discrete bivariate distributions, then the majorisation ordering can be interpreted as an ordering of dependence, e.g., $X$ represents more ``dependence'' than $Y$ if $Y\preceq X$. Similarly to the discrete
majorisation presented in Section \ref{sec:discrete}, condition (A2) holds for matrix majorisation.
From Joe \cite[Theorem 7]{Joe1985}, $X$ is a minimal matrix if it maximises $h(\cdot)$ subject to maintaining the row and column sums. We demonstrate that the minimal matrix corresponds to the bivariate distribution of two independent random variables for specific types of entropy.
\begin{example}\label{ex:entropy}
Let $X_1$ and $X_2$ be two independent random variables with $P(X_1=0)=\alpha$ and $P(X_2=0)=\beta$. We compute the joint probabilities:
\begin{align}
p_{00} = \alpha \beta,\; p_{10} = (1-\alpha)\beta,\; p_{01} = \alpha (1-\beta),\; p_{11} = (1-\alpha)(1-\beta).
\end{align}
We can generate all binary distributions with the same margins as $X_1,X_2$ with a perturbation $\epsilon$:
\begin{align}
p_{00} = \alpha \beta + \epsilon,\; p_{10} = (1-\alpha)\beta-\epsilon,\; p_{01} = \alpha (1-\beta)-\epsilon,\; p_{11} = (1-\alpha)(1-\beta)+\epsilon,
\end{align}
with the restriction that $|\epsilon| < \min(p_{00}, p_{10}, p_{01}, p_{11}).$ If $\epsilon=0$, we retain the independence case. We compute the Shannon entropy, $ H(X_1, X_2)=-\sum_{i, j}p_{ij}\log(p_{ij})$, and by the maximum entropy principle, we have
\begin{align}
\frac{\partial H(X_1, X_2)}{\partial \epsilon}=\log\Bigg(\frac{\big(\alpha (1-\beta)-\epsilon\big)\big((1-\alpha)\beta-\epsilon\big)}{(\alpha\beta+\epsilon)\big((1-\alpha)(1-\beta)+\epsilon\big)} \Bigg).
\end{align}
Setting $\frac{\partial H(X_1, X_2)}{\partial \epsilon}=0$, we find $\epsilon=0$ and conclude that the Shannon entropy is at its maximum in the independence case.
We also compute the Tsallis entropy, $ H(X_1, X_2)=\sum_{i, j}p_{ij}(1-p_{ij})$ with $\gamma=1$,
\begin{align}
H(X_1, X_2) &=1-(\alpha\beta+\epsilon)^2-\big((1-\alpha)\beta-\epsilon \big)^2-\big(\alpha(1-\beta)-\epsilon \big)^2-\big((1-\alpha)(1-\beta)+\epsilon \big)^2.
\end{align}
As before we follow the maximum entropy principle to derive that the Tsallis entropy with $\gamma=1$ is at its maximum when $\epsilon = - \frac{1}{4}(2\beta-1)(2\alpha-1)$. We note that $\epsilon$ is zero, the independence case, if at least one of $\alpha$ or $\beta$ is $\frac{1}{2}$. We observe that the maximum value of the Tsallis entropy could be obtained for $\epsilon\neq 0$ (dependence case). We conclude that the independence case cannot be uniformly dominated within the ordering $\preceq$.
\end{example}
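The two maximisations in Example \ref{ex:entropy} can be reproduced numerically; in the sketch below, $\alpha=0.4$ and $\beta=0.7$ are arbitrary illustrative marginals:
\begin{verbatim}
import numpy as np

alpha, beta = 0.4, 0.7
eps = np.linspace(-0.1, 0.1, 200001)    # inside the admissible range
P = np.array([alpha*beta + eps, (1 - alpha)*beta - eps,
              alpha*(1 - beta) - eps, (1 - alpha)*(1 - beta) + eps])

shannon = -(P * np.log(P)).sum(axis=0)
tsallis = (P * (1 - P)).sum(axis=0)

print(eps[shannon.argmax()])            # ~0: independence maximises Shannon
print(eps[tsallis.argmax()],            # ~0.02, agreeing with the
      -(2*alpha - 1)*(2*beta - 1)/4)    # stationary point -(2a-1)(2b-1)/4
\end{verbatim}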
Similar results hold for continuous multivariate densities. For two distributions with pdfs $f_1$ and $f_2$, where $f_1\preceq f_2$, this implies $f_2$ represents more ``dependence'' than $f_1$. In addition, the relative entropy function can be used to measure the dependence of distribution.
Joe \cite{Joe1987} introduced the concept of dependence parameters to construct the ordering of multivariate distributions. Let $a(\theta)$ represent a dependence parameter for a family of densities $f_{\theta}$, such that $f_{\theta}\preceq f_{\theta'}$ if $a(\theta)\leq a(\theta')$ (or $f_{\theta}\preceq f_{\theta'}$ if $a(\theta')\leq a(\theta)$). For example, for zero-mean multivariate normal densities $f_{\Sigma_1}$ and $f_{\Sigma_2}$ parameterised by variance-covariance matrices $\Sigma_1$ and $\Sigma_2$ respectively, we have $f_{\Sigma_1}\preceq f_{\Sigma_2}$, if $|\Sigma_1|>|\Sigma_2|$. We are interested in demonstrating that the ordering imposed by the dependence parameters holds by using the DR introduced in Section \ref{sec:multivariate}. We derive the DR $\tilde{f}(z)$ for $X\sim\text{MVN}(\bb{0}, \Sigma)$ with $\Sigma=\text{diag}\{\sigma^2, \dots, \sigma^2 \}$, that is
\begin{equation}
\tilde{f}(z)=\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\Bigg\{-\frac{1}{2\sigma^2}\Big(\frac{z}{V_n} \Big)^{2/n} \Bigg\}.
\end{equation}
\begin{example} \label{example:binorm}
For bivariate normal densities $X\sim\text{MVN}(\bb{0}, \Sigma_1)$ and $Y\sim\text{MVN}(\bb{0}, \Sigma_2)$ with $\Sigma_1=\text{diag}(1, 1)$ and $\Sigma_2=\text{diag}(3, 3)$, we find DR cdfs
\begin{align}
\tilde{F}_1(z)=1-\exp\Big\{-\frac{z}{2\pi} \Big\}, \quad \tilde{F}_2(z)=1-\exp\Big\{-\frac{z}{6\pi} \Big\}.
\end{align}
Figure \ref{fig:PlotComp} shows that $\tilde{F}_2(z)\preceq \tilde{F}_1(z)$, since $|\Sigma_2|>|\Sigma_1|$, which is in line with results shown by Joe \cite{Joe1987}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 0.37\textwidth]{PlotComp.pdf}
\caption{Example \ref{example:binorm}. DR cdfs $\tilde{F}_1(z)$ and $\tilde{F}_2(z)$.}\label{fig:PlotComp}
\end{center}
\end{figure}
\end{example}
To obtain the orderings for multivariate normal distributions with non-zero off-diagonal entries in variance-covariance matrices, we could use empirical DRs discussed in Section \ref{sec:empirical}.
\subsection{Volume-contractive mappings}
\label{subsec:volume}
The area of the support is a key component of studying $\preceq$. For example, in the discrete case, if $p= (p_1, p_2, p_3)$ are our probabilities with $p_1+p_2+p_3 = 1$, we have support of size 3. Splitting $p_3$ to form $q = (p_1, p_2, \frac{p_3}{2}, \frac{p_3}{2})$ has support of size 4 and we can conclude that $q \preceq p$. In the continuous case, we may refer to such an operation as dilation: locally, we have the same amount of density but with stretched support. Dilation in the continuous case is obtained via a transformation of the random variable, whose inverse we can call contractive. We proceed to demonstrate that the volume-contractive mapping implies a decrease in uncertainty.
\begin{defn}
A differentiable and invertible function $ h: \mathbb R \rightarrow \mathbb R$, $y = h(x)$, is a volume-contractive mapping if the absolute value of the Jacobian determinant $J = \left\lvert \frac{\partial y}{\partial x} \right\rvert$ satisfies $ 0 < J \leq 1$ for $x \in \mathbb R$.
\end{defn}
\begin{lem}
If $h:\mathbb R \rightarrow \mathbb R$ is a volume-contractive mapping, then, for any random variable $X \sim f_X(x)$, it holds that
$X \preceq Y = h(X)$.
\end{lem}
\begin{proof} We give a proof for the one dimensional case only and, in addition, assume $f_X(x)$ and $f_Y(y)$ are invertible. Using the slice condition, we want to show that
$$\mbox{prob}_X \{ f_X(X) \geq c\} \geq \mbox{prob}_Y\{ f_Y(Y) \geq c\}. $$
Developing the left hand side, we see that
\begin{eqnarray*}
\{ f_X (X) \geq c \} \Leftrightarrow \{X \geq f_X^{(-1)}(c) \}
\Leftrightarrow \{Y \geq h(f_X^{(-1)}(c)) \}.
\end{eqnarray*}
Computing the density of $Y$ as
$$ f_Y(y) = \frac{1}{J} f_X(h^{(-1)} (y)),$$
gives
$$ f_Y^{(-1)}(c) = h\left(f_X^{(-1)} (Jc)\right).$$
We thus need to establish whether
$h(f_X^{(-1)} (c)) \geq h( f_X^{(-1)} (Jc)).$
We see the statement reduces to
$c \geq J c,$ which holds by assumption.
\end{proof}
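A concrete instance, as an illustrative sketch: take $h(x)=x/2$ (so $J=\frac{1}{2}$) applied to a standard exponential variable; both pdfs are decreasing, hence equal to their DRs, and the DR cdfs can be compared directly:
\begin{verbatim}
import numpy as np

# X ~ Exp(1) and h(x) = x/2, so |J| = 1/2 and h is volume-contractive.
# Y = h(X) ~ Exp(2); both pdfs are decreasing, hence equal to their DRs.
z = np.linspace(0, 20, 2001)
F_X = 1 - np.exp(-z)          # DR cdf of X
F_Y = 1 - np.exp(-2 * z)      # DR cdf of Y = h(X)
print(np.all(F_X <= F_Y))     # True: X is majorised by Y, so h reduces
                              # uncertainty
\end{verbatim}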
\section{Algebra for uncertainty}\label{sec:algebra}
\begin{defn}\label{def:invCDF}
For any two DR pdfs $\tilde{f}_1(z)$ and $\tilde{f}_2(z)$, define $\tilde{F}_1(z) \otimes \tilde{F}_2(z)$ to be the cdf associated with the density
\begin{align} \label{eq:x}
\tilde{f}_1(z) \; [+]_{ \frac{1}{2}} \; \tilde{f}_2(z) = \left(\tilde{f}_1^{(-1)}(2z) + \tilde{f}_2^{(-1)}(2z) \right)^{(-1)}.
\end{align}
\end{defn}
\begin{defn}\label{def:lattice}
For any two DR cdfs $\tilde{F}_1(z)$ and $\tilde{F}_2(z)$, define
\begin{equation}
\tilde{F}_1(z) \vee \tilde{F}_2 (z) = \max(\tilde{F}_1(z), \tilde{F}_2(z)),\end{equation}
\begin{equation}
\tilde{F}_1(z) \wedge \tilde{F}_2 (z) = \min(\tilde{F}_1(z), \tilde{F}_2(z)),
\end{equation}
which themselves are cdfs.
\end{defn}
The partial ordering $\preceq$ under the meet and join $\vee$ and $\wedge$ defines a lattice
which we refer to as the {\em uncertainty lattice}. It is satisfying that the `meet' and `join' which are defined once $\preceq$ is established can be manifested by the max and min of Definition \ref{def:lattice}. Since we can embed a multidimensional distribution with density $f(x)$ into the one-dimensional DR, we claim that the lattice is universal.
The inverse mixing cdf $\otimes$ can be combined with $\vee$ (or $\wedge$). We modify the notation and replace $\vee$ by $\oplus$, which implies that $\oplus$ and $\otimes$ yield a so-called max-plus algebra (also called tropical algebra) \cite{maclagan2015introduction}. For this to be valid, we need to demonstrate that the distributive property holds.
\begin{lem}\label{dist}
We have
\begin{equation}
\tilde{F}_3(z) \otimes ( \tilde{F}_1(z) \oplus \tilde{F}_2(z)) = (\tilde{F}_3(z) \otimes \tilde{F}_1(z)) \oplus (\tilde{F}_3(z) \otimes \tilde{F}_2(z)),
\end{equation}
where $\tilde{F}_1(z),\tilde{F}_2(z)$ and $\tilde{F}_3(z)$ are DR cdfs and $\tilde{F}_1(z) \oplus \tilde{F}_2 (z) = \max(\tilde{F}_1(z),\tilde{F}_2(z))$.
\end{lem}
\begin{proof}
The proof follows by switching the min and max when we take the inverses:
\begin{align*}
\tilde{F}_3(z) \otimes \left( \tilde{F}_1(z) \oplus \tilde{F}_2(z) \right)
& = \left( \tilde{F}_3^{(-1)}(2z) + \big(\max ( \tilde{F}_1 , \tilde{F}_2 ) \big)^{(-1)}(2z)\right)^{(-1)},\\
& = \left (\tilde{F}_3^{(-1)}(2z) +\min \left ( \tilde{F}^{(-1)}_1(2z) , \tilde{F}^{(-1)}_2(2z) \right ) \right)^{(-1)},\\
& = \left ( \min \left(
\tilde{F}_3^{(-1)}(2z) + \tilde{F}^{(-1)}_1(2z),
\tilde{F}_3^{(-1)}(2z) + \tilde{F}^{(-1)}_2(2z) \right ) \right)^{(-1)},\\
& = \max \left(
\left (\tilde{F}_3^{(-1)}(2z) + \tilde{F}^{(-1)}_1(2z)\right )^{(-1)}, \left ( \tilde{F}_3^{(-1)}(2z) + \tilde{F}^{(-1)}_2(2z) \right )^{(-1)}\right ) ,\\
& = \max \left (
\tilde{F}_3(z) \otimes \tilde{F}_1(z), \tilde{F}_3(z) \otimes \tilde{F}_2(z) \right),
\end{align*}
from which the result is obtained.
\end{proof}
\begin{defn}
The \emph{uncertainty ring} is the tropical (semi)ring of non-decreasing, twice-differentiable functions on $[0, \infty)$ under $\otimes$ and $\oplus$ and with $\oplus$ identity $-\infty$.
\end{defn}
We note that the $\otimes$ unit element will be $0$ and the $\oplus$ unit element will be $-\infty$. To obtain proper decreasing densities, we need to impose the additional condition that $\tilde{f}(z)$ is decreasing, $\tilde{F}(z)$ is a non-negative function, $\tilde{F}(0) = 0$ and $\tilde{F}(z) \rightarrow 1$ as $z \rightarrow \infty$.
Introducing the polynomials which comprise the ring requires the concept of a power. Consider the pdf arising from $\tilde{F}_1(z) \otimes \tilde{F}_1(z)$,
\begin{align}
\tilde{f}(z) = \left( \tilde{f}_1^{(-1)}(2z) + \tilde{f}_1^{(-1)}(2z) \right)^{(-1)} =
\frac{1}{2} \tilde{f}_1\left(\frac{z}{2} \right),
\end{align}
which is the pdf for the scaled random variable $Y= 2 X$ where $X \sim \tilde{f}_1(z)$, and
the cdf for $Y$ is $\tilde{F}_1\left(\frac{z}{2}\right)$. In general, we define the $n^{\text{th}}$ $\otimes$ power as
\begin{align}
\otimes^n \tilde{F} (z) = \tilde{F}\left( \frac{z}{n} \right).
\end{align}
The intuition of this expression is that increasing powers represent increasing dilation and form a decreasing chain with respect to our $\preceq$ ordering. A monomial with respect to $\otimes$ takes the form
\begin{align}
\prod_{i=1}^{m} \otimes^{\alpha_i} \tilde{F}_i(z).
\end{align}
Adjoining the base field $\mathbb R$, and appealing to Lemma \ref{dist}, we can define a ring of tropical polynomials \cite{glicksberg1959convolution}.
We summarise the operations that we have:
\begin{enumerate}
\item Scalar multiplication $\tilde{F}(z) \rightarrow \beta\tilde{F}(z)$, $ \beta \in \mathbb R$.
\item Inverse mixing $\tilde{F}_1(z) \otimes \tilde{F}_2(z)$, together with $\otimes$ powers and monomials/polynomials.
\item Maximum and minimum of $\tilde{F}_1(z)$ and $\tilde{F}_2(z)$, denoted $\vee$ and $\wedge$, respectively.
Note that $\vee$ is written $\oplus$ when discussing the ring; a min-plus algebra can be defined analogously using $\wedge$.
\item The one-dimensional DR from independent pairs of random variables $(\tilde{F}_1,\tilde{F}_2)$, where $\tilde{F}_1$ and $\tilde{F}_2$ can come from different distributions.
\item Convolutions $\tilde{F}_1(z) * \tilde{F}_2(z)$. This refers to the DR cdf of the sum of independent random variables $X_1 \sim f_1(x)$ and $X_2 \sim f_2(x)$.
\end{enumerate}
Further natural developments using ring concepts such as ideals are the subject of further work. In fact, convolutions themselves form a semi-group, but we do not delve into the relationship between our ring and that semi-group. It is instructive to work over the binary field so that we do not have to use full scales from $\mathbb R$, but only $\{0,1\}$. This also has the advantage that, in every polynomial, we have proper pdfs and cdfs. An analogy is Boolean algebra. In the next example we illustrate these operations to demonstrate the complexity that can arise from a single distribution.
\begin{example} \label{sec6:example_exp}
Let $X_1 \sim \exp\{-x_1\}, $ and $X_2\sim \exp\{-(x_1+x_2) \}$ with $x_1, x_2>0$. The DRs are given by Equation (\ref{eq:DRM_mult_exp}), from which the DR cdfs are
\begin{eqnarray}
\tilde{F}_1(z) & = & 1-e^{-z}, \\
\tilde{F}_2(z) & = & 1-(1+\sqrt{2z})e^{-\sqrt{2z}}.
\end{eqnarray}
To compute $\tilde{F}_3(z) = \otimes^2 \tilde{F}_1(z)$, we first calculate the inverse mixing of $\tilde{f}_1(z)$ with itself, taking $\alpha=1/2$,
\begin{align}
\tilde{f}_3(z) = \big(-2\log(2z) \big)^{(-1)}=\frac{1}{2}e^{-z/2},
\end{align}
with the corresponding cdf,
\begin{equation}
\tilde{F}_3(z)=1-e^{-\frac{z}{2}}.
\end{equation}
Similarly, to compute $\tilde{F}_4(z)=\otimes^2 \tilde{F}_2(z)$, we first obtain the two solutions $\tilde{f}_4(z)=\frac{1}{2}e^{-\sqrt{z}}$ and $\tilde{f}_4(z)=\frac{1}{2}e^{\sqrt{z}}$. As the second solution does not satisfy the definition of DRs, we derive the cdf from the first:
\begin{equation}
\tilde{F}_4(z) = 1-(1+\sqrt{z})e^{-\sqrt{z}}.
\end{equation}
Note that the following relationships between DR cdfs hold:
$ \tilde{F}_3(z) = \tilde{F}_1 (z/2 ) $ and $ \tilde{F}_4(z) = \tilde{F}_2(z/2 )$. Finally, we compute the DR cdf for the convolution of two univariate standard exponential random variables, i.e., $X_3=X_1+X_1'$, where $X_1'$ is an independent copy of $X_1$, with pdf $f_3(x) = x\exp\{-x\}$. To obtain $\tilde{F}_5(z) = \tilde{F}_1(z) * \tilde{F}_1(z)$, we employ the slice method introduced in Example \ref{example|_beta}. We have
\begin{equation}
\tilde{F}_5(z) = \exp\Big\{-\frac{z}{e^z-1}\Big\} - \exp\Big\{-\frac{ze^z}{e^z-1} \Big\}.
\end{equation}
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 0.4\textwidth]{PlotExample10.pdf}
\caption{Example \ref{sec6:example_exp}. DR cdfs for various operations on $X_1$ and $X_2$.}
\label{fig:PlotExample10}
\end{center}
\end{figure}
Figure \ref{fig:PlotExample10} illustrates the relationships between DR cdfs and we observe that
$ \tilde{F}_4(z)\preceq\tilde{F}_2(z)\preceq \tilde{F}_1(z),$ $ \tilde{F}_4(z) \preceq\tilde{F}_3(z)\preceq \tilde{F}_1(z),$ and
$ \tilde{F}_4(z)\preceq\tilde{F}_5(z)\preceq \tilde{F}_1(z)$,
while there are no orderings between $\tilde{F}_2(z), \tilde{F}_3(z)$ and $\tilde{F}_5(z)$. Figure \ref{fig:PlotExample10_2} shows that under $\vee$ and $\wedge$, we have the following sets of inequalities
\begin{align}\label{lattice_eqn}
\begin{array}{cc}
& \tilde{F}_4(z)\preceq \tilde{F}_2(z)\wedge \tilde{F}_3(z)\preceq \tilde{F}_2(z)\vee \tilde{F}_3(z) \preceq \tilde{F}_1(z),\\
& \tilde{F}_4(z)\preceq \tilde{F}_3(z)\wedge \tilde{F}_5(z)\preceq \tilde{F}_3(z)\vee \tilde{F}_5(z) \preceq \tilde{F}_1(z),\\
& \tilde{F}_4(z)\preceq \tilde{F}_2(z)\wedge \tilde{F}_5(z)\preceq \tilde{F}_2(z)\vee \tilde{F}_5(z) \preceq \tilde{F}_1(z),
\end{array}
\end{align}
from which the full lattice can be formed.
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 1\textwidth]{PlotExample10_2.pdf}
\caption{Example \ref{sec6:example_exp}. DR cdfs of Equation \eqref{lattice_eqn}.}
\label{fig:PlotExample10_2}
\end{center}
\end{figure}
\end{example}
Given the equivalence relation (B2) in Equation \eqref{equiv_convex}, we can study how the above structures affect the manipulation of uncertainty measured by a single metric. Since $H(f(x)) = \int_{0}^{\infty} h(\tilde{f}(z)) dz$, without loss of generality we can consider $H$ as a functional of the DR, with the advantage that the operations $\otimes,\oplus$ can be applied. If we are able, either theoretically or computationally, to show that the ordering holds, then this strong condition deserves to be a candidate for a universal version of what we may mean by ``more certain'' or ``more uncertain''. We have seen with the above examples that deciding whether two distributions can be compared according to $\preceq$ may not be an easy computation: it relies on the difference of the DR cdfs not having a zero on $[0,\infty)$. What we term the ``ring'' above concerns equalities, not inequalities. The relationship between these equalities and the partial ordering $\preceq$ requires a fuller development. With $\vee$ and $\wedge$, the situation is clearer, but with other operations this is not the case and may depend radically on the distributional family concerned. We now present a practical situation in which the results above can be used.
\begin{example}
Consider $2$ horseraces each with a different number of horses $n_i$ ranked by a punter (or bookmaker) in order of their probability of winning:
$p_{(i,1)} \geq \cdots \geq p_{(i,n_i)}$, such that $\sum_{j=1}^{n_i}p_{(i,j)}=1$. If the two sets of horses are to be combined into a single race, the issue then is how to combine the probabilities. Dividing each probability by two, combining and ranking them, i.e., $\tilde{F}_1(z) \otimes \tilde{F}_2(z)$, is inverse mixing with $\alpha = \frac{1}{2}$. One can imagine some effect which may lead to having $\alpha \neq \frac{1}{2}$, such as the track being wet and not suited to one set of horses.
Consider the case of a single race with two punters that rank the horses in the same order, but with different probabilities: $p_{(1,1)}\geq \ldots \geq p_{(1,n_1)}$ and $q_{(1,1)} \geq \ldots \geq q_{(1,n_1)}$. To define a joint betting strategy, a set of odds combining each of the individuals' odds could take different approaches: an optimistic, more certain, approach with $\tilde{F}_1(z) \vee \tilde{F}_2(z)$, or a more pessimistic, uncertain, approach with $ \tilde{F}_1(z) \wedge \tilde{F}_2(z)$. This argument is predicated on the rank order being the same; otherwise the same horse may appear twice in the min or max ordering. The min or max may then refer to a kind of hypothetical race. Nonetheless, we suggest that they are useful notionally. The same issue arises if one considers an average of the two actual probabilities, direct mixing,
$\frac{1}{2}(p_{(1,1)} +q_{(1,1)} )\geq \ldots \geq\frac{1}{2}(p_{(1,n_1)}+q_{(1, n_1)}),$ which corresponds to taking $\frac{1}{2}(\tilde{F}_1(z) + \tilde{F}_2(z))$.
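As a minimal numerical sketch of the three combinations just discussed (the probability vectors below are hypothetical and chosen only for illustration):
\begin{verbatim}
import numpy as np

# Hypothetical ranked win probabilities from two punters on the same race.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.6, 0.3, 0.1])

direct = 0.5 * (p + q)                  # direct mixing (same rank order)
# Inverse mixing for merging two races: halve, pool and re-rank.
inverse_mix = np.sort(np.concatenate([p / 2, q / 2]))[::-1]

# Optimistic / pessimistic combinations act on the DR cdfs (partial sums).
F_p, F_q = np.cumsum(p), np.cumsum(q)   # p and q are already ranked
optimistic = np.maximum(F_p, F_q)       # corresponds to F_1 v F_2
pessimistic = np.minimum(F_p, F_q)      # corresponds to F_1 ^ F_2
\end{verbatim}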
\end{example}
\section{Empirical decreasing rearrangements}\label{sec:empirical}
We present two approaches for deriving the empirical DR and its associated cdf for the analysis of an experimental data set. In Section \ref{subsec:Discrete_Climate}, we assess the uncertainties associated with climate projections in two dimensions. In Section \ref{subsec:Cont_Heat}, we present two algorithms to obtain approximations for $\tilde{f}(z)$ and $\tilde{F}(z)$ for data sets in higher dimensions, and apply these to energy systems planning in Section \ref{subsubsec:DHE}.
\subsection{Climate projections}\label{subsec:Discrete_Climate}
The 2018 UK climate projections \cite{UKCP18} considered four different scenarios, called Representative Concentration Pathways (RCP), for greenhouse gas concentrations in the atmosphere.
These scenarios contain a range of inputs that reflect socio-economic change, technological change, energy use and emissions of greenhouse gases and air pollutants, which are used to study the impact on climate through to the year 2100. We consider two variables: (i) the increase in mean air temperature at a height of 1.5 m, and (ii) the percentage increase in precipitation, where each variable is relative to the baseline period of 1981-2010.
The projections illustrated in Figure~\ref{fig:climate_scatter} correspond to mean daily values over the period 2050-2079. The data are discretised into twenty categories, with the temperature anomaly divided into five categories and the precipitation into four. From this, the probabilities are ordered, and we obtain the empirical DR cdfs in the left panel of Figure \ref{fig:climate_example_pdf_cdf2}. Observing that the cdf for RCP2.6 majorises that of RCP8.5, we conclude that RCP8.5 is more uncertain than RCP2.6. The maximum and minimum of the cdfs are given in the right panel, and RCP2.6 carries the lowest level of uncertainty among the considered scenarios since its cdf corresponds to $\tilde{F}_1(z) \lor\tilde{F}_2(z)\lor\tilde{F}_3(z)\lor\tilde{F}_4(z)$, where the subscript indicates the scenario. In contrast, RCP8.5 carries the most uncertainty, as its cdf corresponds to $\tilde{F}_1(z) \land\tilde{F}_2(z)\land\tilde{F}_3(z)\land\tilde{F}_4(z)$.
In this analysis, majorisation identifies one comparison of universal uncertainty, illustrating that scenarios are associated with different levels of uncertainty. In addition, transforming the climate projection data into DR cdfs allows us to visualise the uncertainty associated with RCP scenarios and is therefore a tool for the communication of uncertainty in settings with decision makers and stakeholders \cite{gov_toolkit}.
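For discrete data such as these, the empirical DR cdf is simply the vector of partial sums of the category probabilities sorted in decreasing order. A minimal Python sketch (the probability vector below is hypothetical; the analysis above uses the UKCP18 category probabilities):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(20))   # hypothetical 20-category probabilities
p_dr = np.sort(p)[::-1]          # decreasing rearrangement
F_dr = np.cumsum(p_dr)           # empirical DR cdf at z = 1, ..., 20
# Scenario A majorises scenario B iff F_A >= F_B pointwise,
# in which case A is the more certain scenario.
\end{verbatim}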
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.55\textwidth]{climate_scatterb.png}
\vspace{-1em}
\end{center}
\caption{Projections of mean daily values over the period 2050-2079 \cite{UKCP18}. Each point represents an ensemble member and each colour represents a different RCP.}
\label{fig:climate_scatter}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.75\textwidth]{climate_example_pdf_cdf2.pdf}
\vspace{-1em}
\end{center}
\caption{\textit{Left panel}: Empirical DR cdf for each RCP scenario. \textit{Right panel}: the representation of $\tilde{F}_1(z) \lor\tilde{F}_2(z)\lor\tilde{F}_3(z)\lor\tilde{F}_4(z)$ and $\tilde{F}_1(z) \land\tilde{F}_2(z)\land\tilde{F}_3(z)\land\tilde{F}_4(z)$.}
\label{fig:climate_example_pdf_cdf2}
\end{figure}
\subsection{Majorisation in higher dimensions}\label{subsec:Cont_Heat}
Consider a data set $x_{ij}$ with data points $ i=1, \dots, m$ and dimensions $j=1, \dots, n$. To obtain the DR, we require the density function values to construct the measure (distribution) function $m(y)$. Assume the observed data is sampled from a population with unknown pdf $f_X(x_1, \dots, x_n)$, from which we estimate the pdf $\hat{f}_X(x_1, \dots, x_n)$. We employ kernel density estimation (KDE) \cite{Parzen1962} to obtain $\hat{f}_X(\cdot)$ using the \texttt{ks} package in \texttt{R}, which automatically selects the bandwidth parameters \citep{ks2020}. Alternative approaches such as density forests \cite{Criminisi2013} and the $k$-nearest neighbour density estimation algorithm \cite{Bishop2006} would also be appropriate. To obtain empirical DRs, we adopt the two-stage process for $\tilde{f}_{\hat{f}}(z)$ described in Algorithm 1. The first stage involves obtaining the distribution function $m(y)$, which is used in the second stage to derive the DR.
\begin{algorithm}[H]
\SetAlgoLined
Based on data $x_{ij}\in R, i=1, \dots, m$ and $j=1, \dots, n$, fit a pdf $\hat{f}_X(x_1, \dots, x_n)$ using KDE\;
Produce a uniform and/or space-filling set $S$ of size $N$ across the input space $R$, with $s\in S$\;
\For{$y=y_1, \dots, y_M$}{
Derive a set $S_y=\big\{s\in S:\hat{f}_X(s)>y \big\}$ of size $N_y=\vert S_y\vert$\;
Estimate the volume of $S_y$, i.e., $m_{\hat{f}}(y)=\text{Vol}(S_y)$ by the Monte Carlo method\;
}
Plot the estimated measure function values, $m_{\hat{f}}(y)$ against $y$\;
Swap the axes, so that $\tilde{f}_{\hat{f}}(z)$ and $z$ correspond to $y$ and $m_{\hat{f}}(y)$.
\caption{Empirical DR $\tilde{f}_{\hat{f}}(z)$.}
\end{algorithm}
Monte Carlo integration is used to estimate the volume of the domain $S_y$ to derive the measure function $m_{\hat{f}}(y)$. In particular, \cite{Fok1989} proposed specifying another domain $R$ (a hypercube or a hyperplane) of known volume $\text{Vol}(R)$, such that $S_y\subseteq R$. The ratio of the two volumes, $p=\text{Vol}(S_y)/\text{Vol}(R)$, and the volume $\text{Vol}(S_y)$ are estimated by $\hat{p}=N_y/N$ and $\hat{\text{Vol}}(S_y)=\hat{p}\,\text{Vol}(R)$, respectively.
We demonstrate the use of Algorithm 1 by generating a random sample of size $m=200$ from the standard bivariate normal distribution, with DR given in Equation (\ref{eq:DRM_mult_normal}). To apply the algorithm, we produce a uniform sample of points of size $N=2500$ across the domain $R=[-5, 5]\times [-5, 5]$ of $\text{Vol}(R)=10^2$. In the left panel of Figure \ref{fig:EmpDRFinal1} we depict the estimated values of the distribution function $m_{\hat{f}}(y)$ against $y$, and note that the smoothness of the estimated distribution depends on $M$, the number of density cutoffs $y_1, \dots, y_M$; we therefore expect to obtain a smooth representation for large $M$. In the right panel of Figure \ref{fig:EmpDRFinal1} we compare $\tilde{f}_{\hat{f}}(z)$ with $\tilde{f}(z)$ and observe that the empirical DR (red dashed line) overlaps with the DR (blue solid line).
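The demonstration above can be reproduced with a short script. The sketch below follows the steps of Algorithm 1, but uses SciPy's \texttt{gaussian\_kde} in place of the \texttt{ks} package used in our analysis, so the bandwidth selection differs slightly.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

X = rng.standard_normal((2, 200))    # m = 200 bivariate standard normal points
kde = gaussian_kde(X)                # step 1: fit the pdf estimate

N = 2500                             # step 2: uniform set S over R
lo, hi = -5.0, 5.0
vol_R = (hi - lo) ** 2               # Vol(R) = 10^2
S = rng.uniform(lo, hi, size=(2, N))
dens = kde(S)                        # estimated density at each s in S

M = 200                              # density cutoffs y_1, ..., y_M
ys = np.linspace(0.0, dens.max(), M, endpoint=False)
# steps 3-5: m(y) = Vol({f > y}), estimated by the hit ratio N_y / N
m_hat = np.array([(dens > y).mean() * vol_R for y in ys])

# steps 6-7: swapping the axes gives the empirical DR
z, f_dr = m_hat, ys                  # plot f_dr against z
\end{verbatim}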
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.7\textwidth]{EmpDRFinal.pdf}
\vspace{-1em}
\end{center}
\caption{\textit{Left panel}: Estimated measure function $m_{\hat{f}}(y)$. \textit{Right panel}:
$\tilde{f}(z)$ (blue line) and $\tilde{f}_{\hat{f}}(z)$ (red dashed line) .}
\label{fig:EmpDRFinal1}
\end{figure}
\begin{algorithm}[h]
\SetAlgoLined
Specify an equally spaced vector $\boldsymbol{z}^*=(z_1^*, z_2^*, \dots, z_l^*)$\;
Fit a linear interpolator (spline) through $\{z_i, \tilde{f}_{\hat{f}}(z_i)\}_{i=1}^M$ (these values were derived in Algorithm 1) to obtain values of $\tilde{f}_{\hat{f}}(z_i^*), i=1, \dots, l$\;
\For{$z_i^*, i=1, \dots, l-1$}{
Estimate probability values $P(z_i^*<z<z_{i+1}^*)$ by numerical integration\;
Obtain values $\tilde{F}_{\hat{f}}(z_i^*)=\frac{\sum_{k=1}^{i-1}P(z_k^*<z<z_{k+1}^*)}{\sum_{k=1}^{l-1}P(z_k^*<z<z_{k+1}^*)}$\;
}
Plot $\tilde{F}_{\hat{f}}(z^*)$ against $z^*$.
\caption{Empirical DR cdf $\tilde{F}_{\hat{f}}(z)$.}
\end{algorithm}
We present Algorithm 2 for obtaining an empirical cdf of the DR, an approximation to $\tilde{F}(z)$, denoted by $\tilde{F}_{\hat{f}}(z)$. The weighting of computed probabilities by
$(\sum_{k=1}^{l-1}P(z_k^*<z<z_{k+1}^*))^{-1}$
comes from the assumption that $z$ is upper bounded and we can only compute probabilities at the values specified in $\boldsymbol{z}^*$. Therefore, we expect $\sum_{k=1}^{l-1}P(z_k^*<z<z_{k+1}^*)=1$; however, we tend to observe that this sum is slightly less than one due to errors introduced by numerical integration. In Figure \ref{fig:CDFDR2D} we apply the algorithm to the bivariate data set used previously in this section. The closed form expression for $\tilde{F}(z)$ is
$ \tilde{F}(z)=1-\exp \{-\frac{z}{2\pi} \}$.
From the right panel, it can be seen that the empirical cdf $\tilde{F}_{\hat{f}}(z)$ is an accurate representation of $\tilde{F}(z)$.
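Continuing the Python sketch given after Algorithm 1, Algorithm 2 amounts to interpolating the $(z, \tilde{f}_{\hat{f}}(z))$ pairs, integrating numerically and normalising; here the trapezoidal rule stands in for a generic numerical integrator.
\begin{verbatim}
import numpy as np

# (z, f_dr) as produced by the Algorithm 1 sketch above.
order = np.argsort(z)
zs, fs = z[order], f_dr[order]
z_star = np.linspace(zs.min(), zs.max(), 400)   # equally spaced z*
f_star = np.interp(z_star, zs, fs)              # linear interpolation
# P(z_i* < z < z_{i+1}*) by the trapezoidal rule, then normalise:
probs = 0.5 * (f_star[1:] + f_star[:-1]) * np.diff(z_star)
F_emp = np.cumsum(probs) / probs.sum()          # empirical DR cdf
F_true = 1 - np.exp(-z_star[1:] / (2 * np.pi))  # closed form for this example
print(np.max(np.abs(F_emp - F_true)))           # small for large m, N and M
\end{verbatim}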
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.7\textwidth]{CDFModifiedScheme2D.pdf}
\vspace{-1em}
\end{center}
\caption{\textit{Left panel}: binned probability representation of the empirical DR, $\tilde{f}_{\hat{f}}(z)$, obtained as part of Algorithm 2. \textit{Right panel}: empirical DR cdf, $\tilde{F}_{\hat{f}}(z)$ obtained from Algorithm 2. The blue solid line and red dashed line correspond to $\tilde{F}(z)$ and $\tilde{F}_{\hat{f}}(z)$ respectively.}
\label{fig:CDFDR2D}
\end{figure}
\subsection{Energy systems planning}
\label{subsubsec:DHE}
We compare the uncertainty associated with three potential design options for supplying heat to a residential area in Brunswick, Germany, considered as part of EU project ReUseHeat (Recovering urban excess heat) \cite{REUSEHEAT}. District heating networks allow heat from a centralised source to be distributed to buildings through a network of insulated pipes \cite{werner2013district}, and the primary objective of the project is to demonstrate the use of low temperature sources of heat in these networks.
The city's existing district heating network is powered by a Combined Heat and Power (CHP) plant, which uses natural gas as a fuel and outputs both heat for use in the network and electricity. The network in the newly constructed area of interest will be connected to the CHP and, in addition, there is an option to use excess heat from a nearby data centre to provide at least some of the heat to the district. Excess heat from a data centre is a low temperature source which requires an electric heat pump to ``upgrade'' the temperature before being suitable for use in the system.
We are interested in the uncertainty ordering for three heating design options: (1) CHP, (2) CHP and Heat Pump, (3) Heat Pump; considering the two variables: Net Present Cost (NPC) and $\text{CO}_2$-equivalent emissions (in metric tonnes). Using an energy systems simulation (OSeMOSYS \cite{Howells2011}), we produce predicted outputs for these variables. We define three scenarios by varying a number of inputs to the simulations, in particular elements of government climate policy and consumer engagement with green technology. These are shown in Table \ref{tab:Scenarios}. We refer to Volodina \emph{et al.} \cite{Volodina2020} for further details.
\begin{table}[h!]
\caption{District heating study scenarios \cite{Volodina2020}.}
\begin{center}
\begin{small}
\begin{tabular}{c|lll}
\midrule
\textbf{Scenario} & \textbf{Emission Penalty} & \textbf{Consumer demand} & \textbf{Commodity prices}\\
Green & 100\euro/metric tonne & -1\% annual change &$\uparrow$ gas, $\downarrow$ electricity\\
Neutral & 40\euro/metric tonne &small fluctuations & small fluctuations\\
Market & no penalty & +1\% annual change & $\downarrow$ gas, $\uparrow$ electricity\\
\midrule
\end{tabular}
\end{small}
\end{center}
\label{tab:Scenarios}
\vspace{-1em}
\end{table}
Figure \ref{fig:ScatterPlot} shows the distribution of points produced for each design option and scenario. Design option 1 is associated with the highest level of emissions due to the use of natural gas, while design option 3 has the lowest emissions levels but has the highest costs.
\FloatBarrier
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.55\textwidth]{ScatterPlot.pdf}
\end{center}
\caption{Net Present Costs against carbon emissions for each design option and scenario. }
\label{fig:ScatterPlot}
\end{figure}
\FloatBarrier
Employing Algorithms 1 and 2 on the model outputs, we obtain $\tilde{f}_{\hat{f}}(z)$ and $\tilde{F}_{\hat{f}}(z)$. To apply equal importance to both outputs, we scale the data to $[0, 1]$ and generate a uniform set $S$ across $[0, 1]\times [0, 1]$ of size $N=2500$. To produce a smooth representation of the DR and its cdf, we set $M=5000$.
\FloatBarrier
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1\textwidth]{ScenarioPlotHeat.pdf}
\vspace{-2em}
\end{center}
\caption{Empirical DR cdfs $\tilde{F}_{\hat{f}}(z)$ for all three design options plotted together for each individual scenario.}
\label{fig:CDF_HeatExample}
\end{figure}
\FloatBarrier
Plots of the empirical DR cdfs $\tilde{F}_{\hat{f}}(z)$ are shown in Figure \ref{fig:CDF_HeatExample}. A feature here is that, under the green and neutral scenarios, the cdf for design option 3 lies above that for design option 2, which lies above that for design option~1, whilst, under the market scenario, the difference of the DR cdfs for design options 1 and 2 has a zero, which indicates that the two distributions cannot be compared according to $\preceq$. We conclude that, under all three scenarios, the (unknown) distribution function associated with design option 3 majorises the cdfs for both design options 1 and 2. Therefore, for the outputs considered, design option 3 is less uncertain (more robust) than the alternatives.
Table \ref{tab:entropies} provides values of Shannon and Tsallis entropies computed using the DR pdfs for each design option under the three scenarios. We observe that the total orderings imposed by the entropies on the distribution functions are in agreement with the majorisation orderings in Figure \ref{fig:CDF_HeatExample} under the green and neutral scenarios; this result is supported by condition (B2) in Section \ref{sec:cont_major}. Under the market scenario, the two entropy measures provide total orderings that differ from each other. In particular, the lowest Shannon entropy is obtained for design option 3, followed by design option 1 and design option 2, whereas, for Tsallis entropy, the value for design option 2 is lower than that for design option 1.
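For reference, such entropies can be computed directly from the DR, since $H(f(x)) = \int_{0}^{\infty} h(\tilde{f}(z))\,dz$; taking $h(u) = -u\log u$ gives the Shannon case. A minimal sketch for the standard exponential DR, where the answer is $1$:
\begin{verbatim}
import numpy as np

z = np.linspace(1e-8, 50.0, 200001)
f = np.exp(-z)                    # the standard exponential DR pdf
h = -f * np.log(f)                # h(u) = -u log(u), the Shannon case
H = np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(z))   # trapezoidal rule
print(H)                          # ~1, the Shannon entropy of Exp(1)
\end{verbatim}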
\begin{table}[h!]
\caption{Entropies computed using DR pdfs for each design option under three scenarios.}
\begin{center}
\begin{small}
\begin{tabular}{c|lll|lll}
& \multicolumn{3} {l|}{\textbf{Shannon entropy}}& \multicolumn{3} {l}{\textbf{Tsallis entropy with $\gamma = 1$ }}\\
\midrule
\textbf{Option} & \textbf{Green} & \textbf{Neutral} & \textbf{Market}& \textbf{Green} & \textbf{Neutral} & \textbf{Market}\\
design 1 & 7.45 & 7.28 & 6.60 & 0.927 & 0.944 & 0.931\\
design 2 & 6.49 & 6.62 & 6.80 & 0.920 & 0.922 & 0.928\\
design 3 & 5.84 & 5.97 & 6.11 & 0.904 & 0.902 & 0.896\\
\midrule
\end{tabular}
\end{small}
\end{center}
\label{tab:entropies}
\vspace{-1em}
\end{table}
We now demonstrate the uncertainty tools from Section \ref{sec:algebra} in order to combine the uncertainty under different scenarios and produce orderings of the design options. In particular, under each design option, we find the maximum of the empirical cdfs associated with individual scenarios to obtain an approximation to $\tilde{F}_1(z)\lor\tilde{F}_2(z)\lor\tilde{F}_3(z)$. This is shown in the left panel of Figure \ref{fig:maxminDR} and can be considered to represent an optimistic (more certain) approach. We find that design option 3 majorises the other design options.
\FloatBarrier
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.7\textwidth]{maxminDR.pdf}
\vspace{-1em}
\end{center}
\caption{\textit{Left panel}: representation of $\max(\tilde{F}_1(z), \tilde{F}_2(z), \tilde{F}_3(z))$. \textit{Right panel}: representation of $\min(\tilde{F}_1(z), \tilde{F}_2(z), \tilde{F}_3(z))$.}
\label{fig:maxminDR}
\end{figure}
\FloatBarrier
We also produce an approximation to $\tilde{F}_1(z)\land\tilde{F}_2(z)\land\tilde{F}_3(z)$, which corresponds to the pessimistic (less certain) approach. The results are shown in the right panel of Figure \ref{fig:maxminDR} in which we obtain the minimum of the empirical cdfs associated with individual scenarios. In this case, we observe a clear ordering between design options: design option 3 majorises design option 2, which majorises design option 1. Under both the pessimistic and optimistic outlooks, we conclude that design option 3 is less uncertain than the two alternatives.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=1\textwidth]{HeatInverseMixing.pdf}
\vspace{-2em}
\end{center}
\caption{Cdfs from inverse mixing with different weightings on each scenario. \textit{Left panel}: equal weights on each scenario. \textit{Central panel}: $\alpha_{G}=0.7$, $\alpha_{N}=0.2$ and $\alpha_{M}=0.1$. \textit{Right panel}: $\alpha_{G}=0.05$, $\alpha_{N}=0.05$ and $\alpha_{M}=0.9$.}
\label{fig:HeatInverseMixing}
\end{figure}
In practice, the proposed uncertainty tools provide experts and analysts with additional ways to express their expert judgements. For instance, the weights in inverse mixing represent the probabilities of the occurrence of each scenario. Let $\alpha_{G}, \alpha_{N}$ and $\alpha_{M}$ be the weights applied to the Green, Neutral and Market scenarios, respectively. We consider three cases: (i) equal weights, (ii) $\alpha_G=0.7$, $\alpha_M=0.15$ and $\alpha_N=0.15$ and (iii) $\alpha_G=0.05$, $\alpha_M=0.9$ and $\alpha_N=0.05$. The cdf from inverse mixing for each of these cases is shown in Figure \ref{fig:HeatInverseMixing}. In cases (i) and (ii), there are clear orderings in which the cdf of design option 3 lies above the cdf of design option 2, which lies above that of design option 1. In case (iii), there is no ordering between the empirical cdfs for design options 1 and 2, since they cross; the cdf associated with design option 3 nevertheless majorises the cdfs for both design options 1 and 2, and we conclude that design option 3 is the least risky option in all three cases. It is important to note that, whilst the above results provide useful guidance for comparing uncertainty, the uncertainty is only one aspect of such decisions, and one would want to take into account the actual costs and carbon emissions (rather than just their variability) in each case. Nevertheless, we have demonstrated majorisation to be an intuitive approach to comparing uncertainty and ultimately aiding informed decisions in such settings.
\section{Concluding remarks}\label{sec:conclusion}
The concept of uncertainty is the subject of much discussion, particularly at the technical interface between scientific modelling and statistics. We suggest that majorisation, which only compares the rank order of probability mass, continuous or discrete, provides a valuable formulation of uncertainty. We have shown that any two distributions, discrete or continuous and of any dimension, can be brought onto a common scale for comparison, and consider this to be a principal contribution of the paper.
We demonstrated this approach to assessing uncertainty with examples from well-known distributions and in applications to climate projections and energy systems. The algorithms are straightforward and were introduced to enhance the understanding of the concept of majorisation. The idea presented is that a candidate for a wider framework is a stochastic ordering for which most, if not all, types of entropy are order-preserving.
When distributions cannot be compared with respect to the majorisation ordering, questions of relative uncertainty are unanswerable; when they can be compared, the conclusion is correspondingly stronger, precisely because the ordering is a partial one that holds only for some pairs. We believe that this strong condition deserves to be a candidate for a universal version of what is meant by `more certain' or `more uncertain.'
Extensions to our approach to uncertainty include: developing computationally efficient and scalable algorithms to perform empirical decreasing rearrangements; using majorisation in sensitivity analysis, that is, the study of the propagation of variability through systems from input to output; further exploring the properties of the uncertainty ring and lattice and the connection between the two algebraic structures.
\section*{Acknowledgements}
We would like to thank Chris Dent (Edinburgh), Jim Smith (Warwick) and Peter Challenor (Exeter) for their senior support. Authors three and four acknowledge the EU grant ReUseHeat, ID: 767429, conducted under H2020-EU 3.3.1.
\bibliographystyle{plain}
\section{Introduction}
The dimensionality of a quantum system is crucial for its ability to perform quantum information processing tasks. For example, the security of some protocols for quantum key distribution and randomness expansion depends on the presumed dimensionality of the underlying physical system. The dimensionality also plays a crucial role in device characterisation tasks. Moreover, non-classical phenomena such as Kochen-Specker contextuality are known to require quantum systems of dimension at least three \cite{Kochen:1967JMM}. Therefore, it is of fundamental importance to have efficient tools to determine, for any experimental setup, the dimensionality of the underlying Hilbert space on which the measurement operators act.
There are several approaches to tackle this problem. One of them is known as {\em self-testing} \cite{Yao_self}. The idea of self-testing is to identify a unique equivalence class of configurations corresponding to the extremal quantum violation of a Bell inequality. The members of the equivalence class are related via some fixed local isometry. The dimension of the individual quantum system can be lower bounded by identifying the equivalence class of configurations attaining the optimality \cite{Yao_self}. Though initially proposed in the setting of Bell non-locality, the idea of self-testing has been extended to prepare-and-measure scenarios, contextuality, and quantum steering \cite{tavakoli2018self, BRVWCK19, bharti2019local,vsupic2016self,shrotriya2020self}. For a review of self-testing, we refer to \cite{vsupic2019self}. It is important to stress that only extremal points of the quantum set of correlations that can be attained via finite-dimensional configurations admit self-testing \cite{goh2018geometry}.
The second approach is {\em tomography}. Quantum tomography is a process via which the description of a quantum state is obtained by performing measurements on an ensemble of identical quantum states. For quantum systems of dimension $d$, estimating an unknown quantum state to an error $\epsilon$ (in $l_1$ norm) requires $\Theta \left(d^2 \epsilon^{-2}\right)$ copies of the state \cite{OW17}. One drawback of this approach is that it requires prior knowledge of the dimensionality of the system.
The third approach is {\em dimension witnesses} \cite{brunner_testing_2008}. This is the approach we will focus on in this paper.
The goal of a dimension witness is to provide a lower bound on the dimensionality of the underlying physical system based on the experimental statistics. For example, a quantum dimension witness is a quantity that can be computed from the input-output correlations and whose value gives a lower bound on the dimension of the Hilbert space needed to accommodate the density matrices and the measurement operators needed to produce such correlations. Dimension witnesses have been investigated for the following types of scenarios:
\begin{enumerate}
\item \label{type1} \textbf{Bell scenarios:} Here, quantum dimension witnesses are based on the observation that certain bipartite Bell non-local correlations are impossible to produce with quantum systems of local dimension $d$ (and thus global dimension $d^2$) or less, implying that the experimental observation of these correlations certifies that the quantum local dimension is at least $d+1$ \cite{brunner_testing_2008,vertesi_bounding_2009,brunner_dimension_2013}. There are dimension witnesses of this type for arbitrarily high quantum local dimension $d$ \cite{brunner_testing_2008}, but they require preparing entangled states of dimension $d^2$ and conditions of spatial separation that do not occur naturally in quantum computers. This approach to dimension witnessing is related to self-testing based on Bell non-local correlations \cite{Yao_self}. A Bell dimension witness certifies the minimum quantum dimension accessed by the measurement devices acting on the physical systems prepared by a single source.
\item \label{type2} \textbf{Prepare-and-measure scenarios:} These scenarios consist of $p$ different preparation sources and $m$ measurements acting on the physical systems emitted by those sources. Prepare-and-measure dimension witnesses require $p > d+1$ preparations to certify classical or quantum dimension $d$ \cite{wehner2008lower,gallego_device-independent_2010}. They have been used to experimentally certify in a device-independent way small classical and quantum dimensions \cite{hendrych_experimental_2012,ahrens2012experimental,d2014device}. A prepare-and-measure dimension witness certifies the minimum classical or quantum dimension spanned by the $p$ preparation sources and the $m$ measurements.
\item \label{type3} \textbf{Kochen-Specker contextuality scenarios:} They consist of a single state preparation followed by a sequence of compatible ideal measurements chosen from a fixed set. Two measurements are compatible (or jointly measurable) when there is a third measurement that works as a refinement for both of them, so each of them can be measured by coarse graining the third measurement and thus both of them can be jointly measured. A measurement is ideal when it yields the same outcome when repeated on the same physical system and does not disturb any compatible measurement. Checking experimentally that a set of measurements are ideal and have certain relations of compatibility can be done from the input-output correlations \cite{LMZNCAH18}. Correlations between the outcomes of ideal measurements are Kochen-Specker contextual when they cannot be reproduced with models in which measurements have predetermined context-independent outcomes~\cite{cabello2008experimentally,KCBS}. Quantum Kochen-Specker contextuality dimension witnesses are based on the observation that certain Kochen-Specker contextual correlations are impossible to produce with quantum systems of dimension $d$ or less, implying that their experimental observation certifies a local quantum dimension of at least $d+1$. The problem of contextuality dimension witnesses is that they require testing in addition that the measurements are ideal and satisfy certain relations of compatibility. A {\em state-dependent} contextuality dimension witness certifies the minimum quantum dimension accessed by the measurement devices acting on the physical systems prepared by a single source. In a {\em state-independent} contextuality scenario, these measurements form a state-independent contextuality set in dimension $d$, defined as one for which the quantum predictions for sequences of compatible measurements for any quantum state in dimension $d$ cannot be reproduced by non-contextual models \cite{cabello2015necessary}. The minimum quantum dimension for contextual correlations has been studied in~\cite{GBCKL14}. A state-independent Kochen-Specker contextuality dimension witness certifies the minimum quantum dimension accessed by the measurement devices, without relating the conclusion to any particular source.
\end{enumerate}
In this paper, we introduce a novel graph-theoretic approach to quantum dimension witnessing. We deal with abstract structures of measurement events produced for one preparation and several measurements, as is the case in Kochen-Specker contextuality and Bell scenarios. This means that our approach will always work in Kochen-Specker contextuality scenarios and sometimes in specific Bell scenarios.
Our approach is, first, based on the observation that the problem of finding dimension witnesses can be reformulated as the problem of finding correlations for structures of exclusivity which are impossible to produce with systems of quantum dimension $d$ or less, implying that its experimental observation certifies a quantum dimension of at least $d+1$. Second, it is based on the observation that, given a set of events and their relations of mutual exclusivity, the sets of correlations allowed in quantum theory are connected to well-known and easy to characterize invariants and sets in graph theory \cite{CSW}. In fact, the power of the graph-theoretic approach to dimension witnessing is based on three pillars:
\begin{itemize}
\item The connection between correlations for structures of exclusivity and easy to characterize sets in graph theory. This connection allows us to use tools and results of graph theory for quantum graph dimension witnessing.
\item The observation that finding dimension witnesses in scenarios with many measurements is difficult, due to the difficulty of fully characterizing in these scenarios the sets of correlations that cannot be achieved with a given dimension. In contrast, the graph approach allows us to rapidly identify structures of exclusivity that have dimension witnesses, even though many of them correspond to scenarios with many measurements.
\item The connection between abstract structures of exclusivity and some specific contextuality scenarios (those consisting of dichotomic measurements having a structure of compatibility isomorphic to the structure of exclusivity). This assures that any quantum dimension witness for a graph of exclusivity always admits a physical realization in {\em some} Kochen-Specker contextuality scenario. Moreover, by imposing extra constraints, we can find, in principle, those dimension witnesses that also admit physical realizations in a {\em specific} Kochen-Specker contextuality or Bell scenario.
\end{itemize}
The paper is organized as follows. In Sec.~\ref{notation_context} we introduce some standard definitions of graph theory and the graph-theoretic approach to correlations. In Sec.~\ref{sec2}, we use this graph-theoretic approach to study quantum dimension witnesses. Specifically, in Subsec.~\ref{heuristics}, we present a heuristic technique to compute a lower bound on the $d$-dimensional restricted quantum value and find the corresponding $d$-dimensional quantum realisations. We illustrate the usefulness of this tool with some examples.
In Subsec.~\ref{sec4Qites}, we introduce a family of graphs, which we call the $k$-Qite family, and show that their elements are relatively simple quantum dimension witnesses for any dimension $k \geq 3$. Finally, in Sec.~\ref{disc}, we conclude by listing future directions for research.
Most of the notation used in the paper is self-explanatory. A graph describes relationships between several entities or vertices. We denote an edge between two vertices $i$ and $j$ by the symbol $i \sim j$. A commonly studied class of graphs is the cycles on $n$ vertices, which we denote by $C_n$. The work also uses semidefinite programming, where the symbol $\mathcal{S}_+^{n}$ denotes the class of positive semidefinite Hermitian matrices of size $n \times n$.
\section{Graph theoretic approach to contextuality}
\label{notation_context}
Consider an experiment in the black-box setting.
An outcome $a$ and its associated measurement $M$ are together called a measurement event and denoted as $(a|M)$.
\begin{definition}(Exclusive event)
Two events $e_{i}$ and $e_{j}$ are defined to be exclusive if there exists a measurement $M$ such that $e_{i}$ and $e_{j}$ correspond to different outcomes of $M,$ i.e. $e_{i}=\left(a_{i} \mid M\right)$ and $e_{j}=\left(a_{j} \mid M\right)$ such that $a_{i} \neq a_{j}.$
\end{definition}
\begin{definition}(Exclusivity graph)
For a family of events $\left\{e_{1}, e_{2} \ldots e_{n}\right\}$ we associate a simple undirected graph, $\mathcal{G}_{\mathrm{ex}}:=(V, E),$ with vertex set $V$ and edge set $E$ such that two vertices $i, j \in V$ share an edge if and only if $e_{i}$ and $e_{j}$ are exclusive events. $\mathcal{G}_{\mathrm{ex}}$ is called an exclusivity graph.
\end{definition}
Now we consider theories that assign probabilities to the events corresponding to its vertices. Concretely, a {\em behaviour} corresponding to $\mathcal{G}_{\mathrm{ex}}$ is a mapping $p: [n]\to [0,1]$, such that $p_i+p_j\le 1$, for all $i\sim j$, where we denote $p(i)$ by $p_i$.
Here, the non-negative scalar $p_i\in [0,1]$ encodes the probability that measurement event $e_i$ occurs. Furthermore, note that two exclusive events $e_i$ and $e_j$ implies the linear constraint $p_i+p_j\le 1$.
A behaviour $p: [n]\to [0,1]$ is {\em deterministic non-contextual} if each $p_i \in \{0,1\}$, with $p_i+p_j \leq 1$ for exclusive events $e_i$ and $e_j$. A {\em deterministic non-contextual} behaviour can be considered as a vector in $\mathbb{R}^n$. The polytope of {\em non-contextual behaviours}, denoted by $\mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$, is the convex hull of all deterministic non-contextual behaviours. The behaviours that do not lie in $\mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$ are called {\em contextual}. It is worthwhile to mention that in combinatorial optimisation one often encounters the {\em stable set} polytope of a graph $G$, $STAB(G)$ (defined below). It is quite easy to see that stable sets of $G$ (subsets of vertices in which no two vertices share an edge) and {\em deterministic non-contextual} behaviours coincide.
\begin{definition}
\[ STAB(G) = \mathrm{conv}\left(\{ x : x \text{ is the characteristic vector of a stable set of } G \}\right)
\]
\end{definition}It thus follows from the definition that $\mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})=STAB(\mathcal{G}_{\mathrm{ex}})$.
Lastly, a behaviour $p: [n]\to [0,1]$ is called {\em quantum} if there exists a quantum state $\ket{\psi}$ and projectors $\Pi_1,\ldots \Pi_n$ acting on a Hilbert space $\mathcal{H}$ such that
\begin{equation} p_i= \bra{\psi}\Pi_i \ket{\psi}, \forall i\in [n] \text{ and } \mathrm{tr}(\Pi_i\Pi_j)=0, \text{ for } i\sim j.\end{equation}
We refer to the ensemble $\ket{\psi}, \{\Pi\}_{i=1}^n$ as a {\em quantum realization} of the behaviour $p$.
The convex set of all quantum behaviours is denoted by $\mathcal{P}_{Q}(\mathcal{G}_{\mathrm{ex}})$. It turns out this set too is a well-studied entity in combinatorial optimisation, namely the {\em theta body}.
\begin{definition}
The theta body of a graph $G=([n],E)$ is defined by:
$${\rm TH}(G)=\{x\in \mathbb{R}^n_+: \exists Y\in \mathbb{S}^{1+n}_+, \ Y_{00}=1, \ Y_{ii}=x_i = Y_{0i} \quad \, \forall i \in [n], \ Y_{ij}=0, \forall (i,j)\in E\}.$$
\end{definition}
The fact that $\mathcal{P}_{Q}(\mathcal{G}_{\mathrm{ex}}) = TH(\mathcal{G}_{\mathrm{ex}})$, was observed in ~\cite{CSW} and follows by taking $d = \ket{\psi}$ and $w_i = \Pi_i \ket{\psi} /\sqrt{\bra{\psi}\Pi_i \ket{\psi}} \ \forall i \in [n]$, in the following lemma.
\begin{lemma}\label{startpoint}
We have that $x\in TH(G)$ iff there exist unit vectors $d,w_1,\ldots,w_n$ such that
\begin{equation}\label{csdcever}
x_i=\langle d,w_i\rangle^2, \forall i\in [n] \text{ and } \langle w_i, w_j\rangle=0, \text{ for } (i,j)\in E.
\end{equation}
\end{lemma}
\begin{proof}Let $x\in {\rm TH}(G)$. By definition, $x$ is the diagonal of a matrix $Y$ satisfying $Y\in \mathbb{S}^{1+n}_+, \ Y_{00}=1, \ Y_{ii}=Y_{0i}, \ Y_{ij}=0, \forall (i,j)\in E$. Let $Y=Gram(d,v_1,\ldots,v_n)$. Define $w_i={v_i\over \|v_i\|}$. Using that $x_i=Y_{ii}=Y_{0i}$ we get that
$$x_i=\langle v_i,v_i\rangle=\langle d,v_i\rangle=\langle d,w_i\|v_i\|\rangle=\|v_i\|\langle d,w_i\rangle.$$
Lastly, note that $\langle d,w_i\rangle=\langle d, {v_i\over \|v_i\|}\rangle={ \langle v_i,v_i\rangle \over \|v_i\|}=\|v_i\|.$ Combining these two equations we get that
$$x_i=\langle d,w_i\rangle^2.$$
\noindent Conversely, let $Y$ be the Gram matrix of $d, \langle d,w_1\rangle w_1,\ldots,\langle d,w_n\rangle w_n$. Note that $\langle d,w_i\rangle w_i$ is the orthogonal projection of $d$ onto the unit vector $w_i$. It is easy to see that $Y$ has all the desired properties.
\end{proof}
\noindent In the above lemma, the vectors $w_i$, for $i \in [n]$, are sometimes referred to as an orthonormal representation (OR) of $G$.
\begin{definition}(orthonormal representation) An orthonormal representation of a graph $G = (V,E)$, is a set of unit vectors $w_i$ for $i \in [|V|]$, such that $\braket{w_i}{w_j} = 0, \text{ for all } (i,j) \in E$.
\end{definition}
\noindent The cost of this orthonormal representation of the graph is defined as $\lambda_{\max}\left( \sum_{i \in [|V|]} \ketbra{w_i}\right)$.
\medskip
Next, we turn our attention to the sum $S = p_1 + p_2 + \cdots + p_n$, where $p \in \mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$ is a {\em non-contextual} behaviour. The set of non-contextual behaviours forms a bounded polyhedron, i.e., a polytope. The facets of the aforementioned polytope define tight non-contextuality inequalities, which correspond to half-spaces. This explains why we are interested in $\sum_i p_i $. The maximum of $S$ over {\em deterministic} behaviours is the same as the maximum of $S$ over {\em non-contextual} behaviours. To see this, let $p \in \mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$ be a maximizer of $S$. We can write $p$ as a convex sum of deterministic behaviours, that is, $p = \sum_j \lambda_j p^{(j)}$, where the $p^{(j)}$ are deterministic behaviours and $\lambda_j > 0, \ \sum_j \lambda_j = 1$. Now, note that the optimal value of $S$ satisfies $S = \sum_j \lambda_j \|p^{(j)}\|_1 \leq \max_j \|p^{(j)}\|_1$. This shows that there always exists a {\em deterministic} behaviour of $\mathcal{G}_{\mathrm{ex}}$ that attains the maximum of $S$. Therefore, the maximum of $S$ for classical theories is the size of the largest stable set of $\mathcal{G}_{\mathrm{ex}}$. This is exactly the independence number of $\mathcal{G}_{\mathrm{ex}}$, denoted by $\alpha(\mathcal{G}_{\mathrm{ex}})$. So we get the inequality $p_1 + p_2 + \cdots + p_n \leq \alpha(\mathcal{G}_{\mathrm{ex}})$.
\begin{definition}(Independence number)
Given a graph $G=(V,E)$, the independence number is the size of the largest subset of vertices $S \subseteq V$ such that no pair of vertices in $S$ is connected. The independence number is denoted by $\alpha(G)$.
\end{definition}
\begin{definition}
A non-contextuality inequality corresponds to a half-space that contains the set of non-contextual behaviours, that is,
\begin{equation}
\sum_{i \in [n]} p_i \leq \alpha(\mathcal{G}_{\mathrm{ex}}),
\end{equation}
for all $p \in \mathcal{P}_{NC}(\mathcal{G}_{\mathrm{ex}})$.
\end{definition}
Interestingly, in the quantum setting one has some additional degrees of freedom to increase this sum. Indeed, let the state $u_0$ be a unit vector in a complex Hilbert space $\mathcal{H}$. The event $e_i$ corresponds to projecting $u_0$ onto a one-dimensional subspace, spanned by a unit vector $u_i \in \mathcal{H}$; the probability that the event occurs is just the squared length of the projection. That is, $p_i = |\braket{u_0}{u_i}|^2$ and $p_1 + p_2 + \cdots + p_n = \sum_{i=1}^n |\braket{u_0}{u_i}|^2$. Now two exclusive events must correspond to projections onto orthogonal vectors, and hence $\braket{u_i}{u_j} = 0$, for all edges $(i,j)$ in $\mathcal{G}_{\mathrm{ex}}$. From Lemma~\ref{startpoint}, $p \in TH(\mathcal{G}_{\mathrm{ex}})$. Therefore, the optimisation problem we are interested in is
\begin{equation} \label{sayma}
\max \sum_i p_i : p \in TH(\mathcal{G}_{\mathrm{ex}}).
\end{equation}
In other words, find a matrix $ X\in \mathbb{S}^{1+n}_+, \text{ with the largest diagonal sum such that } X_{00}=1, \ X_{ii} = X_{0i} \, \forall i \in [n], \ X_{ij}=0, \forall (i,j)\in E\ $. This is precisely the definition of the Lov\'asz theta SDP~\eqref{lovtheta} corresponding to $\mathcal{G}_{\mathrm{ex}}$. The value of this SDP is the famous Lov\'asz theta number $\vartheta(\mathcal{G}_{\mathrm{ex}})$.
\begin{equation}\label{lovtheta}
\begin{aligned}
\vartheta(\mathcal{G}_{\mathrm{ex}}) = \max & \ \sum_{i=1}^n { X}_{ii} \\
{\rm s.t.} & \ { X}_{ii}={ X}_{0i}, \ i\in [n],\\
& \ { X}_{ij}=0,\ i\sim j,\\
& \ X_{00}=1,\ X\in \mathcal{S}^{n+1}_+.
\end{aligned}
\end{equation}
\noindent Hence we get $p_1 + p_2 + \cdots + p_n \leq \vartheta(\mathcal{G}_{\mathrm{ex}})$.
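As an illustration, the SDP \eqref{lovtheta} takes only a few lines in a generic SDP modelling language. The Python sketch below (assuming the \texttt{cvxpy} package with an installed SDP solver) computes $\vartheta(C_5)=\sqrt{5}\approx 2.236$ for the $5$-cycle, the exclusivity graph of the KCBS inequality \cite{KCBS}, for which $\alpha(C_5)=2$.
\begin{verbatim}
import cvxpy as cp
import numpy as np

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]   # exclusivity edges of C_5

X = cp.Variable((n + 1, n + 1), symmetric=True)
cons = [X >> 0, X[0, 0] == 1]
cons += [X[i + 1, i + 1] == X[0, i + 1] for i in range(n)]
cons += [X[i + 1, j + 1] == 0 for (i, j) in edges]

# Objective: sum_i X_ii (i >= 1) equals tr(X) - 1 since X_00 = 1.
prob = cp.Problem(cp.Maximize(cp.trace(X) - 1), cons)
prob.solve()
print(prob.value)   # ~2.2360, i.e. sqrt(5) > alpha(C_5) = 2
\end{verbatim}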
\section{Graph-theoretic dimension witnesses}
\label{sec2}
Any Bell or contextuality inequality can be associated to a graph of exclusivity~\cite{CSW}. In this sense, all of them can be studied under the graph-theoretic framework. While in all previous works one first fixes a (Bell or contextuality) scenario and then looks for dimension witnesses, in this work we investigate the dimension witnesses for graphs (of exclusivity), without fixing a priori any scenario.
\subsection{Quantum correlations with dimensional restrictions}
In this section we examine from a graph-theoretic perspective the problem of
quantum correlations (aka behaviours) with dimensional restrictions. We use some standard concepts of graph theory and the graph-theoretic approach to correlations introduced in Section~\ref{notation_context}.
\begin{definition}(\textbf{$d$-quantum behaviour for a graph of exclusivity}) A behaviour $p: [n]\to~[0,1]$ corresponding to a graph of exclusivity $\mathcal{G}_{\mathrm{ex}}$, having $n$ vertices, is $d$-quantum if there exists a quantum state $\ket{\psi} \in \mathcal{H}^d$ and nonzero projectors $\Pi_1,\ldots, \Pi_n$, belonging to a $d$-dimensional Hilbert space $\mathcal{H}^d$~such that
\begin{equation}\label{rankdef}
p_i= \bra{\psi}\Pi_i\ket{\psi},\, \forall i\in [n] \text{ and } \mathrm{tr}(\Pi_i\Pi_j)=0, \text{ for } i \sim j.
\end{equation}
\end{definition}
We call the set $\ket{\psi}, \{\Pi_i\}_{i=1}^n \in \mathcal{H}^d$ satisfying \eqref{rankdef} a {\em quantum realization} of the behaviour $p$. We denote the set of $d$-quantum behaviours by $\mathcal{P}_{Q}^d(\mathcal{G}_{\mathrm{ex}})$.
\begin{definition}(\textbf{Orthogonal rank}) The orthogonal rank of a graph $G$, denoted by $R_o(G)$, is the minimum $d$ such that there exists a $d$-dimensional orthonormal representation for $G$.
\end{definition}
\noindent For example, any orthonormal representation of the $3$-cycle graph of exclusivity must consist of three mutually orthonormal vectors and therefore must be of dimension at least~$3$. Therefore, $R_o(C_3) = 3$. Note that ${\cal P}^d_Q(\mathcal{G}_{\mathrm{ex}})$ is an empty set for $d < R_o(\mathcal{G}_{\mathrm{ex}})$.
Suppose that we are interested in the largest value of the expression $\sum_{i \in [n]} p_i$, as $p$ ranges over the set of $d$-quantum behaviours, that is, the following optimisation problem:
\begin{equation}\label{dimbehaviour}
\max \sum_{i=1}^{n} p_i : p \in \mathcal{P}_{Q}^d(\mathcal{G}_{\mathrm{ex}}).
\end{equation}
Removing the dimensional constraint, the set of quantum behaviours $\mathcal{P}_{Q}(\mathcal{G}_{\mathrm{ex}})$ becomes the theta body of $\mathcal{G}_{\mathrm{ex}}$, $TH(\mathcal{G}_{\mathrm{ex}})$ (see Sec.~\ref{notation_context}). As explained in Eq.~(\ref{sayma}), maximizing the $\ell_1$ norm of $p$ over the theta body is equivalently given by the Lov\'asz theta SDP. Therefore, for all $d \geq R_o(\mathcal{G}_{\mathrm{ex}})$, the problem in Eq.~\eqref{dimbehaviour} with the dimensional constraint is equivalently expressed by the following rank constrained version of the Lov\'asz theta SDP:
\begin{equation}\label{theta:primalrank}
\begin{aligned}
\vartheta^d(\mathcal{G}_{\mathrm{ex}}) = \max & \ \ \sum_{i=1}^n {X}_{ii} \\
\text{ subject to} & \ \ {X}_{ii}={ X}_{0i}, \ \ 1\le i\le n,\\
& \ \ { X}_{ij}=0, \ \ i\sim j,\\
& \ \ X_{00}=1,\ \ X\in \mathcal{S}^{1+n}_+, \\
& \ \ \text{rank}(X) \leq d.
\end{aligned}
\end{equation}
More concretely, using the same arguments as in Lemma~\ref{startpoint}, if $p \in \mathcal{P}_{Q}^d(\mathcal{G}_{\mathrm{ex}})$ is optimal for \eqref{dimbehaviour} and $ \{\ket{u_i}\bra{u_i}\}_{i=0}^n \in \mathbb{C}^d$ is a quantum realization of $p$ (where $\ketbra{u_0}$ refers to the quantum state, whereas $\ketbra{u_i}$ for $1 \leq i \leq n$ refers to the $n$ projectors), then the Gram matrix of the vectors $\ket{u_0},\braket{u_0}{u_1}\ket{u_1},\ldots,\braket{u_0}{u_n}\ket{u_n}$ corresponds to an optimal solution for~\eqref{theta:primalrank} of rank at most~$d$.
Conversely, for any optimal solution $X={\rm Gram}(\ket{u_0},\ket{u_1},\ldots,\ket{u_n})$, with $u_i \in \mathbb{C}^d$, of the SDP \eqref{theta:primalrank},
the realization $\{{\ket{u_i}\bra{u_i} / \|\ket{u_i}\bra{u_i}\|}\}_{i=0}^n$ is optimal for \eqref{dimbehaviour}. The equivalence fails to hold for $d < R_o(\mathcal{G}_{\mathrm{ex}})$, due to the inverse norm factor in the above line, since $\|u_i\|=0$ for at least one $i$. This is because otherwise $\{u_i/\|u_i\|\}_{i=1}^n$ is a valid orthonormal representation for $\mathcal{G}_{\mathrm{ex}}$ of dimension $d < R_o(\mathcal{G}_{\mathrm{ex}})$, violating the definition of orthogonal rank. The quantities $\vartheta^1(\mathcal{G}_{\mathrm{ex}}),\vartheta^2(\mathcal{G}_{\mathrm{ex}}), \ldots, \vartheta^{R_o(\mathcal{G}_{\mathrm{ex}})-1}(\mathcal{G}_{\mathrm{ex}})$ are still well-defined but they do not seem to have any physical relevance in this context.
\medskip
On the other hand, we are also interested in the minimum dimension in which the Lov\'asz theta bound can be achieved.
\begin{definition}(\textbf{Lov\'asz rank}) The Lov\'asz rank of a graph $G$, denoted by $R_L(G)$, is the minimum $d$ for which $\vartheta^d(G) = \vartheta(G)$.
\end{definition}
\noindent By definition, $R_L(G) \geq R_o(G)$. $R_L(G)$ can sometimes be much smaller than the number of vertices of $G$. The following lemma, due to Barvinok~\cite{Barvinok1995}, gives an upper bound on $R_L(G)$.
\begin{lemma}(\textbf{Barvinok bound}) \label{barvinok}
There exists an optimal solution $X^*$ of the following SDP
\begin{equation}
\begin{aligned}
\max : & \ \ \mathrm{tr}(CX) \\
\mathrm{s.t.} & \, \mathrm{tr} (A_i X) = b_i, \quad \forall i = 1,2,\ldots, m \\
& X \succeq 0,
\end{aligned}
\end{equation}
with rank $r$ satisfying the inequality $r(r+1)/2 \leq m$.
\end{lemma}
\noindent For the Lov\'asz theta SDP, the number of linear constraints is $m = 1 + |V| + |E|$. Hence $R_L(G) \leq \frac{1}{2} \left(\sqrt{8(|V| + |E|)+9}-1 \right)$. To summarise, we have the following relationships:
\begin{equation}
\vartheta^{R_o(\mathcal{G}_{\mathrm{ex}})}(\mathcal{G}_{\mathrm{ex}}) \leq \vartheta^{R_o(\mathcal{G}_{\mathrm{ex}})+1}(\mathcal{G}_{\mathrm{ex}}) \leq \cdots \leq \vartheta^{R_L(\mathcal{G}_{\mathrm{ex}})}(\mathcal{G}_{\mathrm{ex}}) = \vartheta(\mathcal{G}_{\mathrm{ex}}).
\end{equation}
This suggests a way to lower bound the dimension of the underlying quantum system that violates a certain dimension-restricted non-contextuality inequality. More formally, a violation of the inequality $\sum_i p_i \leq \vartheta^d(\mathcal{G}_{\mathrm{ex}})$, where $p \in {\cal P}_Q(\mathcal{G}_{\mathrm{ex}})$, implies that the underlying quantum system must have dimension at least $d+1$. We shall refer to the operator in such a dimension-restricted non-contextuality inequality as a \emph{dimension witness} for dimension $d+1$.
\medskip
Finally, we note an equivalent way to compute the dimension-restricted Lov\'asz theta number, which we define as:
\begin{equation}
\begin{aligned}\label{Prog_2}
\theta^d(G) = &\max_{\{v_i \in \mathbb{C}^d\}_{i= 1}^n} \lambda_{\max}\left(\sum_{i= 1}^n\ketbra{v_i}\right) \\
& \mathrm{s.t.} \; \braket{v_i}{v_i} = 1, \forall i \in [n]\\
& \mathrm{and} \; \braket{v_i}{v_j} = 0, i\sim j.
\end{aligned}
\end{equation}
\begin{lemma}\label{Lmax}
$\theta^d(G) = \vartheta^d(G)$.
\end{lemma}
\begin{proof}
\textsf{($\geq$ direction)} Let $X$ be an optimal solution of the SDP~\eqref{theta:primalrank}. Let $X = VV^{\dagger}$ and let the rows of $V$ be $v_i \in \mathbb{C}^{d}$ for $0\leq i \leq n$. Let $\tilde{v_i} = v_i /\|v_i\|$. Clearly, $\tilde{v_i}$ satisfies the constraints in~(\ref{Prog_2}). Now observe that
\begin{equation}
\begin{aligned}
\theta^d(G) \geq & \lambda_{max}\left(\sum_{i=1}^{n} \ketbra{\tilde{v_i}}\right) = \max_{v: \|v\|=1} \sum_{i=1}^n |\braket{v}{\tilde{v_i}}|^2 \\
&\geq \sum_{i=1}^n |\braket{v_0}{\tilde{v_i}}|^2 = \sum_{i=1}^n |\braket{v_i}{\tilde{v_i}}|^2 = \sum_{i=1}^n \braket{v_i}{v_i} \\
&= \vartheta^d(G).
\end{aligned}
\end{equation}
\textsf{($\leq$ direction)} Let $\{v_i \in \mathbb{C}^{d}\}_{i=1}^n$ be an optimal solution of $\theta^d(G)$ and let $v_0$ be the eigenvector of $\sum_{i= 1}^n\ketbra{v_i}$ corresponding to the largest eigenvalue. Now construct an $(n+1) \times d$ matrix $V$ whose first row is $V_0 = v_0$ and whose remaining rows are $V_i = \braket{v_i}{v_0}v_i$, for $i \in [n]$. Let $X = VV^\dagger$. Firstly, we note that $X$ satisfies all the constraints of the SDP. Now observe that
\begin{equation}
\begin{aligned}
\vartheta^d(G) & \geq \mathrm{tr}(X) -1 \\
&= \sum_{i=1}^n \braket{v_i}{v_i}|\braket{v_i}{v_0}|^2 \\
&= \sum_{i=1}^n |\braket{v_i}{v_0}|^2 \\
&= \lambda_{max} \left(\sum_{i=1}^{n} \ketbra{v_i} \right) \\
&= \theta^d(G).
\end{aligned}
\end{equation}
\end{proof}
\subsection{Finding low rank solutions: Heuristic approach}
\label{heuristics}
Unfortunately, \emph{rank-constrained} SDPs are \emph{NP}-hard problems and hence they are computationally intractable. An easy way to see this is that the NP-hard \textsf{Max-Cut} problem with weight matrix $W$ can be expressed as the following rank one restricted SDP:
\begin{equation}
\begin{aligned}
\max \ \ &\frac{1}{2}\mathrm{tr} (W X) \\
\text{s.t.}\ & {X}_{ii}= 1, \forall i,\\
&X \succeq 0, \\
&\text{rank}(X) = 1.
\end{aligned}
\end{equation}
\noindent Because of this restriction, it seems unlikely that, given a non-contextuality inequality and a dimension $d$, one can efficiently compute the value $\vartheta^d(\mathcal{G}_{\mathrm{ex}})$ and find a quantum realisation of dimension $d$ that achieves the bound. Nevertheless, it is important to find such low-dimensional quantum realisations which at least violate the classical bound $\alpha(\mathcal{G}_{\mathrm{ex}})$. For this purpose, we provide a heuristic technique (Algorithm~\ref{algo:heuristic}) to compute a lower bound on the $d$-dimensional restricted quantum value and find the corresponding $d$-dimensional quantum realisations.
\medskip
\begin{algorithm}[H]
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{Graph $G$ having $n$ nodes, dimension $d$, number of iterations \texttt{k}}
\Output{A lower bound to $\vartheta^d(G)$}
\vspace{0.2cm}
Generate a random matrix $W \in \mathbb{R}^{(n+1)\times (n+1)}$\;
\textit{iter} = 1\;
\While{iter $< \texttt{k}$}{
Minimise $\mathrm{tr}((W-I_{n+1})X)$, subject to $X \succeq 0$, $X_{00} = 1$, $X_{ii} = X_{0i}$ for all $i$ and $X_{ij} = 0$ for all $i \sim j$\;
Obtain optimal $X$ for the above SDP\;
Minimise $\mathrm{tr}(XW)$, subject to $I_{n+1} \succeq W \succeq 0$, $\mathrm{tr}(W) = n+1-d$\;
Obtain optimal $W$ from the above SDP \;
\textit{iter} = \textit{iter} + 1\;
}
\caption{Heuristics using SDPs.}
\label{algo:heuristic}
\end{algorithm}
\medskip
The algorithm is adapted from an approach to solving rank-constrained problems given in Chapter 4 of~\cite{dattorro2005convex}. The reference gives a heuristic algorithm for producing low-rank solutions to feasibility SDPs of the form:
\vspace{-0.5cm}
\begin{equation} \label{genranksdp}
\begin{aligned}
\text{Find} \ \ & G \in \mathcal{S}^N_+ \\
\text{ s.t. } & G \in \mathcal{C}\\
&\text{rank}(G) \leq d,
\end{aligned}
\end{equation}
\noindent where $\mathcal{C}$ is a convex set. Instead of solving this non-convex problem directly, they suggest solving the two SDPs~\eqref{ranksdp1} and \eqref{ranksdp2} iteratively, until the following stopping criterion is met. After a particular iteration, let $G^*$ and $W^*$ be the optimal solutions of the SDPs~\eqref{ranksdp1} and \eqref{ranksdp2}, respectively. The loop is stopped if $\langle G^*, W^*\rangle = 0$. Let us see why. Note that the eigenvalues of $W^*$ lie in the closed interval $[0,1]$ and sum to $N-d$. This implies that at least $N-d$ of its eigenvalues are non-zero, that is, rank$(W^*) \geq N-d$. This, along with the fact that $\langle G^*, W^*\rangle = 0$, implies that rank$(G^*) \leq d$. Since $G^*$ is a solution of the first SDP, it also satisfies the conditions $G^* \in \mathcal{C}$ and $G^* \in \mathcal{S}^N_+$. Thus $G^*$ is a solution of SDP~\eqref{genranksdp}. Note, however, that there is no guarantee that the stopping criterion will be met.
\begin{multicols}{2}
\begin{equation} \label{ranksdp1}
\begin{aligned}
\min_G \ \ &\langle G,W \rangle \\
\text{ s.t. } & G \in \mathcal{C}\\
& G \in \mathcal{S}^N_+.
\end{aligned}
\end{equation}
\begin{equation} \label{ranksdp2}
\begin{aligned}
\min_W \ \ &\langle G,W \rangle \\
\text{ s.t. } & \mathrm{tr}(W) = N - d \\
& I_N \succeq W \succeq 0.
\end{aligned}
\end{equation}
\end{multicols}
In our case, the SDP~\eqref{theta:primalrank} is more general in the sense that it also involves optimising an objective function. Thus we include the objective function of the Lov\'asz theta SDP, $\mathrm{tr}(X)$, as an extra additive term in the objective function of the first SDP~\eqref{ranksdp1}. Besides this, the main idea of Algorithm~\ref{algo:heuristic} is the same as in the feasibility-SDP case: to solve two SDPs iteratively. The first SDP tries to satisfy all the Lov\'asz theta SDP constraints, while the second SDP tries to restrict the rank of the solution $X$ to the desired value. The algorithm runs for a predefined number of iterations, $\texttt{k}$. At the end of the program, if the final $X$ and $W$ satisfy $\langle X,W \rangle = 0$, then the solution $X$ is indeed a feasible solution to SDP~\eqref{theta:primalrank}. If not, we restart the program. We find that this heuristic works well in practice and enables us to find low-rank solutions to the Lov\'asz theta SDP. Taking a Gram decomposition of the solution matrix $X$ then yields the $d$-dimensional quantum realisations.
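For concreteness, the following is a minimal Python/CVXPY sketch of Algorithm~\ref{algo:heuristic} (our translation; the implementation accompanying this paper is in MATLAB with SDPT3, and the symmetric random start, the use of real variables in place of complex Hermitian ones, and the tolerance below are our assumptions). Here \texttt{edges} holds the exclusive vertex pairs $(i,j)$ with $1\leq i<j\leq n$, and index $0$ of $X$ is the handle vertex $v_0$:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def theta_d_lower_bound(n, edges, d, iters=50, tol=1e-7):
    N = n + 1
    W = np.random.rand(N, N)
    W = (W + W.T) / 2                   # symmetric random start (our choice)
    for _ in range(iters):
        # SDP 1: Lovasz theta constraints, with tr(W X) penalising high rank
        X = cp.Variable((N, N), PSD=True)
        cons = [X[0, 0] == 1]
        cons += [X[i, i] == X[0, i] for i in range(1, N)]
        cons += [X[i, j] == 0 for (i, j) in edges]
        cp.Problem(cp.Minimize(cp.trace((W - np.eye(N)) @ X)), cons).solve()
        Xv = X.value
        # SDP 2: push weight onto the N - d smallest eigen-directions of X
        Wvar = cp.Variable((N, N), PSD=True)
        cons2 = [np.eye(N) - Wvar >> 0, cp.trace(Wvar) == N - d]
        cp.Problem(cp.Minimize(cp.trace(Xv @ Wvar)), cons2).solve()
        W = Wvar.value
        if np.trace(Xv @ W) < tol:      # stopping criterion <X, W> = 0
            break
    return np.trace(Xv) - 1             # lower bound on vartheta^d(G)
\end{verbatim}
The returned value certifies a $d$-dimensional realisation only when the stopping criterion was actually met, which is why a restart may be needed.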
\medskip
Note that Algorithm~\ref{algo:heuristic} only outputs a lower bound for $\vartheta^d(G)$ and is not directly used to find dimension witnesses (which would require an upper bound). However, one may conjecture this upper bound by running the algorithm several times and taking the maximum over all runs. This idea allows us to find candidate graphs for which we can find dimension witnesses and prove the upper bound theoretically. In fact, in Sec.~\ref{sec4Qites}, we describe a family of graphs that can be used as dimension witnesses, found precisely in this way using Algorithm~\ref{algo:heuristic}.
\subsection{Examples}
To demonstrate the usefulness of the tools introduced, we apply them to two graphs that are relevant in the literature on contextuality. For each graph, we report the lower bounds on the rank-constrained Lov\'asz theta values for different dimensions obtained with the algorithm introduced above\footnote{A MATLAB implementation of the code, using the SDPT3 solver, can be found \href{https://www.dropbox.com/sh/595q05xpo7wfzpd/AAC8jvuprr-C-DTcJccxl6fea?dl=0}{here}.} and discuss why the results are interesting.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{dagograph.png}
\caption{$G_1$ graph: The $9$-vertex graph $G_1$ was used in \cite{Dagomirgraph} to illustrate the notion of almost state-independent contextuality.}
\label{dagograh}
\end{figure}
\subsubsection{Almost state-independent contextuality}
The earliest proof of state-independent quantum contextuality by Kochen and Specker \cite{Kochen:1967JMM} required 117 three-dimensional real projective measurements. Since then, the number of projective measurements needed to demonstrate state-independent contextuality has been drastically reduced \cite{cabello1996bell, yu2012state}. Yu and Oh proposed a test revealing state-independent contextuality with only thirteen projectors \cite{yu2012state}, and a later computer-aided proof confirmed that it is impossible to demonstrate state-independent contextuality with fewer than thirteen measurements \cite{cabello2016quantum}. Thus, any test of contextuality with fewer than thirteen projective measurements must fail to exhibit contextuality for at least some quantum states. The $9$-vertex graph $G_1$ in Fig.~\ref{dagograh} is part of the original proof of the Kochen-Specker theorem \cite{Kochen:1967JMM} and was used in \cite{Dagomirgraph} to illustrate the concept of ``almost state-independent'' contextuality. The almost state-independent non-contextuality inequality is given by
\begin{equation} \label{eq: Dag_ineq}
\sum_{i \in [n]} p_i \leq 3,
\end{equation}
with the events satisfying the exclusivity relations given by the graph in Fig.~\ref{dagograh}. In \cite{Dagomirgraph}, the authors showed that the non-contextuality inequality \eqref{eq: Dag_ineq} is saturated by the three-dimensional maximally mixed state and violated by every other three-dimensional preparation, for an appropriate choice of measurement settings. Since the inequality \eqref{eq: Dag_ineq} is violated for every quantum state except the maximally mixed state, it exemplifies the concept of almost state-independent contextuality; for details, see \cite{Dagomirgraph}. The non-contextual bound of the aforementioned inequality is given by the independence number, $\alpha(G_1) = 3$ \cite{CSW}. In addition, $R_o(G_1) = 3$ and $R_L(G_1) \leq 4$.
Our calculations lead to the following results:
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
$d =$ & $3$ & $4$ \\
\hline
$\vartheta^d(G_1) \geq$ & $3.333$ & $3.4706=\vartheta(G_1)$ \\
\hline
\end{tabular}
\end{center}
The authors of \cite{Kochen:1967JMM} and \cite{Dagomirgraph} used this graph to illustrate state-independent and almost state-independent contextuality in $d=3$, respectively. From the numerics, we know that there exists a rank-$4$ solution achieving the Lov\'asz theta number, and it would be interesting to show that $R_L(G_1) = 4$. Numerical evidence also suggests that $\vartheta^3(G_1) \leq 3.333$, although we do not have a theoretical proof. If $\vartheta^3(G_1) \leq 3.333$ indeed holds, then any experimental value $> 3.333$ certifies that the underlying dimension is greater than $3$.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{mermin_graph.png}
\caption{$G_2$ graph: the $16$-vertex graph $G_2$ is the graph of exclusivity corresponding to the $16$ events in the Bell operator of Mermin's tripartite Bell inequality. This tripartite Bell inequality can be used to self-test the $3$-qubit GHZ state.}
\label{mermingrah}
\end{figure}
\subsubsection{Mermin's Bell inequality}
We discuss an $n$-partite Bell inequality (for odd $n\geq 3$), known as Mermin's Bell inequality \cite{Mermin90}, whose interest is based on the fact that the Bell operator
\begin{equation}
S_n = \frac{1}{2 i} \left[\bigotimes_{j=1}^n (\sigma_x^{(j)}+i \sigma_z^{(j)}) - \bigotimes_{j=1}^n (\sigma_x^{(j)}-i \sigma_z^{(j)})\right],
\end{equation}
where $\sigma_x^{(j)}$ is the Pauli $x$ matrix for qubit $j$, has an eigenstate with eigenvalue $2^{n-1}$. In contrast, for local hidden-variable (LHV) and noncontextual hidden-variable (NCHV) theories,
\begin{equation}
\langle S_n \rangle \overset{\scriptscriptstyle{\mathrm{LHV, NCHV}}}{\le} 2^{(n-1)/2}.
\end{equation}
This inequality thus demonstrates that there is no limit to the amount by which quantum theory can surpass the limitations imposed by local (or non-contextual) hidden-variable theories. We are interested in the tripartite case, i.e. $n=3$,
\begin{equation} \label{eq: mermin_3}
\langle \sigma_z^{(1)} \otimes \sigma_x^{(2)} \otimes \sigma_x^{(3)} \rangle + \langle
\sigma_x^{(1)} \otimes \sigma_z^{(2)} \otimes \sigma_x^{(3)}\rangle +
\langle \sigma_x^{(1)} \otimes \sigma_x^{(2)} \otimes \sigma_z^{(3)} \rangle-
\langle \sigma_z^{(1)} \otimes \sigma_z^{(2)} \otimes \sigma_z^{(3)} \rangle \leq 2.
\end{equation}
The tripartite inequality \eqref{eq: mermin_3} can be used to self-test a $3$-qubit GHZ state \cite{kaniewski2017self}. One can study this inequality via the graph approach introduced in \cite{CSW}. The $16$-vertex graph $G_2$ in Fig.~\ref{mermingrah} is the graph of exclusivity corresponding to the $16$ events in the Bell operator of Mermin's tripartite Bell inequality~\cite{mermin_cabello}. In this case, $\alpha(G_2) = 3$, $R_o(G_2) = 4$, and $R_L(G_2) \leq 7$. Our calculations give
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$d =$ & $4$ & $5$ & $6$ & $7$ \\
\hline
$\vartheta^d(G_2) \geq$ & $3.414$ & $3.436$ & $3.6514$ & $4=\vartheta(G_2)$ \\
\hline
\end{tabular}
\end{center}
Further, if we can show that these lower bounds are tight, then one can use these inequalities as dimension witnesses. It is also interesting to note that the Lov\'asz theta number can already be achieved in $d=7$, since achieving it in the three-party, two-setting, two-outcome Bell scenario requires $3$ qubits and thus $d=2^3=8$.
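As a quick numerical sanity check (ours, not part of the original analysis), the largest eigenvalue of the Bell operator in Eq.~\eqref{eq: mermin_3} can be computed directly; it equals $4 = 2^{n-1}$, matching $\vartheta(G_2)$:
\begin{verbatim}
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Tripartite Mermin Bell operator from Eq. (eq: mermin_3)
S3 = (kron3(sz, sx, sx) + kron3(sx, sz, sx)
      + kron3(sx, sx, sz) - kron3(sz, sz, sz))
print(np.linalg.eigvalsh(S3).max())   # -> 4.0, versus the LHV/NCHV bound 2
\end{verbatim}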
\subsection{Quantum dimension witnesses for arbitrary dimensions: the family of Qites}
\label{sec4Qites}
It was realised in \cite{Kochen:1967JMM} that exhibiting Kochen-Specker contextuality requires a quantum dimension of at least $3$. A simple proof of this is provided in the following lemma.
\begin{lemma}\label{gleason}
$\vartheta^2(\mathcal{G}_{\mathrm{ex}}) = \alpha(\mathcal{G}_{\mathrm{ex}})$.
\end{lemma}
\begin{proof}
For this proof we use the definition of the restricted Lov\'asz theta number from (\ref{Prog_2}). We need to show that, if we restrict ourselves to $2$-dimensional vectors, then the restricted Lov\'asz theta number is at most the independence number of the graph. First note that if the graph contains an odd cycle, then it cannot have an orthonormal representation in $2$ dimensions; thus we consider only bipartite graphs. Furthermore, assume that $\mathcal{G}_{\mathrm{ex}}$ is connected; if it is not, apply the arguments below to each connected component and note that the independence number of the graph is the sum of the independence numbers of its connected components. The bi-partition of a connected bipartite graph is unique; for $\mathcal{G}_{\mathrm{ex}}$, denote its parts by $V$ and $V'$. The key observation is that for any unit vector $\ket{v}$ in $\mathbb{C}^2$, there exists a unique (up to a unit complex number $e^{i \theta}$) vector $\ket{v^{\perp}}$ that is orthogonal to $\ket{v}$. This implies that if we assign a unit vector $\ket{v} \in \mathbb{C}^2$ to a vertex in $V$, then all the vectors in $V$ must be of the form $e^{i \theta} \ket{v}$, for some $\theta \in [0,2\pi)$, whereas all vectors in $V'$ must be of the form $e^{i \theta} \ket{v^{\perp}}$. Hence the cost of the orthonormal representation is at most $\lambda_{\max} \left(\sum_{i \in V} \ketbra{v} + \sum_{i \in V'} \ketbra{v^{\perp}}\right) = \max \{|V|, |V'| \} = \alpha(\mathcal{G}_{\mathrm{ex}})$. \qedhere
\end{proof}
\medskip
To look for more interesting dimension witnesses for arbitrary higher dimensions, we define a family of graphs parameterised by an integer $k \geq 2$, called \emph{k-Qite}\footnote{The reason for the name is that they resemble kites; however, the name kite is already reserved for another family of graphs.}.
\begin{definition}
A $k$-Qite graph has $2k+1$ vertices, $v_1,v_2,\ldots, v_{2k+1}$, with the first $k$ vertices forming a fully connected graph. Vertex $v_i$ is connected to vertex $v_{i+k}$, for all $1\leq i \leq k$. Vertex $v_{2k+1}$ is connected to vertices $v_{k+i}$, for all $1\leq i \leq k$.
\end{definition}
\noindent Note that the first member of the family, $k=2$, is just the $C_5$ graph (see Fig.~\ref{fig:2qite}). This is one of the most well-studied graphs in the field of contextuality, since it is the smallest graph for which the Lov\'asz theta number is strictly greater than the independence number. The corresponding non-contextuality inequality is the famous {\em KCBS} inequality~\cite{KCBS}. The graph corresponding to $k=3$ is shown in Fig.~\ref{fig:3qite}.
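A minimal construction of the family, using the \texttt{networkx} package (our choice of library; the vertex labels $1,\ldots,2k+1$ follow the definition above), reads:
\begin{verbatim}
import networkx as nx

def k_qite(k):
    G = nx.complete_graph(range(1, k + 1))                  # v_1, ..., v_k
    G.add_edges_from((i, i + k) for i in range(1, k + 1))   # v_i ~ v_{i+k}
    G.add_edges_from((2 * k + 1, k + i) for i in range(1, k + 1))
    return G

print(sorted(k_qite(2).edges()))   # the 5-cycle C_5, i.e. the KCBS graph
\end{verbatim}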
\begin{figure}
\centering
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.75\textwidth]{2qite.png}
\caption{$2$-Qite $\equiv$ $C_5$, where $\alpha(C_5) = 2$ and $\vartheta(C_5) = \sqrt{5} \approx 2.2361$.}
\label{fig:2qite}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{3qite.png}
\caption{$3$-Qite, where $\alpha(3\text{-Qite}) = 3$ and $\vartheta(3\text{-Qite}) \approx 3.0642$.}
\label{fig:3qite}
\end{minipage}
\end{figure}
\begin{lemma}\label{qiteind}
The independence number of the $k$-Qite graph is $k$.
\end{lemma}
\begin{proof}
Partition the set of vertices into three sets: $S_1 = \{v_1,v_2,\ldots, v_{k}\}$, $S_2 = \{v_{k+1},v_{k+2},\ldots, v_{2k}\}$ and $S_3 = \{v_{2k+1}\}$. First note that since none of the vertices in $S_2$ are connected to each other, the independence number is at least $|S_2| = k$. Since all vertices in $S_1$ are connected to each other, a maximal independent set can contain at most one vertex from $S_1$. Moreover, including a vertex $v_i$ from $S_1$ in the maximal independent set implies that the vertex $v_{k+i}$ cannot be included simultaneously. Similarly, including $v_{2k+1}$ excludes every vertex of $S_2$ from the maximal independent set. Hence the lemma follows.
\end{proof}
\begin{theorem}
$R_o(\emph{k-}Qite) = k$, for all $k \geq 3$.
\end{theorem}
\begin{proof}
Consider the vertex partition from Lemma~\ref{qiteind}. Since the vertices in $S_1$ form a $k$-complete graph, we have $R_o(\text{k-}Qite) \geq k$. Now we show that there exists an orthonormal representation in dimension $k$ for every $\text{k-}Qite$ graph with $k \geq 3$. Depending on the parity of $k$, we give an explicit construction of the orthonormal representation. \newline
\textbf{When $k$ is odd:} For the vertices in $S_1$, assign the standard basis vector $e_i$ of the $k$-dimensional Hilbert space to vertex $v_i$, for $i \in [k]$. Assign the vector $\frac{1}{\sqrt{k}}(1,1,\ldots,1)$ to vertex $v_{2k+1}$. Now consider the vertices $v_{k+i}$ in $S_2$, for $i \in [k]$. For vertex $v_{k+i}$ to be orthogonal to vertex $v_{i}$, the vector for $v_{k+i}$ must have $0$ in the $i^{th}$ position. Let the remaining entries of the vector have magnitude $\frac{1}{\sqrt{k-1}}$, so that the vector is normalised. Since $k$ is odd, the number of entries with non-zero (and equal) magnitude is even. Setting half of them to be negative makes the vector orthogonal to the one assigned to $v_{2k+1}$. Hence, in this case, all orthogonality constraints are satisfied.
\newline
\noindent \textbf{When $k$ is even:} Assign the vectors to the vertices in $S_1$ in the same way as in the odd-$k$ case. Set the vector corresponding to vertex $v_{2k+1}$ to $\frac{1}{\sqrt{k-1}}(0,1,1,\ldots,1)$. Except for vertex $v_{k+1}$, set the vectors of the remaining vertices in $S_2$ in the same way as in the odd-$k$ case. Note that this establishes the orthogonality of vertex $v_{k+i}$ with $v_{2k+1}$ for all $2 \leq i \leq k$. Vertex $v_{k+1}$ is then assigned a vector whose first entry is $0$ (making it orthogonal to $v_1$) and which is orthogonal to $v_{2k+1}$. Many vectors satisfy these conditions; for example, setting $v_{k+1}$ to $\frac{1}{\sqrt{(k-2)(k-1)}}(0,1,1,\ldots,1,2-k)$ concludes the proof.
\end{proof}
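The odd-$k$ construction can be verified numerically; the following check (ours) uses $k=3$ with one of the many valid sign patterns:
\begin{verbatim}
import numpy as np

k = 3
V = {i: np.eye(k)[i - 1] for i in range(1, k + 1)}   # v_i = e_i on S_1
V[2 * k + 1] = np.ones(k) / np.sqrt(k)               # hub vertex v_{2k+1}
for i in range(1, k + 1):
    v = np.zeros(k)
    others = [j for j in range(k) if j != i - 1]     # zero in position i
    v[others] = np.array([1.0, -1.0]) / np.sqrt(k - 1)
    V[k + i] = v

edges = [(1, 2), (1, 3), (2, 3), (1, 4), (2, 5), (3, 6),
         (4, 7), (5, 7), (6, 7)]
assert all(abs(V[a] @ V[b]) < 1e-12 for a, b in edges)
assert all(abs(np.linalg.norm(v) - 1) < 1e-12 for v in V.values())
\end{verbatim}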
In order to propose dimension witnesses, we want to find upper bounds on the dimension restricted Lov\'asz theta number corresponding to the {\em Qite} family. For $k=2$, Lemma~\ref{gleason} already gives us the required bound of $2$. We now generalise the Lemma for the {\em Qite} family.
\begin{theorem}
$\vartheta^k(\emph{k-}Qite) \leq k$, for all $k \geq 2$.
\end{theorem}
\begin{proof}
We use the $\theta^d(G)$ definition of the rank-restricted Lov\'asz theta number for the proof; see Lemma~\ref{Lmax}. We have $\vartheta^k(\emph{k-}Qite) = \max_{\{v_i\}} \lambda_{max} \left(\sum_{i=1}^{2k+1} \ketbra{v_i} \right)$, where $\ket{v_i} \in \mathbb{C}^k$ is a $k$-dimensional quantum state corresponding to the vertex $v_i$, such that $\braket{v_i}{v_j} = 0$ whenever vertices $v_i$ and $v_j$ share an edge. Since the first $k$ vectors must form an orthonormal basis (as they form a $k$-complete graph), one can assume that $\ket{v_i} = e_i$ (the standard basis vector) for $1\leq i \leq k$, without loss of generality. This is because there always exists a unitary $U$ that rotates any orthonormal basis to the standard basis, and applying this unitary to all the vectors gives another orthonormal representation of the graph with the same cost, that is,
\begin{equation}
\begin{aligned}
\lambda_{max}\left(\sum_{i=1}^{2k+1} \ketbra{v_i}\right) &= \lambda_{max}\left(U\left(\sum_{i=1}^{2k+1} \ketbra{v_i}\right)U^{\dagger}\right) = \lambda_{max}\left(\sum_{i=1}^{2k+1} U\ketbra{v_i}U^{\dagger}\right).
\end{aligned}
\end{equation}
\noindent Since $ \sum_{i=1}^{k} \ketbra{v_i} = \mathbb{I}$, it suffices to show that $\lambda_{max}\left(\sum_{i=k+1}^{2k+1} \ketbra{v_i}\right) \leq k-1$. Note that setting the first $k$ vectors to the standard basis vectors also implies that the $i^{th}$ component of $\ket{v_{k+i}}$ is $0$, for $1 \leq i \leq k$. Next, observe that $\ket{v_{2k+1}}$ is orthogonal to each $\ket{v_{k+i}}$, $i \in [k]$, and so $\lambda_{max}\left(\sum_{i=k+1}^{2k+1} \ketbra{v_i}\right) \leq \max \left\{\lambda_{max}\left(\sum_{i=k+1}^{2k} \ketbra{v_i}\right), 1\right\}$. Hence it suffices to show that $\lambda_{max}\left(\sum_{i=k+1}^{2k} \ketbra{v_i}\right) \leq k-1$.
Let $M \in \mathbb{C}^{k \times k}$ be the matrix whose $i^{th}$ row is $\ket{v_{k+i}}^{\mathrm{T}}$, for $i \in [k]$. Note that $M^{\dagger}M = \sum_{i=k+1}^{2k} \ketbra{v_i}$. Also observe that the diagonal of $M$ is all zero and its rows are all normalised to $1$ in $\ell_2$-norm. We shall now bound the largest eigenvalue of $M^{\dagger}M$. We make use of Gershgorin's circle theorem, which states that, given a complex square matrix $A \in \mathbb{C}^{n \times n}$, its eigenvalues (which may be complex) lie within at least one of the $n$ Gershgorin discs, i.e., the closed disks in the complex plane centered at $A_{ii}$ with radius given by the row sum $r_i = \sum_{j\neq i } |A_{ij}|$, for $1 \leq i \leq n$. Since $M_{ii} = 0$ for all $i$,
\begin{equation}\max_{x: \|x\|=1}\|Mx\|_2 = |\lambda_{max}(M)| \leq \max_{k+1\leq i \leq 2k} \, \|\ket{v_{i}}\|_1 \leq \sqrt{k-1} \max_{k+1\leq i \leq 2k} \, \|\ket{v_{i}}\|_2 = \sqrt{k-1},
\end{equation}
where the second inequality follows from the fact that the $\ell_1$-norm of a vector $v$ is at most $\sqrt{\dim(v)}$ times its $\ell_2$-norm. Finally, putting everything together,
\begin{equation}
\lambda_{max}(M^{\dagger}M) = \max_{x: \|x\|=1} x^{\dagger}M^{\dagger}Mx = \max_{x:\|x\|=1} \|Mx\|_2^2 \leq (\sqrt{k-1})^2 = k-1.
\end{equation}
\end{proof}
On the other hand, one can verify that $\vartheta(\emph{k-}Qite) > k$ for any $k > 1$ by solving the Lov\'asz theta SDP for the $\emph{k-}Qite$ graph numerically. This gives us the following corollary.
\begin{corollary}
Violating the non-contextuality inequality $\sum_i p_i \leq k$ where $p \in {\cal P}_{Q}(\emph{k-}Qite)$, implies that the underlying quantum realisation must have dimension at least $k+1$.
\end{corollary}
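For instance, the standard Lov\'asz theta SDP, $\max \langle J, X\rangle$ subject to $\mathrm{tr}(X)=1$, $X_{ij}=0$ for $i \sim j$ and $X \succeq 0$, can be solved for the first few members of the family as follows (our script; it reproduces $\vartheta(C_5)=\sqrt{5}\approx 2.2361$ and $\vartheta(3\text{-Qite})\approx 3.0642$, both exceeding $\alpha = k$):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def qite_edges(k):
    E = [(i, j) for i in range(1, k + 1) for j in range(i + 1, k + 1)]
    E += [(i, i + k) for i in range(1, k + 1)]
    E += [(k + i, 2 * k + 1) for i in range(1, k + 1)]
    return E

def lovasz_theta(n, edges):
    X = cp.Variable((n, n), PSD=True)
    cons = [cp.trace(X) == 1] + [X[i - 1, j - 1] == 0 for i, j in edges]
    return cp.Problem(cp.Maximize(cp.sum(X)), cons).solve()

for k in (2, 3, 4):
    print(k, lovasz_theta(2 * k + 1, qite_edges(k)))
\end{verbatim}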
\section{Conclusion}
\label{disc}
In this work, we have introduced a novel approach to quantum dimension witnessing in scenarios with one preparation and several measurements (examples are Kochen-Specker contextuality and Bell nonlocality scenarios). Our approach is based on graphs that represent the relations of exclusivity between events. Each graph can be realised in different scenarios, and there is always a (specific Kochen-Specker contextuality) scenario for which all quantum behaviours for the graph can be realised. The virtue of our approach is precisely that we do not need to fix any scenario. Instead, we explore the features of abstract graphs for dimension witnessing. Here, we have introduced all the necessary tools to identify graph-based dimension witnesses, and we have illustrated their usefulness by showing how famous exclusivity graphs in quantum theory hide some surprises when re-examined with our tools, and how one can construct simple dimension witnesses for any dimension. Arguably, however, the main interest of our results is that they can be extended in many directions, connected to multiple problems, and applied in different ways. Here we list some possible future lines of research:
\begin{itemize}
\item Identifying graph-theoretic dimension witnesses for specific Bell and Kochen-Specker contextuality scenarios.
\item Using previous knowledge in graph theory for finding useful quantum dimension witnesses. For example, there are graphs for which the ratio of Lov\'asz theta number to independence number is quite large, i.e., $\frac{\vartheta(G)}{\alpha(G)} \gg 1$ \cite{Feige1997RandomizedGP, amaral2015maxcontext}. This indicates situations where the quantum vs classical advantage is highly robust against imperfections. Therefore, dimension witnesses based on such graphs could be useful for certification tasks on, e.g., noisy intermediate-scale quantum devices \cite{preskill2018quantum}.
\item For the purpose of noise-robust dimension witnesses, one may also use weighted graphs (corresponding to weighted non-contextuality inequalities). As an example, for our family of $\emph{k-}Qite$ graphs, one can consider the weight vector $w=(1,1,\ldots,1,k-1)$, where more weight is given to the $(2k+1)^{th}$ vertex. Note that the weighted independence number of this weighted graph is still $k$. However, numerically solving the weighted Lov\'asz theta SDP for this graph suggests $\vartheta(\emph{k-}Qite,w) - \alpha(\emph{k-}Qite,w)> 0.26$ for all $k \geq 3$; for large $k$ this difference converges to $\approx 1/3$. Note, however, that since for large $k$ the ratio $\frac{\vartheta(\emph{k-}Qite,w)}{\alpha(\emph{k-}Qite,w)} \approx 1$, this approach is still not noise robust.
\item Implementing graph-theoretic quantum dimension witnesses in actual experiments.
\item Obtaining the classical memory cost \cite{kleinmann2011memory,CGGX18} for simulating graph-theoretic dimension witnesses and identifying quantum correlations achievable with low-dimensional quantum systems but requiring very-high dimensional classical systems.
\item Extending the graph-theoretic framework to classical dimension witnessing.
\item Developing a general graph-theoretic framework to analyse and unify different approaches to dimension witnessing.
\end{itemize}
\section*{Acknowledgments}
The authors thank Zhen-Peng Xu for valuable comments on the arXiv version and for suggesting the use of weighted graphs for increasing the quantum-classical gap, as described in the conclusions. The authors also thank Antonios Varvitsiotis for helpful discussions. We also thank the National Research Foundation of Singapore and the Ministry of Education of Singapore for financial support. This work was also supported by \href{http://dx.doi.org/10.13039/100009042}{Universidad de Sevilla} Project Qdisc (Project No.\ US-15097), with FEDER funds, \href{http://dx.doi.org/10.13039/501100001862}{MINECO} Project No.\ FIS2017-89609-P, with FEDER funds, and QuantERA grant SECRET, by \href{http://dx.doi.org/10.13039/501100001862}{MINECO} (Project No.\ PCI2019-111885-2).
\bibliographystyle{alpha}
\section{Introduction}
Phase-contrast x-ray imaging provides superior contrast for materials of low atomic number, including soft tissues, compared to traditional attenuation-based radiography, especially in high-energy regimes \cite{russo2017handbook}. This has the potential to enable greater image quality with less radiation dose delivered to the patient in a clinical setting \cite{keyrilainen2011,kitchen2017ct}. Analyser-Based Phase-Contrast Imaging (ABPCI), also referred to as Diffraction Enhanced Imaging, is a phase-contrast imaging technique that utilizes an analyser crystal to render phase gradients visible \cite{goetz1979,forster1980double,somenkov1991refraction,davis1995,bushuev1996dynamicalx,gureyev1997regimes,bushuev1997wave,chapman1997,bushuev1998wave,bravin2003exploiting,menk2005diffraction,coan2005phase,brankov2006computed,rigon2007generalized,zhou2014analyzer}. ABPCI is highly sensitive to the components of phase gradients lying in the plane of diffraction of the analyser crystal, meaning it has 1-D phase sensitivity \cite{andre2001dynamical,wilkins2014}. The analyser crystal is mainly sensitive to the first derivative of the phase shift caused by the sample, which means it can pick up small variations in the wavefield propagated through a sample. This 1-D phase sensitivity is also typical of other phase-contrast methods such as grating interferometry \cite{david2002,momose2003}. For grating interferometry, \citeasnoun{ruthihauser2011} developed a method, in a computed tomography setup, to overcome the problem of 1-D sensitivity by utilizing an inclined geometry for the two 1-D gratings, rotating them by $45^\circ$ about the optical axis to reconstruct a 2-D phase gradient. Taking a tomographic projection and its respective $180^\circ$ projection, then flipping the second projection, enables orthogonal components of the phase gradient to be reconstructed; these can be combined and integrated to retrieve the phase map.
\begin{figure}
\includegraphics[width=\textwidth]{setup_final.PNG}
\caption{Inclined geometry Laue ABPCI experimental setup. The inclination was applied through rotation of the sample and detector about the optical axis.}
\label{fig:realsetup}
\end{figure}
The aim of this paper is to apply the inclined-geometry methodology proposed by \citeasnoun{ruthihauser2011} to ABPCI, in order to reconstruct 2-D phase maps and improve the 3-D reconstructions of an object's complex refractive index. This 2-D phase reconstruction for ABPCI was achieved by rotating the detector and sample by an identical angle about the optical axis, while using the Laue geometry of ABPCI, as seen in Figure \ref{fig:realsetup}. 2-D ABPCI phase reconstruction has previously been achieved by \citeasnoun{modregger2007}, using two analyser crystals in perpendicular directions, and by \citeasnoun{pavlov2004}, \citeasnoun{pavlov2005} and \citeasnoun{coan2005phase}, using a variant of combined ABPCI and Propagation-Based Phase-Contrast Imaging (PBPCI). Our 2-D phase-sensitive ABPCI is more straightforward and robust than the aforementioned methods \cite{pavlov2004,pavlov2005,coan2005phase,modregger2007}. For instance, we use a single crystal in a simple setup, which does not suffer from the intensity loss due to the interaction of the wavefield with an additional crystal.
\section{Theory and Methods}\label{sec:theo}
This section describes the theory and the methods using an inclined geometry Laue ABPCI setup to reconstruct the real and imaginary parts of the refractive index.
\subsection{Approximations applied to Phase-Contrast Imaging}\label{sec:applied}
The phase retrieval procedure outlined in Section~\ref{sec:phase} is based upon the Geometrical Optics Approximation (GOA). The GOA incorporates the paraxial and projection approximations, allowing the simplification of the phase retrieval procedure by assuming smallness of the second derivative of the phase \cite{indenbom1972geometric,bushuev1996dynamicalx,bushuev1998wave,pavlov2001variant,pavlov2004,paganin2006coherent,nesterets2006}. The Laue geometry of ABPCI allows phase retrieval to be performed with two images of the sample (diffracted and transmitted) acquired simultaneously. Applying the GOA gives us a method, using the transmitted and diffracted projections acquired from the Laue geometry setup, to separate the refraction and attenuation information \cite{ingal1995xx,bushuev1996dynamicalx,kitchen2008,kitchen2010a,kitchen2011}. Some samples have unresolvable microstructure that produces Ultra-Small-Angle X-ray Scattering (USAXS). The USAXS can be reconstructed using multiple-image radiography, which requires multiple data sets to be recorded upon rotation of the analyser crystal \cite{oltulu2003,wernick2003,pagot2003,nesterets2006}. The multiple-image method allows the effects of refraction and USAXS to be separated and can be applied with the Laue geometry using data sets of either the transmitted or diffracted projections \cite{kitchen2010a}.
We focused on a sample that does not have any appreciable microstructure within the sample and hence produces minimal USAXS as shown in \citeasnoun{kitchen2010a}. Therefore the simultaneous dual-image Laue geometry method, neglecting USAXS, is suitable for imaging this sample.
The base of the Borrmann triangle (see, e.g., \citeasnoun{bushuev2005}) is about 15 microns in our experiment. We therefore used a detector with an effective pixel size of $16.2$ microns and a spatial resolution of $\sim{3}$ pixels ($\sim{50}$ microns), i.e., larger than the Borrmann triangle base. Thus the Borrmann fan could not significantly affect the resolution in our experiment.
\subsection{ABPCI Phase retrieval}\label{sec:phase}
We performed phase retrieval following a method derived by \citeasnoun{kitchen2010a} utilizing rocking curves (RCs) produced by rotating the analyser crystal. These rocking curves are produced for every pixel in the transmitted and diffracted images. We also measured the ratio of diffracted over transmitted projections without the object present in the wavefield.
RCs can be modelled using a Taylor series, a Gaussian distribution or a PearsonVII function \cite{pearson1916ix}, allowing phase retrieval to be performed. Gaussian functions are commonly used to fit the RCs, as they are relatively easy to implement and accurately model the bell-curve shape \cite{zhifeng2007extraction,hu2008comparison,diemoz2010absorption,arfelli2018}. However, Gaussian functions can fail to accurately model the peak and tails of the RC arising from the long-slit geometry of ABPCI \cite{oltulu2003,nesterets2006}. The broadening of the RC tails in the long-slit geometry is caused by scattering being integrated in the direction perpendicular to the diffraction plane \cite{suortti2013analyser}. PearsonVII functions have been shown to model the peaks and tails of the RCs more accurately \cite{kitchen2010a}. Using the phase retrieval method of \citeasnoun{kitchen2010a}, with the GOA, the transmitted ($I_T$) and diffracted ($I_D$) intensities produced by the Laue geometry ABPCI setup can be approximated as
\begin{equation}\label{rc1}
I_T = I_R{T(\Delta\theta+\Delta\theta{'})},
\end{equation}
and
\begin{equation}\label{rc2}
I_D = I_R{D(\Delta\theta+\Delta\theta{'})},
\end{equation}
respectively. Here $I_R$ is the intensity of the refracted beam incident on the crystal, $T(\Delta\theta+\Delta\theta{'})$ and $D(\Delta\theta+\Delta\theta{'})$ are the angularly dependent transmission and diffraction coefficients, $\Delta\theta$ is the deviation from the Bragg angle and $\Delta\theta{'}$ is the shift caused by refraction in the object, as seen in Figure \ref{fig:RatioRC}.
We can obtain an expression independent of $I_R$ by dividing Eqn (\ref{rc2}) by Eqn (\ref{rc1}) to obtain
\begin{equation}\label{rc3}
\frac{I_D}{I_T} = \frac{D(\Delta\theta+\Delta\theta{'})}{T(\Delta\theta+\Delta\theta{'})}.
\end{equation}
This ratio RC is used to perform phase retrieval. We modelled the RCs with a PearsonVII function given by \citeasnoun{hall1977} of the form
\begin{equation}\label{eqn70000}
y=c\big[1+(x-\tilde{x})^2/(ma^2)\big]^{-m}.
\end{equation}
Here $c$ defines the amplitude, $x$ is the independent variable, $\tilde{x}$ is the centroid, $m$ is the rate of decay of the tails, and $a$ and $m$ together determine the profile of the curve. This function can be adapted to the type of bell curve by modifying $m$: the Lorentzian ($m=1$), the modified Lorentzian ($m=2$) and the Gaussian ($m\rightarrow\infty$). We can apply this model to the ratio RC to give
\begin{equation}\label{rc6}
\frac{I_D}{I_T} = c[1+(\Delta\theta+\Delta\theta{'})^2/(ma^2)]^{-m}.
\end{equation}
We can rearrange Eqn (\ref{rc6}) for $\Delta\theta+\Delta\theta{'}$ to give
\begin{equation}\label{eq:1333}
\Delta\theta+\Delta\theta{'} = \pm{a}\sqrt{m[(cI_T/I_D)^{1/m}-1]},
\end{equation}
which is an expression for the angular deviation, with respect to the Bragg angle, of the wavefield incident upon the analyser crystal.
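In code, the fit of the ratio RC with the PearsonVII profile of Eqn (\ref{rc6}) and the inversion of Eqn (\ref{eq:1333}) might look as follows (a sketch, not the authors' implementation; \texttt{theta\_grid} and \texttt{ratio\_rc} stand for the measured analyser angles and the corresponding $I_D/I_T$ values of a pixel, and the initial guess is an assumption):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def pearson7(theta, c, a, m):
    # Ratio rocking curve model of Eqn (rc6)
    return c * (1.0 + theta**2 / (m * a**2)) ** (-m)

def fit_ratio_rc(theta_grid, ratio_rc):
    """Fit the no-sample ratio RC; returns PearsonVII parameters (c, a, m)."""
    (c, a, m), _ = curve_fit(pearson7, theta_grid, ratio_rc,
                             p0=(1.0, 5e-6, 2.0))
    return c, a, m

def angular_deviation(I_T, I_D, c, a, m, branch=-1.0):
    # Eqn (eq:1333); `branch` selects the +/- sign, i.e. which side of the
    # rocking curve the working point sits on
    return branch * a * np.sqrt(m * ((c * I_T / I_D) ** (1.0 / m) - 1.0))
\end{verbatim}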
Furthermore, we can rearrange Eqns (\ref{rc1}) and (\ref{rc2}) for $I_R$ to give
\begin{equation}\label{rc4}
I_R = \frac{I_T}{{T(\Delta\theta+\Delta\theta{'})}},
\end{equation}
\begin{equation}\label{rc5}
I_R = \frac{I_D}{{D(\Delta\theta+\Delta\theta{'})}}.
\end{equation}
\begin{figure}
\includegraphics[width =\textwidth]{RC_Final.png}
\vspace*{-10mm}
\caption{Ratio RC: the diffracted RC divided by the transmitted RC. The analyser crystal was positioned at the working point marked by the dashed red line by rotating it about the horizontal axis to achieve an angular shift $\Delta\theta$, shown as the red arrow, from the Bragg angle $\theta_B$ position, which is placed at the origin. When an object is placed in the path of the wavefield, it causes refraction and attenuation of the wavefield propagated through the object. This changes the incident angle of the wavefield entering the analyser crystal and thus shifts it to a new position on the RC, shown as the blue line, with an angular shift $\Delta\theta'$ shown as the blue double arrow. We can calculate this shift from the change in intensity. These calculations generate an intensity map and two $\Delta\theta'$ maps for every projection, as seen in Figure \ref{f9}.}
\label{fig:RatioRC}
\end{figure}
This gives us potentially two relations to calculate the intensity contrast of the x-ray wavefield. We can fit an inverted PearsonVII function to the transmitted RC such that
\begin{equation}\label{eq:13333}
I_T=I_RT(\theta)=I_R\{f-d[1+\theta^2/(nb^2)]^{-n}\}.
\end{equation}
The PearsonVII coefficients $b$, $d$, and $n$ are equivalent variables to $a$, $c$ and $m$ in Eqn (\ref{rc6}) and applied to avoid confusion between the two fitted RCs with $f$ being the only unique coefficient.
\subsection{Phase Retrieval using an Inclined Geometry}\label{sec:app}
The phase shift of the wave propagated through the sample, with spatial coordinates defined in Figure \ref{fig:noninc}, can be expressed as \cite{paganin2006coherent}
\begin{equation}\label{eq:14}
\Phi=-\int{k}\delta(x,y,z)dz.
\end{equation}
Here $\delta$ is the refractive index decrement of the sample and $k=2\pi/\lambda$ is the wavenumber. Furthermore, $\delta$ is related to the absorptive properties of the sample, $\beta$, and the refractive index, $n$, through \cite{james1954}
\begin{equation}\label{inteqre}
n=1-\delta+i\beta.
\end{equation}
We can measure the appropriate components of the phase gradient
\begin{equation}\label{eq:15}
\frac{\partial \Phi}{\partial x} = -k\Bigg[\frac{\partial }{\partial x}\int\delta(x,y,z)dz\Bigg],
\end{equation}
by looking at the angular shift in the rocking curve, $\Delta\theta{'}$, caused by the object in the beam,
\begin{equation}\label{eq:16}
\Delta\theta{'} =-\frac{1}{k}\frac{\partial \Phi}{\partial x}.
\end{equation}
It should be noted that Eqns (\ref{eq:15}) and (\ref{eq:16}) and Figure \ref{fig:noninc} describe the situation in which both the $x$ and $x_1$ axes are parallel to the direction of the 1-D sensitivity of the analyser crystal. We applied an $\alpha=8^\circ$ inclination of the object and detector, clockwise following the x-ray propagation direction, from which a two-dimensional phase gradient can be reconstructed. This is done by combining differential phase images from opposing projections, which yields both components of the phase gradient vector, $\frac{\partial \Phi}{\partial x}$ and $\frac{\partial \Phi}{\partial y}$, by retrieving $\Delta\Tilde{\theta{'}}$ and $\Delta\hat{\theta}{'}$. Here $\Delta\Tilde{\theta{'}}$ is the rocking curve shift at the $\tilde{\phi}$ projection, while $\Delta\hat{\theta}{'}$ is the shift at the $\tilde{\phi} + 180^{\circ}$ projection. In our chosen geometry, $\Delta\hat{\theta{'}}$ corresponds to the projection of $\bm{\rho{'}}$ on the $x_1$ axis (see Figure \ref{fig:inc}) and $\Delta\Tilde{\theta{'}}$ corresponds to the projection of $\bm{\rho}$ on the $x_1$ axis. The equations for these two angular shifts take the form
\begin{equation}\label{eq:17}
\Delta\Tilde{\theta{'}}=-\frac{\kappa_1\bm{\rho}_{x}+\kappa_2\bm{\rho}_{y}}{k},
\end{equation}
\begin{equation}\label{eq:18}
\Delta\hat{\theta}{'}=-\frac{\kappa_3\bm{\rho}_{x}+\kappa_4\bm{\rho}_{y}}{k},
\end{equation}
where $\kappa_1$, $\kappa_2$, $\kappa_3$ and $\kappa_4$ are constants accounting for the rotation of the detector and sample at both projections, and $\bm{\rho}_{x}$, $\bm{\rho}_{y}$ are the components of the phase gradient in the $x$ and $y$ directions, respectively. To calculate these values we need to consider the effect of the $\alpha$ inclination of the object and detector by examining both the non-inclined and inclined geometries.
For clarity, rather than using the angle $\Delta\theta{'}$, let us consider a phase gradient, $\bm{\rho}$, and its respective $180^\circ$ projection, $\bm{\rho}'$, in the non-inclined geometry (see Figure \ref{fig:noninc}). We can derive simple expressions for the $x$, $x_1$ and $y$, $y_1$ components of $\bm{\rho}$ using simple trigonometry (see Figure \ref{fig:noninc}):
\begin{figure}
\includegraphics[width=\textwidth]{nonincgeodone.PNG}
\vspace*{-10mm}
\caption{Non-inclined geometry for the phase gradient, $\bm{\rho}$, and its corresponding $180^\circ$ projection, $\bm{\rho}'$. The two coordinate systems $(x, y)$ and $(x_1, y_1)$, attached to the object and detector and to the analyser crystal, respectively, are equivalent here, and $z$ is the propagation direction of the x-ray wavefield, going into the page. In this setup the analyser crystal is only sensitive to variations of the phase in the $x_1$ direction.}
\label{fig:noninc}
\end{figure}
\begin{equation}\label{eq:19}
\bm{\rho}_{x_1}=|\bm{\rho}|\cos(\psi)=\bm{\rho}_{x},
\end{equation}
\begin{equation}\label{eq:20}
|\bm{\rho}_{y_1}|=|\bm{\rho}|\sin(\psi)=|\bm{\rho}_{y}|.
\end{equation}
Here $|\bm{\rho}|$ is the magnitude of the phase gradient, $x$ and $y$ are the axes for the object and detector, $x_1$ and $y_1$ are the axes for the analyser crystal, which is only sensitive to phase variations in the $x_1$ direction, and $\psi$ is the angle between the vector $\bm{\rho}$ and the $x$ axis.
The two coordinate systems are equivalent, as shown in Figure \ref{fig:noninc}. However, if we rotate the detector and sample by an angle $\alpha$ anticlockwise along the path of the wavefield, the $x$, $y$ coordinates and the orientation of the object change with respect to the coordinates $x_1$, $y_1$, as seen in Figure \ref{fig:inc}. We can again derive expressions for the components of $\bm{\rho}$, utilizing the angle-sum identity $\cos(A+B) =\cos{A}\cos{B}-\sin{A}\sin{B}$, to give
\begin{equation}\label{eq:21}
\begin{split}
\bm{\rho}_{x_1} =|\bm{\rho}|\cos(\psi-\alpha)=|\bm{\rho}|[\cos\psi\cos(\alpha)+\sin{\psi}\sin(\alpha)]=-k\Delta\Tilde{\theta{'}},
\end{split}
\end{equation}
\begin{equation}\label{eq:22}
\bm{\rho}'_{x_1}=|\bm{\rho}|\cos(\psi+\alpha)=|\bm{\rho}|[\cos\psi\cos(\alpha)-\sin\psi\sin(\alpha)]=-k\Delta\hat{\theta}{'},
\end{equation}
from Eqn (\ref{eq:16}). From here we can add Eqns (\ref{eq:21}) and (\ref{eq:22}) to obtain
\begin{equation}\label{eq:23}
-k(\Delta\Tilde{\theta{'}}+\Delta\hat{\theta}{'})=2\bm{\rho}_{x}\cos(\alpha).
\end{equation}
\begin{figure}
\includegraphics[width=\textwidth]{incgeodone.PNG}
\vspace*{-10mm}
\caption{Inclined geometry where the object, detector and, therefore, $(x,y)$ coordinate system has been rotated by $\alpha$ anticlockwise with respect to the $(x_1,y_1)$ coordinate system about the optical axis $z$. While the analyser crystal is still only sensitive to the phase variations in the $x_1$ direction in the $(x_1,y_1)$ coordinate system, it is sensitive to both the $x$ and $y$ components of the gradient of phase. This allows a 2-D phase gradient to be reconstructed from comparison of the two projections $\bm{\rho}$ and $\bm{\rho}'$ as they provide unique information in the inclined geometry setup.}
\label{fig:inc}
\end{figure}
Here we used $\bm{\rho}_{x}$, from following Figure \ref{fig:inc}, as
\begin{equation}\label{eq:41}
\bm{\rho}_{x}=|\bm{\rho}|\cos(\psi),
\end{equation}
then rearranging Eqn (\ref{eq:23}) to obtain
\begin{equation}\label{eq:24}
\bm{\rho}_{x}=-\frac{k(\Delta\Tilde{\theta{'}}+\Delta\hat{\theta}{'})}{2\cos(\alpha)}.
\end{equation}
Similarly for subtracting Eqns (\ref{eq:21}) and (\ref{eq:22}) we get
\begin{equation}\label{eq:25}
-k(\Delta\Tilde{\theta{'}}-\Delta\hat{\theta}{'}) =2\bm{\rho}_{y}\sin(\alpha).
\end{equation}
Here we used $\bm{\rho}_{y}$, from following Figure \ref{fig:inc}, as
\begin{equation}\label{eq:42}
\bm{\rho}_{y}=|\bm{\rho}|\sin(\psi),
\end{equation}
then after rearranging Eqn (\ref{eq:25}) we obtain
\begin{equation}\label{eq:26}
\bm{\rho}_{y}=-\frac{k(\Delta\Tilde{\theta{'}}-\Delta\hat{\theta}{'})}{2\sin(\alpha)}.
\end{equation}
Therefore, going back to Eqns (\ref{eq:17}) and (\ref{eq:18}) the expression for the coefficients is given by
\begin{equation}\label{eq:27}
\kappa_1=\cos(\alpha),
\end{equation}
\begin{equation}\label{eq:28}
\kappa_2=\sin(\alpha),
\end{equation}
\begin{equation}\label{eq:29}
\kappa_3=\cos(\alpha),
\end{equation}
\begin{equation}\label{eq:30}
\kappa_4=-\sin(\alpha).
\end{equation}
This method allows the reconstruction of a 2-D phase gradient from the additional phase information gathered using the inclined geometry. This is achieved by mirroring $\Delta\hat{\theta}'$ about the vertical axis so that it matches its opposing projection. These projections provide different information about the object, which can be extracted and used in tomographic reconstruction.
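A sketch of this combination step, together with one common Fourier-space integrator (our choice; the paper does not specify the integrator), is given below. Here \texttt{dtheta\_a} and \texttt{dtheta\_b} denote the aligned $\Delta\Tilde{\theta{'}}$ and mirrored $\Delta\hat{\theta}{'}$ maps, \texttt{wavelength} is assumed known, $x$ runs along array axis 1 and the gradients are sampled per pixel:
\begin{verbatim}
import numpy as np

def phase_from_shifts(dtheta_a, dtheta_b, wavelength, alpha_deg=8.0):
    alpha = np.deg2rad(alpha_deg)
    k_wave = 2.0 * np.pi / wavelength
    grad_x = -k_wave * (dtheta_a + dtheta_b) / (2.0 * np.cos(alpha))  # Eq 24
    grad_y = -k_wave * (dtheta_a - dtheta_b) / (2.0 * np.sin(alpha))  # Eq 26
    ny, nx = grad_x.shape
    fx = np.fft.fftfreq(nx)[None, :]      # x along array axis 1
    fy = np.fft.fftfreq(ny)[:, None]      # y along array axis 0
    denom = 2j * np.pi * (fx**2 + fy**2)
    denom[0, 0] = 1.0                     # avoid dividing by zero at DC
    phi_hat = (fx * np.fft.fft2(grad_x)
               + fy * np.fft.fft2(grad_y)) / denom
    phi_hat[0, 0] = 0.0                   # constant phase offset is arbitrary
    return np.real(np.fft.ifft2(phi_hat))
\end{verbatim}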
\section{Experimental Setup}\label{sec:exp}
This experiment was performed in hutch 3 of beamline 20B2 in the Medium-length Beamline Facility at the SPring-8 synchrotron radiation facility (Japan), using a mounted perspex phantom as the sample. The imaged cylindrical perspex phantom was $12.75$\,mm in diameter, with four $1.02$\,mm diameter cylindrical holes in its top. Two of these holes were filled with aluminium and teflon pins, each of $1.02$\,mm diameter with a cap on top, while the other two were left empty. This phantom is discussed in greater detail in \citeasnoun{Beltran2010}. We employed an inclined Laue geometry ABPCI experimental setup, as shown in Figure \ref{fig:realsetup}.
Following from left to right in Figure \ref{fig:realsetup}, the synchrotron produces x-ray wavefields approximately 210\,m upstream of the sample. The x-ray wavefield first encounters a double-bounce monochromator in a non-dispersive setup. This consists of two parallel Si$(1 1 1)$ crystals that monochromatize the x-rays, yielding a $26$\,keV monochromatic wavefield with energy bandwidth $\Delta{E}/E\approx{10^{-4}}$ \cite{goto2001construction}. The x-ray wavefield then interacts with the object, with intensity $I_R$ just after the object, which was rotated $\alpha=8^\circ$ clockwise about the optical axis following the propagation direction of the x-ray wavefield. \citeasnoun{ruthihauser2011} applied an ideal $45^\circ$ rotation of the two gratings in their experimental setup, while the tilt stages available for our experiment were limited to $8^\circ$. Because of the small $\alpha=8^\circ$ inclination angle we applied, our setup is still predominantly sensitive to phase effects in the $x$ direction. The x-ray wavefield then travelled $22$\,cm from the sample before being incident on the near-perfect Si$(1 1 1)$ analyser crystal in the Laue geometry.
This analyser crystal consisted of a nominally $100$\,$\mu$m thick silicon wafer connected at its base to a monolithic silicon slab. The interaction between the x-ray wavefield and the analyser crystal causes the wavefield to be simultaneously diffracted and transmitted, with respective intensities $I_D$ and $I_T$. The diffracted wavefield then propagates at an angle $2\theta_B=8.722^\circ$ with respect to the propagation direction of the incident wavefield \cite{stepanov2004x,stepxray}.
The data from these separated beams were gathered by a $4000\times{2672}$ pixel Hamamatsu CCD camera (C9399-124F), with a tapered fibre optic bonded to the CCD chip and a $20$\,$\mu$m thick gadolinium oxysulfide ($Gd_2O_2S:Tb^+;P43$) phosphor. The native pixel size of $9$\,$\mu$m corresponds to an effective pixel size of $16.2$\,$\mu$m through the 1.8:1 taper ratio. The CCD detector was positioned $16$\,cm from the analyser crystal and was also rotated $8^\circ$ clockwise following the propagation direction of the x-ray wavefield.
\begin{figure}
\includegraphics[width=0.8\textwidth]{fiducials_done.PNG}
\caption{a) Transmitted (left) and diffracted (right) projections of three gold foil fiducial markers captured in a single exposure. Separated and aligned b) diffracted and c) transmitted images produced using the positions of the three fiducial markers in the projections. We can check the quality of the alignment by producing an image of d) the difference between the diffracted and transmitted projections. Any misalignment between these two images would be visible as bright and dark arcs around the fiducial markers, making them stand out from the background; only slight imperfections in the alignment can be seen. The `chicken wire' structure is produced by the fibre optic taper in the detector, through which the wavefield travels before being incident upon the CCD chip.}
\label{fig:first}
\end{figure}
\begin{figure}
\includegraphics[width=1\textwidth]{phantommaps_done.PNG}
\label{f9}
\vspace*{-10mm}
\caption{Caption on next page}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\vspace*{-10mm}
\caption{Maps generated throughout the experimental procedure, beginning with a) the raw tomographic data, projection number 362, showing the diffracted (left) and transmitted (right) intensities in a single projection, which need to be separated and aligned. We then perform phase retrieval to obtain maps of b) $\Delta\Tilde{\theta{'}}$, the change of the angle of incidence upon the analyser crystal, and c) $\Delta\hat{\theta}{'}$, the $180^\circ$ equivalent of $\Delta\Tilde{\theta{'}}$, in microradians. From these we split $\Delta\Tilde{\theta{'}}$ and $\Delta\hat{\theta}{'}$ into the d) vertical and e) horizontal components of the phase gradient, divided by the wavenumber, in microradians. From the transmitted projection we obtain a map of f) intensity, while performing 2-D integration using d) and e) yields the g) phase map in radians. We then performed tomographic reconstruction using the intensity and phase maps to calculate h) the $\beta$ map ($\times{10^{-9}}$) and i) the $\delta$ map ($\times{10^{-6}}$), respectively.}
\end{figure}
\subsection{Diffracted and Transmitted Image Alignment}\label{sec:align}
The data were dewarped using triangular interpolation to correct for the distortion caused by the fibre optic taper \cite{kitchen2010a,islam2010}. We applied a Laue geometry ABPCI method that allows the simultaneous acquisition of diffracted and transmitted images of the object, captured by a single CCD detector, similar to \citeasnoun{kitchen2011}; see Figure \ref{fig:realsetup}.
The alignment of the transmitted and diffracted images was achieved using three gold foil disks placed in the object plane, as seen in Figure \ref{fig:first}. Upon locating the central coordinates of the foils, we used the three point pairs to align the images via the affine transformation described by \citeasnoun{kitchen2011}. From Figure \ref{fig:first}, the alignment procedure appears to align the transmitted and diffracted projections fairly successfully, as the aligned and subtracted gold foil markers blend in well with the background, as seen in Figure \ref{fig:first}d).
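With exactly three non-collinear markers, the affine parameters are determined exactly by a linear solve; a minimal sketch (ours, with assumed array names; the authors follow \citeasnoun{kitchen2011}) reads:
\begin{verbatim}
import numpy as np

def affine_from_fiducials(src, dst):
    """3x2 affine parameters mapping three (x, y) centroids src onto dst.

    With exactly three non-collinear markers the solve is exact; more
    markers would call for np.linalg.lstsq instead.
    """
    A = np.hstack([np.asarray(src, float), np.ones((3, 1))])  # [x y 1] rows
    return np.linalg.solve(A, np.asarray(dst, float))

def apply_affine(M, points):
    P = np.hstack([np.asarray(points, float),
                   np.ones((len(points), 1))])
    return P @ M
\end{verbatim}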
\section{Results}
Following the phase retrieval procedure discussed in Section~\ref{sec:theo}, maps of the object were obtained, as shown in Figure \ref{f9}, with the analyser crystal positioned at a working point of $50\%$ peak intensity on the left side of the RC. Beginning with the raw data a), we have the transmitted and diffracted phase-contrast images on the right- and left-hand sides of the image, which must be separated and aligned, as discussed in Section~\ref{sec:align}. We then fit PearsonVII rocking curves to the ratio and diffracted projections, with no object present in the beam, for each pixel in the images. We used these fitted rocking curves with the transmitted and diffracted projections to calculate b) $\Delta\Tilde{\theta{'}}$, c) $\Delta\hat{\theta}{'}$ and f) the attenuation-contrast image.
We then split $\Delta\Tilde{\theta{'}}$ and $\Delta\hat{\theta}{'}$ into the d) vertical and e) horizontal components of the phase gradient, which were integrated to calculate the phase map g). We then performed $180^\circ$ CT filtered back projection reconstruction using the attenuation-contrast and corrected phase maps (see Section~\ref{sec:corr}) to produce 3-D reconstructions of $\beta$ and $\delta$, respectively. Values measured from a slice of the reconstructed $\delta$ and $\beta$ maps of Figure \ref{f9} i) and h) are shown in Table \ref{table1}. The uncertainties were calculated by taking the standard deviation over an area around each reference point in the slice \cite{rasband1997imagej}.
The measured $\beta$ values are in good agreement with the theoretical ones. However, the $\delta$ values are all approximately a factor of two smaller than the theoretical ones.
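The reconstruction step amounts to filtered back projection of the attenuation and phase sinograms; a sketch using scikit-image's \texttt{iradon} follows (the sinogram arrays, \texttt{k\_wave} and \texttt{pixel\_size} are assumed inputs, and the normalisation below is ours):
\begin{verbatim}
import numpy as np
from skimage.transform import iradon

def reconstruct_slices(mu_sinogram, phase_sinogram, k_wave, pixel_size):
    # sinograms: shape (detector_pixels, num_projections) over 180 degrees
    n_proj = mu_sinogram.shape[1]
    angles = np.linspace(0.0, 180.0, n_proj, endpoint=False)
    # -ln(I/I_0) = 2k * integral(beta dz); Phi = -k * integral(delta dz)
    beta = iradon(mu_sinogram, theta=angles) / (2.0 * k_wave * pixel_size)
    delta = -iradon(phase_sinogram, theta=angles) / (k_wave * pixel_size)
    return beta, delta
\end{verbatim}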
\begin{table}
\label{table1}
\caption{Reconstructed $\delta$ and $\beta$ values for media present in the reconstructed phantom with theoretical values obtained from \citeasnoun{henke1993x}. Note: Al = Aluminium, PMMA = Perspex, The = Theoretical, Mea = Measured.}
\begin{tabular}{lllll}
&$\beta_{The}$ & $\beta_{Mea}$ & $\delta_{The}$ & $\delta_{Mea}$\\
\hline
Al & $1.5\cdot{10^{-9}}$ & $(1.6\pm0.1)\cdot{10^{-9}}$ & $8.0\cdot{10^{-7}}$ & $(4.3\pm0.1)\cdot{10^{-7}}$\\
PMMA & $1.4\cdot{10^{-10}}$ & $(1.5\pm0.3)\cdot{10^{-10}}$ & $3.9\cdot{10^{-7}}$ & $(2.1\pm0.1)\cdot{10^{-7}}$\\
Teflon & $3.7\cdot{10^{-10}}$ & $(4.0\pm0.3)\cdot{10^{-10}}$ & $6.5\cdot{10^{-7}}$ & $(3.6\pm0.1)\cdot{10^{-7}}$\\
\end{table}
\begin{figure}
\includegraphics[width=1\textwidth]{Phase_0061_uncorrdone.png}
\includegraphics[width=1\textwidth]{Phase_0061_corrdone.png}
\includegraphics[width=1\textwidth]{corr_uncorr_plot.png}
\vspace*{-10mm}
\caption{Phase maps a) without and b) with the linear trend correction, applied by measuring the linear gradient between the left and right edges of the phantom and then dividing it out of the entire phase map. c) Profiles of the uncorrected and corrected phase maps, given by the red solid line and blue dashed line, respectively, which approximately show this linear trend.}
\label{fig:linear}
\end{figure}
\subsection{Corrections}\label{sec:corr}
Under plane-wave illumination the phase outside the object should be approximately constant. However, Figure \ref{fig:linear} shows that large low-frequency phase gradients are present across the images. These come from the 2-D integration used to calculate the phase map, which amplifies low-frequency noise in the image. A linear ramp correction was applied to the phase map, as the phase on one side of the phantom was underestimated with respect to the other. This linear correction was applied during the phase retrieval process to all phase maps, in order to make the two sides of the phantom have the same phase value. Figures \ref{fig:linear}a) and b) show the phase maps without and with the correction applied, and c) shows the corresponding profiles. For the uncorrected phase map, the right-hand side is lower than the left-hand side, and there is a slope in the parabolic shape. This is corrected by estimating the linear ramp from the phase values on the left- and right-hand sides of the phantom for each line, normalising, and then dividing the phase map by the linear ramp. The linear ramp of the phase behaves inhomogeneously, changing in both magnitude and side of the phantom over the sequence of acquired phase maps. Causes of these approximately linear trends are discussed in Section 5.
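A sketch of this row-wise correction (ours; \texttt{left\_col} and \texttt{right\_col} mark columns just inside the phantom edges and are assumed inputs, while the normalise-then-divide convention follows the text) reads:
\begin{verbatim}
import numpy as np

def correct_linear_ramp(phase, left_col, right_col):
    corrected = np.empty_like(phase)
    cols = np.arange(phase.shape[1])
    for r, row in enumerate(phase):
        # straight line through the phase values at the phantom edges
        ramp = np.interp(cols, [left_col, right_col],
                         [row[left_col], row[right_col]])
        corrected[r] = row / (ramp / ramp.mean())  # normalise, then divide
    return corrected
\end{verbatim}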
\section{Discussion}
Qualitatively, our reconstructions of the $\beta$ and $\delta$ distributions shown in Figures \ref{f9}h) and i) are in excellent agreement with expectation, and minimal artefacts are seen in the reconstructions, despite the need to correct the phase maps. The correction procedure applied during phase retrieval provided high-contrast, high-resolution reconstructions of the object, even though it is not the most effective CT filtering method. The quantitative measures of the attenuation properties are in excellent agreement with theoretical predictions, as shown in Table \ref{table1}. The underestimation of the $\delta$ values, however, is most likely due to underestimation of the phase gradient. Inaccuracies in the phase gradient maps lead to low-frequency artefacts in the phase maps, which were partially corrected by the linear correction. These inaccuracies can arise from (1) the failure of the GOA at boundaries due to high phase gradients, (2) imperfect alignment of the transmitted and diffracted projections and (3) the shallow $8^\circ$ inclination applied.
We explore these issues, beginning with point (1). The GOA assumes slow variation of the phase as the wavefield propagates through the sample. This assumption may break down at the boundaries between PMMA and air, where the refractive index difference is quite large. This could be mitigated by submerging the sample in a fluid with refractive properties similar to PMMA, such as paraffin, which would reduce the change in the phase gradient. \citeasnoun{ruthihauser2011} obtained results for rat cerebellum submerged in paraffin, using the inclined geometry, of $\delta=4\times{10^{-7}}$, with only small discrepancies compared to the theoretical value of $3.52\times{10^{-7}}$ from \cite{Ts,brennan1992suite,white1988tissue,chantler2003x,zschornack2007handbook,stepanov2004x,stepxray,stevenson1993x}. In contrast, \cite{ruthihauser2011,rutishauser2013} demonstrated a discrepancy of about one order of magnitude between the reconstructed and theoretical values when imaging a cylindrical PMMA phantom in air with a photon energy of 25\,keV $(\Delta{E}/E\approx{2\%})$: the $\delta$ value for perspex was reconstructed as $0.4\times{10^{-7}}$, compared to the theoretical value of $3.9\times{10^{-7}}$ \cite{henke1993x}. However, \citeasnoun{kitchen2010a} showed good agreement between the theoretical and reconstructed values of $\delta$ for a PMMA block in air with cylindrical cavities, at a photon energy of $26$\,keV, using a 1-D phase-sensitive Laue ABPCI setup. In that study, the PMMA block was positioned such that the direction of the phase gradient produced by the cylinder was aligned with the direction of maximum sensitivity of the analyser crystal. Therefore, part of our deviation from the theoretical value may result from the restricted angle ($8^\circ$) by which we could rotate the sample and detector.
Following with point (2), any misalignment between the transmitted and diffracted projections can result in significant inaccuracies in the reconstructed phase gradients. Figure \ref{fig:first} shows that the projections appear to be relatively well aligned; however, it is possible that our alignment method needs further improvement to reconstruct the 2-D phase gradient map more accurately, as even subpixel misalignments can have a significant effect \cite{kitchen2011}. It is important to note that the $\beta$ values were calculated from a single set of $180^\circ$ projections, while the $\delta$ values used two sets of projections, the second coming from mirroring the $\Delta\hat{\theta}{'}$ projections, as described previously. This suggests that complications in utilizing the mirrored projections may have contributed to the observed discrepancy.
Finally, regarding point (3), we recall that the analyser crystal is only sensitive to the component of the phase gradients lying in the plane of diffraction of the analyser crystal. For our experimental setup, this was the vertical direction. Our mechanical restriction to an $8^\circ$ inclination did allow some information from the horizontal direction to be obtained, but with low amplitude and relatively high noise compared to the vertical component, as seen by comparing Figure \ref{f9} d) and e). This horizontal component is then amplified through division by $2\sin(8^\circ)\approx0.28$, as shown in Eqn (\ref{eq:26}), while the vertical component is attenuated through division by $2\cos(8^\circ)\approx1.98$, as shown in Eqn (\ref{eq:24}). The fact that our results have the correct order of magnitude for the $\delta$ value is therefore encouraging given the small inclination angle of just $8^\circ$. Additional experiments need to be undertaken to test these speculations and determine the primary source of error, beginning with increasing the inclination angle of the inclined geometry. We anticipate that a larger inclination angle of $30^\circ-60^\circ$ will lead to reconstructions with values more closely matching the theoretical values, since it will allow more information from the horizontal axis to be acquired and used in the integral to calculate the phase.
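To quantify the effect of the inclination angle on each component, the following Python sketch (an illustrative calculation only, not part of our reconstruction code) evaluates the noise gain of the horizontal and vertical phase-gradient components, using the divisors $2\sin\theta$ and $2\cos\theta$ from Eqns (\ref{eq:26}) and (\ref{eq:24}):
\begin{verbatim}
import numpy as np

# Noise gain of each phase-gradient component: the horizontal
# component is divided by 2*sin(theta) and the vertical component
# by 2*cos(theta), so measurement noise is scaled by the reciprocal.
for theta_deg in (8, 30, 45, 60):
    theta = np.radians(theta_deg)
    print(f"{theta_deg:2d} deg: "
          f"horizontal x{1 / (2 * np.sin(theta)):.2f}, "
          f"vertical x{1 / (2 * np.cos(theta)):.2f}")
\end{verbatim}
At $8^\circ$ the horizontal noise is amplified roughly $3.6$-fold while the vertical component is scaled by only $\approx0.5$; at $45^\circ$ both components are scaled equally ($\approx0.71$), supporting the choice of a larger inclination angle.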
\section{Conclusion}
We applied an inclined geometry method to achieve 2-D phase sensitivity for ABPCI in a Laue geometry setup, through rotation of the object and detector by $8^\circ$ clockwise with respect to the x-ray wavefield propagation direction. Our measured $\beta$ values were in excellent agreement with the theoretical ones. The measured $\delta$ values were qualitatively correct and had the correct order of magnitude, but were approximately a factor of two smaller than the theoretical values. Considering the small inclination of the crystal relative to the sample stage and detector ($8^\circ$ compared to the ideal $45^\circ$), the results are encouraging. The discrepancy between the measured and theoretical $\delta$ values could also be due to the GOA breaking down or slight misalignment of the transmitted and diffracted images; additional experiments are required to confirm the source of error and obtain more accurate results.
\ack{\textbf{Acknowledgements}}
The synchrotron radiation experiments were performed at Beamline BL20B2 of SPring-8 with the approval of the Japan Synchrotron Radiation Research Institute (JASRI) (Proposal 2012B1315). We acknowledge travel funding provided by the International Synchrotron Access Program (ISAP) managed by the Australian Synchrotron, part of ANSTO (AS/IA124/6149). We acknowledge Timur Gureyev, David M. Paganin and Iain M. Young for their work on the journal article, funding, planning and discussion of the proposal for this experiment. MJK is funded by an ARC Future Fellowship (FT160100454).
\bibliographystyle{iucr}
\section{Introduction}
Neural machine translation (NMT)~\cite{bahdanau2014nmt,Sutskever2014seq2seq,Vaswani2017transformer,song2018dpn,hassan2018achieving} has witnessed great progress due to the development of deep learning. Popular NMT models adopt an encoder-attention-decoder framework, where the decoder generates the target token based on previous tokens in an autoregressive manner. Despite their popularity, NMT models suffer from a discrepancy between training and inference and the consequent error propagation~\cite{Bengio2015SS,Marc2016exposure,wu2019beyond}. During inference, the decoder predicts the next token given previously generated tokens as input, which is discrepant from training, where the previous ground-truth tokens are used as input for next-token prediction. Consequently, the previously predicted tokens may contain errors, which cause error propagation and affect the prediction of subsequent tokens.
Previous works have tried different methods to solve the above issues. Some of them focus on simulating, during training, the conditions that occur at inference, such as data as demonstration~\cite{Venkatraman2015ImprovingMP}, scheduled sampling~\cite{Bengio2015SS}, sentence-level scheduled sampling~\cite{zhang-etal-2019-bridging}, or even predicting tokens in different directions~\cite{wu2018beyond,tan2019efficient}. While effective at handling the prediction errors that occur during inference, these methods still leverage the possibly erroneous predicted tokens as the conditional information to predict the next token. Forcing the model to predict the correct next token given incorrect previous tokens can be particularly hard and misleading for optimization, and cannot effectively resolve the training/inference discrepancy or error propagation.
In this paper, moving beyond scheduled sampling~\cite{Bengio2015SS}, we propose a novel method that enables the model to correct the previously predicted tokens when predicting the next token. In this way, although the decoder may make prediction errors, the model learns the capability to build correct representations layer by layer from the erroneous tokens as input, which is more precise for next-token prediction than directly relying on previous erroneous tokens as in scheduled sampling.
Specifically, we introduce two-stream self-attention, which was designed for language understanding in XLNet~\cite{Yang2019XLNet}, into the NMT decoder to correct errors during translation. Two-stream self-attention was originally proposed to enable permutation language modeling and consists of two self-attention mechanisms: the content stream is exactly the same as the normal self-attention in the Transformer decoder and is used to build the representations of the previous tokens, while the query stream uses the positional embedding as input to decide the position of the next token to be predicted. In our work, we reinvent two-stream self-attention to support simultaneous correction and translation in NMT, where the content stream is used to correct the previously predicted tokens (correction), and the query stream is used to simultaneously predict the next token in the normal left-to-right order based on the corrected context (translation).
We conduct experiments on the IWSLT 2014 German-English, Spanish-English and Hebrew-English and the WMT 2014 English-German and WMT 2016 English-Romanian translation datasets to evaluate the effectiveness of our proposed error correction mechanism for NMT. Experimental results demonstrate that our method achieves improvements over the Transformer baseline on all tasks. Further experimental analyses also verify the effectiveness of error correction in improving translation accuracy.
Our contributions can be summarized as follows:
\begin{itemize}
\item To the best of our knowledge, we are the first to introduce an error correction mechanism during the translation process of NMT, with the help of the newly proposed two-stream self-attention.
\item Experimental results on a variety of NMT datasets and further experimental analyses demonstrate that our method achieves improvements over the Transformer baseline and scheduled sampling, verifying the effectiveness of our correction mechanism.
\end{itemize}
\section{Background}
In this section, we introduce the background of our work, including the standard encoder-decoder framework, exposure bias and error propagation, and the two-stream self-attention mechanism.
\paragraph{Encoder-decoder framework.}
Given a sentence pair $\{x, y\} \in (\mathcal{X}, \mathcal{Y})$, the objective of an NMT model is to maximize the log-likelihood ${\rm P}(y|x;\theta)$, where $\theta$ denotes the parameters of the NMT model. The objective function factorizes into a chain of conditional probabilities: ${\rm P}(y|x;\theta) = \prod_{t=1}^n {{\rm P}(y_t|y_{<t},x;\theta)}$, where $n$ is the number of tokens in the target sequence $y$ and $y_{<t}$ denotes the target tokens before position $t$. The encoder-decoder structure~\cite{Sutskever2014seq2seq,cho2014learning} is the most common framework for the NMT task. It adopts an encoder to transform the source sentence $x$ into the contextual information $h$, and a decoder to predict the next token $y_t$ based on the previous target tokens $y_{<t}$ and $h$ autoregressively. Specifically, for the $t$-th token prediction, the decoder feeds the last token $y_{t-1}$ as input to predict the target token $y_t$. Besides, an encoder-decoder attention mechanism~\cite{bahdanau2014nmt} is used to bridge the connection between the source and target sentences.
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/p1.pdf}
\caption{Training}
\end{subfigure}
\hspace{0.3cm}
\begin{subfigure}[t]{0.20\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/p2.pdf}
\caption{Inference}
\end{subfigure}
\caption{The discrepancy between training and inference in autoregressive sequence generation.}
\label{Exposure_bias}
\end{figure}
\paragraph{Exposure bias and error propagation.} Exposure bias~\cite{Marc2016exposure} is a troublesome problem in language generation. During training, the model always takes ground-truth tokens as input. However, at test time, the decoder depends on its previous predictions to infer the next token. Figure~\ref{Exposure_bias} illustrates this discrepancy between training and inference in autoregressive sequence generation.
Once an incorrect token is predicted, the error accumulates along the inference process. To alleviate this problem, a common solution is to replace some ground-truth tokens with predicted tokens at training time, which is known as \emph{scheduled sampling}. However, scheduled sampling still cannot handle exposure bias perfectly, since it only attempts to predict the next ground-truth token conditioned on incorrectly predicted tokens but cannot reduce the negative effect of those incorrect tokens. Therefore, we propose a novel correction mechanism during translation to alleviate error propagation.
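For concreteness, the following Python sketch contrasts the two regimes, assuming a generic decoder step function \texttt{step(prev\_token, state)} that returns next-token logits (an illustration only, not our implementation):
\begin{verbatim}
import torch

def teacher_forcing_loss(step, targets, state):
    # Training: the ground-truth token y_{t-1} is always the input.
    loss = 0.0
    for t in range(1, len(targets)):
        logits, state = step(targets[t - 1], state)
        loss = loss - torch.log_softmax(logits, dim=-1)[targets[t]]
    return loss

def greedy_decode(step, bos, eos, state, max_len=100):
    # Inference: the model feeds back its own predictions, so one
    # early mistake can propagate through the whole sequence.
    tokens = [bos]
    while len(tokens) < max_len and tokens[-1] != eos:
        logits, state = step(tokens[-1], state)
        tokens.append(int(logits.argmax()))
    return tokens
\end{verbatim}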
\paragraph{Two-stream self-attention.}
XLNet is one of the well-known pre-trained models~\cite{radford2019language,devlin2019bert,Yang2019XLNet,song2019mass} for natural language processing.
It first proposed a two-stream self-attention mechanism, which consists of a content stream and a query stream, for permutation language modeling. For a token $y_t$, the model can see tokens $y_{\le t}$ in the content stream, while it can only see tokens $y_{<t}$ in the query stream. Benefiting from the two-stream self-attention mechanism, the model can predict the next token at any position with the corresponding position embedding as the query, which enables permutation language modeling. Besides, two-stream self-attention avoids introducing the $\rm{[MASK]}$ token into the conditioning context during pre-training, where the $\rm{[MASK]}$ token would otherwise bring a mismatch between pre-training and fine-tuning. In this paper, we leverage the advantages of two-stream self-attention to design a novel error correction mechanism for NMT.
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/GPT-NMT.pdf}
\caption{Standard NMT}
\label{gpt}
\end{subfigure}
\vspace{5pt}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/TSSA-NMT.pdf}
\caption{NMT with two-stream self-attention. $p_i$ means the position of $i$-th token. The red dashed line means that the query stream can attend to the content stream.}
\label{tssa}
\end{subfigure}
\vspace{5pt}
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/correct.pdf}
\caption{NMT with our proposed error correction mechanism based on two-stream self-attention. $y'_i$ is predicted tokens by the model itself. The cells in red color represent that the potential error token $y'_2$ and $y'_4$ are corrected into the ground truth tokens $y_2$ and $y_4$.}
\label{correct}
\end{subfigure}
\caption{The illustrations of standard NMT, NMT with two-stream self-attention and our proposed error correction mechanism on two-stream self-attention.}
\label{NMT}
\end{figure}
\section{Method}
Previous NMT models~\cite{bahdanau2014nmt,Vaswani2017transformer} generate the next token $y'_t$ from the probability distribution (\emph{i.e.}, $y'_t \sim {\rm P}(y_t|y_{<t},x;\theta)$). If $y'_t$ is predicted incorrectly and taken as the next decoder input, we would like to force the model to automatically build correct hidden representations that are close to those of the ground-truth token $y_t$. In this case, the subsequent token generation can build upon the previous correct representations and become more precise. A natural idea is to optimize the model to maximize the correction probability ${\rm P}(y_t|y'_t, y_{<t},x;\theta)$ simultaneously when maximizing the probability of the next token prediction ${\rm P}(y_{t+1}|y'_t,y_{<t},x;\theta)$. However, the conventional NMT decoder does not support this correction mechanism well. Inspired by the two-stream self-attention in XLNet~\cite{Yang2019XLNet}, we leverage the content stream to maximize ${\rm P}(y_t|y'_t, y_{<t},x;\theta)$ and the query stream to maximize ${\rm P}(y_{t+1}|y'_t,y_{<t},x;\theta)$, which well meets the requirements of simultaneous correction and translation.
In order to introduce our method more clearly, in the following subsections we first introduce the integration of two-stream self-attention into the NMT model, and then introduce the error correction mechanism for NMT based on two-stream self-attention.
\subsection{NMT with Two-Stream Self-Attention}
Inspired by XLNet~\cite{Yang2019XLNet},
we incorporate the idea of two-stream self-attention to modify the decoder of the NMT framework.
Specifically, the encoder is the same as in the standard NMT model, while the decoder is equipped with two-stream self-attention, where the positional embedding is taken as input in the query stream for prediction and the content stream is used to build context representations. Different from XLNet, we make two modifications: 1) we remove permutation language modeling, since the decoder in NMT usually only uses left-to-right generation, and 2) we let the decoder predict the whole sequence rather than a partial sentence as in XLNet. Figure~\ref{gpt} and Figure~\ref{tssa} show the differences between the standard NMT model and NMT with two-stream self-attention.
We formulate the NMT with two-stream self-attention as follows. For the decoder, we feed the positions $\{p_1, \cdots, p_n\}$ to the query stream to provide the position information for the next token prediction, and the sequence $\{y_1, \cdots, y_n\}$ plus its positions $\{p_1, \cdots, p_n\}$ to the content stream to build contextual information. For the $l$-th layer, we define the hidden states of query/content streams as $q_t^l$ and $c_t^l$. The updates for the query and content streams are as follows:
\begin{align}
{q_t^{l+1}} &= {\rm Attention}({\rm Q}=q_t^l, {\rm KV}=c_{< t}^{l};h,\theta_{l+1}) \\
{c_t^{l+1}} &= {\rm Attention}({\rm Q}=c_t^l, {\rm KV}=c_{\le t}^{l};h,\theta_{l+1}),
\end{align}
where $h$ represents the hidden states from the encoder outputs, $\theta_{l+1}$ represents the parameters of layer ${l+1}$, and $\rm{Q}$ and $\rm{KV}$ represent the query and the key/value in self-attention, respectively. Both the query and content streams share the same model parameters. The key and value states can be reused across the query and content streams. Finally, we feed the outputs of the query stream from the last layer to calculate the log-probability for next-target-token prediction. During inference, we first predict the next token with the query stream, and then update the content stream with the generated token. The order of the query and content streams does not affect the predictions, since a token in the query stream only depends on the previously generated tokens in the content stream.
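A minimal Python sketch of one such decoder layer is given below, implementing the two update rules above with single-head attention; the encoder-decoder attention over $h$ and the feed-forward sub-layer are omitted for brevity, and the projection modules are placeholders:
\begin{verbatim}
import torch

def masked_attention(q, k, v, mask):
    scores = q @ k.t() / k.size(-1) ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    w = torch.softmax(scores, dim=-1)
    # A query that sees no position (e.g. q_1, which attends c_{<1})
    # yields NaN weights after softmax; zero them to stay runnable.
    return torch.nan_to_num(w, nan=0.0) @ v

def two_stream_layer(q_stream, c_stream, proj):
    n = c_stream.size(0)
    pos = torch.arange(n)
    q_mask = pos.unsqueeze(1) > pos.unsqueeze(0)   # q_t sees c_{<t}
    c_mask = pos.unsqueeze(1) >= pos.unsqueeze(0)  # c_t sees c_{<=t}
    k, v = proj["k"](c_stream), proj["v"](c_stream)  # K/V shared
    return (masked_attention(proj["q"](q_stream), k, v, q_mask),
            masked_attention(proj["q"](c_stream), k, v, c_mask))

d, n = 64, 5
proj = {name: torch.nn.Linear(d, d) for name in ("q", "k", "v")}
q0 = torch.randn(n, d)   # position embeddings (query-stream input)
c0 = torch.randn(n, d)   # token+position embeddings (content stream)
q1, c1 = two_stream_layer(q0, c0, proj)
\end{verbatim}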
\subsection{Error Correction based on Two-Stream Self-Attention}
\label{sec3_3}
Benefiting from the two-stream self-attention, we can naturally introduce an error correction mechanism on the content stream. The content stream is originally designed to build the representations of previous tokens, which are used in the query stream for next token prediction. In order to correct errors, the content stream also needs to predict the correct tokens given incorrect tokens as input.
In order to simulate prediction errors in the input of the content stream, we leverage scheduled sampling~\cite{Bengio2015SS} to randomly sample tokens either from the ground truth $y=\{y_1,\cdots,y_n\}$ or from the previously predicted tokens $y'=\{y'_1,\cdots,y'_n\}$ with a certain probability as the new inputs $\Tilde{y}=\{\Tilde{y}_1,\cdots,\Tilde{y}_n\}$, where $y'_t$ is sampled from the probability distribution ${\rm P}(y_t|y_{<t},x;\theta)$. Each input $\Tilde{y}_t$ equals $y_t$ with probability $p(\cdot)$ and $y'_t$ otherwise. For each token $y'_t$ ($y'_t \ne y_t$) predicted by the query stream at step $t$, we force the content stream to predict its corresponding ground-truth token $y_t$ again. The loss function for the error correction mechanism (ECM) is formulated as:
\begin{equation}
\label{eq4}
\mathcal{L}_{{\rm ECM}}(y|\Tilde{y},x;\theta) = -\sum_{t=1}^n 1(\Tilde{y}_{t} \ne y_t)\log({\rm P}(y_t|\Tilde{y}_{\le t},x;\theta)).
\end{equation}
In this mechanism, the content stream learns to gradually correct the hidden representations of error tokens toward their correct counterparts layer by layer.
The query stream is still used to predict the next token, given a random mixture of previously predicted tokens and ground-truth tokens. The negative log-likelihood (NLL) loss is formulated as:
\begin{equation}
\label{eq3}
\mathcal{L}_{{\rm NLL}}(y|\Tilde{y},x;\theta) = -\sum_{t=1}^n \log({\rm P}(y_t|\Tilde{y}_{<t},x;\theta)).
\end{equation}
Finally, we combine the two loss functions as the final objective function for our method:
\begin{equation}
\min \mathcal{L}_{\rm{NLL}}(y|\Tilde{y},x;\theta) + \lambda \cdot \mathcal{L}_{{\rm ECM}}(y|\Tilde{y},x;\theta),
\end{equation}
where $\lambda$ is a hyperparameter to balance the NLL loss and ECM loss.
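The following Python sketch illustrates the token mixing and the combined objective of Eqns (\ref{eq4}) and (\ref{eq3}), assuming the model has already produced per-position vocabulary logits from both streams on the mixed input $\Tilde{y}$ (variable names are ours):
\begin{verbatim}
import torch
import torch.nn.functional as F

def mix_tokens(y, y_pred, p_keep):
    # Scheduled sampling: keep the ground truth with prob. p_keep,
    # otherwise substitute the model's own prediction y'.
    keep = torch.rand(y.shape) < p_keep
    return torch.where(keep, y, y_pred)

def combined_loss(query_logits, content_logits, y, y_tilde, lam=1.0):
    # query_logits[t] predicts y[t] from y_tilde[<t]   (translation);
    # content_logits[t] re-predicts y[t] from y_tilde[<=t] (correction).
    nll = F.cross_entropy(query_logits, y, reduction="sum")
    err = y_tilde != y  # ECM applies only where the indicator is 1
    ecm = (F.cross_entropy(content_logits[err], y[err], reduction="sum")
           if err.any() else torch.zeros(()))
    return nll + lam * ecm
\end{verbatim}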
Figure~\ref{correct} demonstrates the workflow of our proposed error correction mechanism. The difference between our error correction mechanism and naive scheduled sampling is that once an error token is predicted in scheduled sampling, the model still learns to predict the next correct token given error tokens as context, which could confuse the model and mislead it into learning incorrect prediction patterns. In contrast, with our error correction mechanism, the next token prediction is built upon representations that are corrected by the content stream, making it more precise to learn prediction patterns.
In our error correction mechanism, how to control the scheduled sampling probability $p(\cdot)$ and when to sample tokens are important factors for training. Previous work~\cite{Bengio2015SS} indicated that it is unsuitable to sample tokens from the start of training, since the model is still under-fitting and the sampled tokens would be too erroneous. Inspired by OR-NMT~\cite{zhang-etal-2019-bridging}, we design a similar exponential decay function for the sampling probability $p(\cdot)$, but with more restrictions. The decay function is set as:
\begin{equation}
p(s) =
\begin{cases}
1, & s \le \alpha \\
\max(\beta, \frac{\mu}{\mu + \exp((s-\alpha)/\mu)}), & {\rm otherwise} \\
\end{cases},
\end{equation}
where $s$ represents the training step and $\alpha$, $\beta$ and $\mu$ are hyperparameters. The hyperparameter $\alpha$ is the step at which the model starts to sample tokens, and $\beta$ is the lower bound of $p(\cdot)$, which caps the maximum probability of sampling predicted tokens.
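A direct Python transcription of this schedule, with the hyperparameter values used in our experiments as defaults, is:
\begin{verbatim}
import math

def p_keep(s, alpha=30000, beta=0.85, mu=5000):
    # Probability of feeding the ground-truth token at training step s.
    if s <= alpha:
        return 1.0
    x = min((s - alpha) / mu, 50.0)  # clip only to avoid overflow
    return max(beta, mu / (mu + math.exp(x)))

for s in (0, 30000, 35000, 50000, 100000):
    print(s, round(p_keep(s), 3))
\end{verbatim}
With these values, $p(\cdot)$ stays at 1 for the first 30K steps and then decays smoothly towards its floor of $\beta=0.85$.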
\section{Experimental Setting}
In this section, we introduce the experimental settings to evaluate our proposed method, including datasets, model configuration, training and evaluation.
\subsection{Datasets}
We conduct experiments on three IWSLT translation datasets (\{German, Spanish, Hebrew\} $\rightarrow$ English) and two WMT translation datasets (English $\rightarrow$ \{German, Romanian\}) to evaluate our method. In the following sections, we abbreviate English, German, Spanish, Hebrew and Romanian as ``En", ``De", ``Es", ``He" and ``Ro".
\paragraph{IWSLT datasets.} For IWSLT14 De$\to$En, the training and validation sets contain 160K and 7K sentence pairs respectively. We concatenate TED.dev2010, TED.dev2012, TED.tst2010, TED.tst2011 and TED.tst2012 as the test set. IWSLT14 Es$\to$En and He$\to$En~\footnote{IWSLT datasets can be downloaded from \url{https://wit3.fbk.eu/archive/2014-01/texts}} contain 180K and 150K bilingual sentence pairs for training. We choose TED.tst2013 as the valid set and TED.tst2014 as the test set. During data preprocessing, we learn 10K byte-pair encoding (BPE)~\cite{sennrich2016bpe} codes to build the vocabulary.
\paragraph{WMT datasets.} The WMT14 En$\to$De and WMT16 En$\to$Ro translation tasks contain 4.5M and 2.8M bilingual sentence pairs for training. Following previous work~\cite{Vaswani2017transformer}, we concatenate newstest2012 and newstest2013 as the valid set, and choose newstest2014 as the test set for WMT14 En$\to$De. For WMT16 En$\to$Ro, we choose newsdev2016 as the valid set and newstest2016 as the test set. We learn 32K and 40K BPE codes to tokenize the WMT14 En$\to$De and WMT16 En$\to$Ro datasets respectively.
\subsection{Model Configuration}
We choose the state-of-the-art Transformer~\cite{Vaswani2017transformer} as the default model. For IWSLT tasks, we use 6 Transformer blocks, where the number of attention heads, hidden size and filter size are 4, 512 and 1024; dropout is set to 0.3 and the parameter size is 39M. For WMT tasks, we use 6 Transformer blocks, where the number of attention heads, hidden size and filter size are 16, 1024 and 4096; the parameter size is 214M, and dropout is set to 0.3 and 0.2 for En$\to$De and En$\to$Ro respectively. For a fair comparison, we also list some results from the original NMT model without two-stream self-attention. For the decay function of the sampling probability, we set $\alpha$, $\beta$ and $\mu$ to 30,000, 0.85 and 5,000. The $\lambda$ for $\mathcal{L}_{{\rm ECM}}$ is tuned on the valid set, and an optimal choice is 1.0. To demonstrate the advantages of our method, we also include several strong baselines for reference: Layer-wise Transformer~\cite{he2018layer_wise}, MIXER~\cite{Marc2016exposure} on Transformer, and Tied-Transformer~\cite{Yingce2019tied}.
\subsection{Training and Evaluation}
During training, we use Adam~\cite{kingma2014method} as the default optimizer with a linear decay of the learning rate. The IWSLT tasks are trained on a single NVIDIA P40 GPU for 100K steps and the WMT tasks are trained on 8 NVIDIA P40 GPUs for 300K steps, where each GPU is filled with 4096 tokens.
During inference, we use beam search to decode results. The beam size and length penalty are set to 5 and 1.0 for each task except WMT14 En$\to$De, which uses a beam size of 4 and a length penalty of 0.6, following previous work~\cite{Vaswani2017transformer}. All results are reported with multi-bleu~\footnote{\url{https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl}}.
Our code is implemented on fairseq~\cite{ott-etal-2019-fairseq}~\footnote{https://github.com/pytorch/fairseq}, and we will release our code under this link: \url{https://github.com/StillKeepTry/ECM-NMT}.
\begin{table}[h]
\centering
\begin{tabular}{l|c c c}
\toprule
Method & De$\to$En & Es$\to$En & He$\to$En \\
\midrule
Tied Transformer & 35.10 & 40.51 & - \\
Layer-Wise Transformer & 35.07 & 40.50 & - \\
MIXER & 35.30 & 42.30 & - \\
\midrule
Transformer Baseline & 34.78 & 41.78 & 35.32 \\
Our method & \textbf{35.70} & \textbf{43.05} & \textbf{36.49} \\
\bottomrule
\end{tabular}
\caption{BLEU scores on IWSLT14 translation tasks.}
\label{IWSLT}
\end{table}
\section{Results}
In this section, we report our results on three IWSLT tasks and two WMT tasks. Furthermore, we study each hyperparameter used in our model and conduct an ablation study to evaluate our method.
\subsection{Results on IWSLT14 Translation Tasks}
The results on the IWSLT14 tasks are reported in Table~\ref{IWSLT}.
From Table~\ref{IWSLT}, we find that our model with the correction mechanism outperforms our baseline by 0.89, 0.99 and 0.83 points and the original NMT baseline by 0.92, 1.27 and 1.17 points on De$\to$En, Es$\to$En and He$\to$En respectively. Note that our baseline is strong and comparable to current advanced systems. Even against such strong baselines, our method still achieves consistent improvements on all three tasks. These improvements also confirm the effectiveness of our method in correcting erroneous information.
\begin{table}[!t]
\centering
\begin{tabular}{l|c|c}
\toprule
Method & En$\to$De & En$\to$Ro \\
\midrule
Tied Transformer & 28.98 & 34.67 \\
Layer-wise Transformer & 29.01 & 34.43 \\
MIXER & 28.68 & 34.10 \\
\midrule
Transformer Baseline & 28.40 & 32.90 \\
Our method & \textbf{29.20} & \textbf{34.70} \\
\bottomrule
\end{tabular}
\caption{BLEU score on WMT14 En$\to$De and WMT16 En$\to$Ro.}
\label{WMT}
\end{table}
\subsection{Results on WMT Translation Tasks}
In order to validate the performance of our method on large-scale datasets, we also conduct experiments on WMT14 En$\to$De and WMT16 En$\to$Ro. The results are reported in Table~\ref{WMT}.
We find that incorporating the error correction mechanism into the NMT model achieves 29.20 and 34.70 BLEU, outperforming our baseline by 0.8 and 1.8 points on En$\to$De and En$\to$Ro, which is comparable to previous works. These significant improvements on two large-scale datasets also demonstrate the effectiveness and robustness of our method in addressing exposure bias. In addition, our approach is compatible with previous works, \emph{i.e.}, our method can achieve better performance if combined with other advanced structures.
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c}
\toprule
Method & De$\to$En & Es$\to$En & En$\to$De \\
\midrule
Our method & \textbf{35.70} & \textbf{43.05} & \textbf{29.20} \\
\quad -ECM & 35.40 & 42.55 & 28.72 \\
\quad -ECM -SS & 34.81 & 42.16 & 28.48 \\
\quad -ECM -SS -TSSA & 34.78 & 41.81 & 28.40 \\
\bottomrule
\end{tabular}
\caption{Ablation study of different components in our model. The second, third and fourth columns report results on IWSLT14 De$\to$En, Es$\to$En and WMT14 En$\to$De. The second row is our method. The third row equals the second row with the error correction mechanism (ECM) removed. The fourth row equals the third row with scheduled sampling (SS) removed. The last row equals the fourth row with two-stream self-attention removed, \emph{i.e.}, the standard NMT model. The prefix ``-" means removing this component.}
\label{SS_study}
\end{table}
\subsection{Ablation Study}
To demonstrate the necessity of each component in our method, we conduct a series of ablation studies on IWSLT14 De$\to$En, Es$\to$En and WMT14 En$\to$De. The results are shown in Table~\ref{SS_study}. When disabling the error correction mechanism (ECM), the model accuracy decreases by 0.30, 0.50 and 0.48 points on the three tasks respectively. When further removing scheduled sampling (SS), the model accuracy drops to 34.81, 42.16 and 28.48 points. We observe that on the large-scale dataset the improvement from scheduled sampling is limited, while our model still achieves a stable improvement, which proves the effectiveness of the error correction mechanism.
In addition, we also compare the original NMT model with the NMT model with two-stream self-attention (TSSA) to verify whether the two-stream self-attention mechanism itself contributes to model accuracy. From Table~\ref{SS_study}, we find that the NMT model with TSSA performs slightly better than the original NMT model on the small-scale tasks, by 0.03-0.35 points, and is close to the original NMT model on the large-scale task. This phenomenon indicates that the improvement of our method is mainly brought by the error correction, rather than by two-stream self-attention. In summary, every component plays an indispensable role in our model.
\begin{figure}[b]
\centering
\begin{subfigure}[t]{0.235\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/de-en.jpg}
\caption{IWSLT14 De$\to$En}
\label{iwslt14_lambda}
\end{subfigure}
\begin{subfigure}[t]{0.235\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/en-de.jpg}
\caption{WMT14 En$\to$De}
\label{wmt14_ende}
\end{subfigure}
\caption{Results on IWSLT14 De$\to$En and WMT14 En$\to$De with different $\lambda$.}
\label{lambda_study}
\end{figure}
\begin{table*}[!t]
\centering
\begin{tabular}{l r l}
\toprule
Method & & Translation \\
\cmidrule{1-1} \cmidrule{3-3}
Source (De) & & in dem moment war es , als ob ein filmregisseur einen bühnenwechsel verlangt hätte . \\
\cmidrule{1-1} \cmidrule{3-3}
Target (En) & & at that moment , it was as if a film director called for a set change . \\
\cmidrule{1-1} \cmidrule{3-3}
Baseline & & it was \emph{like a movie} director \emph{had demanded a shift} at the moment . \\
\cmidrule{1-1} \cmidrule{3-3}
Baseline + SS & & at \emph{the} moment , it was \emph{like} a \emph{filmmaker had requested} a change. \\
\cmidrule{1-1} \cmidrule{3-3}
Our method & & at \emph{the} moment , it was as if a \emph{movie} director \emph{had} called for a change. \\
\bottomrule
\end{tabular}
\vspace{3pt}
\caption{A translation case on IWSLT14 De$\to$En test set, generated by the baseline method, baseline with scheduled sampling and our method with error correction. The italic font means the mismatch translation.}
\label{de-en_cases}
\end{table*}
\subsection{Study of $\lambda$ for $\mathcal{L}_{{\rm ECM}}$}
To investigate the effect of $\lambda$ in $\mathcal{L}_{{\rm ECM}}$ on model accuracy, we conduct a series of experiments with different $\lambda$ on the IWSLT14 De$\to$En and WMT14 En$\to$De datasets. The results are shown in Figure~\ref{lambda_study}. It can be seen that the accuracy improves with the growth of $\lambda$. When $\lambda \ge 0.6$, the model accuracy increases slowly. The model achieves the best accuracy at $\lambda = 0.9$ on IWSLT14 De$\to$En and $\lambda = 1.0$ on WMT14 En$\to$De.
To avoid the heavy cost of tuning hyperparameters for different tasks, we fix $\lambda$ to 1.0 in all final experiments.
\begin{figure}[!t]
\centering
\begin{subfigure}[t]{0.237\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/hyper_a.jpg}
\caption{Study of $\alpha$}
\label{hyper_a}
\end{subfigure}
\begin{subfigure}[t]{0.237\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/hyper_b.jpg}
\caption{Study of $\beta$}
\label{hyper_b}
\end{subfigure}
\caption{Results on IWSLT14 De$\to$En with different start steps $\alpha$ for token sampling and maximum sampling probabilities $\beta$.}
\label{hyper_prob}
\end{figure}
\subsection{Study of Sampling Probability $p(\cdot)$}
In this subsection, we study to what extent the hyperparameters $\alpha$ and $\beta$ in the sampling probability $p(\cdot)$ affect the model accuracy. We conduct experiments on the IWSLT14 De$\to$En dataset to investigate when to start scheduled sampling ($\alpha$) and the best choice of the maximum sampling probability ($\beta$). As can be seen from Figure~\ref{hyper_prob}, we have the following observations:
\begin{itemize}
\item From Figure~\ref{hyper_a}, it can be seen that starting to sample tokens before 10K steps results in worse accuracy than the baseline, which is consistent with the hypothesis in Section~\ref{sec3_3} that sampling tokens too early hurts accuracy. After 10K steps, our method gradually surpasses the baseline and achieves the best accuracy at 30K steps. After 30K steps, the model accuracy drops slightly. In summary, $\alpha = 30$K is an optimal choice for starting token sampling.
\item From Figure~\ref{hyper_b}, the model accuracy decreases noticeably when the maximum sampling probability is either too big (\emph{i.e.}, the sampled tokens are close to the ground truth) or too small (\emph{i.e.}, the sampled tokens are close to the predicted tokens). This phenomenon is also consistent with our analysis in Section~\ref{sec3_3}. Therefore, we choose $\beta=0.85$ as the maximum sampling probability.
\end{itemize}
\subsection{Cases Study}
To better understand the advantages of our method in correcting error tokens, we present some translation cases from the IWSLT14 De$\to$En test set in Table~\ref{de-en_cases}. It can be seen that the baseline result deviates considerably from the ground truth, predicting only five correct tokens. When adding scheduled sampling, the quality of the translation improves, but the model still generates erroneous tokens like ``\emph{like a filmmaker had requested}", which cause subsequent prediction errors. Finally, with the error correction mechanism, the translation produced by our method is closer to the target sequence, with only tiny errors. Although a mismatched token ``\emph{movie}" is predicted, our model can still correct the following predicted sequence, as in ``\emph{called for a change}". The quality of our translation also confirms the effectiveness of our model in correcting erroneous information and alleviating error propagation.
\section{Conclusion}
In this paper, we incorporated a novel error correction mechanism into neural machine translation, aiming to solve the error propagation problem in sequence generation. Specifically, we introduced two-stream self-attention into neural machine translation, and further designed an error correction mechanism based on it, which is able to correct previously predicted errors while generating the next token. Experimental results on three IWSLT tasks and two WMT tasks demonstrate that our method outperforms previous methods, including scheduled sampling, and effectively alleviates the problem of error propagation. In the future, we expect to apply our method to other sequence generation tasks, \emph{e.g.}, text summarization and unsupervised neural machine translation, and to incorporate our error correction mechanism into other advanced structures.
\section*{Acknowledgments}
This paper was supported by The National Key Research and Development Program of China under Grant 2017YFB1300205.
\bibliographystyle{named}
\section{Introduction}
With technological advancements in the automotive industry in recent times, modern vehicles are no longer made up of only mechanical devices but are also an assemblage of complex electronic devices called electronic control units (ECUs), which provide advanced vehicle functionality and facilitate independent decision making. ECUs receive input from sensors and run computations for their required tasks~\cite{Alam:2018}. These vehicles are also fitted with an increasing number of sensing and communication technologies to facilitate driving decisions and to be \textit{self aware}~\cite{Anupam:2018}. However, the proliferation of these technologies has been found to facilitate the remote exploitation of the vehicle [7]. Malicious entities could inject malware into ECUs to compromise the internal network of the vehicle~\cite{Anupam:2018}. The internal network of a vehicle refers to the communication between the multiple ECUs in the vehicle over on-board buses such as the controller area network (CAN)~\cite{Han:2014}. The authors in [7] and [8] demonstrated the possibility of such remote exploitation on a connected and autonomous vehicle (CAV), which allowed the malicious entity to gain full control of the driving system and bring the vehicle to a halt.\\
To comprehend the extent to which smart vehicles are vulnerable, we conducted a risk analysis for connected vehicles in [1] and identified likely threats and their sources. Furthermore, using the Threat Vulnerability Risk Assessment (TVRA) methodology, we classified identified threats based on their impact on the vehicles and found that compromising one or more of the myriad of ECUs installed in the vehicles poses a considerable threat to the security of smart vehicles and the vehicular network. Vehicular network here refers to communication between smart vehicles and roadside units (RSUs) which are installed and managed by the transport authority. These entities exchange routine and safety messages according to the IEEE802.11p standard [4]. By compromising ECUs fitted in a vehicle, a malicious entity could for example, broadcast false information in the network to affect the driving decisions of other vehicles. Therefore, in this paper, we focus on monitoring the state of the in-vehicle network to enable the detection of an ECU compromise.
Previous efforts on the security of in-vehicle networks have focused on intrusion and anomaly detection, which enables the detection of unauthorized access to the in-vehicle network [9-11], [15], [23] and the identification of deviations from acceptable vehicle behavior~\cite{Wasicek:2014}. Several challenges however persist. First, proposed security solutions are based on a centralized design which relies on a Master ECU that is responsible for ensuring valid communications between in-vehicle ECUs [9-10] [23]. Such solutions are vulnerable to a single point of failure attack, where an attacker's aim is to compromise the centralized security design. Furthermore, if the Master ECU is either compromised or faulty, the attacker could easily execute actions that undermine the security of the in-vehicle network. In addition, efforts that focus on intrusion detection by comparing ECU firmware versions [10] [11] [15] are also vulnerable to a single point of exploitation, whereby the centrally stored previous version could be altered. These works [11] [15] also rely on the vehicle manufacturer to ultimately verify the state of ECUs. However, vehicle manufacturers could be motivated to execute malicious actions for their benefit, such as to evade liability [3].
Therefore, decentralizing ECU state verification among entities in the vehicular ecosystem is desirable for the security of smart vehicles. Finally, the solution proposed in [24], which focuses on observing deviations from acceptable behavior, utilizes data generated from a subset of ECUs. This presents a data reliability challenge when an ECU not included in the subset is compromised. \\
We argue in this paper that Blockchain (BC) [12] technology has the potential to address the aforementioned challenges including centralization, availability and data reliability. \\
\textbf{BC} is an immutable and distributed ledger technology that provides a verifiable record of transactions in the form of an interconnected series of data blocks. BC can be public or permissioned [3] to differentiate user capabilities, including who has the right to participate in the BC network. BC replaces centralization with a trustless consensus which, when applied to our context, can ensure that no single entity can assume full control of verifying the state of ECUs in a smart vehicle. The decentralized consensus provided by BC is well-suited for securing the internal network of smart vehicles by keeping track of the historical operations executed on the vehicle's ECUs, such as firmware updates, thus easily identifying any change to an ECU and who was responsible for that change. Also, the distributed structure of BC provides robustness against a single point of failure.
\subsection{Contributions and Paper Layout}
Having identified the limitations of existing works, we propose a Blockchain based Framework for sEcuring smaRt vehicLes (B-FERL). B-FERL is an apposite countermeasure for in-vehicle network security that exposes threats in smart vehicles by ascertaining the state of the vehicle’s internal controls. Also, given that data modification indicates a successful attempt to alter the state of an ECU, B-FERL also serves as a data reliability solution that ensures that a vehicle's data is trustworthy. We utilize a permissioned BC to allow only trusted entities to manage the records of vehicles in the BC network. This means that state changes of an ECU are summarized, stored and managed in a distributed manner in the BC.\\
\textit{The key contributions of this paper are summarized as follows:} \\
\textbf{(1)} We present B-FERL; a decentralized security framework for in-vehicle networks. B-FERL ascertains the integrity of in-vehicle ECUs and highlights the existence of threats in a smart vehicle. To achieve this, we define a two-tier blockchain-based architecture, which introduces an initialization operation used to create records for vehicles for authentication purposes, and a challenge-response mechanism whereby the integrity of a vehicle's internal network is queried when it connects to an RSU to ensure its security.\\
\textbf{(2)} We conduct a qualitative evaluation of B-FERL to evaluate its resilience to identified attacks. We also conduct a comparative evaluation with existing approaches and highlight the practical benefits of B-FERL. Finally, we characterize the performance of B-FERL via extensive simulations using the CORE simulator against key performance measures such as the time and storage overheads for smart vehicles and RSUs.\\
\textbf{(3)} Our proposal is tailored to meet the integrity requirement for securing smart vehicles and the availability requirement for securing vehicular networks, and we provide a succinct discussion of the applicability of our proposal to various critical automotive functions such as vehicular forensics, secure vehicular communication and trust management. \\
This paper is an extension of our preliminary ideas presented in [1]. Here, we present a security framework for detecting when an in-vehicle network compromise occurs and provide evidence that reflects actions on the ECUs in a vehicle. Also, we present extensive evaluations to demonstrate the efficacy of B-FERL. \\
The rest of the paper is structured as follows. We discuss related works in Section 2 and present an overview of our proposed framework in Section 3, where we describe our system, network and threat models. Section 4 describes the details of our proposed framework. In Section 5, we discuss the results of the performance evaluation. Section 6 presents discussions on the potential use cases of B-FERL and a comparative evaluation with closely related works, and we conclude the paper in Section 7.
\section{Related Work}
BC has been proposed as a security solution for vehicular networks. However, existing proposals have not focused on the identification of compromised ECUs for securing vehicular networks.
The authors in~\cite{Blackchain:2017} proposed Blackchain, a BC based message revocation and accountability system for secure vehicular communication. However, their proposal does not consider the reliability of data communicated in the vehicular network, which could be threatened when an in-vehicle ECU is compromised. The authors in~\cite{Ali:2017} present a BC based architecture for securing automotive networks. However, they have not described how their architecture is secured from insider attacks, where authorised entities could be motivated to execute rogue actions for their benefit. Also, their proposal does not consider the veracity of data from vehicles. The authors in~\cite{cube:2018} proposed a blockchain-based security platform for autonomous vehicles but have not presented a description of their architecture and its applicability to practical scenarios. Also, their security is directed at the prevention of unauthorized network entry using a centralized intrusion detector, which is vulnerable to a single point of failure attack. Their proposal also does not consider the malicious tendencies of authorized entities as described in~\cite{Oham:2018}.
The authors in~\cite{Coin:2018} proposed CreditCoin, a privacy-preserving blockchain-based incentive announcement and reputation management scheme for smart vehicles. Their proposal is based on threshold authentication, where a number of vehicles agree on a message generated by a vehicle and the agreed message is then sent to a nearby roadside unit. However, in addition to the possibility of collusion attacks, the requirement that vehicles manage a copy of the blockchain presents significant storage and scalability constraints for vehicles. The authors in~\cite{BARS:2018} have proposed a Blockchain-based Anonymous Reputation System (BARS) for trust management in VANETs; however, they have not presented details on how reputation is built for vehicles, nor justifications for their choice of reputation evaluation parameters. The authors in~\cite{Contract:2018} have proposed an enhanced Delegated Proof-of-Stake (DPoS) consensus scheme with a two-stage soft security solution for secure vehicular communications. However, their proposal is directed at establishing reputation for roadside infrastructure and preventing collusion attacks in the network. These authors~\cite{Coin:2018}~\cite{BARS:2018}~\cite{Contract:2018} have also not considered the security of in-vehicle networks.
\section{B-FERL Overview and Threat Model}
In this section, we present a brief overview of B-FERL including the roles of interacting entities, and a description of the network and threat models.
\subsection{Architecture overview}
The architecture of our proposed security solution (B-FERL) is described in Figure~\ref{fig:framework}.
Due to the need to keep track of changes to ECU states and to monitor the behaviour of a vehicle while operational, B-FERL consists of two main BC tiers, namely the upper and lower tiers. These tiers clarify the roles of interacting entities and ensure that entities are privy only to information they need to know.
The upper tier comprises vehicle manufacturers, service technicians, insurance companies, and the legal and road transport authorities. The integration of these entities in the upper tier makes it easy to keep track of actions executed by vehicle manufacturers and service technicians on ECUs, such as firmware updates that change the state of an ECU, and allows only trusted entities such as the transport and legal authorities to verify such ECU state changes. Interactions between entities in this tier focus on vehicle registration and maintenance. The initial registration data of a vehicle is used to create a record (block) for the vehicle in the upper tier. This record stores the state of the vehicle and the hash values of all ECUs in the vehicle, and is used to perform vehicle validation in the lower tier BC. This is accomplished by comparing the current state of the vehicle and the firmware hashes of each ECU in the vehicle to their values in the lower tier BC. Also, the upper tier stores scheduled maintenance or diagnostics data that reflects the actions of vehicle manufacturers and service technicians on a smart vehicle. This information is useful for monitoring the vehicle while operational and for making liability decisions in the multi-entity liability attribution model~\cite{Oham:2018}.\\
In the following, we describe actions that trigger interactions in the upper tier. In the rest of the paper unless specifically mentioned, we refer to smart vehicles as \textit{CAVs}.
\begin{itemize}
\item When a \textit{CAV} is assembled, the vehicle manufacturer obtains the ECU Merkle root value ($SS_{ID}$) by computing the hash values of all ECUs in the vehicle and forwards this value to the road transport and legal authorities to create a public record (block) for the vehicle. This record is utilized by RSUs to validate vehicles in the lower tier. We present a detailed description of this process in Section~\ref{sec:b-ferl}.
\item When maintenance occurs on the vehicle, the vehicle manufacturer or service technician follows the process of obtaining the updated $SS_{ID}$ value above and communicates this to the transport and legal authorities to update the record of the vehicle and assess the integrity of its ECUs. We present a detailed description of this process in Section~\ref{sec:b-ferl}. Maintenance here means any activity that alters the state of any of the vehicle's ECUs.
\end{itemize}
The lower tier comprises roadside units (\textit{RSUs}), smart vehicles, and the legal and road transport authorities. Interactions in this tier focus on identifying when an ECU in a vehicle has been compromised. To achieve this, a vehicle needs to prove the integrity of its ECU firmware whenever it connects to an \textit{RSU}. When a vehicle approaches the area of coverage of an \textit{RSU}, the \textit{RSU} sends the vehicle a challenge request to prove the state of its ECUs. To provide a response, the vehicle computes the cumulative hash value of all of its ECUs, i.e. its ECU Merkle root ($SS_{ID}$). The response provided by the vehicle is then used to validate the current state of its ECUs in comparison to the previous state recorded in the lower tier. Also, as a vehicle moves from one \textit{RSU} to the next, an additional layer of verification is added by comparing the timestamp of its current response to that of the previous response, to prevent the possibility of a replay attack. It is noteworthy that, compared to a traditional BC which executes a consensus algorithm in order to insert transactions into a block, B-FERL relies on the appendable block concept (ABC) proposed in~\cite{Michelin:2018}, where transactions are added to blocks by valid block owners represented by their public keys. Therefore, no consensus algorithm is required in B-FERL to append transactions to a block. To ensure that the integrity of a block is not compromised, ABC decouples the block header from the transactions, enabling network nodes to store transactions off-chain without compromising block integrity. Furthermore, to ensure scalability in the lower tier, we only store two transactions (which represent the previous and current ECU firmware states) per vehicle and push older transactions to the cloud, where historical data of the vehicle can be accessed when necessary.
However, this operation could introduce additional latency for pushing the extra transaction from the RSU to the cloud storage. It also imposes additional computing and bandwidth requirements on the RSU. \\
Next, we discuss our network model which describes interacting entities in our proposed framework and their roles.
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{B-CRIF.PNG}
\caption{The Proposed Blockchain Framework}
\label{fig:framework}
\end{figure*}
\subsection{Network model}
To restrict the flow of information to only concerned and authorized entities, we consider a two-tiered network model as shown in Figure \ref{fig:framework}. The upper tier features the road transport and legal authorities responsible for managing the vehicular network. This tier also integrates entities responsible for the maintenance of vehicles, such as vehicle manufacturers and service technicians. It could also include auto-insurance companies, who could request complementary evidence from the transport and legal authorities to facilitate liability decisions. For simplicity, we focus on a single entity of each type; however, our proposal generalizes to the case where there are several of each entity.\\
The lower tier features \textit{CAVs} as well as RSUs which are installed by the road transport authority for the management and monitoring of traffic situation in the road network.
For interactions between \textit{CAVs} and RSUs, we utilize the IEEE802.11p communication standard, which has been widely used to enable vehicle-to-vehicle and vehicle-to-infrastructure communications [4]. However, 5G is envisaged to bring about a new vehicular communication era with higher reliability, expedited data transmissions and reduced delay [5]. Also, we utilise PKI to issue identifiable digital identities to entities and establish secure communication channels for permissible communication.
The upper tier features a permissioned blockchain platform managed by the road transport and legal authorities. Vehicle manufacturers and service technicians participate in this BC network by sending sensor update notification transactions, which are verified and validated by the BC network managers. Insurance companies, on the other hand, participate by sending request transactions for complementary evidence to facilitate liability attribution and compensation payments. The lower tier also features a permissioned BC platform, managed by the road transport and legal authorities and the RSUs. In this tier, we maintain vehicle-specific profiles. To achieve this, once a vehicle enters the area of coverage of a roadside unit (RSU), the RSU sends a challenge request to the vehicle, by which it reports the current state of its ECUs. Once a valid response is provided, the vehicle is considered trustworthy until the next challenge-response exchange. \\
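The following Python sketch outlines one plausible realization of this challenge-response exchange; \texttt{sign}, \texttt{verify} and \texttt{merkle\_root} are placeholders (a Merkle-root construction is sketched in Section~\ref{sec:b-ferl}), and B-FERL does not prescribe a particular signature scheme:
\begin{verbatim}
import time
import hashlib

def vehicle_response(ecu_firmware_images, sign, merkle_root):
    # The vehicle recomputes its ECU Merkle root (SS_ID) on demand
    # and signs it together with a fresh timestamp.
    ss_id = merkle_root([hashlib.sha256(f).digest()
                         for f in ecu_firmware_images])
    ts = time.time()
    return {"ss_id": ss_id, "ts": ts,
            "sig": sign(ss_id + str(ts).encode())}

def rsu_verify(resp, stored_ss_id, prev_ts, verify):
    # Accept only if (1) the signature is valid, (2) the reported
    # SS_ID matches the value recorded in the lower-tier BC, and
    # (3) the timestamp is newer than the previous response
    # (replay protection across successive RSUs).
    payload = resp["ss_id"] + str(resp["ts"]).encode()
    return (verify(resp["sig"], payload)
            and resp["ss_id"] == stored_ss_id
            and resp["ts"] > prev_ts)
\end{verbatim}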
We present a full description of the entire process involved in our proposed framework in Section~\ref{sec:b-ferl}.
\subsection{Threat Model}
Given the exposure of \textit{CAVs} to the Internet, they become susceptible to multiple security attacks which may impact the credibility of data communicated by a vehicle. In the attack model, we consider how relevant entities could execute actions to undermine the proposed framework. The considered attacks include: \\
\textbf{Fake data:} A compromised vehicle could try to send misleading information in the vehicular network for its benefit. For example, it could generate false messages about a traffic incident to gain advantage on the road. Also, to avoid being liable in the case of an accident, a vehicle owner could manipulate an ECU to generate false data.\\
\textbf{Code injection:} Likely liable entities such as the vehicle manufacturer and service technician could inject malware to evade liability. Vehicle owners, on the other hand, could execute such actions to, for example, reduce the odometer value of the vehicle to increase its resale value.\\
\textbf{Sybil attack:} A vehicle could create multiple identities to manipulate the vehicular network, for example by creating false alarms such as a false traffic jam.\\
\textbf{Masquerade attack (fake vehicle):} A compromised roadside unit or an external adversary could create a fake vehicle for the purpose of causing an accident or altering the facts of an accident. \\
\textbf{ECU State Reversal Attack:} A vehicle owner could extract the current firmware version of an ECU, install a malicious version, and revert to the original version for verification purposes.
\section{Blockchain based Framework for sEcuring smaRt vehicLes (B-FERL)} \label{sec:b-ferl}
This section outlines the architecture of the proposed framework. As described in Figure~\ref{fig:framework}, the entities involved in our framework include vehicle manufacturers, service technicians, insurance companies, \textit{CAVs}, RSUs, and the road transport and legal authorities. Based on the entity roles described in Section 3, we categorize entities as verifiers and proposers. Verifiers are entities that verify and validate data sent to the BC; verifiers in B-FERL are the RSUs and the road transport and legal authorities. Proposers are entities sending data to the BC or providing a response to a challenge request; proposers in our framework are \textit{CAVs}, vehicle manufacturers, service technicians and insurance companies. \\
In the B-FERL architecture, we assume that CAVs produce many transactions, especially in high density smart city areas. Most blockchain implementations are designed to group transactions, add them to a block and only then append the new block to the blockchain, which leads to sequential transaction insertion. To tackle this limitation, B-FERL adopts the blockchain framework presented by Michelin et al.~\cite{Michelin:2018}, which introduces the appendable block concept (ABC). This blockchain solution enables multiple CAVs to append transactions to different blocks at the same time. The framework identifies each CAV by its public key, and for each distinct public key, a block is created in the blockchain data structure. The block is divided into two distinct parts: (i) the block header, which contains the CAV public key, the previous block header hash and a timestamp; and (ii) the block payload, where all the transactions are stored. The transaction storage follows a linked list data structure: the first transaction contains the block header hash, while subsequent transactions contain the previous transaction's hash. This data structure allows the solution to insert new transactions into existing blocks. Each transaction must be signed by the CAV's private key; once the transaction signature is validated against the block's public key, the RSU can proceed to append the transaction to the block identified by the CAV public key. Based on the public key, the BC maps all the transactions from a specific entity to the same block.
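A condensed Python sketch of this appendable block structure is given below (signature verification is abstracted behind a \texttt{verify\_sig} callable, and the field layout is simplified relative to~\cite{Michelin:2018}):
\begin{verbatim}
import time
import hashlib

class AppendableBlock:
    """One block per CAV, identified by the CAV's public key; the
    payload is a hash-linked list of transactions anchored to the
    block header, so transactions can be appended without consensus
    over a new block. cav_pubkey and prev_header_hash are bytes."""

    def __init__(self, cav_pubkey, prev_header_hash):
        self.pubkey = cav_pubkey
        self.header = hashlib.sha256(
            cav_pubkey + prev_header_hash + str(time.time()).encode()
        ).hexdigest()
        self.txs = []  # payload kept separate from the header

    def append(self, tx, sig, verify_sig):
        # Only transactions signed by the block owner are accepted.
        if not verify_sig(self.pubkey, tx, sig):
            return False
        prev = self.txs[-1]["hash"] if self.txs else self.header
        h = hashlib.sha256(prev.encode() + tx).hexdigest()
        self.txs.append({"tx": tx, "prev": prev, "hash": h})
        return True
\end{verbatim}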
\subsection{Transactions}
Transactions are the basic communication primitive in BC for the exchange of information among entities in B-FERL.
Having discussed the roles of entities in each tier of B-FERL, in this section we discuss the details of communication in each tier, facilitated by the different kinds of transactions. Transactions generated are secured using cryptographic hash functions (SHA-256), digital signatures and asymmetric encryption. \\
\textbf{\textit{Upper tier}}\\
Upper tier transactions include relevant information about authorized actions executed on a \textit{CAV}. They also contain interactions that reflect the time a vehicle was assembled. In addition, in this tier, insurance companies could seek complementary evidence from the road transport and legal authorities in the event of an accident; hence, a request transaction is also sent in this tier. \\
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Merkletree.PNG}
\caption{Obtaining the Merkle tree root value}
\label{fig:merkle}
\end{figure}
\textbf{Genesis transaction:} This transaction is initiated by a vehicle manufacturer when a vehicle is assembled. The genesis transaction contains the initial $SS_{ID}$ value, which is the Merkle tree root of the \textit{CAV's} ECU firmware hashes at \textit{CAV} creation time, a time stamp, the firmware hashes of each ECU and their associated timestamps, ($H(ECU){_1}$, $T{_1}$), ($H(ECU){_2}$, $T{_2}$), \ldots, ($H(ECU){_n}$, $T{_n}$), which reflect when an action was executed on the ECU, and the public key and signature of the vehicle manufacturer. Figure \ref{fig:merkle} shows how the $SS_{ID}$ of a \textit{CAV} with 8 ECUs is derived.
\begin{center}
Genesis = [$SS_{ID}$, TimeStamp, ($H(ECU){_1}$, $T{_1}$), ($H(ECU){_2}$, $T{_2}$), \ldots, ($H(ECU){_n}$, $T{_n}$), PubKey, Sign]
\end{center}
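The $SS_{ID}$ computation can be sketched as follows (a standard Merkle tree construction; duplicating the last hash on odd levels is our assumption, as the paper does not specify how non-power-of-two ECU counts are handled):
\begin{verbatim}
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(ecu_firmware_images: list) -> bytes:
    """SS_ID: the Merkle tree root over the hashes of all ECU firmware."""
    level = [h(fw) for fw in ecu_firmware_images]  # H(ECU)_1 ... H(ECU)_n
    if not level:
        raise ValueError("a CAV must have at least one ECU")
    while len(level) > 1:
        if len(level) % 2 == 1:   # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# An 8-ECU vehicle (as in the Merkle tree figure) gives one 32-byte SS_ID:
ss_id = merkle_root([b"fw-ecu-%d" % i for i in range(8)])
print(ss_id.hex())
\end{verbatim}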
The genesis transaction is used by the transport and legal authorities to create a genesis block for a \textit{CAV}. This block is a permanent record of the \textit{CAV} and is used to validate its authenticity in the lower tier. It contains the genesis transaction, the public key of the \textit{CAV}, a time stamp, which is the time of block creation, and an external address, such as an address to a cloud storage where \textit{CAV}-generated data would be stored as the block size increases. \\
\textbf{Update transaction:} This transaction could be initiated by a vehicle manufacturer or a service technician. It is initiated when the firmware version of an ECU in the \textit{CAV} is updated during scheduled maintenance or diagnostics. An update transaction leads to a change in the initial $SS_{ID}$ value and contains the updated $SS_{ID}$ value, time stamp, public key of \textit{CAV}, public key of vehicle manufacturer or service technician and their signatures. \\
When an update transaction is received in the upper tier, it updates the record (block) of the \textit{CAV} in the lower tier. The updated \textit{CAV} block will then be utilized by RSUs to validate the authenticity of the \textit{CAV} in the lower tier.\\
\textbf{Request transaction:} This transaction is initiated by an insurance company to facilitate liability decisions and compensation payments. It contains the signature of the insurance company, the data request and its public key.\\
\textbf{\textit{Lower tier}} \\
Communication in the lower tier reflects how transactions generated in the upper tier for CAVs are appended to their public record (block) in the lower tier. Additionally, we describe how the block is managed by an RSU in the lower tier and by the transport and legal authorities in the upper tier. Lower tier communications also feature the interactions between \textit{CAVs} and RSUs and describe how the integrity of ECUs in a \textit{CAV} is verified. In the following, we describe the interactions that occur in the lower tier. \\
\textbf{Updating CAV block:} Updating the block of a \textit{CAV} is either performed by the road transport and legal authorities or by an RSU. It is performed by the road transport and legal authorities after an update transaction is received in the upper tier. It is performed by an RSU after it receives a response to a challenge request sent to the vehicle. The challenge-response scenario is described in the next type of transaction. The update executed by an RSU contains a \textit{CAV’s} response which includes the signature of the \textit{CAV}, time stamp, response to the challenge and \textit{CAV’s} public key. It also contains the hash of the previous transaction in the block computed by the RSU, the signature and public key of the RSU.\\
\textbf{Challenge-Response transaction:} The Challenge-Response transaction is a request from an RSU to a \textit{CAV} to prove the integrity of its ECUs. This request is received when the \textit{CAV} comes into the RSU's area of coverage. When this occurs, the \textit{CAV} receives a twofold challenge from the RSU. The first challenge is to compute its $SS_{ID}$ to ascertain the integrity of its state. The second challenge is to compute the hash values of randomly selected ECUs, to prevent and detect the malicious tendencies of vehicle owners discussed in Section 3.\\
The \textit{CAV} responds by providing a digitally signed response to the request.
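The CAV-side response might look as follows (a sketch reusing the \texttt{merkle\_root} and \texttt{h} helpers from above; the \texttt{sign} callback and the message format are assumptions):
\begin{verbatim}
import time

def answer_challenge(ecu_images, challenged_ids, priv_key, pub_key, sign):
    # Challenge 1: recompute the current SS_ID over all ECU firmware images.
    ss_id = merkle_root(list(ecu_images.values()))
    # Challenge 2: hash only the ECUs randomly selected by the RSU.
    selected = {eid: h(ecu_images[eid]).hex() for eid in challenged_ids}
    body = {"ss_id": ss_id.hex(), "selected": selected,
            "timestamp": time.time(), "pub_key": pub_key}
    return {"body": body, "signature": sign(priv_key, repr(body))}
\end{verbatim}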
\subsection{Operation}
In this section we describe key operations in our proposed framework. The proposed framework works in a permissioned mode where road transport and legal authorities have rights to manage the BC in the upper and lower tiers. Service technicians as well as vehicle manufacturers generate data when they execute actions that alter the internal state of a \textit{CAV}, while \textit{CAVs} prove the integrity of their ECUs when they connect to an RSU. \\
We define two critical operations in our proposed framework:
\subsubsection{Initialization} Describes the process of creating a record for a vehicle in the vehicular network. Once a genesis transaction is generated for a \textit{CAV} by a vehicle manufacturer, upper tier verifiers verify the transaction and upon a successful verification, a genesis block is broadcasted in the lower tier for the \textit{CAV}. \\
\begin{figure}[h]
\centering
\includegraphics[width=0.53\textwidth]{tiert.PNG}
\caption{\textit{CAV} record initialization (black) and upper-tier update (blue) operations.}
\label{fig:operation}
\end{figure}
Figure \ref{fig:operation} describes the process of block creation (assembling) for \textit{CAVs}, outlining the requisite steps leading to the creation of a block (record) for a \textit{CAV}.
\subsubsection{Update} Describes the process of updating the record of the vehicle in the vehicular network. The update operation results in a change in the block of a \textit{CAV} in the lower tier. The update operation occurs in the upper and lower tier. In the upper tier, an update operation occurs when a vehicle manufacturer performs a diagnostic on a \textit{CAV} or when a scheduled maintenance is conducted by a service technician. In the lower tier, it occurs when a \textit{CAV} provides a response to the challenge request initiated by an RSU. In the following we discuss the update operation that occurs at both tiers. \\
\textbf{Upper-tier update:} Here, we describe how the earlier mentioned actions of the vehicle manufacturer or service technician alters the existing record for a \textit{CAV} in the vehicular network.\\
Figure \ref{fig:operation} outlines the necessary steps to update the record of a vehicle. After completing the diagnostics or scheduled maintenance (step 1), the vehicle manufacturer or service technician retrieves the hash of all sensors in the vehicle (step 2) and computes a new ECU Merkle root value (step 3). Next, an update transaction is created to reflect the action on the vehicle (step 4). This transaction includes the computed ECU Merkle root value, a time stamp to reflect when the diagnostics or maintenance was conducted, the signature of the entity conducting the maintenance or diagnostics, and a metadata field that describes what maintenance or diagnostics was conducted on the \textit{CAV}. Next, the transaction is broadcasted in the upper tier (step 5) and verified by the verifiers, the road transport and legal authorities (step 6), who validate the signature of the proposer (step 7). Upon signature validation, an update block is created by the verifiers for the \textit{CAV} (step 8) and broadcasted in the lower tier (step 9). \\
\textbf{Lower tier update:} We describe here how the update of a \textit{CAV’s} record is executed by an RSU after the initialization steps in the lower tier. \\
Figure \ref{fig:lowupdate} describes the necessary steps involved in updating the record of the \textit{CAV} in the lower tier. When a \textit{CAV} approaches the area of coverage of an RSU, the RSU sends the \textit{CAV} a challenge request, namely to prove that it is a valid \textit{CAV} by proving its current sensor state (Step 1). For this, the \textit{CAV} computes its current $SS_{ID}$ value as well as the hash values of selected ECUs (Step 2) and forwards them to the RSU together with its signature, time stamp and public key (Step 3). \\
\begin{figure*}[h]
\centering
\includegraphics[width=0.85\textwidth]{lowupdate.PNG}
\caption{Lower-tier update operations.}
\label{fig:lowupdate}
\end{figure*}
When the RSU receives the response data from the \textit{CAV}, it first verifies that the vehicle is a valid \textit{CAV} by using its public key ($PubKey_{CAV}$) to check that the vehicle has a block in the BC (Step 4). Only valid vehicles have a block (record) in the BC. When the RSU retrieves $PubKey_{CAV}$, it validates the signature on the response data (Step 4.1). If validation succeeds, the RSU retrieves the firmware hash value in the \textit{CAV's} block (Step 5) and proceeds to compare the computed hash values with the value on the \textit{CAV's} block (Step 5.1). Otherwise, the RSU reports the presence of a malicious \textit{CAV}, or of an illegal \textit{CAV} if there is no block for the \textit{CAV} in the BC, to the road transport and legal authorities (Step 4.2). If the comparison of hash values succeeds, the RSU updates the \textit{CAV's} record in the lower tier to include the $SS_{ID}$ value, the time stamp, and the public key of the \textit{CAV} (Step 6). This becomes the latest record of the \textit{CAV} in the lower tier until another challenge-response round or another maintenance or diagnostic session. However, if the hash value differs, the RSU reports the presence of a malicious \textit{CAV} to the road transport and legal authorities (Step 5.2). \\
When the \textit{CAV} encounters another RSU, another challenge-response activity begins. This time, the RSU repeats steps (1-5); in addition, another layer of verification is executed. The RSU compares the time stamp on the response data to the immediately preceding record stored on the lower tier blockchain (Step 5.1.2). The time stamp value is expected to increase continuously as the vehicle travels; if this is the case, the RSU updates the \textit{CAV's} block (Step 6). Otherwise, the RSU can detect a malicious action and report this to the road transport and legal authority (Step 5.2). However, if a malicious \textit{CAV} reports a time stamp greater than its previous time stamp, we rely on the assumption that one or more of its ECUs would have been compromised, so it would produce an $SS_{ID}$ different from its record in the lower tier. Another alternative is to comparatively evaluate its time stamp against the time stamps of other vehicles in the RSU's area of coverage. To ensure that the blockchain in the lower tier scales efficiently, we store only two transactions per \textit{CAV} block. In this case, after successfully executing (Step 5.1.2), the RSU removes the genesis transaction from the block and stores it in a cloud storage which can be accessed using the external address value in the \textit{CAV's} block. \\
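Steps 4 to 6 of this verification can be summarised in the following sketch (the blockchain lookup and the \texttt{latest\_record}/\texttt{update} helpers are hypothetical simplifications of the block structure described above):
\begin{verbatim}
def verify_response(resp, blockchain, verify_sig, report):
    """RSU-side checks, following Steps 4-6 of the lower-tier update."""
    pub = resp["body"]["pub_key"]
    block = blockchain.get(pub)            # Step 4: valid CAVs have a block
    if block is None:
        report("illegal CAV: no block in the BC")          # Step 4.2
        return False
    if not verify_sig(pub, repr(resp["body"]), resp["signature"]):
        report("invalid signature")                        # Step 4.1 fails
        return False
    stored = block.latest_record()         # Step 5: stored firmware state
    if resp["body"]["ss_id"] != stored["ss_id"]:           # Step 5.1
        report("malicious CAV: SS_ID mismatch")            # Step 5.2
        return False
    if resp["body"]["timestamp"] <= stored["timestamp"]:   # Step 5.1.2
        report("malicious CAV: non-increasing timestamp")
        return False
    block.update(resp["body"])             # Step 6: record the new state
    return True
\end{verbatim}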
With the challenge-response activity, we build a behaviour profile for \textit{CAV’s} and continuously prove the trustworthiness of a vehicle while operational. Also, by keeping track of the actions of likely liable entities such as the service technician and vehicle manufacturer and by storing vehicle’s behaviour profile in the blockchain, we obtain historical proof that could be utilised as contributing evidence for facilitating liability decisions.
\section{Performance Evaluation}
The evaluation of B-FERL was performed in an emulated scenario using the Common Open Research Emulator (CORE), running in a Linux virtual machine with six processor cores and 12~GB of RAM. Based on the appendable blocks concept described in Section~\ref{sec:b-ferl}, B-FERL supports adding the transactions of a specific \textit{CAV} to a single block. This block is used to identify the \textit{CAV} in the lower tier and stores all of its records. \\
The initial experiments aim to assess the viability of the approach, and thus enable us to plan ahead for real-world experimentation. The evaluated scenario consists of multiple CAVs (varying from 10 to 200) exchanging information with a peer-to-peer network of five RSUs in the lower tier.
Initially, we evaluate the time it takes B-FERL to perform the system initialization. This refers to the time it takes the upper tier validators to create a record (block) for a \textit{CAV}. Recall from Figure~\ref{fig:operation} that creating a record for a \textit{CAV} is based on the successful verification of the genesis transaction sent from vehicle manufacturers. The results presented are the average of ten runs and we also show the standard deviation for each given scenario. In this first evaluation, we vary the number of genesis transactions received by validators from 10 to 200 to identify how B-FERL responds to an increasing number of simultaneous transactions.
The results are presented in Figure~\ref{fig:createBlock}. Time increases linearly as the number of \textit{CAVs} increases: from 0.31 ms (standard deviation 0.12 ms) for 10 \textit{CAVs} to 0.49 ms (standard deviation 0.22 ms) for 200 \textit{CAVs}, which is still low given the twenty-fold increase in the number of \textit{CAVs}.\\
Once the blocks are created for the \textit{CAVs}, the upper tier validators broadcast the blocks to the RSUs. In the next evaluation, we measure the time taken for an RSU to update its BC with the new block. The time required for this action is 0.06 ms for 200 \textit{CAVs}, which reflects the efficiency of B-FERL given the number of \textit{CAVs}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{TimeToAddBlock.pdf}
\caption{Time taken to create a block}
\label{fig:createBlock}
\end{figure}
The next evaluation concerns the time that each RSU takes to evaluate the challenge response. This is an important measure in our proposed solution as it reflects the time taken by an RSU to verify the authenticity of a \textit{CAV} and conduct the ECU integrity check. This process is described in steps 4 to 6 presented in Figure~\ref{fig:lowupdate}. Figure~\ref{fig:validateChallenge} presents the average time, which increases linearly from 1.37 ms (standard deviation 0.15 ms) for 10 \textit{CAVs} to 2.02 ms (standard deviation 0.72 ms) for 200 \textit{CAVs}. From the result, we can see that the actual values are small even for a large group of \textit{CAVs}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{TimeToValidateChallenge.pdf}
\caption{Time taken to validate a challenge from vehicles}
\label{fig:validateChallenge}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{TimeMerkleTree.pdf}
\caption{Time taken to calculate Merkle tree root}
\label{fig:merkleResult}
\end{figure}
In the next evaluation, we measure the time it takes a \textit{CAV} to compute its Merkle tree root over the hashes of all its ECUs. According to NXP, a semiconductor supplier for the automotive industry~\cite{NXP:2017}, the number of ECUs ranges from 30 to 100 in a modern vehicle. In this evaluation, we assume that as vehicle functions become more automated, the number of ECUs is likely to increase. Therefore, in our experiments, we vary the number of ECUs from 10 to 1,000. Figure~\ref{fig:merkleResult} presents the time to compute the Merkle tree root. The results show linear growth as the number of ECUs increases. Even when the number of ECUs in a \textit{CAV} is 1,000, the time to compute the Merkle tree root is about 12 ms, which is still an acceptable time for a highly complex scenario.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{SizeBlockchain.pdf}
\caption{Blockchain size}
\label{fig:blocksize}
\end{figure}
In the final evaluation, we consider the amount of storage required by an RSU to store the BC for different numbers of \textit{CAVs}. To get a realistic picture of the required storage, we considered the number of vehicles in New South Wales (NSW), Australia in 2018. As presented in Figure~\ref{fig:blocksize}, the number of blocks (which represents the number of vehicles) was varied from 100,000, representing a small city in NSW, to 5,600,000\footnote{5,600,000 is the number of cars in the state of New South Wales, according to www.abs.gov.au}. Based on the results, an RSU needs around 5 GB to store the BC structure for the state of New South Wales. This result shows that it is feasible for an RSU to maintain the BC for all \textit{CAVs} in NSW.
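As a back-of-envelope check of this figure (our own arithmetic, assuming the reported totals), 5 GB spread over 5.6 million blocks corresponds to roughly 1 kB per block, which is plausible for a block holding a header and two transactions:
\begin{verbatim}
blocks = 5_600_000
total_bytes = 5 * 1024**3          # ~5 GB reported for the NSW scenario
print(round(total_bytes / blocks)) # ~959 bytes per CAV block
\end{verbatim}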
\section{Discussion}
In this section, we provide a further discussion covering security and use cases, as well as a comparative evaluation of B-FERL against related work.
\subsection{Security analysis}
In this section, we discuss how our proposal demonstrates resilience against attacks described in the attack model. \\
\textbf{Fake data:} For this to occur, one or more data-generating ECUs of a \textit{CAV} would have to be compromised. We can detect this attack during the challenge-response activity between the compromised \textit{CAV} and an RSU, where the \textit{CAV} is expected to prove the integrity of its ECUs by computing its ECU Merkle tree root value. \\
\textbf{Code injection:} Actions executed by service technicians and vehicle manufacturers are stored in the upper tier and can be traced back to them. Vehicle owners are not able to alter their odometer value, as such actions would make the $SS_{ID}$ value different from the value in the vehicle's record in the lower tier. \\
\textbf{Sybil attack:} The only entities capable of creating identities in the vehicular network are the verifiers in the upper tier, who are assumed to be trusted. A vehicle trying to create multiple identities must be able to create valid blocks for those identities, which is infeasible in our approach. \\
\textbf{Masquerade attack (fake vehicles):} A compromised RSU cannot create a block for a \textit{CAV}. As such, this attack is unlikely to go undetected in B-FERL. Also, a \textit{CAV} is considered valid only if its public key exists in the BC managed by the road transport and legal authorities. \\
\textbf{ECU State Reversal Attack:} We address this attack using the random ECU integrity verification challenge. By randomly requesting the hash values of ECUs in a \textit{CAV}, RSUs can detect the reversal attack by comparing the timestamps of the ECUs against their entries in the lower tier BC.
Having discussed our defense mechanisms, it is noteworthy that while the utilization of a public key introduces a trade-off that compromises the privacy and anonymity of a vehicle, the public key is only utilized by an RSU to identify a vehicle in the challenge-response transaction, which ascertains the state of a vehicle and does not require the transmission of sensitive and privacy-related information.
\subsection{Use case}
In this section, we discuss the applicability of our proposed solution to the following use cases in the vehicular networks domain: (1) Vehicular forensics, (2) Trust management, and (3) Secure vehicular communication. \\
\textbf{\textit{Vehicular forensics:}} In the liability attribution model proposed for \textit{CAVs} in~\cite{Oham:2018}, liability in the event of an accident could be split amongst entities responsible for the day-to-day operation of the \textit{CAVs}, including the vehicle manufacturers, service technicians and vehicle owners. Also, the authors in~\cite{Norton:2017} have identified conditions for the attribution of liability to the aforementioned entities. The consensus is to attribute liability to the vehicle manufacturer and technicians for product defects and service failures respectively, and to the vehicle owners for negligence. In our proposed work, we keep track of authorized actions of vehicle manufacturers and service technicians in the upper tier, and so we are able to identify which entity executed the last action on the vehicle before the accident. Also, with the challenge-response between RSUs and \textit{CAVs} in the lower tier, we are able to obtain historical proof of how honest or rogue a vehicle has been in the vehicular network. Consider the \textit{CAV} in Figure 1: if an accident occurs before it enters the coverage region of an RSU, the evidence generated in the lower tier before the accident reflects the behavior of the \textit{CAV}, and such evidence could be utilized together with the accident data captured by the vehicle to facilitate liability decisions. \\
\textbf{\textit{Trust Management:}} Trust management in vehicular networks either assesses the veracity of data generated by a vehicle or the reputation of a vehicle [19]. This information is used to evaluate trust in the network. However, existing works on trust management for vehicular networks rely significantly on the presence of witness vehicles to make trust-based decisions [19-22] and could therefore make wrong trust decisions if there are few or no witnesses available. Reliance on witnesses also facilitates tactical attacks like collusion and badmouthing. In our proposal, we rely solely on data generated by a \textit{CAV}, and we can confirm the veracity of data generated or communicated by the \textit{CAV} by obtaining evidence in the lower tier from the historical challenge-response activity between a \textit{CAV} and RSUs as the \textit{CAV} travels. \\
\textbf{\textit{Secure vehicular communication networks:}} The successful execution of a malicious action by a \textit{CAV} implies that at least one of the \textit{CAV's} ECUs has been compromised, which undermines the security of the vehicular network. We describe below how our proposal serves as an apposite security solution for vehicular networks. \\
\textbf{Identifying compromised \textit{CAVs}}: By proving the state of ECUs in \textit{CAVs}, we can quickly identify cases of ECU tampering and broadcast a notification of malicious presence in the vehicular network to prevent other \textit{CAVs} from communicating with the compromised \textit{CAV}. \\
\textbf{Effective revocation mechanism:} Upon the identification of a malicious \textit{CAV} during the challenge-response activity, road transport authorities can efficiently revoke the communication rights of such a compromised \textit{CAV} to prevent further compromise, such as the propagation of false messages in the network by the compromised \textit{CAV}.
\subsection{Comparative evaluation}
In this section, we comparatively evaluate B-FERL against the works proposed in [9-10], [15], [23] using identified requirements for securing in-vehicle networks. \\
\textbf{Adversaries}: Identified works are vulnerable to attacks executed by authorized entities (insider attacks), but in B-FERL we address this challenge by capturing all interactions between the entities responsible for the operation of the \textit{CAV}, including the owner, manufacturer and service technician. By recording these actions in the upper tier (BC), we ensure that no entity can repudiate its actions. Furthermore, by proving the state of ECUs in a \textit{CAV}, we are able to identify possible attacks. \\
\textbf{Decentralization:} By storing vehicle related data as well as actions executed by manufacturers and service technicians in the BC, we ensure that no entity can alter or modify any of its actions. Also, by verifying the internal state of a \textit{CAV} as it moves from one RSU to another, we preserve the security of the vehicular networks. \\
\textbf{Privacy:} By restricting access to information to only authorized entities in B-FERL, we preserve the privacy of concerned entities in our proposed framework. \\
\textbf{Safety:} By verifying the current state of a \textit{CAV} against its record in the lower tier, we ensure communication occurs only between valid and honest \textit{CAVs} which ultimately translates to secure communications in the vehicular network.
\section{Conclusion}
In this paper, we have presented a Blockchain based Framework for sEcuring smaRt vehicLes (B-FERL). The purpose of B-FERL is to identify when an ECU of a smart vehicle has been compromised by querying the internal state of the vehicle, and to escalate an identified compromise to the requisite authorities, such as the road transport and legal authority, who take the necessary measures to prevent compromised vehicles from causing harm to the vehicular network. Given this possibility, B-FERL doubles as a detection and reaction mechanism offering adequate security to vehicles and the vehicular network. Also, we demonstrated the practical applicability of B-FERL to critical applications in the vehicular networks domain, including trust management, secure vehicular communication and vehicular forensics, where we discuss how B-FERL could offer non-repudiable and reliable evidence to facilitate liability attribution. Furthermore, by qualitatively evaluating B-FERL, we demonstrate how it addresses key challenges of earlier identified works. Security analysis also confirms B-FERL's resilience to a broad range of attacks perpetrated by adversaries, including those executed by supposedly benign internal entities. Simulation results reflect the practical applicability of B-FERL in realistic scenarios. \\
Our current proposal provides security for smart vehicles by identifying when a vehicle becomes compromised and secures the vehicle against possible exploitations by internal adversaries. An interesting future direction would be to consider the privacy implication for a smart vehicle as it travels from one roadside unit to another.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
A solid-on-solid (SOS) model can be considered as a generalization of the Ising model, which corresponds to the state space $E=\{-1,1\}$, or as a less symmetric variant of the Potts model with non-compact state space.
SOS models on the cubic lattice were analyzed in \cite{Maz}, \cite{BW}, where an analogue of the so-called Dinaburg-Mazel-Sinai theory was developed. Besides the interesting phase transitions in these models, attention to them is motivated by applications, in particular in the theory of communication networks; see, e.g., \cite{BK1}, \cite{G}, \cite{G'}, \cite{Ro}.
In \cite{HK} it is shown that on the Cayley tree there are several tree-automorphism invariant gradient Gibbs measures, and the existence of $q$ different gradient Gibbs measures is established for $q$-component models on the Cayley tree of order $k\geq 2$. To the best of our knowledge, the first paper devoted to the SOS model on the Cayley tree is \cite{Ro12}. In \cite{Ro12} the case of arbitrary $m\geq 1$ is treated and a vector-valued functional equation for possible boundary laws of the model is obtained. Recall that each solution to this functional equation determines a splitting Gibbs measure (SGM), in other words a tree-indexed Markov chain which is also a Gibbs measure. Such measures can be obtained by propagating spin values along the edges of the tree, from any site singled out to be the root to the outside, with a transition matrix depending on the initial Hamiltonian and the boundary law solution. In particular, the homogeneous (site-independent) boundary laws then define translation-invariant (TI) SGMs. For a recent investigation of the influence of weakly non-local perturbations in the interaction
to the structure of Gibbs measures, see \cite{be} in the context of the Ising model.
The present paper is organized as follows. In Section 2 we present the preliminaries of the model. In the third section we construct gradient Gibbs measures for period 4 height-periodic boundary laws on the Cayley tree of order $k\geq 2$. Note that the results in \cite{HK} are proved only on the Cayley tree of order two.
\section{Preliminaries}
{\it Cayley tree.} The Cayley tree $\Gamma^k$ of order $ k\geq 1 $
is an infinite tree, i.e., a graph without cycles, such that
exactly $k+1$ edges originate from each vertex. Let $\Gamma^k=(V,
L)$ where $V$ is the set of vertices and $L$ the set of edges.
Two vertices $x$ and $y$ are called {\it nearest neighbors} if
there exists an edge $l \in L$ connecting them. We will use the
notation $l=\langle x,y\rangle$. A collection of nearest neighbor
pairs $\langle x,x_1\rangle, \langle x_1,x_2\rangle,...,\langle
x_{d-1},y\rangle$ is called a {\it path} from $x$ to $y$. The
distance $d(x,y)$ on the Cayley tree is the number of edges of the
shortest path from $x$ to $y$.
For a fixed $x^0\in V$, called the root, we set
\begin{equation*}
W_n=\{x\in V\,| \, d(x,x^0)=n\}, \qquad V_n=\bigcup_{m=0}^n W_m
\end{equation*}
and denote
$$
S(x)=\{y\in W_{n+1} : d(x,y)=1 \}, \ \ x\in W_n, $$ the set of
{\it direct successors} of $x$.
{\it SOS model.} We consider a model where the spin takes values in
the set of all integer numbers $\emph{Z}:=\{\dots, -1,0,1,\dots
\}$, and is assigned to the vertices of the tree. A configuration
$\sigma$ on $V$ is then defined as a function $x\in V\mapsto\sigma
(x)\in\emph{Z}$; the set of all configurations is $\Omega:=\emph{Z}^V$.
The (formal) Hamiltonian of the SOS model is :
\begin{equation}\label{nu1}
H(\sigma)=-J\sum_{\langle x,y\rangle\in L}
|\sigma(x)-\sigma(y)|,
\end{equation}
where $J \in \emph{R}$ is a constant and $\langle
x,y\rangle$ stands for nearest neighbor vertices.
Note that the Hamiltonian is invariant under the spin-translation/height-shift $t:\left(t \omega \right)_i=\omega_i+t.$
This suggests reducing the complexity of the configuration space by considering {\it gradient configurations} instead of height configurations, as will be explained in the following:
{\it Gradient configuration.} We may induce an orientation on $\Gamma ^k$ relative to an arbitrary site $\rho$ (which we may call the root) by calling an edge $\langle x,y\rangle$ oriented iff it points away from $\rho$. More precisely, the set of oriented edges is defined by $$ \vec{L}:=\vec{L}_{\rho}:=\{\langle x,y \rangle \in L : d(\rho,y)=d(\rho , x)+1\}.$$
Note that the oriented graph $(V, \vec{L})$ also possesses all tree-properties, namely connectedness and absence of loops.
For any height configuration $\omega =(\omega (x))_{x \in V} \in Z^V$ and $b=\langle x, y \rangle \in \vec{L}$ the height difference along the edge $b$ is given by $ \nabla \omega_b=\omega_y-\omega_x $, and we also call $ \nabla \omega$ the gradient field of $\omega$. The gradient spin variables are now defined by $\eta_{\langle x,y \rangle}=\omega_y-\omega_x$ for each $\langle x,y \rangle \in \vec{L}$. Let us denote the space of {\it gradient configurations} by $\Omega^{\nabla}=Z^{\vec{L}}$. Note that in contrast to the notation used in \cite{S} for the lattice $Z^d$, the gradient configurations defined above are indexed by the oriented edges of the tree and not by its vertices. Equip the integers $Z$ with the power set as measurable structure. Having done this, the measurable structure on the space $\Omega ^{\nabla}$ is given by the product $\sigma$-algebra $\mathcal{F}^{\nabla}:=\sigma(\{\nabla _ b | b \in \vec{L}\})$. Clearly $\nabla: (\Omega, \mathcal{F}) \rightarrow (\Omega^{\nabla}, \mathcal{F}^{\nabla}) $ then becomes a measurable map.
\section{Gradient Gibbs measures and tree-automorphism invariant solutions}
\subsection{Gibbs and Gradient Gibbs measures}
Recall that the set of height configurations $\Omega:=Z^V$ was endowed with the product $\sigma$-algebra $\otimes_{i \in V}2^Z$, where $2^Z$ denotes the power set of $Z$. Then, for any $\Lambda \subset V$, consider the coordinate projection map $\sigma_\Lambda:Z^V \rightarrow Z^{\Lambda}$ and the $\sigma$-algebra $\mathcal{F}_{\Lambda}:=\sigma(\sigma_{\Lambda})$ of cylinder sets on $Z^V$ generated by the map $\sigma_{\Lambda}$.
We define Gibbs measures on the space of height configurations for the model (\ref{nu1}) on a Cayley tree. Let $\nu =\{\nu(i)>0, \ i \in Z\}$ be a fixed $\sigma$-finite positive a-priori measure, which in the following we will always assume to be the counting measure.
Gibbs measures are built within the DLR framework by describing conditional probabilities w.r.t.\ the outside of finite sets, where a boundary condition is frozen. One introduces a so-called Gibbsian specification $\gamma$ so that any Gibbs measure $\mu \in \mathcal{G}(\gamma)$ specified by $\gamma$ satisfies
\begin{equation}
\mu(A|\mathcal{F}_{\Lambda^c})=\gamma_{\Lambda}(A|\cdot) \ \mu -a.s
\end{equation}
for all $\Lambda \in \mathcal{S}$ and $A \in \mathcal{F}$. The Gibbsian specification associated to a potential $\Phi$ is given at any inverse temperature $\beta>0$, for any boundary condition $\omega \in \Omega$ as
\begin{equation}\label{33}
\gamma_{\Lambda}(A|\omega)=\frac{1}{Z_{\Lambda}^{\beta, \Phi}}\int e^{-\beta H_{\Lambda}^{\Phi}(\sigma_{\Lambda}\omega_{\Lambda^c})}\mathbf{1}_A (\sigma_{\Lambda}\omega_{\Lambda^c}) \nu^{\otimes \Lambda}(d \sigma_{\Lambda}), \end{equation}
where the partition function $Z_{\Lambda}^{\beta, \Phi}$ is the standard normalization whose logarithm is often related to pressure or free energy; it has to be non-null and convergent in this countably infinite state-space context (this means that $\Phi$ is $\nu$-admissible in the terminology of \cite{Ge}).
In our SOS-model on the Cayley tree, $\Phi$ is the unbounded nearest-neighbour potential
with $\Phi_{\{x,y\}}(\omega_x-\omega_y)=|\omega_x-\omega_y|$ and $\Phi_{x}\equiv 0$, so $\gamma$
is a \emph{Markov specification} in the sense
that
\begin{equation}\label{3.6} \gamma_{\Lambda}(\omega_{\Lambda}=\zeta | \cdot)\ \textrm{is}\ \mathcal{F}_{\partial\Lambda}- \textrm{measurable for all} \ \Lambda\subset V \ \ {and}\ \zeta\in \emph{Z}^{\Lambda}. \end{equation}
In order to build up gradient specifications from the Gibbsian specifications defined in \cite{HK}, we need to consider the following: Due to the absence of loops in trees, for any
finite subvolume $\Lambda \subset V$, the complement $\Lambda^{c}$ is not connected, but consists of at least two
connected components, where each of these contains at least one element of $\partial\Lambda$. This
means that the gradient field outside $\Lambda$ does not contain any information on the relative
height of the boundary $\partial\Lambda$ (which is to be understood as an element of $\emph{Z}^{\partial\Lambda}/ \emph{Z}$). More
precisely, let $cc(\Lambda^c)$ denote the number of connected components in $\Lambda^c$ and note that $2\leq cc(\Lambda^c)\leq |\partial\Lambda|$.
Applying the general definition of Gradient Gibbs measure (see \cite{HK}) we have
\begin{equation}\label{37} \emph{Z}^{\Lambda^c}/ \emph{Z}=\emph{Z}^{\{b\in\vec{L}\,|\, b\subset \Lambda^c\}}\times (\emph{Z}^{cc(\Lambda^c)}/ \emph{Z})\subset \emph{Z}^{\{b\in\vec{L}\,|\, b\subset \Lambda^c\}}\times (\emph{Z}^{\partial\Lambda}/ \emph{Z})
\end{equation}
where "=" is in the sense of isomorphy between measurable spaces. For any $\eta\in \Omega^{\nabla}=\emph{Z}^{V}/ \emph{Z}$, let $[\eta]_{\partial\Lambda}/ \emph{Z}$ denote the image of $\eta$ under the coordinate projection
$\emph{Z}^{V}/ \emph{Z}\rightarrow \emph{Z}^{\partial \Lambda}/ \emph{Z}$ with the latter set endowed with the final $\sigma$-algebra generated by the coset projection. Set
\begin{equation}\label{3.8} \mathcal{F}_{\Lambda}^{\nabla}:=\sigma((\eta_b)_{b\subset\Lambda^c})
\subset \mathcal{T}_{\Lambda}^{\nabla}:=\sigma((\eta_b)_{b\subset\Lambda^c}, [\eta]_{\partial\Lambda}).\end{equation}
Then $\mathcal{T}_{\Lambda}^{\nabla}$ contains all information on the gradient spin variables outside $\Lambda$ and also
information on the relative height of the boundary $\partial\Lambda$. By (\ref{37}) we have that for any
event $A\in \mathcal{F}^{\nabla}$ the $\mathcal{F}_{\Lambda^{c}}$-measurable function
$\gamma_{\Lambda}(A | \cdot)$ is also measurable with respect to
$\mathcal{T}_{\Lambda}^{\nabla}$, but in general not with respect to $\mathcal{F}_{\Lambda}^{\nabla}$. These observations lead to the following:
\begin{definition}\label{3.1} The gradient Gibbs specification is defined as the family of probability
kernels $(\gamma_{\Lambda}^{'})_{\Lambda\subset\subset V}$ from $(\Omega^{\nabla}, \mathcal{T}_{\Lambda}^{\nabla})$ to $(\Omega^{\nabla},\mathcal{F}^{\nabla})$ such that \begin{equation}\label{3.9}\int F(\rho)\gamma_{\Lambda}^{'}(d\rho | \zeta)=\int F(\nabla\varphi)\gamma_{\Lambda}(d\varphi | \omega) \end{equation} for all bounded $\mathcal{F}^{\nabla}$-measurable functions $F$, where $\omega\in \Omega$ is any height-configuration with $\nabla\omega =\zeta$.
\end{definition}
Using the sigma-algebra $\mathcal{T}_{\Lambda}^{\nabla}$, this is now a proper and consistent family of probability
kernels, i.e.
\begin{equation}\label{3.10} \gamma^{'}_{\Lambda}(A | \zeta)=1_{A}(\zeta)
\end{equation}
for every $A\in \mathcal{T}_{\Lambda}^{\nabla}$ and
$\gamma^{'}_{\Delta}\gamma^{'}_{\Lambda}=\gamma_{\Delta}^{'}$ for any finite volumes $\Lambda, \Delta\subset V$ with $\Lambda\subset \Delta$. The proof
is similar to the situation of regular (local) Gibbs specifications (\cite{Ge}, Proposition 2.5).
Let $\mathcal{C}_b(\Omega^{\nabla})$ be the set of bounded functions on $\Omega^{\nabla}$. Gradient Gibbs measures will
now be defined in the usual way by having their conditional probabilities outside finite regions prescribed by the gradient Gibbs specification:
\begin{definition}\label{3.2} A measure $\nu\in \mathcal{M}_1(\Omega^{\nabla})$ is called a gradient Gibbs measure (GGM) if it satisfies the DLR equation
\begin{equation}\label{3.11} \int \nu(d\zeta)F(\zeta)=\int\nu(d \zeta)\int \gamma_{\Lambda}^{'}(d\tilde{\zeta} | \zeta) F(\tilde{\zeta})
\end{equation}
for every finite $\Lambda\subset V$ and for all $F\in \mathcal{C}_{b}(\Omega^{\nabla})$. The set of gradient Gibbs measures will be denoted by $\mathcal{G}^{\nabla}(\gamma)$.
\end{definition}
\subsection{Translation-invariant gradient Gibbs measures}
In this subsection we construct gradient Gibbs measures for period 4 height-periodic boundary laws on the Cayley tree of order $k\geq 2$.
\begin{pro}\label{nup1}\cite{HK} Probability distributions
$\mu^{(n)}(\sigma_n)$, $n=1,2,\ldots$, in (\ref{33}) are
compatible iff for any $x\in V\setminus\{x^0\}$ the following
equation holds:
\begin{equation}\label{nu5}
{\bf h}^*_x=\sum_{y\in S(x)}F({\bf h}^*_y,\theta).
\end{equation}
Here, $\theta=\exp(J\beta ),$ ${\bf
h}^*_x=(h_{i,x}-h_{0,x}+\ln\frac{\nu(i)}{\nu(0)},\, i\in \emph{Z}_0)$ and the function $F(\cdot,\theta ): \, \emph{R}^{\infty}
\to \emph{R}^{\infty}$ is $F({\bf h},\theta)=(F_{i}({\bf
h},\theta), \, i\in \emph{Z}_0)$, with
$$F_i({\bf h}, \theta )=\ln\frac{\nu(i)}{\nu(0)}
+\ln{\theta^{|i|}+\sum \limits_{j\in \emph{Z}_0}\theta^{|i-j|}\exp(h_j)\over 1+\sum \limits_{j\in \emph{Z}
_0}\theta^{|j|}\exp(h_j)},$$ ${\bf h}=(h_i, \, i\in \emph{Z}_0).$
\end{pro}
Assume ${\bf h}_x={\bf h}=(h_i,\, i\in \emph{Z}_0)$ for any $x\in V.$
In this case we obtain from (\ref{nu5}):
\begin{equation}\label{11}
z_i=\frac{\nu(i)}{\nu(0)}\left({\theta^{|i|}+
\sum_{j\in \emph{Z}_0}\theta^{|i-j|}z_j
\over
1+\sum_{j\in \emph{Z}_0}\theta^{|j|}z_j}\right)^k,
\end{equation}
where $z_i=\exp(h_i), \ \ i\in \emph{Z}_0$.
Let $\mathbf z(\theta)=(z_i=z_i(\theta), i\in \emph{Z}_0)$ be a solution to (\ref{11}). Denote
\begin{equation*}\label{lr}
l_i\equiv l_i(\theta)=\sum_{j=-\infty}^{-1}\theta^{|i-j|}z_j, \ \
r_i\equiv r_i(\theta)=\sum_{j=1}^{\infty}\theta^{|i-j|}z_j, \ \ i\in \emph{Z}_0.
\end{equation*}
It is clear that each $l_i$ and $r_i$ can be a finite positive number or $+\infty$. We shall consider all possible cases.
Clearly, a solution $\mathbf z=(z_i, i\in \emph{Z}_0)$ to (\ref{11}) defines a tree-indexed Markov chain iff $r_0+l_0<+ \infty$ (see \cite{HK}).
Let $\nu(i)=1$ for any $i\in \emph{Z}$ then we consider the solutions of (\ref{11}) with $l_0<+\infty$ and $r_0<+\infty$.
Put $u_i=u_0\sqrt[k]{z_i}$ for some $u_0>0$. Then (\ref{11}) can be written as
\begin{equation}\label{45}
u_i=C\left( \sum_{j=1}^{+\infty}\theta^ju_{i-j}^k+u_i^k+ \sum_{j=1}^{+\infty}\theta^ju_{i+j}^k\right), \ \ i\in \emph{Z}.
\end{equation}
\begin{pro}\cite{HK}
A vector $\mathbf u=(u_i,i\in \emph{Z})$, with $u_0=1$, is a solution to (\ref{45}) if and only if for $u_i \ \ (=\sqrt[k]{z_i})$ the following holds
\begin{equation}\label{V}
u_i^k={u_{i-1}+u_{i+1}-\tau u_i\over u_{-1}+u_{1}-\tau}, \ \ i\in \emph{Z},
\end{equation}
where $\tau=\theta^{-1}+\theta$.
\end{pro}
By this proposition we have
\begin{equation}\label{1lr}
1+l_0+r_0={\theta-\theta^{-1}\over u_{-1}+u_1-\tau}.
\end{equation}
Equations of the system (\ref{11}) for $i=-1$ and $i=1$ are satisfied independently of the values of $u_{-1}$ and $u_1$, and
equation (\ref{V}) can be separated into the following independent recurrent equations
\begin{equation}\label{L}
u_{-i-1}=(u_{-1}+u_1-\tau)u_{-i}^k+\tau u_{-i}-u_{-i+1}, \end{equation}
\begin{equation}\label{99}
u_{i+1}=(u_{-1}+u_1-\tau)u_{i}^k+\tau u_{i}-
u_{i-1}, \end{equation}
where $i\geq 1$, $u_0=1$ and $u_{-1}$, $u_{1}$ are some initial numbers (see again \cite{HK}).
So, if $u_i$ is a solution to (\ref{99}), then $u_{-i}$ is a solution to (\ref{L}). Hence we can consider only
equation (\ref{99}).
We now consider the periodic solutions of (\ref{V}), i.e., we describe solutions of (\ref{V}) which have the form
\begin{equation}\label{up}
u_n=\left\{ \begin{array}{lll}
1, \ \ \mbox{if} \ \ n=2m,\\[2mm]
a, \ \ \mbox{if} \ \ n=4m-1, \ \ m\in \emph{Z}\\[2mm]
b, \ \ \mbox{if} \ \ n=4m+1,
\end{array}
\right.
\end{equation}
where $a$ and $b$ are some positive numbers.
In this case (\ref{99}) is equivalent to the following system of equations
\begin{equation}
\label{ab}
\begin{array}{ll}
(a+b-\tau)b^k+\tau b-2=0\\[2mm]
(a+b-\tau)a^k+\tau a-2=0.
\end{array}
\end{equation}
We now describe the positive solutions of (\ref{ab}).
\textbf{Case $a\neq b$.} We multiply the first equation of (\ref{ab}) by $a^k$ and the second equation of (\ref{ab}) by $b^k$.
Subtracting the first equation from the second, we obtain the following equation:
\begin{equation*} \tau ab(a^{k-1}-b^{k-1})-2(a^k-b^k)=0\end{equation*}
Dividing both sides by $a-b$ we get
\begin{equation*} (a^{k-1}+a^{k-2}b+...+a^2b^{k-3}+ab^{k-2})(\tau b-2)-2b^{k-1}=0.
\end{equation*}
Put $x:=\frac{a}{b}$, then the last equation can be written as
\begin{equation*} (x^{k-1}+x^{k-2}+...+x^2+x)(\tau b-2)-2=0.
\end{equation*}
If $\tau b-2\leq 0$ then $(x^{k-1}+x^{k-2}+...+x^2+x)(\tau b-2)-2<0,$ i.e., there is no solution $(a,b)$
of (\ref{ab}) such that $a\neq b.$
Let $\tau b-2> 0$. For any positive fixed $b$ we consider the polynomial $P_b(x):=(x^{k-1}+x^{k-2}+...+x^2+x)(\tau b-2)-2.$ For $x>0$ it is easy to check that $P^{'}_b(x)>0$, $P_b(0)<0$ and $\lim\limits_{x\rightarrow\infty}P_b(x)=+\infty$.
Thus, $P_b(x)$ has exactly one positive root. If $\tau b=2+\frac{2}{k-1}$ then $P_b(1)=0$, i.e., the unique positive root is $x=1$, so there is no solution $(a,b)$
to (\ref{ab}) such that $a\neq b.$ In the other cases $P_b(1)\neq 0$, so for any positive $b$ there exists a unique $a(b)\neq b$ such that $(a,b)$ is a solution
to (\ref{ab}).
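The existence claims above are easy to probe numerically; the following sketch (with illustrative parameter values, using SciPy) searches for positive solutions of the system from several starting points. Note that $a=b=1$ is always a solution, and asymmetric solutions can only occur when $\tau b>2$:
\begin{verbatim}
from scipy.optimize import fsolve

def system(v, tau, k):
    a, b = v
    s = a + b - tau
    return [s * b**k + tau * b - 2,   # first equation of the system above
            s * a**k + tau * a - 2]   # second equation of the system above

tau, k = 2.5, 3
for guess in [(0.5, 1.5), (1.0, 1.0), (1.5, 0.5)]:
    a, b = fsolve(system, guess, args=(tau, k))
    print(a, b, system((a, b), tau, k))  # residuals should be ~0
\end{verbatim}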
\textbf{Case $a=b$.} In this case it is sufficient to consider one of the equations of $(\ref{ab})$, i.e.,
$$2a^{k+1}-\tau a^{k}+\tau a-2=0.$$
The last equation has the solution $a=1$ independently of the parameters $(\tau, k).$ Dividing both sides by $a-1$ we get
\begin{equation}\label{4.18} Q(a):=2a^k+(2-\tau)(a^{k-1}+a^{k-2}+...+a)+2=0\end{equation}
By the definition of $\tau$ we have $\tau\geq 2$, so $2-\tau\leq 0$, and by Descartes' rule of signs
$Q(a)$ has at most two positive roots. Since $Q^{'}(a)=2ka^{k-1}+(2-\tau)((k-1)a^{k-2}+...+2a+1)$, with $Q^{'}(0)<0$ and $Q^{'}(a)\to +\infty$ as $a\to\infty$,
there is a unique $a_{c}$ such that $Q^{'}(a_{c})=0$. Consequently, if $Q(a_{c})<0$ then the polynomial $Q(a)$ has exactly two positive roots; if $Q(a_{c})=0$ then $Q(a)$ has exactly one positive root; and if $Q(a_{c})>0$ then the polynomial $Q(a)$ has no positive root. We denote by $\tau_c$ the critical value of $\tau$ at which $Q(a_{c})=0$.
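This trichotomy can be checked numerically; the following sketch (with illustrative values of $\tau$ and $k$) counts the positive roots of $Q(a)$ directly:
\begin{verbatim}
import numpy as np

def positive_roots(tau, k):
    # Coefficients of Q in decreasing degree: 2, (2-tau), ..., (2-tau), 2.
    coeffs = [2.0] + [2.0 - tau] * (k - 1) + [2.0]
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-9 and r.real > 0)

for tau in (2.0, 3.0, 4.0, 6.0):
    print(tau, positive_roots(tau, k=3))  # 0, 1 or 2 positive roots
\end{verbatim}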
\begin{thm}\label{new}(Theorem 4.1, Remark 4.2 in \cite{cp}). Let $l$ be any spatially homogeneous period-$q$ height-periodic boundary law to a tree-automorphism invariant gradient interaction potential on the Cayley tree. Let $\Lambda\subset V$ be any finite connected set and let $\omega\in \Lambda$ be any vertex. Then the measure $\nu$ with marginals given by
\begin{equation}\label{88} \nu(\eta_{\Lambda\cup\partial\Lambda}=\zeta_{\Lambda\cup\partial\Lambda})=Z_{\Lambda}
\left(\sum_{s\in \emph{Z}_q}\prod_{y\in\partial\Lambda}l\left(s+\sum_{b\in\Gamma(\omega, y)}\zeta_b\right)\right)\prod_{b\cap\Lambda\neq\emptyset}Q(\zeta_b),\end{equation}
where $Z_{\Lambda}$ is a normalisation constant, $\Gamma(\omega, y)$ denotes the unique path from $\omega$ to $y$, and $Q$ is the transfer operator of the underlying potential, defines a gradient Gibbs measure.
\end{thm}
From the above results and Theorem \ref{new} we conclude the following theorems:
\begin{thm}\label{rozikov1} Let $k\geq 2$ and $a=b$. For the SOS-model (\ref{nu1}) on the $k$-regular tree, with parameter $\tau=2\cosh(\beta)$, there is a number $\tau_c>0$ such that the following assertions hold:
\begin{enumerate}
\item If $\tau<\tau_c$ then there is a unique GGM corresponding to nontrivial period-4 height-periodic boundary laws of the type (\ref{up}) via Theorem \ref{new}.
\item At $\tau=\tau_c$ there are exactly two GGMs corresponding to a nontrivial period-4 height-periodic boundary law of the type (\ref{up}) via Theorem \ref{new}.
\item For $\tau>\tau_{c}$ there are exactly three such GGMs.
\end{enumerate}
\end{thm}
\begin{thm}\label{rozikov} Let $k\geq 2$ and $a\neq b$. For the SOS-model (\ref{nu1}) on the $k$-regular tree, with parameter $\tau=2\cosh(\beta)$, the following assertions hold:
\begin{enumerate}
\item For any positive fixed $b$, if $\tau\leq \frac{2}{b}$ then there is no Gradient Gibbs Measure (GGM) corresponding to nontrivial period-4 height-periodic boundary laws of the type (\ref{up}) via Theorem \ref{new}.
\item For any positive fixed $b$, if $\tau>\frac{2}{b}$ then there is a unique GGM corresponding to nontrivial period-4 height-periodic boundary laws of the type (\ref{up}) via Theorem \ref{new}.
\end{enumerate}
\end{thm}
\section*{ \textbf{Acknowledgements}}
We are deeply grateful to Professor U.A. Rozikov for his attention to our work and useful suggestions.
We thank the referee for many helpful comments.
\section{Introduction}
A key step in the origin of life is the formation of a metabolic network that is both self-sustaining and collectively autocatalytic \cite{ars, hay, liu, vai, vas, xav}. Systems that combine these two general properties have been studied within a formal framework that is sometimes referred to as RAF theory \cite{hor17}.
We give precise definitions shortly but, roughly speaking, a `RAF' \textcolor{black}{(=Reflexively Autocatalytic and F-generated)} set is a subset of reactions where the reactants and at least one catalyst of each reaction in the subset can be produced from an available food set by using reactions from within the subset only.
The study of RAFs traces back to pioneering work on `collectively autocatalytic sets' in polymer models of early life \cite{kau71, kau86}, which was subsequently developed mathematically (see \cite{hor19, hor17} and the references therein). RAF algorithms have been applied recently to investigate the traces of earliest metabolism that can be detected in large metabolic databases across bacteria and archaea \cite{xav}, leading to the development of an open-source program to analyse and visualise RAFs in complex biochemical systems \cite{cat}. RAF theory overlaps with other graph-theoretic approaches in which the emergence of directed cycles in reaction graphs plays a key role \cite{bol, j1, j2}, and is also related to (M, R) systems \cite{cor, jar10} and chemical organisation theory \cite{dit2}.
RAF theory has also been applied in other fields, including ecology \cite{caz18} and cognition \cite{gab17}, and the ideas may have application in other contexts. In economics, for instance, the production of consumer items can be viewed as a catalysed reaction; for example, the production of a wooden table involves nails and wood (reactants) and a hammer (a catalyst, as it is not used up in the reaction but makes the reaction happen much more efficiently) and the output (reaction product) is the table. On a larger scale, a factory is a catalyst for the production of the items produced in it from reactants brought into the factory. In both these examples, notice that each reactant may either be a raw material (i.e. the elements of a `food set') or a products of other (catalysed) reactions, whereas the products may, in turn, be reactants, or catalysts, for other catalysed reactions. Products can sometimes also {\em inhibit} reactions; for example, the production of internal combustion engines resulted in processes for building steam engines being abandoned.
In this paper, we extend RAF theory further by investigating the impact of different modes of catalysis and inhibition on the appearance of (uninhibited) RAF subsets. We focus on the expected number of such sets (rather than on the probability that at least one such set exists which has been the focus of nearly all earlier RAF studies \cite{fil, mos}). \textcolor{black}{Using a mathematical approach, we derive explicit and exact analytical expressions for the expected number of such uninhibited RAF subsets,} as well as providing some insight into the expected population sizes of RAFs for the catalysis rate at which they first appear (as we discuss in Section~\ref{relation}). \textcolor{black}{In particular, we show that for simple systems, with an average catalysis rate that is set at the level where RAFs first appear, the expected number of RAFs depends strongly on the variability of catalysis across molecules. At one extreme (uniform catalysis), the expected number of RAFs is small (e.g. 1, or a few), while at the other extreme (all-or-nothing catalysis) the expected number of RAFs grows exponentially with the size of the system.}
\textcolor{black}{The motivation for looking at the expected number of RAFs (rather than the probability that a RAF exists) is twofold. Firstly, by focusing on expected values it is possible to present certain exact results (in Theorem~\ref{thm1}), rather than just inequalities or asymptotic results, while still gaining some information about the probability that a RAF exists. Secondly, in origin of life studies, it is relevant to consider populations of self-sustaining autocatalytic chemical networks, which may be subject to competition and selection, a topic which has been explored by others (see e.g. \cite{sza, vas, vir}), and information concerning the likely diversity of RAFs available in a given chemical reaction system is therefore a natural question. In previous analyses where RAFs have been identified, subsequent analysis has revealed a large number of RAFs present within the RAF; for example, for a 7-reaction RAF in a laboratory-based study involving RNA-ribozymes (from \cite{vai}), more than half of the $2^7 = 128$ subsets of this RAF are also RAFs ({\em cf.} Fig. 5 of \cite{ste}). Simulation studies involving Kauffman's binary polymer model have also identified a large number of RAFs present once catalysis rises above the level at which RAFs first appear \cite{hor15}. }
\textcolor{black}{The structure of this paper is as follows. We begin with some formal definitions, and then describe different models for catalysis and inhibition. In Section~\ref{gensec}, we present the main mathematical result, along with some remarks and a proof. We then present a number of consequences of our main result, beginning with a generic result concerning the impact of inhibition when catalysis is uniform. We then investigate the impact of different catalysis distributions on the expected number of RAFs arising in `elementary' chemical reaction systems, focusing on the catalysis rate at which RAFs first appear. We end with some brief concluding comments.}
\subsection{Definitions}
Let $X$ be a set of molecule types; $R$ a set of reactions, where each reaction consists of a subset of molecule types as input (`reactants') and a set of molecule types as
outputs (`products'); and let $F$ be a subset of $X$ (called a `food set'). We refer to the triple $\mathcal Q=(X, R, F)$ as a {\em chemical reaction system with food set} and, unless stated otherwise, we impose no further restrictions on $\mathcal Q$ (e.g. it need not correspond to a system of polymers and a reaction can have any positive number of reactants and any positive number of products).
Given a reaction $r \in R$, we let $\rho(r) \subseteq X$ denote the set of reactants of $r$ and $\pi(r)$ denote the set of products of $r$.
Moreover, given a subset $R'$ of $R$, we let $\pi(R') = \bigcup_{r \in R'} \pi(r).$
A subset $R'$ of $R$ is said to be {\em $F$-generated} if $R'$ can be ordered $r_1, r_2, \ldots, r_{|R'|}$ so that
$\rho(r_1) \subseteq F$ and for each $i \in \{2, \ldots, |R'|\}$, we have $\rho(r_i) \subseteq F \cup \pi(\{r_1, \ldots, r_{i-1}\})$. In other words, $R'$ is $F$-generated if $R'$ can be built up by starting from one reaction that has all its reactants in the food set, then adding reactions in such a way that each added reaction has each of its reactants present either in the food set or as a product of a reaction in the set generated so far.
Now suppose that certain molecule types in $X$ can catalyse certain reactions in $R$.
A subset $R'$ of $R$ is said to be {\em Reflexively Autocatalytic and F-generated} (more briefly, a {\em RAF}) if $R'$ is nonempty and each reaction $r \in R'$ is catalysed by
at least one molecule type in $F \cup \pi(R')$ and $R'$ is $F$-generated.
We may also allow certain molecule types to also inhibit reactions in $R$, in which case a subset
$R'$ of $R$ is said to be an {\em uninhibited RAF} (uRAF) if
$R'$ is a RAF and no reaction in $R'$ is inhibited by any molecule type in $F \cup \pi(R')$. \textcolor{black}{The notion of a uRAF was first defined and studied in \cite{mos}.}
\textcolor{black}{Notice that inhibition is being applied in a strong sense: a reaction $r$ cannot be part of a uRAF if $r$ is inhibited by at least one molecule type present, regardless of how many molecule types are catalysts for $r$ and present in the uRAF}.
Since a union of RAFs is also a RAF, when a RAF exists in a system, there is a unique maximal RAF. However, the same does not apply to uRAFs -- in particular, the union of two uRAFs can fail to be a uRAF.
These concepts are illustrated in Fig.~\ref{fig1}.
\begin{figure}[h]
\centering
\includegraphics[scale=1.1]{fig1.pdf}
\caption{A chemical reaction system consisting of the set of molecule types $X=\{a, b, c, a', b', c', x, x', w,w', z,z'\}$, a food set $F=\{a, b, c, a', b', c'\}$ \textcolor{black}{(each placed inside a green box)} and the reaction set
$R=\{r_1, r_2, r_1', r_2', r_3, r_4\}$ \textcolor{black}{(bold, beside small white-filled squares)}. Solid arcs indicate two reactants entering a reaction and a product coming out.
Catalysis is indicated by dashed arcs (blue) and inhibition (also called blocking) is indicated by dotted arcs (red). The full set of
reactions is not a RAF, but it contains several RAFs that are contained in the unique maximal RAF $R'=\{r_1, r_1', r_2, r_2'\}$ (note that $r_4$ is not part of this RAF even though it is catalysed and the reactants of $r_4$ are present in the food set).
The maximal RAF $R'$ is not a uRAF (e.g. $r'_1$ is inhibited by $z$ which is a product of $r_2$); however, $\{r_1, r_2\}$ and $\{r_1', r_2'\}$ are uRAFs, and so are $\{r_1\}, \{r_1'\}$ and $\{r_1, r_1'\}$. }
\label{fig1}
\end{figure}
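The maximal RAF of a chemical reaction system can be found by the standard polynomial-time algorithm of RAF theory \cite{hor17}, which alternates between computing the set of molecule types generated from $F$ and discarding reactions whose reactants or catalysts are not all available. A minimal Python sketch follows (the toy system at the end is hypothetical, not the system of Fig.~\ref{fig1}):
\begin{verbatim}
def closure(food, reactions):
    """All molecule types producible from `food` using `reactions`."""
    avail, changed = set(food), True
    while changed:
        changed = False
        for reactants, products, _ in reactions.values():
            if reactants <= avail and not products <= avail:
                avail |= products
                changed = True
    return avail

def max_raf(food, reactions):
    """Discard reactions lacking available reactants or a catalyst."""
    R = dict(reactions)
    while True:
        avail = closure(food, R)
        keep = {name: v for name, v in R.items()
                if v[0] <= avail and v[2] & avail}
        if keep == R:
            return R        # an empty dict means no RAF exists
        R = keep

# Toy example: reaction -> (reactants, products, catalysts).
F = {"a", "b"}
rx = {"r1": ({"a", "b"}, {"p"}, {"q"}),
      "r2": ({"p"}, {"q"}, {"p"}),
      "r3": ({"q", "z"}, {"w"}, {"a"})}   # reactant z never available
print(sorted(max_raf(F, rx)))             # -> ['r1', 'r2']
\end{verbatim}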
\section{Modelling catalysis and inhibition}
We will model catalysis and also blocking (inhibition) by random processes.
To provide for greater generality, we allow the possibility that elements in a subset $C^{-}$ (respectively, $B^{-}$) of the food set cannot catalyse (respectively block) any reaction in $R$. Let $c=|F \setminus C^{-}|$ and $b=|F \setminus B^{-}|$. Thus $c$ (respectively $b$) is the number of food elements that are possible catalysts (respectively blockers).
Suppose that each molecule type $x \in X\setminus C^{-}$ has an associated probability $C_x$ of catalysing any given reaction in $R$. \textcolor{black}{The values $C_x$ are sampled independently from a distribution $\mathcal D$, for each $x \in X\setminus C^{-}$.}
This results in a random assignment of catalysis (i.e. a random subset $\chi$ of $X \times \mathcal R$), where $(x,r) \in \chi$ if $x$ catalyses $r$. Let
$\mathcal C_{x,r}$ be the event that $x$ catalyses $r$.
We assume that:
\begin{itemize}
\item[($I_1$)] $\mathcal C=(C_x, x\in X\setminus C^{-})$ is a collection of independent random variables.
\item[($I_2$)] Conditional on $\mathcal C$, $(\mathcal C_{x,r}: x\in X \setminus C^-, r \in R)$ is a collection of independent events.
\end{itemize}
Since the distribution of $C_x$ is the same for all $x \in X\setminus C^-$, we \textcolor{black}{will use $C$ to denote an arbitrary random variable sampled from the distribution $\mathcal D$.} Let
$\mu_C = \mathbb E[C]$ and, for $i\geq 0$, let $\lambda_i$ be the $i$--th moment of $1-C$; that is:
$$\lambda_i =\mathbb E[(1-C)^i].$$
Although our results concern general catalysis distributions, we will pay particular attention to three forms of catalysis \textcolor{black}{which have been considered in previous studies (e.g. \cite{hor16}), and which will be compared in our analyses.}
\begin{itemize}
\item The {\em uniform model:} Each $x \in X\setminus C^-$ catalyses each reaction in $\mathcal R$ with a fixed probability $p$. Thus, $C =p$ with probability 1, and so $\mu_C = p$.
\item The {\em sparse model:} $C= u$ with probability $\pi$ and $C =0$ with probability $1-\pi$, and so $\mu_C = u \pi$.
\item
The {\em all-or-nothing model:} $C=1$ with probability $\pi$ and $C=0$ with probability $1-\pi$, and so $\mu_C = \pi$.
\end{itemize}
The uniform model is from Kauffman's binary polymer network and has been the default for most recent studies involving polymer models \cite{hor17}.
More realistic catalysis scenarios can be modelled by allowing $C$ to take a range of values around $\mu_C$ with different probabilities. The {\em sparse model} generalises the uniform model slightly by allowing a (random) subset of molecule types to be catalysts. In this model, $\pi$ would typically be very small in applications (i.e. most molecules are not catalysts, but those few that are will catalyse a lot of reactions, as in the recent study of metabolic \textcolor{black}{origins, described in} \cite{xav}). The all-or-nothing model is a special case of the sparse model.
The emergence of RAFs in these models (and others, including a power-law distribution) was investigated in \cite{hor16}.
For these three models, the associated $\lambda_i$ values are given as follows: $\lambda_0=1$, and
for all $i\geq 1$:
\begin{equation}
\label{mu-eq}
\lambda_i = \begin{cases}
(1-\mu_C)^i, & \mbox{(uniform model)};\\
1-\pi + \pi(1-u)^i, & \mbox{(sparse model)};\\
1-\mu_C, & \mbox{(all-or-nothing model)}.
\end{cases}
\end{equation}
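As a concrete numerical illustration (the values here are chosen purely for exposition), suppose $\mu_C = 0.1$. The uniform model gives $\lambda_2 = (1-0.1)^2 = 0.81$; the sparse model with $u=0.5$ and $\pi = 0.2$ (so that $\mu_C = u\pi = 0.1$) gives
$$\lambda_2 = 1-0.2 + 0.2(1-0.5)^2 = 0.85;$$
and the all-or-nothing model gives $\lambda_2 = 1-0.1 = 0.9$. More generally, for a fixed mean $\mu_C$, Jensen's inequality shows that the uniform model minimises each $\lambda_i$, while the inequality $\lambda_i = \mathbb E[(1-C)^i] \leq \mathbb E[1-C] = 1-\mu_C$ (valid for $i \geq 1$ since $C$ takes values in $[0,1]$) shows that the all-or-nothing model maximises each $\lambda_i$.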
In addition to catalysis, we may also allow random blocking (inhibition) of reactions by molecules, formalised as follows.
Suppose that each molecule type $x \in X\setminus B^{-}$ has an associated probability $B_x$ of blocking any given reaction in $R$. We will treat $B_x$ as a random variable taking values in $[0,1]$ with a common distribution $\hat{\mathcal D}$. This results in a random assignment of blocking (i.e. a random subset \textcolor{black}{$\beta$} of $X \times \mathcal R$), where \textcolor{black}{$(x,r) \in \beta$} if $x$ blocks reaction $r$. Let
$\mathcal B_{x,r}$ be the event that $x$ blocks $r$. We assume that:
\begin{itemize}
\item[($I'_1$)] $\mathcal B=(B_x, x\in X\setminus B^{-})$ is a collection of independent random variables.
\item[($I'_2$)] Conditional on $\mathcal B$, $(\mathcal B_{x,r}: x\in X \setminus B^-, r \in R)$ is a collection of independent events.
\end{itemize}
Since the distribution of $B_x$ is the same for all $x$, we will use $B$ to denote this random variable, let $\mu_B = \mathbb E[B]$ and, for $i\geq 0$, let: $$\hat{\lambda}_i =\mathbb E[(1-B)^i].$$
We also assume that catalysis and inhibition are independent of each other. Formally, this is the following condition:
\begin{itemize}
\item[($I_3$)] The $C$-random variables in ($I_1$, $I_2$) are independent of the $B$-random variables in ($I'_1$, $I'_2$).
\end{itemize}
Note that $(I_3)$ allows the possibility that a molecule type $x$ both catalyses and blocks the same reaction $r$ (the effect of this on uRAFs is the same as if $x$ just blocks $r$; i.e., blocking is assumed to trump catalysis).
Notice also that $\lambda_0 = \hat{\lambda}_0 = 1$.
\section{Generic results}
\label{gensec}
To state our first result, we require two further definitions. Let $\mu_{\rm RAF}$ and $\mu_{\rm uRAF}$ denote the expected number of RAFs and uRAFs (respectively) arising in $\mathcal Q$ under the random process of catalysis and inhibition described.
For integers $k\geq 1$ and $s\geq 0$, let $n_{k,s}$ be the number of F-generated subsets $R'$ of $R$ \textcolor{black}{that have size $k$ and} for which the total number of non-food products in $X$ produced by reactions in $R'$ is $s$. Note that $n_{k,s}=0$ for $s>\min\{|X|-|F|, k M\}$ where $M$ is the maximum number of products of any single reaction.
Part (i) of the following theorem gives an exact expression for $\mu_{\rm RAF}$ and $\mu_{\rm uRAF}$, which we then use in Parts (ii) and (iii) to describe the catalysis and inhibition distributions (having a given mean) that minimise or maximise the expected number of RAFs and uRAFs. We apply this theorem to particular systems in the next section.
\begin{theorem}
\label{thm1}
Let $\mathcal Q$ be any chemical reaction system with food set, accompanied by catalysis and inhibition distributions $\mathcal D$ and $\hat{\mathcal D}$, respectively.
\begin{itemize}
\item[(i)]
The expected number of RAFs and uRAFs for $\mathcal Q$ is given as follows:
\begin{equation}
\label{mumu1}
\mu_{\rm RAF}= \sum_{k\geq 1,s\geq 0} n_{k,s} \left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right)
\end{equation}
and
\begin{equation}
\label{mumu2}
\mu_{\rm uRAF}= \sum_{k\geq 1, s\geq 0} n_{k,s} \left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right) \hat{\lambda}_k^{s+b}.
\end{equation}
\item[(ii)]
Among all distributions $\mathcal D$ on catalysis having a given mean $\mu_C$, the distribution that minimises the expected number of RAFs and uRAFs (for any inhibition distribution) is the uniform model (i.e. $C = \mu_C$ with probability 1).
\item[(iii)] Among all distributions $\hat{\mathcal D}$ on inhibition having a given mean $\mu_B$, the following hold:
\begin{itemize}
\item[(a)] the distribution that minimises the expected number of uRAFs (for any catalysis distribution) is the uniform model ($B = \mu_B$ with probability 1).
\item[(b)] the distribution that maximises the expected number of uRAFs (for any catalysis distribution) is the all-or-nothing inhibition model (i.e. $B=1$ with probability $\mu_B$, and $B=0$ with probability $1-\mu_B)$.
\end{itemize}
\end{itemize}
\end{theorem}
\bigskip
\textcolor{black}{We give the proof of Theorem~\ref{thm1} shortly, following some brief remarks.}
\subsection{Remarks}
\begin{itemize}
\item[(1)]
If $P_{\rm RAF}$ and $P_{\rm uRAF}$ are the probability that $\mathcal Q$ contains a RAF and a uRAF, respectively, then these quantities are bounded above as follows:
$$P_{\rm RAF} \leq \mu_{\rm RAF} \mbox{ and } P_{\rm uRAF} \leq \mu_{\rm uRAF}.$$
This follows from the well-known inequality $\mathbb P(V>0) \leq \mathbb E[V]$ for any non-negative integer-valued random variable $V$, upon taking $V$
to be the number of RAFs (or the number of uRAFs). We will explore the extent to which $P_{\rm RAF}$ underestimates $\mu_{\rm RAF}$ in Section~\ref{relation}.
\item[(2)]
Theorem~\ref{thm1} makes clear that the only relevant aspects of the network $(X, R)$ for $\mu_{\rm RAF}$ and $\mu_{\rm uRAF}$ are encoded entirely within the coefficients $n_{k,s}$ (the two stochastic terms depend only on $k$ and $s$ but not on further aspects of the network structure). By contrast, an expression for the probabilities $P_{\rm RAF}$ and $P_{\rm uRAF}$ that a RAF or uRAF exists
requires more detailed information concerning the structure of the network. This is due to dependencies that arise in the analysis.
Notice also that Theorem~\ref{thm1} allows the computation of $\mu_{\rm uRAF}$ in $O(|R|^2 \times |X|)$ steps (assuming that the $\lambda_i, \hat{\lambda}_i$ and $n_{k,s}$ values are available).
\item[(3)]
Although the computation or estimation of $n_{k,s}$ may be tricky in general systems, Eqn.~(\ref{mumu1}) can still be useful (even with little or no information about $n_{k,s}$) for asking comparative questions. In particular, Parts (ii) and (iii) provide results that are independent of the details of the network $(X, R, F)$.
For example, Theorem~\ref{thm1}(ii) is consistent with simulation results in \cite{hor16} for Kauffman's binary polymer model, in which variable catalysis rates (the sparse and all-or-nothing models) led to RAFs appearing at lower average catalysis values ($\mu_C$) than for uniform catalysis.
\item[(4)]
\textcolor{black}{For the uniform model, note that the term $\left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right)$ in Eqns.~(\ref{mumu1}) and (\ref{mumu2}) simplifies to
$\left[ 1- (1-\mu_C)^{s+c}\right]^k$.}
\end{itemize}
\bigskip
\subsection{\textcolor{black}{Proof of Theorem~\ref{thm1}}}
For Part (i), recall that $\pi(R')$ denotes the set of products of reactions in $R'$.
\textcolor{black}{ For $k \geq 1$ and $s \geq 0$, let ${\rm FG}(k,s)$ denote the collection of subsets $R'$ of $R$ that satisfy all of the following three properties:
\begin{itemize}
\item[(i)] $R'$ has size $k$;
\item[(ii)] $R'$ is F-generated, and
\item[(iii)] the number of non-food molecule types produced by reactions in $R'$ is $s$.
\end{itemize}
}
Thus, $$n_{k,s}= |{\rm FG}(k,s)|.$$
For $R' \subseteq R$, let $\mathbb I_{R'}$ be the Bernoulli random variable
that takes the value $1$ if each reaction in $R'$ is catalysed by at least one product of a reaction in $R'$ or by an element of $F\setminus C^{-}$, and $0$ otherwise.
Similarly, let $\hat{\mathbb I}_{R'}$ be the Bernoulli random variable
that takes the value $1$ if no reaction in $R'$ is blocked by the product of any reaction in $R'$ or by an element of $F\setminus B^{-}$. Then the random variable
$$\sum_{k\geq 1,s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb I_{R'}\cdot \hat{\mathbb I}_{R'}$$
counts the number of uRAFs present, so we have:
\begin{equation}\label{nicer}
\begin{aligned}
\mu_{\rm uRAF} &= \mathbb E\left[\sum_{k \geq 1, s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb I_{R'}\cdot \hat{\mathbb I}_{R'}\right]
=\sum_{k\geq 1, s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb E\left[\mathbb I_{R'}\cdot \hat{\mathbb I}_{R'} \right] \\
&= \sum_{k \geq 1, s\geq 0} \sum_{R' \in {\rm FG}(k,s)} \mathbb E[ \mathbb I_{R'}]\cdot\mathbb E[ \hat{\mathbb I}_{R'}],
\end{aligned}
\end{equation}
where the second equality is by linearity of expectation, and the third equality is by the independence assumption ($I_3$).
Given $R'\in {\rm FG}(k,s)$, let $C_1, C_2, \ldots, C_{s+c}$ be the random variables (ordered in any way) that correspond to the catalysis probabilities of
the $s$ products of $R'$ and the $c$ elements of $F\setminus C^{-}$. We can then write:
\begin{equation}\label{nice}
\mathbb E[ \mathbb I_{R'}] =\mathbb P(\mathbb I_{R'}=1) = \mathbb E[\mathbb P(\mathbb I_{R'}=1|C_1, C_2, \ldots, C_{s+c})],
\end{equation}
where the second expectation is with respect to the random variables $C_i$.
The event $\mathbb I_{R'}=1$ occurs precisely when each of the $k$ reactions in $R'$ is catalysed by at least one of the $s+c$ elements in
$(\pi(R')\setminus F) \cup (F\setminus C^{-})$. By the independence assumption ($I_2$),
\begin{equation}
\label{epr1}
\mathbb P(\mathbb I_{R'}=1|C_1, C_2, \ldots, C_{s+c}) = \prod_{r' \in R'} \left(1- \prod_{j=1}^{s+c} (1-C_j)\right) = \left(1- \prod_{j=1}^{s+c} (1-C_j)\right)^k.
\end{equation}
Set $V:= \prod_{j=1}^{s+c} (1-C_j)$. \textcolor{black}{Eqns.~(\ref{nice}) and (\ref{epr1}) then give:}
\begin{equation}
\label{epr2}
\textcolor{black}{\mathbb E[ \mathbb I_{R'}] = \mathbb E[(1-V)^k] = \sum_{i=0}^k (-1)^i \binom{k}{i} \mathbb E[V^i],}
\end{equation}
\textcolor{black}{where the second equality is from the binomial expansion $(1-V)^k = \sum_{i=0}^k (-1)^i \binom{k}{i} V^i$, and linearity of expectation.}
Moreover, for each $i\geq 0$, we have:
\begin{equation}
\label{epr3}
\begin{aligned}
\mathbb E[V^i] &= \mathbb E\left[ \left[\prod_{j=1}^{s+c} (1-C_j)\right]^i\right]=\mathbb E\left[ \prod_{j=1}^{s+c} (1-C_j)^i\right] \\
&=\prod_{j=1}^{s+c} \mathbb E[(1-C_j)^i]
=\prod_{j=1}^{s+c} \lambda_i = \lambda_i^{s+c},
\end{aligned}
\end{equation}
where the first two equalities are trivial algebraic identities, the third is by the independence assumption ($I_1$), the fourth is by definition and the last is trivial.
\textcolor{black}{Substituting Eqn.~(\ref{epr3}) into (\ref{epr2})} gives:
\begin{equation}
\label{epr4}
\mathbb E[ \mathbb I_{R'}] = \sum_{i=0}^k (-1)^i \binom{k}{i}\lambda_i^{s+c}.
\end{equation}
Turning to inhibition, a RAF subset $R'$ of $R$ in ${\rm FG}(k,s)$ is a uRAF precisely if no reaction in $R'$ is blocked by any
of the $s+b$ elements of $(\pi(R')\setminus F) \cup (F\setminus B^{-})$. By the independence assumption ($I'_2$),
$$\mathbb P(\hat{\mathbb I}_{R'}=1|B_1, B_2, \ldots, B_{s+b}) = \prod_{r' \in R'}\left(\prod_{j=1}^{s+b} (1-B_j)\right)
= \left(\prod_{j=1}^{s+b} (1-B_j)\right)^k =\prod_{j=1}^{s+b} (1-B_j)^k. $$
Applying expectation (using the independence assumption ($I'_1$)), together with the identity $\mathbb E[(1-B_j)^k] = \hat{\lambda}_k$ gives:
\begin{equation}
\label{epr5}
\mathbb E[\hat{ \mathbb I}_{R'}] =\hat{\lambda}_k^{s+b}.
\end{equation}
Combining \textcolor{black}{Eqns.~(\ref{epr4}) and (\ref{epr5})} into Eqn.~(\ref{nicer}) gives the first equation in Part (i). The second is then obtained by putting $\hat{\lambda}_i = 1$ for all $i$.
\bigskip
{\em Parts (ii) and (iii):}
Observe that the function $u=(1-y)^k$ for $k \geq 1$ is convex and strictly convex when $k>1$.
Thus, by Jensen's Inequality, for any random variable $Y$, we have:
\begin{equation}
\label{in}
\mathbb E[(1-Y)^k] \geq (1-\mathbb E[Y])^k,
\end{equation}
with a strict inequality when $Y$ is nondegenerate and $k>1$.
For Part (ii), let $V= \prod_{j=1}^{s+c} (1-C_j)$, as above. Then \textcolor{black}{by the first equality in Eqn.~(\ref{epr2}) we have:}
$$\mathbb E[ \mathbb I_{R'}] = \mathbb E[(1-V)^k],$$
\textcolor{black}{and by Inequality~(\ref{in}) (with $Y=V$) we have:
\begin{equation}
\label{in2}
\mathbb E[ \mathbb I_{R'}] \geq (1-\mathbb E[V])^k,
\end{equation}
\textcolor{black}{and the inequality is strict when $V$ is nondegenerate and $k>1$. }
By the independence assumption $(I_1)$, and noting that $\mathbb E[(1-C_j)] = 1-\mu_C$ we have:
\begin{equation}
\label{in3}
\mathbb E[V] = \mathbb E[ \prod_{j=1}^{s+c} (1-C_j)] = \prod_{j=1}^{s+c}\mathbb E[(1-C_j)] = (1-\mu_C)^{s+c},
\end{equation}
and substituting Eqn.~(\ref{in3}) into Inequality~(\ref{in2}) gives:}
$$\mathbb E[ \mathbb I_{R'}] \geq (1-(1-\mu_C)^{s+c})^k,$$
with equality only for the uniform model.
This gives Part (ii).
\bigskip
For Part (iii)(a), Inequality (\ref{in}) implies that $\hat{\lambda}_k =\mathbb E[(1-B)^k] \geq (1-\mu_B)^k$.
\textcolor{black}{Let $H(k,s) := \left(\sum_{i=0}^k (-1)^i \binom{k}{i} \lambda_i^{s+c}\right)$. By Eqn. (\ref{epr4}), $H(k,s) = \mathbb E[ \mathbb I_{R'}]$ for $R' \in {\rm FG}(k,s)$ and so $H(k,s) \geq 0$.
Thus, by Eqn.~(\ref{mumu2}) we have:
$$\mu_{\rm uRAF}= \sum_{k\geq 1, s\geq 0} n_{k,s} \cdot H(k,s) \cdot \hat{\lambda}_k^{s+b} \geq \sum_{k\geq 1, s\geq 0} n_{k,s} \cdot H(k,s) \cdot (1-\mu_B)^{k(s+b)}, $$
and the right-hand side of this inequality is the value of $\mu_{\rm uRAF}$ for the uniform model of inhibition. }
\bigskip
For Part (iii)(b),
suppose that $Y$ is a random variable taking values in $[0,1]$ with mean $\eta$ and let $Y_0$ be the random variable that
takes the value 1 with probability $\eta$ and $0$ otherwise. Then $\mathbb E[Y_0^m] = \eta$ for all $m \geq 1$, while $\mathbb E[Y] = \eta$ and, for all $m\geq 2$, $\mathbb E[Y^m] \leq \mathbb E[Y^2] \leq \eta$ (since $Y^m \leq Y^2 \leq Y$ for $m \geq 2$ because $Y$ takes values in $[0,1]$); moreover,
$\mathbb E[Y^2]= \eta$ if and only if $\mathbb E[Y(1-Y)] = 0$, which implies that $Y=Y_0$.
Now apply this to $Y= (1-B)$ and $m=k$ to deduce that, among the distributions of $B$ with a given mean $\mu_B$, $\hat{\lambda}_k$ is maximised when $B$ takes the value $1$ with probability $\mu_B$ and zero otherwise.
\hfill$\Box$
\section{Applications}
\subsection{Inhibition-catalysis trade-offs under the uniform model}
For any model in which catalysis and inhibition are uniform, Theorem~\ref{thm1} provides a simple prediction concerning how the expected number of uRAFs compares with a model with zero inhibition (and a lower catalysis rate). To simplify the statement, we will assume $b=c$ and we will write $\mu_{\rm uRAF}(p, tp)$ to denote the dependence of $\mu_{\rm uRAF}$ on
$\mu_C=p$ and $\mu_B = tp$ for some value of $t$.
We will also write $p = \nu /N$, where $N$ is the total number of molecule types that are in the food set or can be generated by a sequence of reactions in $\mathcal R$. We assume in the following result that $p$ is small (in particular, $< 1/2$) and $N$ is large (in particular, $(1-\nu/N)^N$ can be approximated by $e^{-\nu}$).
The following result (which extends Theorem 2 from \cite{hor16}) applies to any chemical reaction system and provides a lower bound on the expected number of uRAFs in terms of the expected number of RAFs in the system with no inhibition (and half the catalysis rate); its proof relies on Theorem~\ref{thm1}.
\textcolor{black}{Roughly speaking, Corollary~\ref{thm2} states that for any chemical reaction system with uniform catalysis, if one introduces a limited degree of inhibition and doubles the original catalysis rate, then the expected number of uninhibited RAFs is at least as large as the expected number of RAFs before inhibition was introduced (at the original catalysis rate). }
\begin{corollary}
\label{thm2}
For all non-negative values of $t$ with $t \leq \frac{1}{\nu}\ln(1+e^{-\nu})$, the following inequality holds:
$$
\mu_{\rm uRAF}(2p, tp) \geq \mu_{\rm RAF}(p, 0).
$$
\end{corollary}
\begin{proof}
\textcolor{black}{By Theorem~\ref{thm1} and Remark (4) following it, and noting that $\mu_C =p$ and $\mu_B=tp$, we have:
\begin{equation}
\label{por2}
\mu_{\rm uRAF}(2p, tp)= \sum_{k \geq 1, s\geq 0} n_{k,s} \left[(1- (1-2p)^{s+c})\cdot (1-tp)^{s+c}\right]^k,
\end{equation}
which can be re-written as:}
\begin{equation}
\label{por2plus}
\mu_{\rm uRAF}(2p, tp)= \sum_{k \geq 1, s\geq c} n_{k,s-c} \left[(1- (1-2p)^{s})\cdot (1-tp)^{s}\right]^k.
\end{equation}
Thus (putting $t=0$ in this last equation) we obtain:
\begin{equation}
\label{por3}
\mu_{\rm RAF}(p, 0)= \sum_{k \geq 1, s\geq c} n_{k,s-c} \left[1- (1-p)^{s}\right]^k.
\end{equation}
Now, for each $x\in (0, 0.5)$, we have: $$1-(1-2x)^s\geq 1-(1-x)^{2s} = (1-(1-x)^s)(1+(1-x)^s).$$
Thus (with $x=p$), we see that the term inside the square brackets in Eqn.~(\ref{por2plus}) exceeds the term in square brackets in Eqn.~(\ref{por3}) by a factor of
$(1+(1-p)^s)(1-tp)^s$, and this is minimised when $s = N$ (the largest possible value $s$ can take). \textcolor{black}{ Setting $s=N$ and writing $p = \nu/N$} we have
$$(1+(1-p)^s)(1-tp)^s \textcolor{black}{ = \left(1+ (1-\nu/N)^N\right)(1-t\nu/N)^N} \sim (1+e^{-\nu}) e^{-t\nu},$$ and the limiting term on the right is at least 1 when $t$ satisfies the stated inequality (namely, $t \leq \frac{1}{\nu}\ln(1+e^{-\nu})$). \textcolor{black}{Thus $(1+(1-p)^s)(1-tp)^s \geq 1$ for all $s$ between 1 and $N$, and so}
each term in Eqn.~(\ref{por2plus}) is greater than or equal to the corresponding term in Eqn.~(\ref{por3}), which justifies the inequality in Corollary~\ref{thm2}.
\end{proof}
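To illustrate the admissible range of $t$ in Corollary~\ref{thm2} with some sample values: for $\nu=1$ the condition becomes
$$t \leq \ln(1+e^{-1}) \approx 0.313,$$
while for $\nu = 2$ it becomes $t \leq \frac{1}{2}\ln(1+e^{-2}) \approx 0.063$; thus the tolerable mean inhibition rate (relative to the catalysis rate $p$) shrinks rapidly as $\nu$ increases.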
\subsection{Explicit calculations for two models on a subclass of networks}
\label{relation}
For the remainder of this section, we consider {\em elementary} chemical reaction systems (i.e. systems for which each reaction has all its reactants in the food set, as studied in \cite{ste}), with the further conditions that:
(i) each reaction has exactly one product,
(ii) different reactions produce different products,
(iii) no reaction is inhibited, and
(iv) no food element catalyses any reaction.
We can associate with each such system a directed graph $\mathcal G$ on the set $X\setminus F$ of products of the reactions, with an arc from $x$ to $y$ if $x$ catalyses the reaction that produces $y$
(this models a setting investigated in \cite{j1, j2}).
RAF subsets are then in one-to-one correspondence with the subgraphs of $\mathcal G$ for which each vertex has indegree at least one. In particular, a RAF exists if and only if there is a directed cycle in $\mathcal G$ (which could be an arc from a vertex to itself).\footnote{An asymptotic study of the emergence of first cycles in large random directed graphs was explored in \cite{bol}.} In this simple set-up, if $N$ denotes the number of reactions (= number of non-food molecule types) then:
$$n_{k,s} = \begin{cases}
\binom{N}{k}, & \mbox{ if $k=s$;}\\
0, & \mbox{ otherwise.}
\end{cases}
$$
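To see this, note that since the system is elementary, every subset of reactions is $F$-generated, and conditions (i) and (ii) imply that a subset of $k$ reactions produces exactly $k$ distinct non-food products; hence $s=k$, and there are $\binom{N}{k}$ such subsets. Moreover, condition (iv) gives $c=0$, so the exponent $s+c$ appearing in Theorem~\ref{thm1}(i) reduces to $j$ in the expression below.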
Applying Theorem~\ref{thm1}(i) gives:
\begin{equation}
\label{ab1}
\mu_{\rm RAF} = \sum_{j=1}^N \binom{N}{j} \left ( \sum_{i=0}^j (-1)^i\binom{j}{i} \lambda_i^j\right).
\end{equation}
Regarding catalysis, consider first the {\bf all-or-nothing model}, for which $\lambda_i= 1-\pi=1-\mu_C$ for $i\geq 1$ (and $\lambda_0=1$).
Eqn.~(\ref{ab1}) simplifies to:
\begin{equation}
\label{ab1a}
\mu_{\rm RAF} = 2^N - (2-\mu_C)^N,
\end{equation}
and we provide a proof of this in the Appendix.
This expression can also be derived by the following direct argument. First, note that a subset $S$ of the $N$ products of reactions does not correspond to a RAF if and only if each of the $|S|$ elements $x$ in $S$ has $C_x=0$.
The random variable $W=|\{x: C_x =1\}|$ follows the binomial distribution ${\rm Bin}(N, \mu_C)$, and the proportion of subsets of the $N$ products that avoid a given set $S$ of size $m$
is $2^{-m}$. Thus the expected proportion of subsets that are not RAFs is the expected value of $2^{-W}$, where $W$ is the binomial random variable above. Applying standard combinatorial identities then leads to Eqn.~(\ref{ab1a}).
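For completeness, the expectation $\mathbb E[2^{-W}]$ can be evaluated via the binomial theorem:
$$\mathbb E[2^{-W}] = \sum_{m=0}^N \binom{N}{m}\mu_C^m(1-\mu_C)^{N-m}2^{-m} = \left(1-\mu_C+\frac{\mu_C}{2}\right)^N = \left(1-\frac{\mu_C}{2}\right)^N,$$
so the expected number of subsets that are not RAFs is $2^N(1-\mu_C/2)^N = (2-\mu_C)^N$, which recovers Eqn.~(\ref{ab1a}).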
The probability of a RAF for the all-or-nothing model is also easily computed:
\begin{equation}
\label{ab2a}
P_{\rm RAF} = 1-(1-\mu_C)^N.
\end{equation}
Notice that one can select $\mu_C$ to tend to 0 in such a way that $P_{\rm RAF}$ converges to 0 with $N$ while $\mu_{\rm RAF}$ tends to infinity at an exponential rate with $N$ (this requires $\mu_C$ to decay sufficiently fast with $N$ but not too fast, e.g. $\mu_C = \Theta(N^{-1-\delta})$ for $\delta>0$).
Comparing Eqns.~(\ref{ab1a}) and (\ref{ab2a}), we also observe the following identity: $$\mu_{\rm RAF}(\mu_C) = 2^N P_{\rm RAF}(\mu_C/2 ).$$
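This identity can be verified directly from Eqns.~(\ref{ab1a}) and (\ref{ab2a}):
$$2^N P_{\rm RAF}(\mu_C/2) = 2^N\left(1-\left(1-\frac{\mu_C}{2}\right)^N\right) = 2^N - (2-\mu_C)^N = \mu_{\rm RAF}(\mu_C).$$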
By contrast, for the {\bf uniform model}, applying straightforward algebra to Eqn.~(\ref{ab1}) leads to
\begin{equation}
\label{ab3x}
\mu_{\rm RAF} = \sum_{j=1}^N \binom{N}{j} \left(1- (1-\mu_C)^j\right)^j.
\end{equation}
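The algebra here is again an application of the binomial theorem: since $\lambda_i = (1-\mu_C)^i$ under the uniform model, we have $\lambda_i^j = \left[(1-\mu_C)^j\right]^i$, and so
$$\sum_{i=0}^j (-1)^i \binom{j}{i}\left[(1-\mu_C)^j\right]^i = \left(1-(1-\mu_C)^j\right)^j,$$
in line with Remark (4).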
\textcolor{black}{ We now use these formulae to investigate the relationship between $P_{\rm RAF}$ and $\mu_{\rm RAF}$ in elementary chemical reaction systems (satisfying conditions (i)--(iv)) as $N$ becomes large; in particular the impact of the choice of model (all-or-nothing vs uniform) on this relationship. }
\bigskip
\noindent {\bf Asymptotic properties of the two models at the catalysis level where RAFs arise:} For the all-or-nothing and uniform models, RAFs arise with a given (positive) probability, provided that $\mu_C$ converges to 0 no faster than $N^{-1}$
as $N$ grows. Thus, it is helpful to write $\mu_C = \gamma/N$ to compare their behaviour as $N$ grows.
For the all-or-nothing model, Eqns.~(\ref{ab1a}) and (\ref{ab2a}) reveal that:
$$\frac{\mu_{\rm RAF}}{ P_{\rm RAF}} =
2^N \frac{\left(1-\left(1-\frac{\gamma}{2N}\right)^N\right)}{\left(1-\left(1-\frac{\gamma}{N}\right)^N\right)}
\sim 2^N \left(\frac{1-\exp(-\gamma/2)}{1-\exp(-\gamma)}\right),$$
where $\sim$ is asymptotic equivalence as $N$ becomes large (with $\gamma$ being fixed),
and so:
\begin{equation}
\label{abu}
\frac{\mu_{\rm RAF}}{ P_{\rm RAF}} \sim 2^{N-1}(1 + O(\gamma)).
\end{equation}
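The constant $2^{N-1}$ in Eqn.~(\ref{abu}) can be seen from a Taylor expansion for small $\gamma$:
$$\frac{1-\exp(-\gamma/2)}{1-\exp(-\gamma)} = \frac{\gamma/2-\gamma^2/8+O(\gamma^3)}{\gamma-\gamma^2/2+O(\gamma^3)} = \frac{1}{2}\left(1+\frac{\gamma}{4}+O(\gamma^2)\right).$$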
Let us compare this with the uniform model with the same $\mu_C$ (and hence $\gamma$) value.
It can be shown that when $\gamma< e^{-1}$, we have:
\begin{equation}
\label{ab0}
\lim_{N \rightarrow \infty} \sum_{j=1}^N \binom{N}{j} \left(1- (1-\gamma/N)^j\right)^j = \gamma + o(\gamma),
\end{equation}
where $o(\gamma)$ has order $\gamma^2$ as $\gamma \rightarrow 0$ (a proof is provided in the Appendix).
By Theorem 1 of \cite{hor2} (and for any value of $N$ and assuming $\gamma<1$), we have:
\begin{equation}
\label{ab3y}
1-\exp(-\gamma) \leq P_{\rm RAF} \leq -\ln(1-\gamma).
\end{equation}
In particular, for small $\gamma$ and the uniform model we have:
\begin{equation}
\label{ab4y}
P_{\rm RAF} = \gamma + o(\gamma).
\end{equation}
Eqns.~(\ref{ab3x}), (\ref{ab0}), and (\ref{ab4y}) provide the following result for the uniform model when $\gamma < e^{-1}$:
\begin{equation}
\label{abu2}
\frac{\mu_{\rm RAF}}{ P_{\rm RAF}} \sim 1 + O(\gamma),
\end{equation}
where $\sim$ again denotes asymptotic equivalence as $N$ becomes large (with $\gamma$ fixed).
Comparing Eqns.~(\ref{abu}) and (\ref{abu2}) reveals a key difference in the ratio $\mu_{\rm RAF}/ P_{\rm RAF}$ between the all-or-nothing and uniform models when $N$ is large and $\gamma$ is small: the former equation involves an exponential term in $N$, while the latter does not. This can be explained as follows. In the all-or-nothing model, the existence of a RAF comes down to whether or not there is a reaction $r$ that generates a universal catalyst; when there is, then any subset of the $N$ reactions that contains $r$ is a RAF. By contrast, with the uniform model at a low catalysis level where RAFs are improbable, if a RAF exists, there is likely to be only one. \textcolor{black}{Note that the results in this section are particular to chemical reaction systems that are elementary and satisfy properties (i)--(iv) as described at the start of this section.}
\section{Concluding comments}
In this paper, we have focused on the expected number of RAFs and uRAFs (rather than the probability of at least one such set existing), as this quantity can be described explicitly, and generic results described via this expression can be derived (e.g. in Parts (ii) and (iii) of Theorem~\ref{thm1} and Corollary~\ref{thm2}). Even so, the expressions in Theorem~\ref{thm1} involve quantities $n_{k,s}$ that may be difficult to quantify exactly; thus in the second part of the paper, we consider more restrictive types of systems.
In our analysis, we have treated inhibition and catalysis as simple and separate processes. However, a more general approach would allow reactions to proceed under rules that are encoded by Boolean expressions. For example, the expression $(a \wedge b) \vee c \vee (d \wedge \neg e)$ assigned to a reaction $r$ would allow $r$ to proceed if at least one of the following holds: (i) both $a$ and $b$ are present as catalysts, or (ii) $c$ is present as a catalyst or (iii) $d$ is present as a catalyst and $e$ is not present as an inhibitor. Extending the results in this paper to this more general setting could be an interesting exercise for future work.
\section{Acknowledgements}
\textcolor{black}{We thank the two reviewers for a number of helpful comments on an earlier version of this manuscript.}
\section{Introduction}
To realize pulsed emission in fiber lasers, Q-switching is one of the preferred technologies for generating short, high-energy pulses, which are widely employed in optical communications, industrial processing, sensing, medicine, spectroscopy, etc. \cite{chenieee20}. Besides, nonlinear frequency conversion \cite{peremansol}, Doppler LIDAR \cite{ouslimani} and coherent beam combination \cite{heoe14,zhouieee15} require short pulses with narrow bandwidths to elevate the conversion efficiency, measurement accuracy and beam quality. Generally, a Q-switching element and a band-limiting element are both necessary to achieve, separately, Q-switched pulse emission and a narrow spectral bandwidth. On the one hand, active modulators with external signals (such as an acousto-optic modulator or a piezoelectric actuator \cite{lees32,Posada2017,Kaneda2004}) and passive saturable absorbers (e.g., semiconductor saturable absorption mirrors and two-dimensional materials) have both been exploited to obtain Q-switched operation \cite{Tse2008,Li2017Mode,lipr6,Yao2017Graphene}; on the other hand, bandpass filters, phase-shifted fiber Bragg gratings (FBGs) and multimode interference filters \cite{Tse2008,Chakravarty2017,Popa2011} are employed to narrow the bandwidth. Besides, some configurations based on the spectral narrowing effect (e.g., suppressing ASE gain self-saturation and coupled interference filtering) have also been adopted to achieve narrow spectra \cite{Yao19oe,Anting2003}. However, such separated functions usually result in a highly complex laser cavity with rather low reliability. In the last decade, a highly integrated and reliable saturable absorber filter, combining saturable absorption and spectral filtering in one device, was achieved by forming a filter in an unpumped (without 975 nm pump light) rare-earth-doped fiber \cite{poozeshjlt,yehoe15,yehlpl4}. However, these saturable absorber filters were commonly used to realize continuous-wave narrow-bandwidth lasing, because it is difficult for rare-earth-doped fibers to meet the fiber saturable absorber (FSA) Q-switching criterion due to their small absorption cross-sections and low doping concentrations in the corresponding radiation bands \cite{tsaioe}. Tsai et al. proposed a method of {\it mismatch of mode field areas} to make the unpumped erbium-doped fiber (EDF) satisfy the Q-switching criterion $C_q>1$ or even $C_q>1.5$ \cite{oltsai}, but spectral filtering and narrow-bandwidth output were not involved in that laser.
In this work, we proposed a method to achieve an SDIG by inserting a segment of unpumped EDF between a circulator and an FBG. Theoretical analysis and experimental observations confirmed that both saturable absorption and spectral filtering can be realized simultaneously with such an SDIG. Further investigation showed that the FSA Q-switching criterion in our laser can be relaxed to $C_q=1$ due to the spectral filtering of the SDIG. In addition, the spectral width of the Q-switched pulses can be easily modulated by the length of the SDIG and the input pump power. The proposed configuration is quite efficient for generating Q-switched pulses with narrow bandwidths.
\section{Experimental configuration}
\begin{figure}[h!]
\centering\includegraphics[width=8.5cm]{fig1}
\caption{The schematic diagram of the all-fiber Q-switched laser. Inside the gray box is the SDIG.}
\label{fig1}
\end{figure}
\noindent The architecture of the proposed all-fiber Q-switched laser is depicted in Fig. \ref{fig1}. In the cavity, two pieces of EDF (Liekki, Er110-4/125) are utilized as the gain medium (with a length of 50 cm) and the SDIG, respectively. All the components are directly connected by single-mode fibers (SMF-28e), and the core/inner cladding diameters of the EDFs and SMFs are 4/125 $\mu$m and 9/125 $\mu$m, respectively. The gain medium is pumped through a 980/1550 nm wavelength division multiplexer (WDM) by a pigtailed diode laser emitting a continuous wave at 975 nm. When the light goes through a 30/70 optical coupler (OC), 30$\%$ of the energy is coupled out and 70$\%$ continues to propagate in the cavity. Then a three-port circulator (CIR) and a reflective FBG (98$\%$ reflectivity at the central wavelength of 1550 nm, with a 3 dB bandwidth of 0.5 nm) direct the light from port 1 to port 2, through the EDF2 to the FBG, where it is reflected back to port 2 and on to port 3. Finally, the light enters the WDM and finishes one roundtrip. The $\sim$10.3-m-long all-fiber cavity is compact and misalignment-free, and all the components are commercially available. For measuring the output pulses, a real-time digital storage oscilloscope (DSO, Agilent Technologies, DSO9104A) with a bandwidth of 2.5 GHz, an optical spectrum analyzer (OSA, YOKOGAWA, AQ6370C) and a radio frequency spectrum analyzer (Agilent Technologies, N9000A) are employed to monitor the pulse trains, optical spectra and radio frequency signals, respectively.
\begin{figure}[h!]
\centering\includegraphics[width=8.5cm]{fig2}
\caption{Characteristics of the SDIG. (a) Absorption and emission cross-sections of the EDF; (b) and (c) imaginary and real parts of the susceptibility versus normalized pump power from $q=0$ to $1$, respectively; (d) reflection bandwidth of the SDIG with respect to the length of the EDF and the refractive index change $\Delta n$ (inset).}
\label{fig2}
\end{figure}
\noindent It was demonstrated that a fiber with a high concentration of active ions can be used as a saturable absorber for generating a pulsed regime, owing to ion-cluster-induced nonradiative transitions~\cite{kurkovqe2010}. The doping concentration of the EDF in this work is $\rho=8\times 10^{25}\ \rm ions/m^3$, which will be verified in the experiment to be sufficient for an FSA. In the cavity, the coherence of the two counter-propagating lasing fields forms a standing-wave field between the circulator and the FBG. When the EDF2 absorbs the standing-wave field energy, a periodic spatial refractive index distribution is induced in the EDF2 due to the spatially selective saturation of the transition between the ground state and the excited state \cite{stepanovjpd}. Thus the SDIG is achieved. Figure \ref{fig2}(a) depicts the absorption and emission cross-sections of the EDF2. The pink region represents the optical spectrum at 1550$\pm$0.25 nm, which is limited by the FBG. In this region the emission cross-section is larger than the absorption cross-section, and nevertheless the EDF exhibits a saturable absorption characteristic. This result is at odds with the view that the absorption cross-section should be larger than the emission cross-section \cite{tsaioe,oltsai}. Besides, in this setup the saturable absorber Q-switching criterion is relaxed to $C_q=1$. We attribute this to the fact that the grating region of the SDIG reflects the light step by step, so that little energy reaches the back part of the SDIG. Thus, the back part of the SDIG still offers a saturable absorption effect at a power that would saturate FSAs without spectral filtering. In other words, the spectral filtering relaxes the Q-switching condition of the EDF. In the EDF2, the erbium ion transition occurs between the energy levels $^4\rm I_{15/2}$ and $^4\rm I_{13/2}$ if the incident light is limited to the 1550$\pm$0.25 nm region. Under this circumstance, the EDF can be regarded as a two-level system. Once the EDF2 absorbs light, the electric field of the light results in a change of the susceptibility, whose imaginary part $\chi''(\omega)$ is related to the absorption and emission cross-sections $\sigma_a$, $\sigma_e$ and the atomic population densities $N_1$, $N_2$; $\chi''(\omega)$ can be expressed as \cite{desurvire}
\begin{equation}
-\chi''(\omega)=\frac{n_{eff}c}{\omega}[\sigma_e(\omega)N_2-\sigma_a(\omega)N_1],
\end{equation}
where $n_{eff}=1.46$ is the refractive index of the EDF and $c$ represents the light speed in vacuum. The relationship of the real and imaginary parts of the atomic susceptibility is expressed by Kramers-Kronig relation (KKR)
\begin{equation}
\chi'(\omega)=\frac{1}{\pi}P.V.\int_{-\infty}^{+\infty}\frac{\chi''(\omega')}{\omega'-\omega}\rm d\omega',
\end{equation}
where $N_1=\rho/(1+q)$ and $N_2=\rho q/(1+q)$ describe the population densities at the two energy levels and $q$ is the normalized input power; $q=0$ and $q=1$ represent the EDF with no input power and in the saturation state, respectively. Figures \ref{fig2}(b) and (c) depict the imaginary and real parts of the susceptibility for different $q$. As $q$ increases, $\chi''(\omega)$ grows and the absorption rate of the EDF reduces gradually. Meanwhile, the reduced $\chi'(\omega)$ reflects the decrease in the refractive index change through $\delta n(\omega)=(\Gamma_s/2n)\chi'(\omega)$. In the EDF, the overlap factor is $\Gamma_s=0.5$. From Fig. \ref{fig2}(c), the refractive index change at 1550 nm is calculated as $2.89\times10^{-6}<\delta n<9.25\times10^{-6}$, corresponding to a maximum refractive index difference $\Delta n$ of the EDF of $6.36\times10^{-6}$.
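Note also that Eq.~(1) implies the net quantity $\sigma_e(\omega)N_2-\sigma_a(\omega)N_1$ vanishes when, using $N_2/N_1=q$, the normalized power reaches $q=\sigma_a(\omega)/\sigma_e(\omega)$; since $\sigma_e>\sigma_a$ in the 1550$\pm$0.25 nm band, this transparency point is reached below saturation ($q<1$), consistent with the saturable absorption behaviour described above.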
Inside the unpumped EDF, the formed DIG can be considered as a Bragg reflective grating \cite{stepanovjpd}. Thus its full-width at half-maximum (FWHM) bandwidth is described by \cite{zhangoe16}
\begin{equation}
\Delta \lambda=\lambda\kappa\sqrt{(\frac{\Delta n}{2 n_{eff}})^2+(\frac{\lambda}{2n_{eff}L_g})^2},
\end{equation}
where $\lambda$ and $L_g$ are the central wavelength of the light and the length of the EDF, respectively, and $\kappa=2\Delta n/(\lambda n_{eff})$ is the coupling coefficient of the DIG. The reflection bandwidth versus $L_g$ and $\Delta n$ is shown in Fig. \ref{fig2}(d): it decreases as the EDF lengthens and as $\Delta n$ (related to the input power of the pump source) becomes small. The marks represent the lengths and $\Delta n$ values of the EDFs used in this work. The reflection bandwidths $\Delta \lambda$ are calculated as 69.2 pm, 50.3 pm and 30.0 pm for EDF lengths of 7 cm, 10 cm and 20 cm, respectively. Apparently, saturable absorption and spectral filtering can both be achieved by the SDIG, so it can be used as a narrow-bandwidth SA.
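As a consistency check of these values, consider $L_g = 20$ cm with $\lambda = 1550$ nm, $n_{eff}=1.46$ and $\Delta n = 6.36\times 10^{-6}$: then $\kappa = 2\Delta n/(\lambda n_{eff}) \approx 5.6\ \rm m^{-1}$, $\Delta n/(2n_{eff}) \approx 2.2\times 10^{-6}$ and $\lambda/(2n_{eff}L_g) \approx 2.7\times 10^{-6}$, so that the FWHM expression above gives
$$\Delta\lambda \approx 1550\ {\rm nm} \times 5.6\ {\rm m^{-1}} \times \sqrt{(2.2)^2+(2.7)^2}\times 10^{-6} \approx 30\ \rm pm,$$
in agreement with the value quoted for the 20 cm EDF.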
\section{Experimental results}
\begin{figure}
\centering\includegraphics[width=8.5cm]{fig3}
\caption{Pulse characteristics including (a) average powers, (b) repetition frequencies, (c) pulse durations and (d) single pulse energies versus pump power.}
\label{fig3}
\end{figure}
\noindent In our experiment, we varied the pump power from 1 mW to 650 mW and measured the Q-switching performance in terms of the average powers, repetition frequencies, pulse durations and single pulse energies when EDF2 with lengths of 7 cm, 10 cm and 20 cm was spliced into the cavity one by one. As shown in Fig. \ref{fig3}, the laser operates in the Q-switching regime for all three lengths of EDF2 when the pump power is increased from the lasing threshold (70 mW, 100 mW and 250 mW for EDF2 lengths of 7 cm, 10 cm and 20 cm, respectively) to the maximum of 650 mW. This self-starting behaviour demonstrates the effectiveness and high efficiency of the SDIG for Q-switching. From Fig. \ref{fig3}(a), the average powers increase linearly from 0.56 mW, 0.62 mW and 5.83 mW up to 27.13 mW, 25.74 mW and 21.38 mW as the pump power is raised, and the slope efficiencies are 4.72$\%$, 4.44$\%$ and 3.83$\%$ for EDF2 lengths of 7 cm, 10 cm and 20 cm, respectively. The low slope efficiencies mainly originate from the high loss induced by the narrow reflection bandwidth of the SDIG. Furthermore, when the pump power is reduced, a bistable state appears, and the minimum emission powers obtained are 0.46 mW, 0.62 mW and 1.05 mW at pump powers of 63 mW, 80 mW and 120 mW, respectively. Over the same pump-power range, the repetition frequencies and single pulse energies increase while their growth rates decline [Figs. \ref{fig3}(b) and (d)]. The pulse durations shown in Fig. \ref{fig3}(c) first narrow and then gradually reach steady values as the pump power increases. Comparing the results in Fig. \ref{fig3}, we conclude that a longer SDIG induces a larger cavity loss, owing to its larger lasing absorption length and narrower bandwidth, leading to lower average powers, smaller slope efficiencies and repetition frequencies, larger single pulse energies and broader pulse durations. This deduction agrees with the prediction of Eq.~(3) and Fig. \ref{fig2}(d).
\begin{figure}
\centering\includegraphics[width=8.5cm]{fig4}
\caption{Typical output pulse performance: (a) pulse trains, (b) single pulse waveforms, (c) fundamental frequencies and (d) RF spectra at the pump power of 450 mW.}
\label{fig4}
\end{figure}
\begin{figure}
\centering\includegraphics[width=8.5cm]{fig5}
\caption{Optical spectra of the Q-switched laser with SDIG lengths of (a) 7 cm, (b) 10 cm and (c) 20 cm, respectively.}
\label{fig5}
\end{figure}
When the pump power is fixed at 450 mW, the experimental results, including the pulse intensities and radio frequency characteristics, for the three EDF2 lengths are recorded, as shown in Fig. \ref{fig4}. From Fig. \ref{fig4}(a), the pulse trains for the three lengths of EDF2 are all stable. The pulse intervals of 11.7 $\mu$s, 12.8 $\mu$s and 16.5 $\mu$s correspond to the repetition frequencies of 85.53 kHz, 78.05 kHz and 60.63 kHz in Fig. \ref{fig4}(d), respectively. Figure \ref{fig4}(b) shows the single pulse waveforms in the expanded time domain. Pulse durations of 1.14 $\mu$s, 1.25 $\mu$s and 1.75 $\mu$s are obtained through Gaussian fitting of the pulse data in the three cases. With a shorter EDF2, the noise on the Q-switched pulse envelope becomes more obvious, and thus the laser tends to be unstable; conversely, acquiring a purer Q-switched pulse requires a longer EDF2. The output performance in the frequency domain is depicted in Figs. \ref{fig4}(c) and (d): the fundamental frequencies (@BW: 2 kHz) show that the signal-to-noise ratios (SNRs) of the three Q-switched pulse trains exceed 50 dB. Besides, high-order harmonic signals exist in the frequency range from 0 to 1 MHz. Obviously, at a fixed pump power, a longer SDIG decreases the repetition frequency and broadens the pulse duration, which is consistent with the results in Fig. \ref{fig3}.
\begin{table*}
\centering
\caption{Comparison of Er-doped fiber lasers based on FSAs ($\sim$ denotes values estimated from the figures in the corresponding references).}
\begin{tabular}{ccccccc}
\hline
SAs & Average power (mW) & Repetition rate (kHz) & Central wavelength (nm) & Pulse duration ($\mu$s) & Spectral width (pm) & Refs \\
\hline
\hline
Tm & - & 0.1-6 & 1570 & 0.42 & - & \cite{oetsai}\\
Tm & 100-720 & 0.3-2 & 1580 & 0.1 & $\sim$1100 & \cite{lplkurkov}\\
Tm & 0.18-0.24 & 3.9-12.7 & 1557.6 & 7.4-20.6 & - & \cite{cpltiu}\\
Tm & 136 (Max) & 54.1-106.7 & 1560 & 3.28 & $\sim$200 & \cite{joprahman}\\
Tm & 1.57 & 14.45-78.49 & 1555.14, 1557.64 & 6.94-35.84 & - & \cite{oclatiff}\\
Tm-Ho & 0.6-1.1 & 1-15 & 1535-1573 & 8.2-10 & - & \cite{lptao}\\
Tm-Ho & $\sim$1-12.5 & 5.5-42 & 1557.5 & 7.8 (Min) & 63 & \cite{lpltao}\\
Tm-Ho & 27.61 (Max) & 11.6-57.14 & 1529.69, 1531.74, 1533.48 & 10.46-61.8 & - & \cite{ieeeanzueto}\\
Sm & 80 (Max) & $\sim$70.2 & 1550.0 & 0.45 (Min) & $\sim$50 & \cite{predaol}\\
Cr & 10.68 (Max) & 68.12-115.9 & 1558.5 & 3.85 (Min) & $\sim$200 & \cite{oftdutta}\\
Er & $\sim$2.07 & 0.5-1 & 1530 & 0.08-0.32 & - & \cite{oltsai}\\
Er & 0.56-27.13 & 17.94-118.79 & 1549.56 & 5.32-1.20 & 29.1 (Min) & This work\\
\hline
\end{tabular}
\label{table1}
\end{table*}
As for the optical spectra, their shapes and bandwidths as functions of the pump power are measured and shown in Fig. \ref{fig5}. The central wavelengths of the optical spectra are around 1549.6 nm, and the shapes remain almost unchanged when the pump power is altered. Under the bandwidth limitation provided by the SDIGs in EDF2, the FWHM bandwidth and the spectral structure for each length of EDF2 broaden with increasing pump power. Besides, when the EDF2 becomes longer, the spectral width obtained from the $\Delta\lambda$ value of the OSA narrows significantly. The largest spectral widths are 74.2 pm, 47.8 pm and 37.2 pm for SDIG lengths of 7 cm, 10 cm and 20 cm, respectively. The minimum spectral width of 29.1 pm is obtained when the length of the SDIG and the pump power are 20 cm and 250 mW, respectively. These results show that the bandwidth of the SDIG is narrower with a longer EDF2 and a lower pump power, which coincides with the theoretical analysis in the section above. Therefore, one can expect to realize narrower-bandwidth and even single-longitudinal-mode Q-switched pulses by employing a longer EDF2 together with a high-power pump source.
The pulse characteristics, comprising average power, repetition rate, central wavelength, pulse duration and spectral width, of several representative published Er-doped fiber lasers Q-switched by different FSAs (including Tm-doped, Tm/Ho-doped, Sm-doped, Cr-doped and Er-doped fibers) are displayed in Table~\ref{table1}. Clearly, the spectral width of this work is the narrowest among these lasers, indicating the effective filtering of the SDIG. Besides, the tunable ranges of the repetition rate and pulse duration of our laser are also wide, benefiting from the compact configuration.
\section{Conclusion}
We have achieved an Er-doped Q-switched fiber laser with narrow-bandwidth pulse emission based on a self-designed SDIG. Such an FSA-based SDIG can provide saturable absorption and spectral filtering simultaneously, which is efficient for realizing Q-switching operation in fiber lasers. Further results demonstrate that the spectral width of the output Q-switched pulses can be narrowed by increasing the length of the SDIG and reducing the pump power. The narrowest spectral width of 29.1 pm is achieved when the SDIG length and pump power are 20 cm and 250 mW, respectively. The theoretical and experimental results are in good agreement. Our method provides a promising way to obtain narrow-bandwidth Q-switched fiber lasers with low cost and compact size, which may exhibit significant potential in nonlinear frequency conversion, Doppler LIDAR and coherent beam combination.
\begin{acknowledgments}
This work is supported by National Nature Science Foundation of China (61905193); National Key R\&D Program of China (2017YFB0405102); Key Laboratory of Photoelectron of Education Committee Shaanxi Province of China (18JS113); Open Research Fund of State Key Laboratory of Transient Optics and Photonics (SKLST201805); Northwest University Innovation Fund for Postgraduate Students (YZZ17099).
\end{acknowledgments}
\nocite{*}
\section{Introduction}
\IEEEPARstart{W}{ith} the rapid development of computer technology, multi-dimensional data,
which is also known as tensors \cite{kolda2009tensor}, has received much attention
in various application fields, such as data mining \cite{kolda2008scalable, morup2011applications},
signal and image processing \cite{cichocki2007nonnegative, cichocki2015tensor, sidiropoulos2017tensor, zhang2019nonconvex},
and neuroscience \cite{mori2006principles}. Many underlying tensor data is
nonnegative due to their physical meaning such as the pixels of images.
An efficient approach to exploit the intrinsic structure of a nonnegative tensor is tensor factorization,
which can explore its hidden information.
Moreover, the underlying tensor data may also suffer from missing entries and noise corruption
during the acquisition process.
In this paper, we focus on the sparse nonnegative tensor factorization (NTF) and completion
problem from partial and noisy observations, where the observed entries are corrupted
by general noise such as additive Gaussian noise, additive Laplace noise, and Poisson observations.
Tensors arise in a variety of real-world applications that
can represent the multi-dimensional correlation of the underlying tensor data,
e.g., the spatial and spectral dimensions for hyperspectral images
and the spatial and time dimensions for video data.
In particular, for second-order tensors,
NTF reduces to nonnegative matrix factorization (NMF),
which can extract meaningful features
and has a wide variety of practical applications in scientific and engineering areas,
see \cite{ding2008convex, lee1999learning, gillis2020nonnegative, pan2019generalized, pan2018orthogonal}
and references therein.
Here the order of a tensor is the number of dimensions, also known as ways or modes \cite{kolda2009tensor}.
It has been demonstrated that NMF is
able to learn localized features with obvious interpretations \cite{lee1999learning}.
Moreover, Gillis \cite{gillis2012sparse} proposed a sparse NMF model with a sparse factor,
which provably led to optimal and sparse solutions under a separability assumption.
Gao et al. \cite{gao2005improving} showed that sparse NMF can
improve molecular cancer class discovery than the direct application of the basic NMF.
Zhi et al. \cite{zhi2010graph} also showed that sparse NMF provided
better facial representations and achieved higher recognition rates than NMF for facial expression recognition.
More applications about the advantages of sparse NMF over NMF can be referred to
\cite{gillis2010using, kim2007sparse, soltani2017tomographic}.
Besides, Soni et al. \cite{soni2016noisy} proposed a general class of
matrix completion tasks with noisy observations, which could reduce to
sparse NMF when the underlying factor matrices are nonnegative and all
entries of the noisy matrix are observed.
They showed that the error
bounds of estimators of sparse NMF are lower than those of NMF \cite{soni2016noisy}.
Furthermore, Sambasivan et al.
\cite{sambasivan2018minimax} derived the minimax lower bounds of the expected per-element square
error under general noise
observations.
By exploiting the intrinsic structure of the underlying tensor data,
which contains correlations across different modes,
NTF has also been widely applied in a variety of fields,
see, e.g., \cite{ chi2012tensors, hong2020generalized, morup2008algorithms, pan2021orthogonal, veganzones2015nonnegative}.
There are some popular NTF approaches,
such as nonnegative Tucker decomposition \cite{li2016mr},
nonnegative CANDECOMP/PARAFAC (CP) decomposition \cite{veganzones2015nonnegative},
nonnegative tensor train decomposition \cite{lee2016nonnegative},
which are motivated by different
applications; see also \cite{kolda2009tensor, vervliet2019exploiting}.
For example, Xu \cite{xu2015alternating} proposed an alternating proximal
gradient method for sparse nonnegative Tucker decomposition,
although it is only efficient for additive Gaussian noise.
Qi et al. \cite{qi2018muti} utilized Tucker decomposition to establish the
redundant basis of the space of multi-linear maps with the sparsity
constraint, and further proposed multi-dimensional synthesis/analysis sparse
models to represent multi-dimensional signals effectively and efficiently.
Moreover, M{\o}rup et al. \cite{morup2008algorithms}
showed that sparse nonnegative Tucker decomposition yields a
parts-based representation as seen in NMF for two-way data,
which is a simpler and more interpretable decomposition
than the standard nonnegative Tucker decomposition for multi-dimensional data.
Furthermore, they showed that sparse nonnegative
Tucker decomposition can help reduce ambiguities
by imposing constraints of sparseness in
the decomposition for model selection and component identification.
For nonnegative CP decomposition,
Veganzones et al. \cite{veganzones2015nonnegative} proposed
a novel compression-based nonnegative CP decomposition without
sparse constraints for blind spectral unmixing of hyperspectral images,
which was only utilized for the observations with additive Gaussian noise.
Kim et al.
\cite{kim2013sparse} proposed a sparse CP decomposition model,
which improved the analysis and inference of multi-dimensional data
for dimensionality reduction, feature selection as well as signal recovery.
Another kind of NTF is based on the recently proposed
tensor-tensor product \cite{Kilmer2011Factorization},
whose algebra operators have been proposed and studied
for third-order tensors \cite{Kilmer2011Factorization, Kilmer2013Third}
and then generalized to higher-order tensors \cite{martin2013order} and to the transformed tensor-tensor product \cite{song2020robust}.
Besides, Kilmer et al. \cite{Kilmer2011Factorization} established the framework of
tensor singular value decomposition (SVD).
This kind of tensor-tensor product and tensor SVD
has been applied in a great number of areas such as facial recognition \cite{hao2013facial},
tensor completion \cite{ng2020patch, corrected2019zhang, zhang2014novel,Zhang2017Exact, zhang2021low},
and image processing \cite{semerci2014tensor, zheng2020mixed}.
Recently, this kind of sparse NTF model has been
proposed and studied
on dictionary learning problems, e.g.,
tomographic image reconstruction \cite{soltani2016tensor},
image compression and image deblurring \cite{newman2019non}.
The sparse factor in this kind of NTF with tensor-tensor product is due to the sparse
representation of patched-dictionary elements for tensor dictionary learning \cite{soltani2016tensor}.
One needs to learn a nonnegative tensor patch dictionary from training data,
which amounts to solving a sparse NTF problem with tensor-tensor product. It was demonstrated that
the tensor-based dictionary
learning algorithm exhibits better performance
than the matrix-based method in terms of approximation accuracy.
However, there are no theoretical results on the error bounds of sparse nonnegative
tensor factorization models, and neither general noise settings nor missing values
have been studied in the literature.
In this paper, we propose a sparse NTF and completion model with tensor-tensor product
from partial and noisy observations for third-order tensors, where the observations
are corrupted by a general class of noise models.
The proposed model consists of a data-fitting term for the observations
and the tensor $\ell_0$ norm for the sparse factor,
where the two tensor factors operated by tensor-tensor product are nonnegative
and the data-fitting term is derived by the maximum likelihood estimate.
Theoretically, we show that the error
bounds of the estimator of the proposed model can be established under general noise observations.
The detailed error bounds under specific noise distributions including additive Gaussian noise,
additive Laplace noise, and Poisson observations can be derived.
Moreover, the minimax lower bounds are shown to be matched with the established upper bounds
up to a logarithmic factor of the sizes of the underlying tensor. These theoretical results for tensors
are better than those obtained for matrices \cite{soni2016noisy}, and this illustrates the advantage of the use of
nonnegative sparse tensor models for completion and denoising.
Then an alternating direction method of multipliers (ADMM) based algorithm \cite{Gabay1976A, wang2015global}
is developed to solve the general noise observation models.
Numerical examples are presented to
show that the performance of the proposed sparse NTF and completion method
is better than that of the matrix-based factorization \cite{soni2016noisy}.
The main contributions of this paper are summarized as follows.
(1) Based on tensor-tensor product, a sparse NTF and completion model
from partial and noisy observations
is proposed under general noise distributions.
(2) The upper bounds of the estimators of the proposed model are established under general noise observations.
Then the upper bounds are specialized to the widely used noise observations including additive Gaussian noise,
additive Laplace noise, and Poisson observations.
(3) The minimax lower bounds are derived for the previous noise observations,
which match the upper bounds with a logarithmic factor for different noise models.
(4) An ADMM based algorithm is developed to solve the resulting model.
And numerical experiments are presented to demonstrate the effectiveness of the proposed tensor-based method
compared with the matrix-based method in \cite{soni2016noisy}.
The rest of this paper is organized as follows.
Some notation and notions are provided in Section \ref{Prelim}.
We propose a sparse NTF and completion model based on tensor-tensor product
from partial and noisy observations in Section \ref{ProMod},
where the observations are corrupted by a general class of noise.
In Section \ref{upperbound}, the upper bounds of estimators of the proposed model are established,
which are specialized to three widely used noise models
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
Then the minimax lower bounds are also derived for the previous
observation models in Section \ref{lowerbou}.
An ADMM based algorithm is developed to solve the resulting model in Section \ref{OptimAlg}.
Numerical experiments are reported to validate
the effectiveness of the proposed method in Section \ref{NumeriExper}.
Finally, the conclusions and future work are given in Section \ref{Conclu}.
All proofs of the theoretical results are deferred
to the appendix.
\section{Preliminaries}\label{Prelim}
Throughout this paper, $\mathbb{R}$
denotes the set of real numbers, and
$\mathbb{R}_+^{n_1\times n_2\times n_3}$ denotes the space of $n_1\times n_2\times n_3$ tensors
whose entries are all nonnegative.
Scalars are represented by lowercase letters, e.g., $x$.
Vectors and matrices are represented by
lowercase boldface letters and uppercase boldface letters, respectively,
e.g., $\mathbf{x}$ and $\mathbf{X}$.
Tensors are denoted by capital Euler script letters, e.g., $\mathcal{X}$.
The $(i,j,k)$th entry of a tensor $\mathcal{X}$ is denoted as $\mathcal{X}_{ijk}$.
The $i$th frontal slice of a tensor $\mathcal{X}$, denoted by $\mathbf{X}^{(i)}$,
is the matrix obtained by fixing the third index of $\mathcal{X}$ at $i$ and varying the first two indices.
The $\ell_2$ norm of a vector $\mathbf{x}\in\mathbb{R}^{n}$,
denoted by $\|\mathbf{x}\|$, is defined as $\|\mathbf{x}\|=\sqrt{\sum_{i=1}^{n}x_i^2}$,
where $x_i$ is the $i$th entry of $\mathbf{x}$.
The tensor $\ell_\infty$ norm of a tensor
$\mathcal{X}$ is defined as $\|\mathcal{X}\|_\infty=\max_{i,j,k}|\mathcal{X}_{ijk}|$.
The tensor $\ell_0$ norm of $\mathcal{X}$, denoted by $\|\mathcal{X}\|_0$, is defined as the count of all nonzero entries of $\mathcal{X}$.
The inner product of two tensors $\mathcal{X}, \mathcal{Y}\in\mathbb{R}^{n_1\times n_2\times n_3}$ is defined as $\langle \mathcal{X}, \mathcal{Y} \rangle=\sum_{i=1}^{n_3}\langle \mathbf{X}^{(i)}, \mathbf{Y}^{(i)} \rangle$, where $\langle \mathbf{X}^{(i)}, \mathbf{Y}^{(i)} \rangle=tr((\mathbf{X}^{(i)})^T\mathbf{Y}^{(i)})$.
Here $\cdot^T$ and $tr(\cdot)$ denote the transpose and the trace of a matrix, respectively.
The tensor Frobenius norm of $\mathcal{X}$ is defined as $\|\mathcal{X}\|_F=\sqrt{\langle \mathcal{X},\mathcal{X} \rangle}$.
Let $p_{x_1}(y)$ and $p_{x_2}(y)$ be the
probability density functions or probability mass functions
of a random variable $y$ with parameters $x_1$ and $x_2$, respectively.
The Kullback-Leibler (KL) divergence between $p_{x_1}(y)$ and $p_{x_2}(y)$ is defined as
$$
D(p_{x_1}||p_{x_2})=\mathbb{E}_{p_{x_1}}\left[\log\frac{p_{x_1}(y)}{p_{x_2}(y)}\right].
$$
The Hellinger affinity between $p_{x_1}(y)$ and $p_{x_2}(y)$ is defined as
$$
H(p_{x_1}||p_{x_2})
=\mathbb{E}_{p_{x_1}}\left[\sqrt{\frac{p_{x_2}(y)}{p_{x_1}(y)}}\right]
=\mathbb{E}_{p_{x_2}}\left[\sqrt{\frac{p_{x_1}(y)}{p_{x_2}(y)}}\right].
$$
For higher-order and multi-dimensional variables, the joint distributions, denoted
by $p_{\mathcal{X}_1}(\mathcal{Y})$ and $p_{\mathcal{X}_2}(\mathcal{Y})$,
are the joint distributions of the vectorizations of the tensors.
The KL divergence between $p_{\mathcal{X}_1}(\mathcal{Y})$ and $p_{\mathcal{X}_2}(\mathcal{Y})$ is then defined as
$$
D(p_{\mathcal{X}_1}(\mathcal{Y})||p_{\mathcal{X}_2}(\mathcal{Y}))
:=\sum_{i,j,k}D(p_{(\mathcal{X}_1)_{ijk}}(\mathcal{Y}_{ijk})||p_{(\mathcal{X}_2)_{ijk}}(\mathcal{Y}_{ijk})),
$$
and the corresponding Hellinger affinity is defined as
$$
H(p_{\mathcal{X}_1}(\mathcal{Y})||p_{\mathcal{X}_2}(\mathcal{Y}))
:=\prod_{i,j,k}H(p_{(\mathcal{X}_1)_{ijk}}(\mathcal{Y}_{ijk})||p_{(\mathcal{X}_2)_{ijk}}(\mathcal{Y}_{ijk})).
$$
Now we define the tensor-tensor product between two third-order tensors \cite{Kilmer2011Factorization}.
\begin{definition}\label{TenTensPro}
\cite[Definition 3.1]{Kilmer2011Factorization}
Let $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$
and $\mathcal{Y}\in\mathbb{R}^{n_2\times n_4\times n_3}$.
The tensor-tensor product, denoted as $\mathcal{X}\diamond\mathcal{Y}$,
is an $n_1\times n_4\times n_3$ tensor defined by
$$
\mathcal{X}\diamond\mathcal{Y}:=
\textup{Fold}\left(\textup{Circ}(\textup{Unfold}(\mathcal{X}))\cdot \textup{Unfold}(\mathcal{Y})\right),
$$
where
$$
\textup{Unfold}(\mathcal{X})=\begin{pmatrix} \mathbf{X}^{(1)} \\ \mathbf{X}^{(2)}
\\ \vdots \\ \mathbf{X}^{(n_3)} \end{pmatrix}, \
\textup{Fold}\begin{pmatrix} \mathbf{X}^{(1)} \\ \mathbf{X}^{(2)}
\\ \vdots \\ \mathbf{X}^{(n_3)} \end{pmatrix}=\mathcal{X}, \
\textup{Circ}\begin{pmatrix} \mathbf{X}^{(1)} \\ \mathbf{X}^{(2)}
\\ \vdots \\ \mathbf{X}^{(n_3)} \end{pmatrix}=
\begin{pmatrix}
\mathbf{X}^{(1)} & \mathbf{X}^{(n_3)} & \cdots & \mathbf{X}^{(2)} \\
\mathbf{X}^{(2)} & \mathbf{X}^{(1)} & \cdots & \mathbf{X}^{(3)}\\
\vdots & \vdots & & \vdots \\
\mathbf{X}^{(n_3)} &\mathbf{X}^{(n_3-1)}&\cdots & \mathbf{X}^{(1)} \end{pmatrix}.
$$
\end{definition}
Due to the block circulant structure,
the tensor-tensor product of two third-order tensors
can be computed efficiently via the fast Fourier transform (FFT) \cite{Kilmer2011Factorization}.
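For concreteness, the following NumPy sketch illustrates this FFT-based implementation (an illustrative sketch only; the function name \texttt{tprod} is ours and this is not the authors' code):
\begin{verbatim}
import numpy as np

def tprod(A, B):
    # t-product of A (n1 x r x n3) with B (r x n2 x n3) via FFT along
    # the third dimension; equivalent to the block-circulant definition
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    # frontal-slice-wise matrix products in the Fourier domain
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))
\end{verbatim}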
\begin{definition}\cite{Kilmer2011Factorization}
The transpose of a tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$,
is the tensor $\mathcal{X}^T\in\mathbb{R}^{n_2\times n_1\times n_3}$ obtained by transposing each of the frontal
slices and then reversing the order of transposed frontal slices 2 through $n_3$, i.e.,
$$
(\mathcal{X}^T)^{(1)} = (\mathbf{X}^{(1)})^T, \ (\mathcal{X}^T)^{(i)} = (\mathbf{X}^{(n_3+2-i)})^T, \ i=2,\ldots, n_3.
$$
\end{definition}
\begin{definition}\cite[Definition 3.4]{Kilmer2011Factorization}
An $n\times n\times m$ identity tensor $\mathcal{I}$ is the tensor whose
first frontal slice is the $n\times n$ identity matrix, and whose other frontal slices are all zeros.
\end{definition}
\begin{definition}\cite[Definition 3.5]{Kilmer2011Factorization}
A tensor $\mathcal{A}\in\mathbb{R}^{n\times n\times m}$ is said to have an inverse, denoted by $\mathcal{A}^{-1}\in\mathbb{R}^{n\times n\times m}$, if $\mathcal{A}\diamond\mathcal{A}^{-1}=\mathcal{A}^{-1}\diamond\mathcal{A}=\mathcal{I}$, where $\mathcal{I}\in\mathbb{R}^{n\times n\times m}$ is the identity tensor.
\end{definition}
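In the same illustrative style, the transpose, identity, and inverse can be sketched as follows (names are ours; \texttt{tinv} assumes that every Fourier-domain frontal slice is invertible):
\begin{verbatim}
def ttrans(A):
    # transpose each frontal slice, then reverse slices 2 through n3
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

def tid(n, n3):
    # identity tensor: first frontal slice is the identity matrix
    I = np.zeros((n, n, n3))
    I[:, :, 0] = np.eye(n)
    return I

def tinv(A):
    # invert each frontal slice in the Fourier domain
    Af = np.fft.fft(A, axis=2)
    If = np.stack([np.linalg.inv(Af[:, :, k]) for k in range(A.shape[2])],
                  axis=2)
    return np.real(np.fft.ifft(If, axis=2))
\end{verbatim}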
The proximal mapping of a closed proper function $f:\mathfrak{C}\rightarrow (-\infty, +\infty]$ is defined as
$$
\textup{Prox}_{f}(y)=\arg\min_{x\in\mathfrak{C}}\left\{f(x)+\frac{1}{2}\|x-y\|^2\right\},
$$
where $\mathfrak{C}$ is a finite-dimensional Euclidean space.
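For example, for the indicator function $\delta_{[0,c]}(x)$, which is $0$ if $x\in[0,c]$ and $+\infty$ otherwise, the proximal mapping reduces to the projection onto $[0,c]$, i.e., $\textup{Prox}_{\delta_{[0,c]}}(y)=\min\{\max\{y,0\},c\}$; proximal mappings of this type appear in the updates of the algorithm in Section \ref{OptimAlg}.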
Next we provide a brief summary of the notation used throughout this paper.
\begin{itemize}
\item $\lfloor x\rfloor$ is the integer part of $x$, and
$\lceil x\rceil$ is the smallest integer that is larger than or equal to $x$.
\item Denote $m\vee n=\max\{m,n\}$ and $m\wedge n=\min\{m,n\}$.
\end{itemize}
\section{Sparse NTF and Completion via Tensor-Tensor Product}\label{ProMod}
Let $\mathcal{X}^*\in\mathbb{R}_+^{n_1\times n_2\times n_3}$ be an unknown nonnegative tensor that we aim to estimate,
which admits the following nonnegative factorization:
$$
\mathcal{X}^*=\mathcal{A}^* \diamond \mathcal{B}^*,
$$
where $\mathcal{A}^*\in\mathbb{R}_+^{n_1\times r\times n_3}$ and
$\mathcal{B}^*\in\mathbb{R}_+^{r\times n_2\times n_3}$
are a priori unknown factor tensors with $r\leq \min\{n_1,n_2\}$.
We assume that each entry of $\mathcal{X}^*, \mathcal{A}^*,
\mathcal{B}^*$ is bounded, i.e.,
$$
0\leq \mathcal{X}_{ijk}^*\leq \frac{c}{2}, \ \ \
0\leq \mathcal{A}_{ijk}^*\leq 1, \ \ \ 0\leq \mathcal{B}_{ijk}^*\leq b, \ \ \ \forall \ i,j,k,
$$
where the bound $\frac{c}{2}$ is used for simplicity of the subsequent analysis.
We remark that the upper bound $1$ on the entries $\mathcal{A}_{ijk}^*$ of $\mathcal{A}^*$ is not restrictive,
since any rescaling of $\mathcal{A}^*$ can be absorbed into $\mathcal{B}^*$.
Moreover, our focus is on the case where the factor tensor $\mathcal{B}^*$ is sparse.
However,
only a noisy and incomplete version of the underlying tensor $\mathcal{X}^*$ is available in practice.
Let $\Omega\subseteq\{1,2,\ldots, n_1\}\times \{1,2,\ldots, n_2\}\times \{1,2,\ldots, n_3\}$
be a subset at which the entries of the observations $\mathcal{Y}$ are collected.
Let $\mathcal{Y}_{\Omega}\in\mathbb{R}^m$ denote the vector obtained by stacking
the entries of $\mathcal{Y}$ indexed by $\Omega$ in lexicographic order,
where $m$ is the number of observed entries.
Assume that $n_1, n_2, n_3\geq 2$ throughout this paper.
Suppose that the location
set $\Omega$ is generated according to an independent
Bernoulli model with probability $\gamma=\frac{m}{n_1n_2n_3}$,
i.e., each index $(i,j,k)$ belongs to $\Omega$ with probability $\gamma$, which is denoted as $\Omega\sim \text{Bern}(\gamma)$.
Mathematically, the joint probability density function
or probability mass function of observations $\mathcal{Y}_\Omega$ is given by
\begin{equation}\label{obserPo}
p_{\mathcal{X}_\Omega^*}(\mathcal{Y}_{\Omega})
:=\prod_{(i,j,k)\in \Omega}p_{\mathcal{X}_{ijk}^*}(\mathcal{Y}_{ijk}).
\end{equation}
Based on the maximum likelihood estimate, we propose the following sparse NTF and completion model with nonnegative constraints:
\begin{equation}\label{model}
\widetilde{\mathcal{X}}^{\lambda}\in\arg\min_{\mathcal{X}=\mathcal{A} \diamond \mathcal{B}\in\Gamma}\left\{-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{B}\|_0\right\},
\end{equation}
where $\lambda>0$ is the regularization parameter and $\Gamma$ is defined by
\begin{equation}\label{TauSet}
\Gamma:=\{\mathcal{X}=\mathcal{A} \diamond \mathcal{B}:
\ \mathcal{A}\in\mathfrak{L}, \ \mathcal{B}\in\mathfrak{D}, \ 0\leq \mathcal{X}_{ijk}\leq c \}.
\end{equation}
Here $\Gamma$ is a countable set of estimates constructed as follows.
First, let
\begin{equation}\label{denu}
\vartheta:=2^{\lceil\beta\log_2(n_1\vee n_2)\rceil}
\end{equation}
for a specified $\beta\geq 3$.
Then we construct $\mathfrak{L}$ to be the set
of all tensors $\mathcal{A}\in\mathbb{R}_+^{n_1\times r\times n_3}$
whose entries are discretized to one of $\vartheta$
uniformly sized bins in the range $[0,1]$,
and $\mathfrak{D}$
to be the set of all tensors $\mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}$
whose entries either take the value $0$, or are discretized to
one of $\vartheta$ uniformly sized bins in the range $[0,b]$.
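As a rough illustration of this construction (an assumption on our part: we read ``discretized to one of $\vartheta$ uniformly sized bins'' as rounding to the nearest of $\vartheta$ uniformly spaced levels), one could proceed as follows:
\begin{verbatim}
import numpy as np

n1, n2, beta = 100, 100, 3  # example sizes and beta (ours)
theta = 2 ** int(np.ceil(beta * np.log2(max(n1, n2))))  # eq. (denu)

def quantize(T, hi):
    # snap each entry of T to the nearest of theta uniform levels
    # in [0, hi]; zero entries remain exactly zero
    return np.rint(T / hi * (theta - 1)) / (theta - 1) * hi
\end{verbatim}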
\begin{remark}
When all entries of $\mathcal{Y}$ are observed and $\mathcal{Y}$ is corrupted by additive Gaussian noise,
the model (\ref{model}) reduces to sparse NTF with tensor-tensor product,
whose relaxation, in which the tensor $\ell_0$ norm is replaced by the tensor $\ell_1$ norm,
has been applied to patch-based dictionary learning for image data \cite{soltani2016tensor, newman2019non}.
\end{remark}
\begin{remark}
We do not specialize the noise model in (\ref{model}); only the joint
probability density function
or probability mass function of the observations in (\ref{obserPo}) is required.
In particular, our model can handle observations with widely used noise distributions,
such as additive Gaussian noise, additive Laplace noise, and Poisson observations.
\end{remark}
\section{Upper Bounds}\label{upperbound}
In this section, we establish a general upper error
bound of the sparse NTF and completion model from partial observations under a general class of noise in (\ref{model}),
and then derive the upper bounds of the special noise models
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
Now we establish the upper error bound of the
estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}),
whose proof follows the line of the proof of \cite[Theorem 1]{soni2016noisy},
see also \cite[Theorem 3]{raginsky2010compressed}.
The key technique of the proof is the well-known Kraft-McMillan inequality \cite{Brockway1957Two, kraft1949device}.
We then construct a penalty for the underlying tensor $\mathcal{X}$ based on
the tensor-tensor product of two nonnegative factor tensors, one of which is sparse.
\begin{theorem}\label{maintheo}
Suppose that
$\kappa\geq \max_{\mathcal{X}\in\Gamma}\max_{i,j,k} D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})$.
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$ and $4\leq m\leq n_1n_2n_3$.
Then, for any
$
\lambda\geq 4(\beta+2)\left( 1+\frac{2\kappa}{3}\right) \log\left((n_1\vee n_2)\sqrt{n_3}\right)
$,
the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
&~ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}[-2\log H(p_{\widetilde{\mathcal{X}}^{\lambda}},p_{\mathcal{X}^*})]}{n_1n_2n_3}\\
\leq & \ 3\min_{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\Gamma}\left\lbrace
\frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+
\left( \lambda+\frac{8\kappa(\beta+2) \log\left((n_1\vee n_2)\sqrt{n_3}\right)}{3}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace \\
& \ +\frac{8\kappa\log(m)}{m}.
\end{split}
\]
\end{theorem}
The detailed proof of Theorem \ref{maintheo} is left to Appendix \ref{ProoA}.
From Theorem \ref{maintheo}, we can observe that the
upper bound of $\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[-2\log H(p_{\widetilde{\mathcal{X}}^\lambda},p_{\mathcal{X}^*})\right]}{n_1n_2n_3}$ is of the order of $O(
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m}\log(n_1\vee n_2))$
if the KL divergence $D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})$
is not too large in the set $\Gamma$.
The explicit upper bounds with respect to $D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})$ in $\Gamma$ and $\kappa$
will be given for the observations with special noise distributions.
\begin{remark}
For the upper error bounds of estimators of observations with special noise distributions,
the main difference of proofs between the matrix case \cite{soni2016noisy} and the tensor case is to establish the upper bound of $\min_{\mathcal{X}\in\Gamma}
\|\mathcal{X}^*-\mathcal{X}\|_F^2$, where $\Gamma$ is defined as (\ref{TauSet}).
We need to estimate this bound based on the tensor-tensor product structure $\mathcal{X}=\mathcal{A} \diamond \mathcal{B}\in \Gamma$,
which can be obtained by Lemma \ref{xxappr}.
The key issue in Lemma \ref{xxappr} is to construct the surrogates of entries of the two factor tensors $\mathcal{A}^*, \mathcal{B}^*$ in the set $\Gamma$,
where $\mathcal{X}^*=\mathcal{A}^*\diamond \mathcal{B}^*$.
\end{remark}
In the following subsections, we establish the upper error bounds of the estimators for the observations with three special noise models,
including additive Gaussian noise,
additive Laplace noise, and Poisson observations.
By Theorem \ref{maintheo},
the main steps of the proofs for the special noise models are to establish a lower bound on $-2\log(H(p_{\mathcal{X}^*},p_{\widetilde{\mathcal{X}}^\lambda}))$
and an upper bound on $\min_{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\Gamma}D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})$, respectively.
Before deriving the upper error bounds of the observations with special noise models,
we fix the choices of $\beta$ and $\lambda$ based on Theorem \ref{maintheo}, which are defined as follows:
\begin{equation}\label{beta}
\beta=\max\left\{3,1+\frac{\log(3rn_3^{1.5}b/c)}{\log(n_1\vee n_2)}\right\},
\end{equation}
and
\begin{equation}\label{lambda}
\lambda=4(\beta+2)\left( 1+\frac{2\kappa}{3}\right) \log\left(n_1\vee n_2\right).
\end{equation}
\subsection{Additive Gaussian Noise}
Assume that each entry of the underlying tensor
is corrupted by independent additive zero-mean Gaussian noise with standard deviation $\sigma>0$,
that is,
\begin{equation}\label{Gauyom}
\mathcal{Y}_{ijk}=\mathcal{X}_{ijk}^*+\sigma\epsilon_{ijk},
\end{equation}
where the $\epsilon_{ijk}$ are independent standard normal random variables (i.e., $\epsilon_{ijk}\sim N(0,1)$) for any $(i,j,k)\in\Omega$.
Then the observation $\mathcal{Y}_\Omega$ can be regarded as a vector and
its joint probability density function in (\ref{obserPo}) is given as
\begin{equation}\label{YomeGasu}
p_{\mathcal{X}_\Omega^*}(\mathcal{Y}_{\Omega})
=\frac{1}{(2\pi\sigma^2)^{|\Omega|/2}}\exp\left(-\frac{1}{2\sigma^2}\|\mathcal{Y}_\Omega-\mathcal{X}_\Omega^*\|^2\right),
\end{equation}
where $|\Omega|$ denotes the cardinality of $\Omega$, i.e., $|\Omega|=m$.
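Substituting (\ref{YomeGasu}) into (\ref{model}) shows that, up to an additive constant independent of $\mathcal{X}$, the data-fitting term reduces to a least-squares loss:
$$
-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})
=\frac{1}{2\sigma^2}\|\mathcal{Y}_\Omega-\mathcal{X}_\Omega\|^2+\frac{|\Omega|}{2}\log(2\pi\sigma^2),
$$
so that in this case (\ref{model}) is an $\ell_0$-regularized nonnegative least-squares problem.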
Now we establish the explicit upper error bound of the estimator in (\ref{model}) with the observations $\mathcal{Y}_{\Omega}$ satisfying (\ref{Gauyom}).
\begin{Prop}\label{Gauuupp}
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$ and $4\leq m\leq n_1n_2n_3$.
Assume that $\beta$ and $\lambda$ are defined as (\ref{beta}) and (\ref{lambda}), respectively,
where $\kappa=\frac{c^2}{2\sigma^2}$ in (\ref{lambda}).
Suppose that $\mathcal{Y}_{\Omega}$ satisfies (\ref{Gauyom}).
Then the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}
\leq \frac{22c^2\log(m)}{m} + 16(3\sigma^2+2c^2)(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log(n_1\vee n_2) .
\end{split}
\]
\end{Prop}
The detailed proof of Proposition \ref{Gauuupp} is left to Appendix \ref{ProoB}.
From Proposition \ref{Gauuupp}, we can see that the upper bound of
$\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}$
for the observations with additive Gaussian noise
is of the order $O(
(\sigma^2+c^2)(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m})\log(n_1\vee n_2))$.
Now we compare with the matrix-based method in \cite[Corollary 3]{soni2016noisy},
which ignores the intrinsic structure of a tensor.
Note that we cannot compare with the matrix-based method directly since the underlying data is in tensor form.
However,
we can stack the frontal slices of the underlying tensor (with size $n_1\times n_2\times n_3$)
into a matrix of size $n_1n_3\times n_2$.
In this case, the estimator $\mathcal{X}_1$ obtained
by the matrix-based method in \cite[Corollary 3]{soni2016noisy} satisfies
\begin{equation}\label{MBMGN}
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\mathcal{X}_1-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}
= O\left((\sigma^2+c^2)\left(\frac{\widetilde{r}n_1n_3
+\|\mathcal{B}^*\|_0}{m}\right)\log((n_1n_3)\vee n_2)\right),
\end{equation}
where $\widetilde{r}$ is the rank of the resulting matrix.
In particular, we choose $\widetilde{r}$ in the matrix-based method to be the same as $r$ in the tensor-based method with tensor-tensor product.
In real-world applications, $n_1n_3>n_2$ in general.
For example, if $n_3$ denotes the frame
in video datasets or spectral dimensions in hyperspectral image datasets, $n_3$ is large.
Therefore, if $n_1n_3>n_2$, the upper error bound of the matrix-based method in (\ref{MBMGN})
is larger than that of the tensor-based method in Proposition \ref{Gauuupp}.
In particular, when $n_1=n_2$, the logarithmic factor
in Proposition \ref{Gauuupp} is $\log(n_1)$, while it is $\log(n_1n_3)=\log(n_1)+\log(n_3)$
in (\ref{MBMGN}).
\begin{remark}
We also compare the upper error bound with that of the noisy tensor completion in \cite{wang2019noisy},
which did not consider the sparse factor.
The upper error bound of the estimator $\mathcal{X}_t$ in \cite[Theorem 1]{wang2019noisy} satisfies
\begin{equation}\label{upbtc}
\frac{\|\mathcal{X}_t-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\leq C_t(\sigma^2\vee c^2)\left(\frac{r\max\{n_1,n_2\}n_3}{m}\right)\log((n_1+n_2)n_3)
\end{equation}
with high probability, where $C_t>0$ is a constant.
We note that the upper error bound of our method can potentially be improved
when $n_2>n_1$ and $\mathcal{B}^*$ is sparse.
In fact, the upper bound in Proposition \ref{Gauuupp} is of the order $O(\frac{rn_1n_3}{m}\log(n_2))$, while the
upper bound in \cite{wang2019noisy} is of the order $O(\frac{rn_2 n_3}{m}\log((n_1+n_2)n_3))$.
Moreover, the two upper bounds roughly coincide except for the logarithmic factor
when $\mathcal{B}^*$ is not sparse, i.e., $\|\mathcal{B}^*\|_0=rn_2n_3$.
However, when $n_1\geq n_2$, the improvement of the upper bound of Proposition \ref{Gauuupp} is mainly in
the logarithmic factor, which is much smaller than that of (\ref{upbtc}).
\end{remark}
\begin{remark}
From Proposition \ref{Gauuupp}, we know that the upper error bound decreases when the number of observations increases.
In particular,
when we observe all entries of $\mathcal{Y}$, i.e., $m=n_1n_2n_3$,
Proposition \ref{Gauuupp} gives an upper error bound for the sparse NTF model
with tensor-tensor product in \cite{newman2019non, soltani2016tensor},
which has been used to construct tensor patch dictionary priors for CT and facial images, respectively.
This shows that the upper error bound of sparse NTF with tensor-tensor
product in \cite{newman2019non, soltani2016tensor} is lower than that of
sparse NMF in theory, whereas Soltani et al. \cite{soltani2016tensor} only
showed experimentally that sparse NTF with tensor-tensor product performs better than sparse NMF.
\end{remark}
\subsection{Additive Laplace Noise}
Suppose that each entry of the underlying tensor
is corrupted by independent additive Laplace noise with
location parameter zero and diversity $\tau>0$ (denoted by Laplace($0,\tau$)),
that is,
\begin{equation}\label{Lapayom}
\mathcal{Y}_{ijk}=\mathcal{X}_{ijk}^*+\epsilon_{ijk},
\end{equation}
where $\epsilon_{ijk}\sim$ Laplace($0,\tau$) for any $(i,j,k)\in\Omega$.
Then the joint probability density function of the observation $\mathcal{Y}_\Omega$
is given by
\begin{equation}\label{lapnoid}
p_{\mathcal{X}_{\Omega}^*}(\mathcal{Y}_{\Omega})
=\left(\frac{1}{2\tau}\right)^{|\Omega|}\exp\left(-\frac{\|\mathcal{Y}_{\Omega}-\mathcal{X}_{\Omega}^*\|_1}{\tau}\right).
\end{equation}
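Analogously to the Gaussian case, up to an additive constant, the data-fitting term in (\ref{model}) becomes the least-absolute-deviations loss
$$
-\log p_{\mathcal{X}_{\Omega}}(\mathcal{Y}_{\Omega})
=\frac{1}{\tau}\|\mathcal{Y}_{\Omega}-\mathcal{X}_{\Omega}\|_1+|\Omega|\log(2\tau),
$$
which, in contrast to the Gaussian case, is nonsmooth.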
Now we establish the upper error bound of estimators in (\ref{model})
for the observations with additive Laplace noise.
\begin{Prop}\label{AddErp}
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$ and $4\leq m\leq n_1n_2n_3$.
Assume that $\mathcal{Y}_{\Omega}$ obeys (\ref{Lapayom}).
Let $\beta$ and $\lambda$ be defined as (\ref{beta})
and (\ref{lambda}), respectively, where $\kappa= \frac{c^2}{2\tau^2}$ in (\ref{lambda}).
Then the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
&~ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{3c^2(2\tau+c)^2\log(m)}{m\tau^2}+2\left(3+\frac{c^2}{\tau^2}\right)(2\tau+c)^2(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log\left(n_1\vee n_2\right).
\end{split}
\]
\end{Prop}
The detailed proof of Proposition \ref{AddErp} is deferred to Appendix \ref{ProoC}.
Similar to the case of observations with additive Gaussian noise, we compare
the upper error bound in Proposition \ref{AddErp} with that of \cite[Corollary 5]{soni2016noisy},
which satisfies
\begin{equation}\label{LapMaxm}
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\mathcal{X}_2-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}=O\left((\tau+c)^2\tau c \left(\frac{\widetilde{r}n_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log((n_1n_3)\vee n_2)\right),
\end{equation}
where $\mathcal{X}_2$ is the estimator by the matrix-based method and
$\widetilde{r}$ is the rank of the resulting matrix by matricizing the underlying tensor.
Therefore, the difference between the upper error bounds
in Proposition \ref{AddErp} and \cite[Corollary 5]{soni2016noisy}
lies mainly in the logarithmic factor.
If $n_1n_3>n_2$, which holds in various real-world scenarios,
the logarithmic factor in (\ref{LapMaxm}) is $\log(n_1n_3)$,
while it is $\log(n_1\vee n_2)$ in Proposition \ref{AddErp}.
In particular, when $n_1=n_2$,
the logarithmic factor in (\ref{LapMaxm}) is $\log(n_1n_3)$,
while it is $\log(n_1)$ in Proposition \ref{AddErp}.
\subsection{Poisson Observations}
Suppose that each entry of $\mathcal{Y}_\Omega$ follows a Poisson distribution,
i.e.,
\begin{equation}\label{Posyijk}
\mathcal{Y}_{ijk}=\text{Poisson}(\mathcal{X}_{ijk}^*), \ \ \forall \ (i,j,k)\in\Omega,
\end{equation}
where $y=\text{Poisson}(x)$ means that $y$ follows a Poisson distribution
with parameter $x>0$; the $\mathcal{Y}_{ijk}$ are independent and $\mathcal{X}_{ijk}^*>0$.
The joint probability mass function of $\mathcal{Y}_\Omega$ is given as follows:
\begin{equation}\label{Poissobse}
p_{\mathcal{X}_\Omega^*}(\mathcal{Y}_{\Omega})
=\prod_{(i,j,k)\in\Omega}\frac{(\mathcal{X}_{ijk}^*)^{\mathcal{Y}_{ijk}}\exp(-\mathcal{X}_{ijk}^*)}{\mathcal{Y}_{ijk}!}.
\end{equation}
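In this case, the data-fitting term in (\ref{model}) takes the form
$$
-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})
=\sum_{(i,j,k)\in\Omega}\left(\mathcal{X}_{ijk}-\mathcal{Y}_{ijk}\log \mathcal{X}_{ijk}\right)
+\sum_{(i,j,k)\in\Omega}\log\left(\mathcal{Y}_{ijk}!\right),
$$
where the second sum does not depend on $\mathcal{X}$.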
Now we establish the upper error bound of estimators in (\ref{model}) for observations obeying (\ref{Posyijk}),
which is mainly based on Theorem \ref{maintheo}.
The key step is to give the upper bound of $D(p_{\mathcal{X}^*}||p_{\mathcal{X}})$.
\begin{Prop}\label{uppPoissobs}
Let $\Omega\sim \textup{Bern}(\gamma)$, where $\gamma=\frac{m}{n_1n_2n_3}$
and $4\leq m\leq n_1n_2n_3$.
Suppose that each entry of $\mathcal{X}^*$ is positive, i.e.,
$\zeta:=\min_{i,j,k}\mathcal{X}_{ijk}^*>0$,
and each entry of the candidate $\mathcal{X}\in\Gamma$ also satisfies $\mathcal{X}_{ijk}\geq \zeta$.
Let $\beta$ and $\lambda$ be defined as (\ref{beta})
and (\ref{lambda}), respectively, where $\kappa= {c}/{\zeta}$ in (\ref{lambda}).
Assume that $\mathcal{Y}_{\Omega}$ obeys the distribution in (\ref{Posyijk}).
Then the estimator $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model}) satisfies
\[
\begin{split}
&~\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\|\widetilde{\mathcal{X}}^\lambda-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\\
\leq & \ \frac{4c^3(3+8\log(m))}{\zeta m}+
48c\left(1+\frac{4c^2}{3\zeta}\right)
\frac{(\beta+2) \left(rn_1n_3+\|\mathcal{B}^*\|_0\right)\log\left(n_1\vee n_2\right)}{m}.
\end{split}
\]
\end{Prop}
We leave the detailed proof of Proposition \ref{uppPoissobs} to Appendix \ref{ProoD}.
Similar to the case of observations with additive Gaussian noise,
we compare the upper error bound in Proposition \ref{uppPoissobs} with
that of the matrix-based method in \cite[Corollary 6]{soni2016noisy}.
The resulting upper error bound of the matrix-based method is of the order
\begin{equation}\label{PoUmbm}
O\left(c\left(1+\frac{c^2}{\zeta}\right)
\left(\frac{\widetilde{r}n_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log\left((n_1n_3)\vee n_2\right)\right),
\end{equation}
where $\widetilde{r}$ is the rank of the resulting matrix.
The main difference between the upper error bounds of the tensor-
and matrix-based methods is the logarithmic factor.
Hence,
if $n_1n_3>n_2$, which holds in various real-world scenarios,
the logarithmic factor in (\ref{PoUmbm}) is $\log(n_1n_3)$,
while it is $\log(n_1\vee n_2)$ in Proposition \ref{uppPoissobs}.
In particular, the logarithmic factor in Proposition \ref{uppPoissobs} is $\log(n_1)$ when $n_1=n_2$.
\begin{remark}
The constants in the upper bound of Proposition \ref{uppPoissobs}
differ somewhat from those of the matrix-based method in \cite[Corollary 6]{soni2016noisy},
which also influences the recovery error in practice.
\end{remark}
In addition,
Cao et al. \cite{cao2016Poisson} proposed a matrix-based model
for matrix completion with Poisson noise removal
and established the upper error bound of the estimator, where the low-rank
property is enforced through an upper bound on the nuclear norm of the matrix in a constraint set.
The error bound of the estimator $\mathcal{X}_3$ in \cite[Theorem 2]{cao2016Poisson} satisfies
\begin{equation}\label{Poiupbd}
\frac{\|\mathcal{X}_3-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\leq
C_p\left(\frac{c^2\sqrt{\widetilde{r}}}{\zeta}\right)\frac{n_1n_3+n_2}{m}\log^{\frac{3}{2}}(n_1n_2n_3)
\end{equation}
with high probability, where $C_p>0$ is a given constant.
Therefore, if $\log(n_1n_2n_3)>\widetilde{r}$ and $\mathcal{B}^*$ is sparse, the upper error bound
of the tensor-based method improves substantially in the logarithmic factor.
Specifically, when $n_1=n_2$ and $\log(n_1n_2n_3)>\widetilde{r}$,
the logarithmic factor of (\ref{Poiupbd}) is $\log(n_1n_2n_3)$,
while it is $\log(n_1)$ in Proposition \ref{uppPoissobs}.
Recently, Zhang et al. \cite{zhang2021low} proposed a method for low-rank tensor completion with Poisson observations, which combines the transformed tensor nuclear norm ball constraint with the maximum likelihood estimate. When $m\geq \frac{1}{2}(n_1+n_2)n_3\log(n_1+n_2)$ and all entries of the multi-rank of the underlying tensor $\mathcal{X}^*$ equal $r_1$,
the upper error bound of the estimator $\mathcal{X}_{tc}$ in \cite[Theorem 3.1]{zhang2021low} is
$$
\frac{\|\mathcal{X}_{tc}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\leq
C_{tc}n_3\sqrt{\frac{(n_1+n_2)r_1}{m}}\log(n_1n_2n_3)
$$
with high probability, where $C_{tc}>0$ is a given constant. In this case, since $r_1$ is small in general and ${(n_1+n_2)r_1}/{m}<1$, the upper error bound in \cite[Theorem 3.1]{zhang2021low} is larger than that of Proposition \ref{uppPoissobs}.
\section{Minimax Lower Bounds}\label{lowerbou}
In this section, we study the sparse NTF and completion problem with incomplete and noisy observations,
and establish the lower bounds on the
minimax risk for the candidate estimator in the following set:
\begin{equation}\label{Ucnae}
\begin{split}
\mathfrak{U}(r,b,s):
=\Big\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\mathbb{R}_+^{n_1\times n_2\times n_3}:& \
\mathcal{A}\in\mathbb{R}_+^{n_1\times r \times n_3}, \ 0\leq \mathcal{A}_{ijk}\leq 1,\\
& ~ ~ \mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}, \ 0\leq \mathcal{B}_{ijk}\leq b, \ \|\mathcal{B}\|_0\leq s\Big\},
\end{split}
\end{equation}
which implies that the underlying tensor has a nonnegative factorization with tensor-tensor product and one factor tensor is sparse.
We only know the joint probability density function
or probability mass function of observations $\mathcal{Y}_\Omega$ given by (\ref{obserPo}).
Let $\widetilde{\mathcal{X}}$ be an estimator of $\mathcal{X}^*$.
The risk of estimators with incomplete observations is defined as
\begin{equation}\label{mirisk}
\mathfrak{R}_{\widetilde{\mathcal{X}}}
=\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}[\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2]}{n_1n_2n_3}.
\end{equation}
The worst-case performance of an estimator $\widetilde{\mathcal{X}}$
over the set $\mathfrak{U}(r,b,s)$ is defined as
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*\in\mathfrak{U}(r,b,s)}\mathfrak{R}_{\widetilde{\mathcal{X}}}.
$$
An estimator is said to achieve the minimax risk if it attains the smallest maximum risk among all possible estimators.
Denote
\begin{equation}\label{deltasp}
\Delta:=\min\left\{1,\frac{s}{n_2n_3}\right\}.
\end{equation}
Now we establish the lower bounds of the minimax risk,
whose proof follows a similar line to \cite[Theorem 1]{sambasivan2018minimax} for noisy matrix completion,
see also \cite[Theorem 3]{klopp2017robust}.
The main technique is to define suitable packing sets
for the two factor tensors $\mathcal{A}$ and $\mathcal{B}$ in (\ref{Ucnae}) based on the tensor-tensor product.
We then construct binary sets for the two packing sets in tensor form,
which are subsets of (\ref{Ucnae}).
The argument mainly relies on general results for
risk estimation based on the KL divergence \cite[Theorem 2.5]{tsybakov2009}
and on measures between two probability distributions.
In this setting, we need to establish lower bounds on the Hamming distance between any two binary
sequences based on the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009}.
First we establish the minimax lower bound for a general class of noise models in (\ref{obserPo}),
for which the joint probability density function
or probability mass function of the observations is given.
\begin{theorem}\label{lowbounMai}
Suppose that the KL divergence of the scalar probability density function or probability mass function satisfies
\begin{equation}\label{DKLpq}
D(p(x)||q(x))\leq \frac{1}{2\nu^2}(x-y)^2,
\end{equation}
where $\nu>0$ depends on the distribution of observations in (\ref{obserPo}).
Assume that $\mathcal{Y}_\Omega$ follows from (\ref{obserPo}).
Let $r\leq\min\{n_1,n_2\}$ and
$r\leq s\leq rn_2n_3$.
Then there exist $C,\beta_c>0$ such that the minimax risk in (\ref{mirisk}) satisfies
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
C \min\left\{\Delta b^2,\beta_c^2\nu^2\left(\frac{s + rn_1n_3}{m}\right)\right\},
$$
where $\Delta$ is defined as (\ref{deltasp}).
\end{theorem}
From Theorem \ref{lowbounMai}, we see that the minimax lower bound matches
the upper error bound in Theorem \ref{maintheo} up to a logarithmic factor $\log(n_1\vee n_2)$, which
implies that the upper error bound in Theorem \ref{maintheo} is nearly optimal.
\begin{remark}
For the minimax lower bound with general noise observations in Theorem \ref{lowbounMai}, the main differences of proofs between \cite{sambasivan2018minimax} and Theorem \ref{lowbounMai}
are the constructions of packing sets (the sets in (\ref{GenesubX}), (\ref{CXZG}), (\ref{GenesubXB})) for the set $\mathfrak{U}(r,b,s)$ in (\ref{Ucnae}),
where the tensor-tensor product is used in the set (\ref{GenesubX}).
Moreover, in contrast to the proof of \cite{sambasivan2018minimax},
we need to construct the subsets of the packing sets
(the sets in (\ref{SubXAA}) and (\ref{SubXBB})),
where the tensor in the subsets has special nonnegative tensor factorization structures with the tensor-tensor product form.
Special block tensors are constructed for one factor tensor, and special sets of block-structured tensors are
constructed for the other factor tensor (see (\ref{GeneXACsub}) and (\ref{GeneXbSubBb})).
\end{remark}
In the next subsections, we establish the explicit
lower bounds for the special noise distributions,
including additive Gaussian noise, additive Laplace noise, and Poisson observations,
where the condition (\ref{DKLpq}) is easily verified in each case.
\subsection{Additive Gaussian Noise}
In this subsection, we establish the minimax lower bound for the observations with additive Gaussian noise,
i.e., $\mathcal{Y}_\Omega$ obeys (\ref{Gauyom}).
By Theorem \ref{lowbounMai}, the key issue is to give the explicit $\nu$ in (\ref{DKLpq}).
\begin{Prop}\label{ProupbG}
Assume that $\mathcal{Y}_\Omega$ follows from (\ref{Gauyom}).
Let $r\leq\min\{n_1,n_2\}$ and
$r\leq s\leq rn_2n_3$.
Then there exist $C,\beta_c>0$ such that the minimax risk in (\ref{mirisk}) satisfies
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
C \min\left\{\Delta b^2,\beta_c^2\sigma^2\left(\frac{s + rn_1n_3}{m}\right)\right\},
$$
where $\Delta$ is defined as (\ref{deltasp}).
\end{Prop}
\begin{remark}
From Proposition \ref{ProupbG}, we know that the minimax lower bound matches the upper error bound
in Proposition \ref{Gauuupp}
up to a logarithmic factor $\log(n_1\vee n_2)$, which implies that the upper error bound
in Proposition \ref{Gauuupp} is nearly optimal.
\end{remark}
\begin{remark}
When we observe all entries of $\mathcal{Y}$, i.e., $m=n_1n_2n_3$, Proposition \ref{ProupbG}
gives exactly the minimax lower bound for sparse NTF with tensor-tensor product, which has been applied to dictionary learning \cite{newman2019non}.
\end{remark}
\subsection{Additive Laplace Noise}
In this subsection, we establish the minimax lower bound for the observations with additive Laplace noise,
i.e., $\mathcal{Y}_\Omega$ obeys (\ref{Lapayom}).
Similar to the case of additive Gaussian noise,
we only need to give $\nu$ in (\ref{DKLpq}) in Theorem \ref{lowbounMai}.
\begin{Prop}\label{lapUpb}
Assume that $\mathcal{Y}_\Omega$ follows from (\ref{Lapayom}).
Let $r\leq\min\{n_1,n_2\}$ and
$r\leq s\leq rn_2n_3$.
Then there exist $C,\beta_c>0$ such that the minimax risk in (\ref{mirisk}) satisfies
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
C \min\left\{\Delta b^2,\beta_c^2\tau^2\left(\frac{s + rn_1n_3}{m}\right)\right\}.
$$
\end{Prop}
\begin{remark}
It follows from Proposition \ref{lapUpb} that the rate attained
by our estimator in Proposition \ref{AddErp} is optimal up to a logarithmic factor $\log(n_1\vee n_2)$,
which is similar to the case of observations with additive Gaussian noise.
\end{remark}
\subsection{Poisson Observations}
In this subsection, we establish the minimax lower bound for Poisson observations,
i.e., $\mathcal{Y}_\Omega$ obeys (\ref{Posyijk}).
In a slight difference from the cases of additive Gaussian and Laplace noise,
we need to assume that all entries of the underlying tensor are strictly positive,
i.e., $\zeta:=\min_{i,j,k}\mathcal{X}_{ijk}^*>0$.
Suppose that $\zeta<b$.
In contrast to the candidate set (\ref{Ucnae}),
each entry of the candidate tensor is also required to be strictly positive.
The candidate set is defined as follows:
\begin{equation}\label{UbarPoissn}
\begin{split}
\widetilde{\mathfrak{U}}(r,b,s,\zeta):
=\Big\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}\in\mathbb{R}_+^{n_1\times n_2\times n_3}:& \
\mathcal{X}_{ijk}\geq \zeta, \ \mathcal{A}\in\mathbb{R}_+^{n_1\times r \times n_3}, \ 0\leq \mathcal{A}_{ijk}\leq 1,\\
&~ ~ \mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}, \ 0\leq \mathcal{B}_{ijk}\leq b, \ \|\mathcal{B}\|_0\leq s\Big\}.
\end{split}
\end{equation}
Then we know that $\widetilde{\mathfrak{U}}(r,b,s,\zeta)\subseteq\mathfrak{U}(r,b,s)$.
The lower bound of candidate estimators for Poisson observations
is now given in the following proposition, whose proof
follows a similar line to the matrix case in \cite[Theorem 6]{sambasivan2018minimax}.
For the sake of completeness, we give it here.
Similar to Theorem \ref{lowbounMai}, the main differences
between the matrix- and tensor-based methods are
the packing sets for the two nonnegative factorization factors $\mathcal{A}$ and $\mathcal{B}$.
We mainly use the results in \cite[Theorem 2.5]{tsybakov2009} for the constructed packing sets
and the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009} for the binary sets.
\begin{Prop}\label{Poisslow}
Suppose that $\mathcal{Y}_\Omega$ follows from (\ref{Posyijk}).
Let $r\leq\min\{n_1,n_2\}$ and
$n_2n_3< s\leq rn_2n_3$.
Then there exist $0<\widetilde{\beta}_c<1$ and $\widetilde{C}>0$ such that
$$
\inf_{\widetilde{\mathcal{X}}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{U}}(r,b,s,\zeta)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
\widetilde{C}\min\left\{\widetilde{\Delta} b^2,\widetilde{\beta}_c^2\zeta\left(\frac{s-n_2n_3+rn_1n_3}{m}\right)\right\},
$$
where $\widetilde{\Delta}:=\min\{(1-\varsigma)^2, \Delta_1\}$
with $\varsigma:=\frac{\zeta}{b}$ and $\Delta_1:=\min\{1,\frac{s-n_2n_3}{n_2n_3}\}$.
\end{Prop}
\begin{remark}
From Proposition \ref{Poisslow},
we note that the lower bound for Poisson observations is of the order $O(\frac{s-n_2n_3+rn_1n_3}{m})$.
In particular, when $s\geq 2n_2n_3$, the lower bound in Proposition \ref{Poisslow}
matches the upper bound in Proposition \ref{uppPoissobs} up to a logarithmic factor $\log(n_1\vee n_2)$.
\end{remark}
\begin{remark}
For the minimax lower bound with Poisson observations in Proposition \ref{Poisslow}, the main differences of proofs between \cite{sambasivan2018minimax} and Proposition \ref{Poisslow}
are the constructions of packing sets (the sets in (\ref{PoscsubX1}), (\ref{CXZ}), (\ref{PoscsubX1B1})) for the set $\widetilde{\mathfrak{U}}(r,b,s,\zeta)$ in (\ref{UbarPoissn}),
where the tensor-tensor product is used in the set (\ref{PoscsubX1}).
Moreover, the subsets of the packing sets with two nonnegative factor tensors
(the sets in (\ref{PoissXA1A}) and (\ref{PoissXB1B})) need to be constructed, where the tensor-tensor product is also used in the two subsets.
Besides, in the two subsets,
special block tensors are constructed for one factor tensor and special sets of block-structured tensors for the other factor tensor (see the sets in (\ref{PoisXAsubC1}) and (\ref{PoisXBsubB1})).
\end{remark}
\section{Optimization Algorithm}\label{OptimAlg}
In this section, we present an ADMM based algorithm \cite{Gabay1976A, wang2015global} to solve the model (\ref{model}).
Note that the feasible set $\Gamma$ in (\ref{TauSet}) is discrete, which makes the algorithm design difficult.
In order to use continuous optimization
techniques, the discreteness assumption on $\Gamma$ is dropped.
This may be justified by choosing a very large
value of $\vartheta$ and by noting that continuous optimization algorithms
use finite precision arithmetic when executed on a computer.
We now consider solving the following relaxed model:
\begin{equation}\label{MidelSolv}
\begin{split}
\min_{\mathcal{X},\mathcal{A},\mathcal{B}} \ & -\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{B}\|_0, \\
\text{s.t.}\ & \mathcal{X}=\mathcal{A}\diamond \mathcal{B},\
0\leq \mathcal{X}_{ijk}\leq c, \
0\leq \mathcal{A}_{ijk}\leq 1, \
0\leq \mathcal{B}_{ijk}\leq b.
\end{split}
\end{equation}
Let $\mathfrak{X}'=\{\mathcal{X}\in\mathbb{R}_+^{n_1\times n_2\times n_3}:0\leq \mathcal{X}_{ijk}\leq c\}$,
$\mathfrak{A}=\{\mathcal{A}\in\mathbb{R}_+^{n_1\times r\times n_3}:0\leq \mathcal{A}_{ijk}\leq 1\}$,
$\mathfrak{B}'=\{\mathcal{B}\in\mathbb{R}_+^{r\times n_2\times n_3}:0\leq \mathcal{B}_{ijk}\leq b\}$,
and $\mathcal{Q}=\mathcal{X}, \mathcal{M}=\mathcal{A}$, $\mathcal{N}=\mathcal{B}, \mathcal{Z}=\mathcal{B}$.
Then problem (\ref{MidelSolv}) can be rewritten equivalently as
\begin{equation}\label{ModelOtheFo}
\begin{split}
\min_{\mathcal{X},\mathcal{A},\mathcal{B},\mathcal{Q},\mathcal{M},\mathcal{N},\mathcal{Z}} \ &
-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{N}\|_0
+\delta_{\mathfrak{X}'}(\mathcal{Q})+
\delta_{\mathfrak{A}}(\mathcal{M})+\delta_{\mathfrak{B}'}(\mathcal{Z}), \\
\text{s.t.}\ & \mathcal{X}=\mathcal{A}\diamond \mathcal{B},
\mathcal{Q} = \mathcal{X}, \mathcal{M}=\mathcal{A}, \mathcal{N}=\mathcal{B}, \mathcal{Z}=\mathcal{B},
\end{split}
\end{equation}
where $\delta_{\mathfrak{A}}(x)$ denotes the indicator function
of $\mathfrak{A}$, i.e., $\delta_{\mathfrak{A}}(x)=0$ if $x\in \mathfrak{A}$ and $\delta_{\mathfrak{A}}(x)=+\infty$ otherwise.
The augmented Lagrangian function associated with (\ref{ModelOtheFo}) is defined as
\[
\begin{split}
&L(\mathcal{X},\mathcal{A},\mathcal{B},\mathcal{Q},\mathcal{M},\mathcal{N},\mathcal{Z},\mathcal{T}_i)\\
:=&
-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\lambda\|\mathcal{N}\|_0
+\delta_{\mathfrak{X}'}(\mathcal{Q})+
\delta_{\mathfrak{A}}(\mathcal{M})+\delta_{\mathfrak{B}'}(\mathcal{Z})-\langle \mathcal{T}_1, \mathcal{X}-\mathcal{A}\diamond \mathcal{B} \rangle\\
&-\langle \mathcal{T}_2, \mathcal{Q}- \mathcal{X}\rangle - \langle\mathcal{T}_3, \mathcal{M}-\mathcal{A} \rangle
-\langle \mathcal{T}_4, \mathcal{N}-\mathcal{B} \rangle -\langle \mathcal{T}_5, \mathcal{Z}-\mathcal{B}\rangle \\ &+\frac{\rho}{2}\Big(\|\mathcal{X}-\mathcal{A}\diamond \mathcal{B}\|_F^2
+\|\mathcal{Q} - \mathcal{X}\|_F^2+\| \mathcal{M}-\mathcal{A}\|_F^2+\| \mathcal{N}-\mathcal{B}\|_F^2+\|\mathcal{Z}-\mathcal{B}\|_F^2\Big),
\end{split}
\]
where $\mathcal{T}_i$, $i=1,\ldots, 5$, are Lagrange multipliers and
$\rho>0$ is the penalty parameter. The ADMM iteration is given as follows:
\begin{align}
&\mathcal{X}^{k+1}=\arg\min_{\mathcal{X}} L(\mathcal{X},\mathcal{A}^k,\mathcal{B}^k,\mathcal{Q}^k,\mathcal{M}^k,\mathcal{N}^k,\mathcal{Z}^k,\mathcal{T}_i^k) \nonumber \\ \label{Xk1}
&~~~~~~=\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}\left(\frac{1}{2}\left(\mathcal{Q}^k+\mathcal{A}^k\diamond \mathcal{B}^k+\frac{1}{\rho}(\mathcal{T}_1^k-\mathcal{T}_2^k)\right)\right), \\
&\mathcal{A}^{k+1}=\arg\min_{\mathcal{A}} L(\mathcal{X}^{k+1},\mathcal{A},\mathcal{B}^k,\mathcal{Q}^k,\mathcal{M}^k,\mathcal{N}^k,\mathcal{Z}^k,\mathcal{T}_i^k)\nonumber \\ \label{Ak1}
&~~~~~~=\left(\mathcal{M}^k+(\mathcal{X}^{k+1}-\frac{1}{\rho}\mathcal{T}_1^k)\diamond
(\mathcal{B}^k)^T-\frac{1}{\rho}\mathcal{T}_3^k\right)\diamond (\mathcal{B}^k\diamond (\mathcal{B}^k)^T+\mathcal{I})^{-1}, \\
&\mathcal{B}^{k+1}=\arg\min_{\mathcal{B}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B},\mathcal{Q}^k,\mathcal{M}^k,
\mathcal{N}^k,\mathcal{Z}^k,\mathcal{T}_i^k)\nonumber\\ \label{bK1}
&~~~~~~=\left((\mathcal{A}^{k+1})^T\diamond \mathcal{A}^{k+1}+2\mathcal{I}\right)^{-1}\diamond \nonumber \\
&~~~~~~~~~~
\left((\mathcal{A}^{k+1})^T\diamond\mathcal{X}^{k+1}+\mathcal{N}^k+\mathcal{Z}^k
-\frac{1}{\rho}\left((\mathcal{A}^{k+1})^T\diamond\mathcal{T}_1^k+
\mathcal{T}_4^k+\mathcal{T}_5^k\right)\right), \\
\label{QK1}
&\mathcal{Q}^{k+1}=\arg\min_{\mathcal{Q}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q},\mathcal{M}^{k},\mathcal{N}^k,
\mathcal{Z}^{k},\mathcal{T}_i^k)
=\Pi_{\mathfrak{X}'}\Big(\mathcal{X}^{k+1}+\frac{1}{\rho}\mathcal{T}_2^k\Big), \\ \label{Mk1}
&\mathcal{M}^{k+1}=\arg\min_{\mathcal{M}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q}^{k+1},\mathcal{M},\mathcal{N}^k,
\mathcal{Z}^{k},\mathcal{T}_i^k)
=\Pi_{\mathfrak{A}}\Big(\mathcal{A}^{k+1}+\frac{1}{\rho}\mathcal{T}_3^k\Big),\\ \label{Nk1}
&\mathcal{N}^{k+1}=\arg\min_{\mathcal{N}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q}^{k+1},\mathcal{M}^{k+1},\mathcal{N},
\mathcal{Z}^{k},\mathcal{T}_i^k)\nonumber \\
&~~~~~~~=\textup{Prox}_{\frac{\lambda}{\rho}\|\cdot\|_0}\Big(\mathcal{B}^{k+1}+\frac{1}{\rho}\mathcal{T}_4^k\Big), \\ \label{Zk1}
&\mathcal{Z}^{k+1}=\arg\min_{\mathcal{Z}} L(\mathcal{X}^{k+1},\mathcal{A}^{k+1},\mathcal{B}^{k+1},\mathcal{Q}^{k+1},\mathcal{M}^{k+1},
\mathcal{N}^{k+1},\mathcal{Z},\mathcal{T}_i^k)
=\Pi_{\mathfrak{B}'}\Big(\mathcal{B}^{k+1}+\frac{1}{\rho}\mathcal{T}_5^k\Big), \\ \label{Tk12}
&\mathcal{T}_1^{k+1}=\mathcal{T}_1^k-\rho(\mathcal{X}^{k+1}-\mathcal{A}^{k+1}\diamond \mathcal{B}^{k+1}), \
\mathcal{T}_2^{k+1}= \mathcal{T}_2^{k}-\rho(\mathcal{Q}^{k+1}- \mathcal{X}^{k+1}), \\ \label{Tk34}
& \mathcal{T}_3^{k+1}= \mathcal{T}_3^k-\rho(\mathcal{M}^{k+1}-\mathcal{A}^{k+1}), \
\mathcal{T}_4^{k+1}=\mathcal{T}_4^{k}-\rho(\mathcal{N}^{k+1}-\mathcal{B}^{k+1}), \\ \label{Tk5}
&\mathcal{T}_5^{k+1}=\mathcal{T}_5^k-\rho(\mathcal{Z}^{k+1}-\mathcal{B}^{k+1}),
\end{align}
where $\Pi_{\mathfrak{X}'}(\mathcal{X}), \Pi_{\mathfrak{A}}(\mathcal{X})$, and
$\Pi_{\mathfrak{B}'}(\mathcal{X})$ denote
the projections of $\mathcal{X}$ onto the sets $\mathfrak{X}'$, $\mathfrak{A}$, and $\mathfrak{B}'$, respectively.
Now the ADMM for solving (\ref{ModelOtheFo}) is stated in Algorithm \ref{AlSNDM}.
\begin{algorithm}[htbp]
\caption{Alternating Direction Method of Multipliers for Solving (\ref{ModelOtheFo})} \label{AlSNDM}
{\bf Input}. Let $\rho>0$ be a given constant. Given $\mathcal{A}^0, \mathcal{B}^0, \mathcal{Q}^0,
\mathcal{M}^0, \mathcal{N}^0, \mathcal{Z}^0, \mathcal{T}_i^0, i=1,\ldots,5$.
For $k=0,1,\ldots,$ perform the following steps: \\
{\bf Step 1}. Compute $\mathcal{X}^{k+1}$ via (\ref{Xk1}). \\
{\bf Step 2}. Compute $\mathcal{A}^{k+1}$ by (\ref{Ak1}). \\
{\bf Step 3}. Compute $\mathcal{B}^{k+1}$ by (\ref{bK1}). \\
{\bf Step 4}. Compute $\mathcal{Q}^{k+1}, \mathcal{M}^{k+1},
\mathcal{N}^{k+1}, \mathcal{Z}^{k+1}$ by (\ref{QK1}), (\ref{Mk1}), (\ref{Nk1}), and (\ref{Zk1}), respectively. \\
{\bf Step 5}. Update $\mathcal{T}_1^{k+1}$, $\mathcal{T}_2^{k+1}$, $\mathcal{T}_3^{k+1}$,
$\mathcal{T}_4^{k+1}$, $\mathcal{T}_5^{k+1}$ via (\ref{Tk12}), (\ref{Tk34}), and (\ref{Tk5}), respectively. \\
{\bf Step 6}. If a termination criterion is not satisfied, set $k:=k+1$ and go to Step 1.
\end{algorithm}
Algorithm \ref{AlSNDM} is an ADMM based algorithm for solving a nonconvex optimization problem.
Although great efforts have been made on the convergence of ADMM
for nonconvex models in recent years \cite{wang2015global, Hong2016Convergence},
the existing results cannot be applied to our model directly
since both the objective function and the constraints are nonconvex.
Moreover, the data-fitting term is nonsmooth when the observations are corrupted by additive Laplace noise,
which adds to the difficulty of establishing convergence of ADMM for our model.
\begin{remark}\label{theProMap}
In Algorithm \ref{AlSNDM}, one needs to compute the proximal mapping
$\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})$,
where $\mathcal{S}=\frac{1}{2}(\mathcal{Q}^k+\mathcal{A}^k\diamond \mathcal{B}^k+\frac{1}{\rho}(\mathcal{T}_1^k-\mathcal{T}_2^k))$.
In particular, for the additive Gaussian noise, additive Laplace noise,
and Poisson observations, the proximal mappings at $\mathcal{S}$ are given by
\begin{itemize}
\item Additive Gaussian noise:
$$
\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})
=\mathcal{P}_\Omega\left(\frac{\mathcal{Y}+2\rho\sigma^2\mathcal{S}}{1+2\rho\sigma^2}\right)+
\mathcal{P}_{\overline{\Omega}}(\mathcal{S}),
$$
where $\overline{\Omega}$ is the complementary set of $\Omega$.
\item Additive Laplace noise:
$$\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})
=\mathcal{P}_\Omega\left(\mathcal{Y}_{\Omega}+\textup{sign}
(\mathcal{S}-\mathcal{Y}_{\Omega})\circ\max\left\{|\mathcal{S}
-\mathcal{Y}_\Omega|-\frac{1}{2\rho\tau},0\right\}\right)+\mathcal{P}_{\overline{\Omega}}(\mathcal{S}),
$$
where $\textup{sign}(\cdot)$ denotes the signum function and $\circ$ denotes the point-wise product.
\item Poisson observations:
$$
\textup{Prox}_{(-\frac{1}{2\rho}\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega}))}(\mathcal{S})
=\mathcal{P}_\Omega\left(\frac{2\rho \mathcal{S}-\mathbb{I}+\sqrt{(2\rho \mathcal{S}-\mathbb{I})^2
+8\rho \mathcal{Y}}}{4\rho}\right)+\mathcal{P}_{\overline{\Omega}}(\mathcal{S}),
$$
where $\mathbb{I}\in\mathbb{R}^{n_1\times n_2\times n_3}$ denotes the tensor with all entries being $1$, and the square and square root are performed entrywise.
\end{itemize}
\end{remark}
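A NumPy sketch of these three proximal mappings is given below (illustrative only; \texttt{mask} is the Boolean indicator of $\Omega$, and all function names are ours):
\begin{verbatim}
import numpy as np

def prox_gauss(S, Y, mask, rho, sigma):
    # Gaussian case: convex combination of observation and input
    out = S.copy()
    out[mask] = (Y[mask] + 2*rho*sigma**2 * S[mask]) / (1 + 2*rho*sigma**2)
    return out

def prox_laplace(S, Y, mask, rho, tau):
    # Laplace case: soft-thresholding toward the observations on Omega
    out = S.copy()
    d = S[mask] - Y[mask]
    out[mask] = Y[mask] + np.sign(d) * np.maximum(np.abs(d) - 1/(2*rho*tau), 0)
    return out

def prox_poisson(S, Y, mask, rho):
    # Poisson case: positive root of an entrywise quadratic
    out = S.copy()
    t = 2*rho*S[mask] - 1
    out[mask] = (t + np.sqrt(t**2 + 8*rho*Y[mask])) / (4*rho)
    return out
\end{verbatim}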
\begin{remark}
We also need to compute the proximal mapping of tensor $\ell_0$ norm \cite{donoho1994ideal} in Algorithm \ref{AlSNDM}.
Note that the tensor $\ell_0$ norm is separable, so it suffices to derive its scalar form.
For any $t>0$, the proximal mapping of $t \|\cdot\|_0$ is
given by (see, e.g., \cite[Example 6.10]{beck2017first})
$$
\textup{Prox}_{t\|\cdot\|_0}(y)=
\left\{
\begin{array}{ll}
0, & \mbox{if} \ |y|<\sqrt{2t}, \\
\{0,y\}, & \mbox{if} \ |y|=\sqrt{2t}, \\
y, & \mbox{if}\ |y|>\sqrt{2t}.
\end{array}
\right.
$$
\end{remark}
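In NumPy, the induced entrywise hard-thresholding operator can be sketched as follows (we keep $y$ at the boundary case $|y|=\sqrt{2t}$, which is one of the two minimizers):
\begin{verbatim}
import numpy as np

def prox_l0(Y, t):
    # entrywise proximal mapping of t * ||.||_0 (hard thresholding)
    return np.where(np.abs(Y) >= np.sqrt(2*t), Y, 0.0)
\end{verbatim}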
\begin{remark}
The ADMM based algorithm is developed to solve the model (\ref{ModelOtheFo}). However, problem
(\ref{ModelOtheFo}) is nonconvex, and it is difficult to obtain a globally optimal solution in experiments,
while the upper error bounds in Section \ref{upperbound} are established for globally optimal estimators.
\end{remark}
\begin{remark}
The main cost of ADMM in Algorithm \ref{AlSNDM} is the tensor-tensor product and tensor inverse operations.
First, we consider the computational cost of the tensor-tensor product for two tensors $\mathcal{A}\in \mathbb{R}^{n_1\times r\times n_3}$ and $ \mathcal{B}\in \mathbb{R}^{r\times n_2\times n_3}$,
which is implemented by fast Fourier transform \cite{Kilmer2011Factorization}.
The application of discrete Fourier transform
to an $n_3$-vector is of $O(n_3\log(n_3))$ operations.
After Fourier transform along the tubes, we need to compute $n_3$ matrix products with sizes $n_1$-by-$r$ and $r$-by-$n_2$,
whose cost is $O(rn_1n_2n_3)$. Therefore, for the tensor-tensor product of $\mathcal{A}\in \mathbb{R}^{n_1\times r\times n_3}$ and $ \mathcal{B}\in \mathbb{R}^{r\times n_2\times n_3}$,
the total cost is $O(r(n_1+n_2)n_3\log(n_3)+rn_1n_2n_3)$.
Second, for the inverse of an $n\times n\times n_3$ tensor, one applies the fast Fourier transform along the third dimension and inverts each frontal slice
in the Fourier domain. Then the total cost of the tensor inverse is $O(n^2n_3\log(n_3)+n^3n_3)$.
For the ADMM, its main cost is to compute $\mathcal{A}^{k+1}$ and $\mathcal{B}^{k+1}$. The complexities of computing $\mathcal{A}^{k+1}$ and $\mathcal{B}^{k+1}$ are $O(n_2(r+n_1)n_3\log(n_3)+rn_1n_2n_3)$
and $O(n_1(r+n_2)n_3\log(n_3)+rn_1n_2n_3)$, respectively.
Note that $r\leq\min\{n_1,n_2\}$.
If we take one of the proximal mappings in Remark \ref{theProMap} for $\mathcal{X}^{k+1}$,
the total cost of ADMM is $O(n_1n_2n_3\log(n_3)+rn_1n_2n_3)$.
\end{remark}
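Putting the pieces together, a compact NumPy sketch of one possible implementation of Algorithm \ref{AlSNDM} is given below, reusing the \texttt{tprod}, \texttt{ttrans}, \texttt{tinv}, \texttt{tid}, \texttt{prox\_l0}, and data-fitting proximal mapping helpers sketched earlier (illustrative only; the initialization and iteration count are ours, and the stopping test of Section \ref{NumeriExper} is omitted):
\begin{verbatim}
def admm(Y, mask, r, lam, rho, c, b, prox_loss, iters=300):
    # prox_loss(S) evaluates the data-fitting proximal mapping, e.g.
    # prox_loss = lambda S: prox_gauss(S, Y, mask, rho, sigma)
    n1, n2, n3 = Y.shape
    rng = np.random.default_rng(0)
    A = rng.random((n1, r, n3)); B = rng.random((r, n2, n3))
    Q = np.where(mask, Y, 0.0)
    M, N, Z = A.copy(), B.copy(), B.copy()
    T1 = np.zeros((n1, n2, n3)); T2 = np.zeros((n1, n2, n3))
    T3 = np.zeros((n1, r, n3)); T4 = np.zeros((r, n2, n3))
    T5 = np.zeros((r, n2, n3))
    I = tid(r, n3)
    for _ in range(iters):
        X = prox_loss(0.5 * (Q + tprod(A, B) + (T1 - T2) / rho))
        A = tprod(M + tprod(X - T1 / rho, ttrans(B)) - T3 / rho,
                  tinv(tprod(B, ttrans(B)) + I))
        B = tprod(tinv(tprod(ttrans(A), A) + 2 * I),
                  tprod(ttrans(A), X) + N + Z
                  - (tprod(ttrans(A), T1) + T4 + T5) / rho)
        Q = np.clip(X + T2 / rho, 0.0, c)
        M = np.clip(A + T3 / rho, 0.0, 1.0)
        N = prox_l0(B + T4 / rho, lam / rho)
        Z = np.clip(B + T5 / rho, 0.0, b)
        T1 -= rho * (X - tprod(A, B)); T2 -= rho * (Q - X)
        T3 -= rho * (M - A); T4 -= rho * (N - B); T5 -= rho * (Z - B)
    return X, A, B
\end{verbatim}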
\section{Numerical Results}\label{NumeriExper}
In this section, some numerical experiments are conducted to
demonstrate the effectiveness of the proposed tensor-based method
for sparse NTF and completion with different noise observations,
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
We will compare the sparse NTF and completion method with the matrix-based method in \cite{soni2016noisy}.
The Karush-Kuhn-Tucker (KKT) conditions of (\ref{ModelOtheFo}) are given by
\begin{equation}\label{KKTCon}
\left\{
\begin{array}{ll}
0\in \partial_\mathcal{X}\left(-\log(p_{\mathcal{X}_{\Omega}}(\mathcal{Y}_{\Omega}))\right)
-\mathcal{T}_1+\mathcal{T}_2, \\
\mathcal{T}_1\diamond\mathcal{B}^T +\mathcal{T}_3=0, \ \mathcal{A}^T\diamond\mathcal{T}_1 +\mathcal{T}_4+\mathcal{T}_5 = 0, \\
0\in \partial{\delta_{\mathfrak{X}'}(\mathcal{Q})}-\mathcal{T}_2, \ 0\in \partial{\delta_{\mathfrak{A}}(\mathcal{M})}-\mathcal{T}_3, \\
0\in\partial(\lambda\|\mathcal{N}\|_0)-\mathcal{T}_4, \ 0\in\partial{\delta_{\mathfrak{B}'}(\mathcal{Z})}-\mathcal{T}_5,\\
\mathcal{X}=\mathcal{A}\diamond \mathcal{B},
\mathcal{Q} = \mathcal{X}, \mathcal{M}=\mathcal{A}, \mathcal{N}=\mathcal{B}, \mathcal{Z}=\mathcal{B},
\end{array}
\right.
\end{equation}
where $\partial f(x)$ denotes the subdifferential of $f$ at $x$.
Based on the KKT conditions in (\ref{KKTCon}),
we adopt the following relative residual to measure the accuracy:
$$
\eta_{max}:=\max\{\eta_1,\eta_2,\eta_3,\eta_4,\eta_5,\eta_6\},
$$
where
\[
\begin{split}
& \eta_1=\frac{\|\mathcal{X}-\textup{Prox}_{(-\log(p_{\mathcal{X}_{\Omega}}
(\mathcal{Y}_{\Omega})))}(\mathcal{T}_1-\mathcal{T}_2+\mathcal{X})\|_F}{1+\|\mathcal{X}\|_F
+\|\mathcal{T}_1\|_F+\|\mathcal{T}_2\|_F}, \
\eta_2 = \frac{\|\mathcal{Q}-\Pi_{\mathfrak{X}'}(\mathcal{T}_2+\mathcal{Q})\|_F}
{1+\|\mathcal{T}_2\|_F+\|\mathcal{Q}\|_F}, \\
&\eta_3 = \frac{\|\mathcal{M}-\Pi_{\mathfrak{A}}(\mathcal{T}_3
+\mathcal{M})\|_F}{1+\|\mathcal{T}_3\|_F+\|\mathcal{M}\|_F}, \
\eta_4 = \frac{\|\mathcal{N}-\textup{Prox}_{\lambda\|\cdot\|_0}
(\mathcal{T}_4+\mathcal{N})\|_F}{1+\|\mathcal{T}_4\|_F+\|\mathcal{N}\|_F}, \\
& \eta_5 = \frac{\|\mathcal{Z}-\Pi_{\mathfrak{B}'}(\mathcal{T}_5+\mathcal{Z})\|_F}{1+\|\mathcal{T}_5\|_F+\|\mathcal{Z}\|_F}, \
\eta_6 = \frac{\|\mathcal{X}-\mathcal{A}\diamond \mathcal{B}\|_F}{1+\|\mathcal{X}\|_F + \|\mathcal{A}\|_F + \|\mathcal{B}\|_F}.
\end{split}
\]
Algorithm \ref{AlSNDM} is terminated if $\eta_{max}\leq 10^{-4}$ or the number of iterations reaches the maximum of $300$.
In order to measure the quality of the recovered tensor,
the relative error (RE) is used to evaluate the performance of different methods,
which is defined as
$$
\textup{RE}=\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F}{\|\mathcal{X}^*\|_F},
$$
where $\widetilde{\mathcal{X}}$ and $\mathcal{X}^*$ are the recovered tensor and the ground-truth tensor, respectively.
We generate the nonnegative tensors $\mathcal{A}^*\in\mathbb{R}_+^{n_1\times r\times n_3}$
and $\mathcal{B}^*\in\mathbb{R}_{+}^{r\times n_2\times n_3}$ at random.
$\mathcal{A}^*$ is generated by the MATLAB command $\textup{rand}(n_1,r,n_3)$
and $\mathcal{B}^*$ is a nonnegative sparse tensor generated by the tensor toolbox
command $b\cdot\textup{sptenrand}([r,n_2,n_3],s)$ \cite{TTB_Software}, where $b$ is the magnitude of
$\mathcal{B}^*$ and $s$ is the sparse ratio.
Then $\mathcal{X}^*=\mathcal{A}^*\diamond \mathcal{B}^*$ and we choose $c=2\|\mathcal{X}^*\|_\infty$.
The size of the testing third-order tensors is $n_1=n_2=n_3=100$ in the following two experiments.
The initial values also influence the performance of ADMM. For ADMM, the initial values
$\mathcal{A}^0, \mathcal{M}^0, \mathcal{T}_3^0$ and $\mathcal{B}^0, \mathcal{N}^0, \mathcal{Z}^0, \mathcal{T}_4^0, \mathcal{T}_5^0$
are chosen as random tensors of the same sizes as $\mathcal{A}^*$ and $\mathcal{B}^*$, respectively.
The initial values $\mathcal{Q}^0, \mathcal{T}_1^0, \mathcal{T}_2^0$ are set to the observations $\mathcal{Y}_\Omega$ on $\Omega$
and zeros outside $\Omega$.
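A NumPy analogue of this data-generation procedure (our own sketch of the MATLAB commands above, reusing the \texttt{tprod} helper from Section \ref{Prelim}; the seed is arbitrary) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n1 = n2 = n3 = 100; r = 10; b = 2.0; s = 0.3
A_true = rng.random((n1, r, n3))                 # rand(n1, r, n3)
B_true = b * rng.random((r, n2, n3)) \
           * (rng.random((r, n2, n3)) < s)       # ~ b * sptenrand
X_true = tprod(A_true, B_true)
c = 2 * np.abs(X_true).max()                     # c = 2 ||X*||_inf
gamma = 0.5                                      # sampling ratio SR
mask = rng.random((n1, n2, n3)) < gamma          # Omega ~ Bern(gamma)
\end{verbatim}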
\begin{figure}[!t]
\centering
\subfigure[Gaussian]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{GaussianSR.eps}
\end{minipage}
}
\subfigure[Laplace]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{LaplaceSR.eps}
\end{minipage}
}
\subfigure[Poisson]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{PoissonSR.eps}
\end{minipage}
}
\caption{\small RE versus SR of different methods for different noise observations. (a) Gaussian. (b) Laplace. (c) Poisson.}\label{Diffrennoise}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[Gaussian]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffrankGaussian.eps}
\end{minipage}
}
\subfigure[Laplace]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffrankLaplace.eps}
\end{minipage}
}
\subfigure[Poisson]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffrankPoisson.eps}
\end{minipage}
}
\caption{\small RE versus $r$ of different methods for different noise observations. (a) Gaussian. (b) Laplace. (c) Poisson.}\label{DiffrennoiseR}
\end{figure}
As discussed in Section \ref{ProMod}, we aim to estimate the ground-truth
tensor $\mathcal{X}^*$. Note that the two factors may not be unique.
Therefore, we only compare the recovered tensor
$\widetilde{\mathcal{X}}$ with the ground-truth tensor $\mathcal{X}^*$ in the experiments.
In fact, we only establish error upper bounds for $\widetilde{\mathcal{X}}$,
and do not analyze error bounds for each factor tensor separately in theory.
First we analyze the recovery performance of different methods versus SRs.
In Figure \ref{Diffrennoise}, we display the REs of the recovered tensors under different sampling ratios,
where the sparse ratio $s=0.3$ and the observed entries are corrupted
by Gaussian noise, Laplace noise, and Poisson noise, respectively.
We set $\sigma=0.1$ and $\tau=0.1$ for Gaussian noise and Laplace noise, respectively,
and $r=10$ and $b=2$.
The SRs vary from $0.3$ to $0.9$ with step size $0.1$.
It can be seen from this figure that the relative errors decrease
when the sampling ratios increase for both matrix- and tensor-based methods.
Moreover, the relative errors obtained by the tensor-based method are lower than those
obtained by the matrix-based method.
Compared with the matrix-based method, the improvements of the
tensor-based method are much larger for additive Gaussian noise and Laplace noise
than for Poisson observations.
The main reason is that the constants of the upper bound in Proposition \ref{uppPoissobs}
are slightly larger than those of the matrix-based method in \cite{soni2016noisy} for Poisson observations.
\begin{figure}[!t]
\centering
\subfigure[Gaussian]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffSizeGaussian.eps}
\end{minipage}
}
\subfigure[Laplace]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffSizeLaplace.eps}
\end{minipage}
}
\subfigure[Poisson]{
\begin{minipage}[b]{0.31\textwidth}
\centerline{\scriptsize } \vspace{1.5pt}
\includegraphics[width=2.1in,height=1.6in]{DiffSizePoisson.eps}
\end{minipage}
}
\caption{\small RE versus size of tensors of different methods for different noise observations.
(a) Gaussian. (b) Laplace. (c) Poisson.}\label{DiffrennoiseSize}
\end{figure}
The recovery performance of different methods versus $r$ is also analyzed, where $\textup{SR} = 0.5$
and $r$ varies from $5$ to $40$ with step size $5$ for additive Gaussian noise and Laplace noise,
and from $5$ to $30$ with step size $5$ for Poisson observations.
Again we set $b=2$, and $\sigma=0.1$ and $\tau=0.1$ for Gaussian noise and Laplace noise, respectively.
It can be seen from Figure \ref{DiffrennoiseR} that
the relative errors
obtained by the tensor-based method are lower than those obtained
by the matrix-based method for the three noise models.
Besides, we can observe that the REs increase as $r$ increases for both matrix- and tensor-based methods.
Again, the tensor-based method performs better than the matrix-based method for different $r$ and noise observations,
and its improvements are much larger for additive Gaussian noise and Laplace noise
than for Poisson observations.
In Figure \ref{DiffrennoiseSize}, we test different sizes $n:=n_1=n_2=n_3$ of tensors,
and vary $n$ from $50$ to $500$ with step size $50$, where $\textup{SR}=0.5$, the sparse ratio $s=0.3$, $r=10$, and $b=2$.
Here we set $\sigma=0.1$ for Gaussian noise and $\tau=0.1$ for Laplace noise, respectively.
It can be observed from this figure that the relative errors of the tensor-based method are smaller
than those of the matrix-based method for different noise distributions.
The relative errors of both matrix- and tensor-based methods decrease as $n$ increases.
Furthermore, across the tested tensor sizes $n$, the improvements in relative error of the tensor-based method for
Gaussian noise and Laplace noise are much larger than those for Poisson observations.
\section{Concluding Remarks}\label{Conclu}
In this paper, we have studied the sparse NTF and completion
problem based on tensor-tensor product from partial and noisy observations,
where the observations are corrupted by general noise distributions.
A maximum likelihood estimate of partial observations is derived for the data-fitting term
and the tensor $\ell_0$ norm is adopted to enforce the sparsity of the sparse factor.
Then an upper error bound is established for a general class of noise models,
and is specialized to widely used noise distributions
including additive Gaussian noise, additive Laplace noise, and Poisson observations.
Moreover, minimax lower bounds are also established for the aforementioned noise models,
which match the upper error bounds up to logarithmic factors.
An ADMM based algorithm is developed to solve the resulting model.
Preliminary numerical experiments are presented to demonstrate the superior performance
of the proposed tensor-based model compared with the matrix-based method \cite{soni2016noisy}.
It would be of great interest to study the upper error bounds of the convex relaxation model
obtained by replacing the tensor $\ell_0$ norm with the tensor $\ell_1$ norm for the sparse factor.
It would also be of great interest to establish the convergence of ADMM for our proposed model with general noise observations,
which is nonconvex and has multi-block variables.
Moreover, future work may extend the theory of sparse NTF and completion model with tensor-tensor product to that
with transformed tensor-tensor product under suitable unitary transformations \cite{song2020robust},
which is more effective than tensor-tensor product for robust tensor completion \cite{song2020robust, ng2020patch}
and data compression \cite{zeng2020decompositions}.
\section*{Acknowledgments}
The authors would like to thank
the associate editor and anonymous referees for their helpful comments and constructive suggestions on improving the quality of this paper.
\appendices
\section{Proof of Theorem \ref{maintheo}}\label{ProoA}
We begin by stating the following lemma which will be
useful in the proof of Theorem \ref{maintheo}.
\begin{lemma}\label{leapp}
Let
$\Gamma$ be a countable collection of candidate
reconstructions $\mathcal{X}$ of $\mathcal{X}^*$ in (\ref{TauSet}), equipped with penalties $\textup{pen}(\mathcal{X})\geq 1$
satisfying $\sum_{\mathcal{X}\in\Gamma}2^{-\textup{pen}(\mathcal{X})}\leq 1$.
For any integer $4\leq m\leq n_1n_2n_3$, let $\Omega\sim \textup{Bern}(\gamma)$.
Moreover, the corresponding observations are obtained by
$p_{\mathcal{X}_{\Omega}^*}(\mathcal{Y}_{\Omega})=\prod_{(i,j,k)\in\Omega}p_{\mathcal{X}_{ijk}^*}(\mathcal{Y}_{ijk})$,
which are assumed to be conditionally independent given $\Omega$.
If
\begin{equation}\label{kappar}
\kappa\geq \max_{\mathcal{X}\in\Gamma}\max_{i,j,k} D(p_{\mathcal{X}_{ijk}^*}(\mathcal{Y}_{ijk})||p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})),
\end{equation}
then for any
$\xi\geq2\left(1+\frac{2\kappa}{3} \right) \log(2)$,
the following penalized maximum likelihood estimator
\begin{equation}\label{mxial}
\widetilde{\mathcal{X}}^\xi\in\arg\min_{\mathcal{X}\in\Gamma}\left\{-\log p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})+\xi\cdot\textup{pen}(\mathcal{X})\right\},
\end{equation}
satisfies
$$
\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\left[-2\log H(p_{\widetilde{\mathcal{X}}^\xi},p_{\mathcal{X}^*})\right]}{n_1n_2n_3}
\leq 3\cdot\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \xi+\frac{4\kappa\log(2)}{3}\right)\frac{\textup{pen}(\mathcal{X})}{m} \right\rbrace +\frac{8\kappa\log(m)}{m},
$$
where the expectation is taken with respect to the joint distribution of $\Omega$ and $\mathcal{Y}_{\Omega}$.
\end{lemma}
The proof of Lemma \ref{leapp} follows readily from the matrix case \cite[Lemma 8]{soni2016noisy}, see also \cite{li1999estimation}.
In essence, the three steps of the proof in \cite[Lemma 8]{soni2016noisy} operate entrywise on
the KL divergence, the logarithmic Hellinger affinity, and the maximum likelihood estimate,
so they extend directly to the tensor case.
For the sake of brevity, we omit the details here.
Next we give a lemma with respect to the upper bound
of the tensor $\ell_\infty$ norm between a tensor
and its closest surrogate in $\Gamma$.
\begin{lemma}\label{xxappr}
Consider a candidate reconstruction of the form
$\widetilde{\mathcal{X}}^*=\widetilde{\mathcal{A}}^*\diamond \widetilde{\mathcal{B}}^*,$
where each entry of $\widetilde{\mathcal{A}}^*\in\mathfrak{L}$
is the closest discretized surrogate of the corresponding entry of $\mathcal{A}^*$,
and each entry of $\widetilde{\mathcal{B}}^*\in\mathfrak{D}$
is the closest discretized surrogate of the corresponding nonzero entry of $\mathcal{B}^*$, and zero otherwise.
Then
$$
\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_\infty\leq\frac{3rn_3b}{\vartheta},
$$
where $\vartheta$ is defined as (\ref{denu}).
\end{lemma}
\begin{IEEEproof}
Let $\widetilde{\mathcal{A}}^*=\mathcal{A}^*+\Delta_{\mathcal{A}^*}$
and $\widetilde{\mathcal{B}}^*=\mathcal{B}^*+\Delta_{\mathcal{B}^*}$.
Then
$$
\widetilde{\mathcal{X}}^*-\mathcal{X}^*
=\widetilde{\mathcal{A}}^*\diamond \widetilde{\mathcal{B}}^*-\mathcal{A}^*\diamond\mathcal{B}^*
=\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}+\Delta_{\mathcal{A}^*}\diamond\mathcal{B}^*+
\Delta_{\mathcal{A}^*}\diamond\Delta_{\mathcal{B}^*}.
$$
By the definitions of $\widetilde{\mathcal{A}}^*$ and $\widetilde{\mathcal{B}}^*$,
we know that
\begin{equation}\label{DeltaAB}
\|\Delta_{\mathcal{A}^*}\|_\infty\leq \frac{1}{\vartheta-1} \ \
\textup{and} \ \ \|\Delta_{\mathcal{B}^*}\|_\infty\leq \frac{b}{\vartheta-1}.
\end{equation}
By the definition of tensor-tensor product of two tensors, we deduce
\[
\begin{split}
\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}& =\textup{Fold}\left(\textup{Circ}\begin{pmatrix} (\mathbf{A^*})^{(1)} \\ (\mathbf{A^*})^{(2)}
\\ \vdots \\ (\mathbf{A^*})^{(n_3)} \end{pmatrix}\cdot \textup{Unfold}(\Delta_{\mathcal{B}^*})\right) \\
&=
\textup{Fold}\begin{pmatrix} (\mathbf{A^*})^{(1)}(\Delta_{\mathcal{B}^*})^{(1)}+(\mathbf{A^*})^{(n_3)}(\Delta_{\mathcal{B}^*})^{(2)}+\cdots +(\mathbf{A^*})^{(2)}(\Delta_{\mathcal{B}^*})^{(n_3)}
\\ (\mathbf{A^*})^{(2)}(\Delta_{\mathcal{B}^*})^{(1)}+ (\mathbf{A^*})^{(1)}(\Delta_{\mathcal{B}^*})^{(2)}+\cdots + (\mathbf{A^*})^{(3)}(\Delta_{\mathcal{B}^*})^{(n_3)}
\\ \vdots \\ (\mathbf{A^*})^{(n_3)}(\Delta_{\mathcal{B}^*})^{(1)} +(\mathbf{A^*})^{(n_3-1)}(\Delta_{\mathcal{B}^*})^{(2)} +\cdots +(\mathbf{A^*})^{(1)}(\Delta_{\mathcal{B}^*})^{(n_3)} \end{pmatrix}.
\end{split}
\]
It follows from (\ref{DeltaAB}) and $0\leq \mathcal{A}_{ijk}^*\leq 1$ that
$$
\|\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}\|_\infty\leq n_3\max_{i,j}\|(\mathbf{A^*})^{(i)} (\Delta_{\mathcal{B}^*})^{(j)}\|_\infty\leq \frac{rn_3b}{\vartheta-1}.
$$
Similarly, we can get that $\|\Delta_{\mathcal{A}^*}\diamond\mathcal{B}^*\|_\infty\leq \frac{rn_3b}{\vartheta-1}$ and $
\|\Delta_{\mathcal{A}^*}\diamond\Delta_{\mathcal{B}^*}\|_\infty\leq \frac{rn_3b}{(\vartheta-1)^2}$.
Therefore, we obtain that
\[
\begin{split}
\|\widetilde{\mathcal{A}}^*\diamond \widetilde{\mathcal{B}}^*-\mathcal{A}^*\diamond\mathcal{B}^*\|_\infty
& \leq \|\mathcal{A}^*\diamond\Delta_{\mathcal{B}^*}\|_\infty+
\|\Delta_{\mathcal{A}^*}\diamond\mathcal{B}^*\|_\infty+
\|\Delta_{\mathcal{A}^*}\diamond\Delta_{\mathcal{B}^*}\|_\infty \\
&\leq \frac{rn_3b}{\vartheta-1}+ \frac{rn_3b}{\vartheta-1}+\frac{rn_3b}{(\vartheta-1)^2}\\
&\leq\frac{3rn_3b}{\vartheta},
\end{split}
\]
where
the last inequality holds by $\vartheta\geq 8$ in (\ref{denu}).
The proof is completed.
\end{IEEEproof}
\begin{remark}
By the construction of $\widetilde{\mathcal{B}}^*$ in Lemma \ref{xxappr},
we know that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$,
which will be used to establish the upper bounds for the specific noise models.
\end{remark}
Now we return to prove Theorem \ref{maintheo}.
First, we need to define the penalty
$\textup{pen}(\mathcal{X})$ on the candidate reconstructions
$\mathcal{X}$ of $\mathcal{X}^*$ in the set $\Gamma$ such that the summability condition
\begin{equation}\label{penalt}
\sum_{\mathcal{X}\in\Gamma}2^{-\textup{pen}(\mathcal{X})}\leq1
\end{equation}
holds.
Notice that the condition in (\ref{penalt}) is the well-known Kraft-McMillan
inequality for coding entries of $\Gamma$ with an alphabet of size $2$
\cite{Brockway1957Two, kraft1949device}, see also \cite[Section 5]{cover2006elements}.
If we choose the penalties to be code lengths
for some uniquely decodable binary code of the entries $\mathcal{X}\in\Gamma$,
then (\ref{penalt}) is satisfied automatically \cite[Section 5]{cover2006elements},
which will provide the construction of the penalties.
Next we consider the discretized tensor factors $\mathcal{A}\in\mathfrak{L}$
and $ \mathcal{B}\in\mathfrak{D}$.
Fix an ordering of the indices of entries of $\mathcal{A}$
and encode the amplitude of each entry using $\log_2(\vartheta)$ bits.
Let $\widetilde{\vartheta}:=2^{\lceil\log_2(rn_2)\rceil}$.
Similarly, we encode each nonzero entry of $\mathcal{B}$ using $\log_2(\widetilde{\vartheta})$
bits to denote its location and $\log_2(\vartheta)$ bits for its amplitude.
By this construction, a total of $rn_1n_3\log_2(\vartheta)$ bits
are used to encode $\mathcal{A}$.
Note that $\mathcal{B}$ has $\|\mathcal{B}\|_0$ nonzero entries.
Then a total of $\|\mathcal{B}\|_0(\log_2(\widetilde{\vartheta})+\log_2(\vartheta))$ bits
are used to encode $\mathcal{B}$.
Therefore, we define the penalties $\textup{pen}(\mathcal{X})$ to all $\mathcal{X}\in\Gamma$
as the encoding lengths, i.e.,
$$
\textup{pen}(\mathcal{X})=rn_1n_3\log_2(\vartheta)
+\|\mathcal{B}\|_0(\log_2(\widetilde{\vartheta})+\log_2(\vartheta)).
$$
By the above construction, it is easy to see that such codes are uniquely decodable.
Thus, by Kraft-McMillan inequality \cite{Brockway1957Two, kraft1949device},
we get that $\sum_{\mathcal{X}\in\Gamma}2^{-\textup{pen}(\mathcal{X})}\leq1$.
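To make the bit-count concrete, consider, purely as an illustration, the setting $n_1=n_2=n_3=100$ and $r=10$ used in the numerical experiments. Then $\log_2(\widetilde{\vartheta})=\lceil\log_2(rn_2)\rceil=\lceil\log_2(1000)\rceil=10$, so that
$$
\textup{pen}(\mathcal{X})=10^5\log_2(\vartheta)+\|\mathcal{B}\|_0\left(10+\log_2(\vartheta)\right);
$$
that is, each entry of $\mathcal{A}$ contributes $\log_2(\vartheta)$ bits, and each nonzero entry of $\mathcal{B}$ contributes $10+\log_2(\vartheta)$ bits.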
Let $\lambda = \xi(\log_2(\vartheta)+\log_2(\widetilde{\vartheta}))$,
where $\xi$ is the regularization parameter in (\ref{mxial}).
Note that
$
\xi\cdot\textup{pen}(\mathcal{X})=\lambda\|\mathcal{B}\|_0+\xi rn_1n_3\log_2(\vartheta).
$
Then the minimizer $\widetilde{\mathcal{X}}^{\lambda}$ in (\ref{model})
is the same as the minimizer $\widetilde{\mathcal{X}}^\xi$ in (\ref{mxial}).
Therefore, by Lemma \ref{leapp},
for any $\xi\geq2\left(1+\frac{2\kappa}{3} \right) \log(2)$,
we get that
\begin{equation}\label{lamxl}
\begin{split}
&\ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\left[-2\log H(p_{\widetilde{\mathcal{X}}^\lambda},p_{\mathcal{X}^*})\right]}{n_1n_2n_3}\\
\leq &\ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \xi+\frac{4\kappa\log(2)}{3}\right)\frac{\textup{pen}(\mathcal{X})}{m} \right\rbrace +\frac{8\kappa\log(m)}{m}\\
\leq & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \xi+\frac{4\kappa\log(2)}{3}\right)\left(\log_2(\vartheta)+\log_2(\widetilde{\vartheta})\right) \frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace +\frac{8\kappa\log(m)}{m} \\
= & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace \frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+\left( \lambda+\frac{4\kappa\log(2)}{3}\left(\log_2(\vartheta)+\log_2(\widetilde{\vartheta})\right)\right) \frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace +\frac{8\kappa\log(m)}{m},
\end{split}
\end{equation}
where the second inequality holds by the definition of $\textup{pen}(\mathcal{X})$ and the nonnegativity of $\log_2(\vartheta)$ and $\log_2(\widetilde{\vartheta})$.
Note that
\begin{equation}\label{lohvvb}
\log_2(\vartheta)+\log_2(\widetilde{\vartheta})
\leq 2\beta\log_2\left(n_1\vee n_2\right)+2\log_2(rn_2)
\leq\frac{ 2(\beta+2)\log\left(n_1\vee n_2\right)}{\log(2)},
\end{equation}
where the last inequality follows from $rn_2\leq (n_1\vee n_2)^2$.
Hence, for any
$$
\lambda\geq 4(\beta+2)\left( 1+\frac{2\kappa}{3}\right) \log\left(n_1\vee n_2\right),
$$
which guarantees that $\xi\geq2\left(1+\frac{2\kappa}{3} \right) \log(2)$,
we have
\[
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}[-2\log H(p_{\widetilde{\mathcal{X}}^{\lambda}},p_{\mathcal{X}^*})]}{n_1n_2n_3}\\
\leq & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace
\frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}+
\left( \lambda+\frac{8\kappa(\beta+2) \log\left(n_1\vee n_2\right)}{3}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace +\frac{8\kappa\log(m)}{m},
\end{split}
\]
where the inequality follows from (\ref{lamxl}) and (\ref{lohvvb}).
This completes the proof.
\section{Proof of Proposition \ref{Gauuupp}}\label{ProoB}
By Theorem \ref{maintheo}, we only need to give
the lower bound of $\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
[-2\log H(p_{\widetilde{\mathcal{X}}^{\lambda}},p_{\mathcal{X}^*})]$
and upper bound of $\min_{\mathcal{X}\in\Gamma}\lbrace
\frac{D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})}{n_1n_2n_3}\rbrace$, respectively.
It follows from \cite[Exercise 15.13]{wainwright2019high} that
the KL divergence between two Gaussian distributions with common variance $\sigma^2$ is $ D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})=(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2/(2\sigma^2)$,
which yields
\begin{equation}\label{KLGaussian}
D(p_{\mathcal{X}^*}|| p_{\mathcal{X}})=\frac{\|\mathcal{X}-\mathcal{X}^*\|_F^2}{2\sigma^2}.
\end{equation}
Note that $ D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})
\leq c^2/(2\sigma^2)$ for any $\mathcal{X}\in\Gamma$ and $i,j,k$.
We can choose $\kappa=c^2/(2\sigma^2)$ based on the assumption in Theorem \ref{maintheo}.
Moreover, by \cite[Appendix C]{carter2002deficiency}, we get that
$$
-2\log(H(p_{\mathcal{X}_{ijk}^*},p_{\widetilde{\mathcal{X}}_{ijk}^\lambda}))
=(\widetilde{\mathcal{X}}_{ijk}^\lambda-\mathcal{X}_{ijk}^*)^2/(4\sigma^2),
$$
which yields that
$
-2\log(H(p_{\mathcal{X}^*},p_{\widetilde{\mathcal{X}}^\lambda}))=\frac{\|\widetilde{\mathcal{X}}^\lambda-\mathcal{X}^*\|_F^2}{4\sigma^2}.
$
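For completeness, this is the standard closed form of the Hellinger affinity between two Gaussian densities $p\sim\mathcal{N}(\mu_1,\sigma^2)$ and $q\sim\mathcal{N}(\mu_2,\sigma^2)$ with common variance:
$$
H(p,q)=\int_{-\infty}^{+\infty}\sqrt{p(x)q(x)}\,dx=\exp\left(-\frac{(\mu_1-\mu_2)^2}{8\sigma^2}\right),
$$
so that $-2\log H$ reduces entrywise to $(\mu_1-\mu_2)^2/(4\sigma^2)$.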
Therefore, by Theorem \ref{maintheo}, we get that
\begin{equation}\label{ErrobG}
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ 3\min_{\mathcal{X}\in\Gamma}\left\lbrace
\frac{2\|\mathcal{X}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}+
4\sigma^2\left( \lambda+\frac{4c^2(\beta+2) \log(n_1\vee n_2)}{3\sigma^2}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace \\
& +\frac{16c^2\log(m)}{m}.
\end{split}
\end{equation}
Next we need to establish an upper bound of $\min_{\mathcal{X}\in\Gamma}\|\mathcal{X}-\mathcal{X}^*\|_F^2$.
Note that
\begin{equation}\label{vuppb}
\begin{split}
\vartheta&=2^{\lceil\log_2(n_1\vee n_2)^\beta\rceil}\geq 2^{\beta\log_2(n_1\vee n_2)}\\
&\geq2^{\log_2(n_1\vee n_2)}\cdot2^{\log_2(n_1\vee n_2)\frac{\log(3rn_3^{1.5}b/c)}{\log(n_1\vee n_2)}}\\
&=\frac{3(n_1\vee n_2)r{n_3}^{1.5}b}{c},
\end{split}
\end{equation}
where the second inequality holds by (\ref{beta}).
Since $n_1,n_2\geq 2$, we have $\vartheta\geq\frac{6r{n_3}^{1.5}b}{c}$,
which implies that
$\|\widetilde{\mathcal{X}}^*\|_\infty\leq \frac{3rn_3b}{\vartheta}+\|\mathcal{X}^*\|_\infty\leq c$
by Lemma \ref{xxappr}, where $\widetilde{\mathcal{X}}^*$ is defined in Lemma \ref{xxappr}.
Therefore, $\widetilde{\mathcal{X}}^*=\widetilde{\mathcal{A}}^*\diamond\widetilde{\mathcal{B}}^*\in\Gamma$.
By Lemma \ref{xxappr}, we have that
\begin{equation}\label{lowxtgm}
\min_{\mathcal{X}\in\Gamma}\left\{\frac{2\|\mathcal{X}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\right\}
\leq \frac{2\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_F^2}{n_1n_2n_3}
\leq \frac{18(rn_3b)^2}{\vartheta^2}\leq \frac{2c^2}{m},
\end{equation}
where the last inequality follows from the fact $m\leq(n_1\vee n_2)^2n_3$ and (\ref{vuppb}).
Moreover, it follows from the construction of $\widetilde{\mathcal{B}}^*$ in Lemma \ref{xxappr}
that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$.
As a consequence, combining (\ref{lambda}), (\ref{ErrobG}) with (\ref{lowxtgm}), we obtain that
\[
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{6c^2}{m}+
16(3\sigma^2+2c^2)(\beta+2) \log((n_1\vee n_2)\sqrt{n_3})
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right) +\frac{16c^2\log(m)}{m}\\
\leq & \ \frac{22c^2\log(m)}{m} + 16(3\sigma^2+2c^2)(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log(n_1\vee n_2) .
\end{split}
\]
This completes the proof.
\section{Proof of Proposition \ref{AddErp}}\label{ProoC}
A random variable is said to have a Laplace distribution, denoted by \textup{Laplace}($\mu, b$), with location parameter $\mu$ and scale parameter $b>0$,
if its probability density function is
$
f(x|\mu,b)=\frac{1}{2b}\exp(-\frac{|x-\mu|}{b}).
$
Before deriving the upper bound of observations with additive Laplace noise,
we establish the KL divergence and logarithmic Hellinger affinity between two distributions.
\begin{lemma}\label{KLHeLap}
Let $p(x)\sim\textup{Laplace}(\mu_1, b_1)$ and $q(x)\sim\textup{Laplace}(\mu_2, b_2)$. Then
$$
D(p(x)||q(x))=\log\left(\frac{b_2}{b_1}\right)-1+\frac{|\mu_2-\mu_1|}{b_2}+\frac{b_1}{b_2}
\exp\left(-\frac{|\mu_2-\mu_1|}{b_1}\right).
$$
Moreover, if $b_1=b_2$, then
$$
-2\log(H(p(x),q(x)))=\frac{|\mu_2-\mu_1|}{b_1}-2\log\left(1+\frac{|\mu_2-\mu_1|}{2b_1}\right).
$$
\end{lemma}
\begin{IEEEproof}
By the definition of the KL divergence of $p(x)$ from $q(x)$, we deduce
\[
\begin{split}
D(p(x)||q(x))&=\mathbb{E}_p\left[\log(p(x))-\log(q(x))\right]\\
&=\log\left(\frac{b_2}{b_1}\right)-\frac{1}{b_1}\mathbb{E}_p[|x-\mu_1|]+\frac{1}{b_2}\mathbb{E}_p[|x-\mu_2|].
\end{split}
\]
Without loss of generality, we assume that $\mu_1<\mu_2$.
By direct calculations, one can get that $\mathbb{E}_p[|x-\mu_1|]=b_1$ and $\mathbb{E}_p[|x-\mu_2|]=\mu_2-\mu_1+b_1\exp(-\frac{\mu_2-\mu_1}{b_1})$.
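For the reader's convenience, the second expectation can be carried out as follows: writing $t=x-\mu_1$ and $\delta:=\mu_2-\mu_1>0$, and using $|t-\delta|=(\delta-t)+2(t-\delta)^+$ together with $\mathbb{E}_p[t]=0$, we have
$$
\mathbb{E}_p[|x-\mu_2|]=\delta+2\int_{\delta}^{+\infty}\frac{t-\delta}{2b_1}e^{-t/b_1}\,dt
=\delta+b_1e^{-\delta/b_1}.
$$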
Then, we get that
$$
D(p(x)||q(x))=\log\left(\frac{b_2}{b_1}\right)-1+\frac{\mu_2-\mu_1}{b_2}+\frac{b_1\exp(-\frac{\mu_2-\mu_1}{b_1})}{b_2}.
$$
Therefore, by the symmetry, for any $\mu_1,\mu_2$, we have
$$
D(p(x)||q(x))=\log\left(\frac{b_2}{b_1}\right)-1+\frac{|\mu_2-\mu_1|}{b_2}+\frac{b_1\exp\big(-\frac{|\mu_2-\mu_1|}{b_1}\big)}{b_2}.
$$
Moreover, if $b_1=b_2$, the Hellinger affinity is
$$
H(p(x),q(x))=\frac{1}{2b_1}\int_{-\infty}^{+\infty}\exp\left(-\frac{|x-\mu_1|}{2b_1}-\frac{|x-\mu_2|}{2b_1}\right)dx.
$$
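Indeed, with $\delta:=|\mu_2-\mu_1|$, the exponent $|x-\mu_1|+|x-\mu_2|$ equals $\delta$ on the interval between $\mu_1$ and $\mu_2$ and grows linearly at rate $2$ outside of it, so the integral splits into
$$
H(p(x),q(x))=\frac{\delta}{2b_1}e^{-\delta/(2b_1)}+e^{-\delta/(2b_1)}
=e^{-\delta/(2b_1)}\left(1+\frac{\delta}{2b_1}\right).
$$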
With simple manipulations, we obtain
$$
-2\log(H(p(x),q(x)))=\frac{|\mu_2-\mu_1|}{b_1}-2\log\left(1+\frac{|\mu_2-\mu_1|}{2b_1}\right).
$$
The proof is completed.
\end{IEEEproof}
Now we return to prove Proposition \ref{AddErp}.
By Lemma \ref{KLHeLap}, we have that
\begin{equation}\label{KLLappo}
D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})
=\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{\tau}
-\left(1-\exp\left(-\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{\tau}\right)\right)
\leq\frac{1}{2\tau^2}(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2,
\end{equation}
where the inequality follows from the fact that $e^{-x}\leq 1-x+\frac{x^2}{2}$ for $x>0$.
Hence, we choose $\kappa=\frac{c^2}{2\tau^2}$.
Notice that
\[
\begin{split}
-2\log(H(p_{\mathcal{X}_{ijk}^*},p_{\mathcal{X}_{ijk}}))
&=\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{\tau}
-2\log\left(1+\frac{|\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}|}{2\tau}\right)\\
&\geq \frac{2(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2}{(2\tau+c)^2},
\end{split}
\]
where the last inequality follows from the Taylor's expansion,
see also the proof of Corollary 5 in \cite{soni2016noisy}.
Therefore, we have
$D(p_{\mathcal{X}^*}||p_{\mathcal{X}})\leq \frac{1}{2\tau^2}\|\mathcal{X}^*-\mathcal{X}\|_F^2$
and
\[
\begin{split}
-2\log(H(p_{\mathcal{X}^*},p_{\mathcal{X}}))\geq \frac{2}{(2\tau+c)^2}\|\mathcal{X}^*-\mathcal{X}\|_F^2.
\end{split}
\]
It follows from Theorem \ref{maintheo} that
\begin{equation}\label{ErLLP}
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{3(2\tau+c)^2}{2}\cdot\min_{\mathcal{X}\in\Gamma}\left\lbrace
\frac{\|\mathcal{X}^*-\mathcal{X}\|_F^2}{2\tau^2 n_1n_2n_3}+
\left( \lambda+\frac{4c^2(\beta+2) \log(n_1\vee n_2)}{3\tau^2}\right)
\frac{rn_1n_3+\|\mathcal{B}\|_0}{m} \right\rbrace \\
&\ +\frac{2c^2(2\tau+c)^2\log(m)}{m\tau^2}.
\end{split}
\end{equation}
For the discretized surrogate $\widetilde{\mathcal{X}}^*$ of $\mathcal{X}^*$, by Lemma \ref{xxappr}, we get
$$
\min_{\mathcal{X}\in\Gamma}\left\{
\frac{\|\mathcal{X}^*-\mathcal{X}\|_F^2}{2\tau^2 n_1n_2n_3}\right\}
\leq \frac{\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_F^2}{2\tau^2 n_1n_2n_3}
\leq \frac{(3rn_3b)^2}{2\tau^2\vartheta^2}
\leq \frac{c^2}{2\tau^2(n_1\vee n_2)^2n_3}\leq\frac{c^2}{2\tau^2m},
$$
where the third inequality follows from (\ref{vuppb}) and the last inequality
follows from the fact that $m\leq (n_1\vee n_2)^2n_3$.
Note that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$ by the construction of $\widetilde{\mathcal{X}}^*$ in Lemma \ref{xxappr}.
Combining (\ref{ErLLP}) with (\ref{lambda}), we obtain that
\[
\begin{split}
&\frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}
\left[\|\widetilde{\mathcal{X}}^{\lambda}-\mathcal{X}^*\|_F^2\right]}{n_1n_2n_3}\\
\leq & \ \frac{3c^2(2\tau+c)^2}{4m\tau^2}
+2\left(3+\frac{c^2}{\tau^2}\right)(2\tau+c)^2(\beta+2)\log\left(n_1\vee n_2\right)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right) \\
&+\frac{2c^2(2\tau+c)^2\log(m)}{m\tau^2}\\
\leq & \ \frac{3c^2(2\tau+c)^2\log(m)}{m\tau^2}+2\left(3+\frac{c^2}{\tau^2}\right)(2\tau+c)^2(\beta+2)
\left(\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m}\right)\log\left(n_1\vee n_2\right).
\end{split}
\]
This completes the proof.
\section{Proof of Proposition \ref{uppPoissobs}}\label{ProoD}
For the KL divergence of Poisson observations,
it follows from \cite[Lemma 8]{cao2016Poisson} that
\begin{equation}\label{DKLPoi}
D(p_{\mathcal{X}_{ijk}^*}||p_{\mathcal{X}_{ijk}})
\leq \frac{1}{\mathcal{X}_{ijk}}(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2\leq \frac{1}{\zeta}(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2.
\end{equation}
Then we can choose $\kappa=\frac{c^2}{\zeta}$.
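For completeness, the bound (\ref{DKLPoi}) is consistent with the closed form of the KL divergence between two Poisson distributions: for rates $\lambda_1,\lambda_2>0$,
$$
D(\textup{Poi}(\lambda_1)||\textup{Poi}(\lambda_2))=\lambda_1\log\frac{\lambda_1}{\lambda_2}-\lambda_1+\lambda_2
\leq\frac{(\lambda_1-\lambda_2)^2}{\lambda_2},
$$
where the inequality follows from $\log t\leq t-1$ with $t=\lambda_1/\lambda_2$.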
Note that
\[
\begin{split}
(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk})^2
=\left(\left(\sqrt{\mathcal{X}_{ijk}^*}-
\sqrt{\mathcal{X}_{ijk}}\right)\left(\sqrt{\mathcal{X}_{ijk}^*}+\sqrt{\mathcal{X}_{ijk}}\right)\right)^2
\leq 4c(\sqrt{\mathcal{X}_{ijk}^*}-
\sqrt{\mathcal{X}_{ijk}})^2.
\end{split}
\]
Therefore, by \cite[Appendix IV]{raginsky2010compressed}, we have
\begin{equation}\label{PoAil}
\begin{split}
-2\log(H(p_{\mathcal{X}_{ijk}^*},p_{\mathcal{X}_{ijk}}))
&=\left(\sqrt{\mathcal{X}_{ijk}^*}-\sqrt{\mathcal{X}_{ijk}}\right)^2\geq \frac{1}{4c}\left(\mathcal{X}_{ijk}^*-\mathcal{X}_{ijk}\right)^2.
\end{split}
\end{equation}
Therefore, we get
$D(p_{\mathcal{X}^*}||p_{\mathcal{X}})\leq \frac{\|\mathcal{X}^*-\mathcal{X}\|_F^2}{\zeta}$
and
$
-2\log(H(p_{\mathcal{X}^*},p_{\mathcal{X}}))\geq \frac{1}{4c}\|\mathcal{X}^*-\mathcal{X}\|_F^2.
$
For the discretized surrogate $\widetilde{\mathcal{X}}^*
=\widetilde{\mathcal{A}}^*\diamond\widetilde{\mathcal{B}}^*$ of $\mathcal{X}^*$, by Lemma \ref{xxappr}, we have
\begin{equation}\label{KLPois}
\min_{\mathcal{X}\in\Gamma}\left\{\frac{\|\mathcal{X}-\mathcal{X}^*\|_F^2}{\zeta n_1n_2n_3}\right\}
\leq \frac{\|\widetilde{\mathcal{X}}^*-\mathcal{X}^*\|_F^2}{\zeta n_1n_2n_3}
\leq \frac{9(rn_3b)^2}{\zeta\vartheta^2}\leq \frac{c^2}{\zeta(n_1\vee n_2)^2n_3}\leq \frac{c^2}{\zeta m},
\end{equation}
where the third inequality follows from (\ref{vuppb}).
Notice that $\|\widetilde{\mathcal{B}}^*\|_0=\|\mathcal{B}^*\|_0$.
Therefore, combining (\ref{PoAil}), (\ref{KLPois}),
and Theorem \ref{maintheo}, we conclude
\[
\begin{split}
& \ \frac{\mathbb{E}_{\Omega,\mathcal{Y}_{\Omega}}\|\widetilde{\mathcal{X}}^\lambda-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\\
\leq & \ \frac{12c^3}{\zeta m}+
12c\left( \lambda+\frac{8\kappa(\beta+2) \log\left(n_1\vee n_2\right)}{3}\right)
\frac{rn_1n_3+\|\mathcal{B}^*\|_0}{m} +\frac{32c^3\log(m)}{\zeta m}\\
=& \ \frac{4c^3(3+8\log(m))}{\zeta m}+
48c\left(1+\frac{4c^2}{3\zeta}\right)
\frac{(\beta+2) \left(rn_1n_3+\|\mathcal{B}^*\|_0\right)\log\left(n_1\vee n_2\right)}{m},
\end{split}
\]
where the equality follows from (\ref{lambda}).
The proof is completed.
\section{Proof of Theorem \ref{lowbounMai}}\label{ProoE}
Let
\begin{equation}\label{GenesubX}
\mathfrak{X}:=\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}:
\mathcal{A}\in\mathfrak{C},\mathcal{B}\in\mathfrak{B}\},
\end{equation}
where $\mathfrak{C}\subseteq\mathbb{R}^{n_1\times r\times n_3}$ is defined as
\begin{equation}\label{CXZG}
\mathfrak{C}:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}: \ \mathcal{A}_{ijk}\in\{0,1,a_0\}\right\} \ \ \text{with} \ \ a_0=\min\left\{1, \ \frac{\beta_a\nu}{b\sqrt{\Delta}}\sqrt{\frac{rn_1n_3}{m}}\right\},
\end{equation}
and $\mathfrak{B}$ is defined as
\begin{equation}\label{GenesubXB}
\mathfrak{B}:=\left\{\mathcal{B}\in\mathbb{R}^{r\times n_2\times n_3}: \
\mathcal{B}_{ijk}\in\{0,b,b_0\}, \|\mathcal{B}\|_0\leq s\right\}
\ \ \text{with} \ \ b_0=\min\left\{b, \ \frac{\beta_b\nu}{\sqrt{\Delta}}\sqrt{\frac{s}{m}}\right\}.
\end{equation}
Here $\beta_a,\beta_b>0$ are two constants which will be defined later.
From the construction, we get that $\mathfrak{X}\subseteq \mathfrak{U}(r,b,s)$.
Now we define a subset $\mathfrak{X}_{\mathcal{A}}$ such that $\mathfrak{X}_{\mathcal{A}}\subseteq\mathfrak{X}$.
Denote
\begin{equation}\label{SubXAA}
\mathfrak{X}_{\mathcal{A}}:=\left\{\mathcal{X}:=\mathcal{A}\diamond\widetilde{\mathcal{B}}:
\mathcal{A}\in\widetilde{\mathfrak{C}}, \
\widetilde{\mathcal{B}}=b(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathbf{0}_\mathcal{B}) \in \mathfrak{B}\right\},
\end{equation}
where $\widetilde{\mathcal{B}}$ is a block tensor with $\lfloor \frac{s\wedge (n_2n_3)}{rn_3}\rfloor$
blocks $\mathcal{I}_r$, $\mathcal{I}_r\in\mathbb{R}^{r\times r\times n_3}$ is the identity tensor,
$\mathbf{0}_{\mathcal{B}}\in\mathbb{R}^{r\times(n_2- \lfloor \frac{s\wedge (n_2n_3)}{rn_3}\rfloor r)\times n_3}$ is the zero tensor with all entries being zero, and
\begin{equation}\label{GeneXACsub}
\widetilde{\mathfrak{C}}:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}:\mathcal{A}_{ijk}\in\{0,a_0\},\
1\leq i\leq n_1, 1\leq j\leq r, 1\leq k\leq n_3\right\}.
\end{equation}
By the definition of identity tensor, we get
$\|\widetilde{\mathcal{B}}\|_0
= r \lfloor \frac{s\wedge (n_2n_3)}{rn_3}\rfloor
\leq r\frac{s\wedge (n_2n_3)}{rn_3}\leq s$.
It follows from the construction of $\widetilde{\mathcal{B}}=b(\mathcal{I}_r \
\cdots \ \mathcal{I}_r \ \mathbf{0}_{\mathcal{B}})$ that $\widetilde{\mathcal{B}}\in\mathfrak{B}$.
Therefore, $\mathfrak{X}_{\mathcal{A}}\subseteq\mathfrak{X}$.
By the definition of tensor-tensor product, we have that
\[
\begin{split}
\mathcal{A}\diamond \mathcal{I}_r=\textup{Fold}\left(\begin{pmatrix}
\mathbf{A}^{(1)} & \mathbf{A}^{(n_3)} & \cdots & \mathbf{A}^{(2)} \\
\mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(3)}\\
\vdots & \vdots & & \vdots \\
\mathbf{A}^{(n_3)} &\mathbf{A}^{(n_3-1)}&\cdots & \mathbf{A}^{(1)} \end{pmatrix}\cdot
\begin{pmatrix} \mathbf{I}_r \\ \mathbf{0}
\\ \vdots \\ \mathbf{0} \end{pmatrix}\right) =\textup{Fold}\begin{pmatrix} \mathbf{A}^{(1)} \\ \mathbf{A}^{(2)}
\\ \vdots \\ \mathbf{A}^{(n_3)} \end{pmatrix} =\mathcal{A},
\end{split}
\]
where $\mathbf{I}_r$ is the $r\times r$ identity matrix.
Hence, for any $\mathcal{X}\in\mathfrak{X}_\mathcal{A}$,
we have that
$$
\mathcal{X}= b\mathcal{A}\diamond (\mathcal{I}_r \
\cdots \ \mathcal{I}_r \ \mathbf{0}_{\mathcal{B}})=b(\mathcal{A}\ \cdots \ \mathcal{A} \ \mathbf{0}_{\mathcal{X}}),
$$
where $\mathbf{0}_{\mathcal{X}}\in\mathbb{R}^{n_1\times(n_2-\lfloor
\frac{s\wedge (n_2n_3)}{rn_3}\rfloor r)\times n_3}$ is a zero tensor.
Notice that each entry of $\mathcal{A}$ is $0$ or $a_0$.
Therefore, by the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009},
we have that there exists a subset $\mathfrak{X}_\mathcal{A}^0\subseteq \mathfrak{X}_\mathcal{A}$
with $|\mathfrak{X}_\mathcal{A}^0|\geq 2^{rn_1n_3/8}+1$,
such that for any $\mathcal{X}_i,\mathcal{X}_j\in\mathfrak{X}_\mathcal{A}^0$,
\begin{equation}\label{KLADis}
\begin{split}
\|\mathcal{X}_i-\mathcal{X}_j\|_F^2&\geq \frac{r n_1 n_3}{8}
\left\lfloor \frac{s\wedge (n_2n_3)}{rn_3}\right\rfloor a_0^2b^2\\
&\geq \frac{n_1n_2n_3}{16} \min\left\{b^2\Delta, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\},
\end{split}
\end{equation}
where the last inequality of (\ref{KLADis}) holds
by $\lfloor x\rfloor\geq \frac{x}{2}$ for any $x\geq 1$.
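Here and in the sequel, the Varshamov-Gilbert bound is used in the following standard form (see \cite[Lemma 2.9]{tsybakov2009}): for any $d\geq 8$, there exist binary vectors $\omega^{(0)},\ldots,\omega^{(M)}\in\{0,1\}^d$ with $\omega^{(0)}=(0,\ldots,0)$, $M\geq 2^{d/8}$, and pairwise Hamming distance at least $d/8$. In (\ref{KLADis}) it is applied with $d=rn_1n_3$ to the support patterns of $\mathcal{A}\in\widetilde{\mathfrak{C}}$, whose nonzero entries all equal $a_0$.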
For any $\mathcal{X}\in\mathfrak{X}_{\mathcal{A}}^0$, we have that
\begin{equation}\label{DKLPr}
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{\mathbf{0}_\Omega}(\mathcal{Y}_\Omega))
&=\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{\mathbf{0}_{ijk}}(\mathcal{Y}_{ijk}))\\
&\leq \frac{m}{n_1n_2n_3}\sum_{i,j,k}\frac{1}{2\nu^2}|\mathcal{X}_{ijk}|^2\\
&\leq \frac{m}{n_1n_2n_3}\frac{1}{2\nu^2}(rn_1n_3)\left\lfloor \frac{s\wedge (n_2n_3)}{rn_3}\right\rfloor a_0^2b^2\\
& \leq \frac{m}{2\nu^2} \min\left\{\Delta b^2, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\},
\end{split}
\end{equation}
where the first inequality follows from (\ref{DKLpq}),
the second inequality follows from $|\mathcal{X}_{ijk}|\leq b\|\mathcal{A}\|_\infty$,
and the last inequality follows from (\ref{CXZG}).
Therefore, combining (\ref{CXZG}) with (\ref{DKLPr}), we get that
$$
\sum_{\mathcal{X}\in\mathfrak{X}_\mathcal{A}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{\mathbf{0}_\Omega}(\mathcal{Y}_\Omega))
\leq \left(|\mathfrak{X}_\mathcal{A}^0|-1\right)\frac{\beta_a^2rn_1n_3}{2}\leq \left(|\mathfrak{X}_\mathcal{A}^0|-1\right)
\frac{4\beta_a^2\log(|\mathfrak{X}_{\mathcal{A}}^0|-1)}{\log(2)},
$$
where the last inequality holds by $rn_1n_3\leq 8\log_2(|\mathfrak{X}_{\mathcal{A}}^0|-1)$.
Therefore,
by choosing $0<\beta_a\leq \frac{\sqrt{\alpha_1\log(2)}}{2}$ with $0<\alpha_1<\frac{1}{8}$, we have
$$
\frac{1}{|\mathfrak{X}_\mathcal{A}^0|-1}\sum_{\mathcal{X}\in\mathfrak{X}_\mathcal{A}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{\mathbf{0}_\Omega}(\mathcal{Y}_\Omega))\leq \alpha_1\log(|\mathfrak{X}_{\mathcal{A}}^0|-1).
$$
Hence, by \cite[Theorem 2.5]{tsybakov2009}, we deduce
\begin{equation}\label{Aminmax}
\begin{split}
&\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{A}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{b^2\Delta, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{A}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{b^2\Delta, \beta_a^2\nu^2\frac{rn_1n_3}{m}\right\}\right)\geq \theta_1,
\end{split}
\end{equation}
where
$$
\theta_1=\frac{\sqrt{|\mathfrak{X}_{\mathcal{A}}^0|-1}}
{1+\sqrt{|\mathfrak{X}_{\mathcal{A}}^0|-1}}\left(1-2\alpha_1
-\sqrt{\frac{2\alpha_1}{\log(|\mathfrak{X}_{\mathcal{A}}^0|-1)}}\right)\in(0,1).
$$
Similar to the previous discussion, we construct $\mathfrak{X}_\mathcal{B}$ as follows:
\begin{equation}\label{SubXBB}
\mathfrak{X}_\mathcal{B}:=\left\{\mathcal{X}=\widetilde{\mathcal{A}}\diamond \mathcal{B}:\ \mathcal{B}\in\widetilde{\mathfrak{B}}\right\},
\end{equation}
where $\widetilde{\mathcal{A}}$ is a block tensor defined as
$$
\widetilde{\mathcal{A}}:=\begin{pmatrix} \mathcal{I}_{r'} & \mathbf{0}
\\\vdots & \vdots\\ \mathcal{I}_{r'} & \mathbf{0} \\ \mathbf{0}
& \mathbf{0} \end{pmatrix}\in\mathbb{R}^{n_1\times r\times n_3}
$$
and $\widetilde{\mathfrak{B}}$ is a set defined as
\begin{equation}\label{GeneXbSubBb}
\widetilde{\mathfrak{B}}:=\left\{\mathcal{B}\in\mathbb{R}^{r\times n_2\times n_3}: \
\mathcal{B}=\begin{pmatrix} \mathcal{B}_{r'} \\ \mathbf{0} \end{pmatrix},
\mathcal{B}_{r'}\in\mathbb{R}^{r'\times n_2\times n_3},
( \mathcal{B}_{r'})_{ijk}\in\{0,b_0\}, \|\mathcal{B}_{r'}\|_0\leq s \right\}.
\end{equation}
Here $r'=\lceil\frac{s}{n_2n_3}\rceil$,
$\mathcal{I}_{r'}\in\mathbb{R}^{r'\times r'\times n_3}$ is the identity tensor,
there are $\lfloor\frac{n_1}{r'}\rfloor$ block tensors $\mathcal{I}_{r'}$ in $\widetilde{\mathcal{A}}$,
and $\mathbf{0}$ is a zero tensor whose dimensions are clear from the context.
Thus $\mathfrak{X}_{\mathcal{B}}\subseteq \mathfrak{X}$.
Note that
\[
\begin{split}
\mathcal{I}_{r'} \diamond \mathcal{B}_{r'} = \textup{Fold}\left(\begin{pmatrix}
\mathbf{I}_{r'} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} &\mathbf{I}_{r'} & \cdots & \mathbf{0}\\
\vdots & \vdots & & \vdots \\
\mathbf{0} &\mathbf{0} &\cdots & \mathbf{I}_{r'} \end{pmatrix}\cdot
\begin{pmatrix} \mathbf{B}_{r'}^{(1)} \\ \mathbf{B}_{r'}^{(2)}
\\ \vdots \\ \mathbf{B}_{r'}^{(n_3)} \end{pmatrix}\right)=
\textup{Fold}
\begin{pmatrix} \mathbf{B}_{r'}^{(1)} \\ \mathbf{B}_{r'}^{(2)}
\\ \vdots \\ \mathbf{B}_{r'}^{(n_3)} \end{pmatrix}=\mathcal{B}_{r'}.
\end{split}
\]
For any $\mathcal{X}\in\mathfrak{X}_{\mathcal{B}}$, we have
$$
\mathcal{X}=\begin{pmatrix} \mathcal{I}_{r'} & \mathbf{0}
\\\vdots & \vdots\\ \mathcal{I}_{r'} & \mathbf{0} \\ \mathbf{0}
& \mathbf{0} \end{pmatrix} \diamond \begin{pmatrix} \mathcal{B}_{r'} \\ \mathbf{0} \end{pmatrix}=\begin{pmatrix} \mathcal{B}_{r'} \\ \vdots \\ \mathcal{B}_{r'} \\ \mathbf{0} \end{pmatrix},
$$
where $ ( \mathcal{B}_{r'})_{ijk}\in\{0,b_0\}$, $\|\mathcal{B}_{r'}\|_0\leq s$,
and there are $\lfloor\frac{n_1}{r'}\rfloor$ blocks $ \mathcal{B}_{r'}$ in $\mathcal{X}$.
By the Varshamov-Gilbert bound \cite[Lemma 2.9]{tsybakov2009},
there is a subset $\mathfrak{X}_{\mathcal{B}}^0\subseteq \mathfrak{X}_{\mathcal{B}}$ such that
\begin{equation}\label{XBKL}
|\mathfrak{X}_{\mathcal{B}}^0|\geq 2^{r'n_2n_3/8}+1 \geq 2^{s/8}+1
\end{equation}
and, for any $\mathcal{X}_i,\mathcal{X}_j\in\mathfrak{X}_{\mathcal{B}}^0$,
\[
\begin{split}
\|\mathcal{X}_i-\mathcal{X}_j\|_F^2&
\geq \frac{r'n_2n_3}{8}\left\lfloor\frac{n_1}{r'}\right\rfloor b_0^2
\geq \frac{s}{8}\left\lfloor\frac{n_1}{r'}\right\rfloor b_0^2 \\
& \geq \frac{sn_1}{16r'}b_0^2=\frac{n_1n_2n_3}{16}\frac{s}{n_2n_3\lceil\frac{s}{n_2n_3}\rceil}b_0^2\\
&\geq \frac{n_1n_2n_3}{16}\min\left\{\frac{1}{2},\frac{s}{n_2n_3}\right\}b_0^2
\geq \frac{n_1n_2n_3}{32}\Delta\min\left\{b^2,\frac{\beta_b^2\nu^2}{\Delta}\frac{s}{m}\right\}\\
&=\frac{n_1n_2n_3}{32}\min\left\{\Delta b^2,\frac{\beta_b^2\nu^2s}{m}\right\},
\end{split}
\]
where the third inequality follows from $\lfloor x\rfloor\geq \frac{x}{2}$ for any $x\geq 1$ and
the fourth inequality follows from the fact that $\frac{x}{\lceil x\rceil}\geq \min\{\frac{1}{2},x\}$ for any $x>0$.
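The latter elementary fact can be verified directly: if $0<x\leq 1$, then $\lceil x\rceil=1$ and $\frac{x}{\lceil x\rceil}=x$; if $x>1$, then $\lceil x\rceil<x+1\leq 2x$, and hence $\frac{x}{\lceil x\rceil}>\frac{1}{2}$.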
For any $\mathcal{X}\in\mathfrak{X}_{\mathcal{B}}^0$,
the KL divergence of observations with parameters $\mathcal{X}_\Omega$ from $\mathbf{0}_\Omega$ is given by
\[
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{\mathbf{0}_\Omega}(\mathcal{Y}_{\Omega}))
& =\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{\mathbf{0}_{ijk}}(\mathcal{Y}_{ijk}))
\leq \frac{m}{2 \nu^2n_1n_2n_3}\sum_{i,j,k}|\mathcal{X}_{ijk}|^2\\
&\leq \frac{m}{2 \nu^2n_1n_2n_3}n_1(s\wedge (n_2n_3))b_0^2=\frac{m}{2\nu^2}\min\left\{\Delta b^2, \frac{\beta_b^2\nu^2s}{m}\right\}\\
&\leq \frac{\beta_b^2s}{2}\leq 4\beta_b^2\frac{\log(|\mathfrak{X}_\mathcal{B}^0|-1)}{\log(2)},
\end{split}
\]
where the second inequality follows from the fact that the number of nonzero entries of $\mathcal{X}$ is not larger than
$s\lfloor\frac{n_1}{r'}\rfloor\leq n_1(s\wedge (n_2n_3))$, and the last inequality holds by
$s\leq 8\log_{2}(|\mathfrak{X}_\mathcal{B}^0|-1)$.
By choosing $0<\beta_b\leq \frac{\sqrt{\alpha_2\log(2)}}{2}$ with $0<\alpha_2<\frac{1}{8}$, we obtain that
$$
\frac{1}{|\mathfrak{X}_\mathcal{B}^0|-1}
\sum_{\mathcal{X}\in\mathfrak{X}_\mathcal{B}^0} D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{\mathbf{0}_\Omega}(\mathcal{Y}_{\Omega}))
\leq\alpha_2\log(|\mathfrak{X}_\mathcal{B}^0|-1).
$$
Therefore, by \cite[Theorem 2.5]{tsybakov2009}, we have that
\begin{equation}\label{Bminmax}
\begin{split}
&\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{B}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{\Delta b^2,\frac{\beta_b^2\nu^2s}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{X}_{\mathcal{B}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{\Delta b^2,\frac{\beta_b^2\nu^2s}{m}\right\}\right)\geq \theta_2,
\end{split}
\end{equation}
where
$$
\theta_2=\frac{\sqrt{|\mathfrak{X}_\mathcal{B}^0|-1}}
{1+\sqrt{|\mathfrak{X}_\mathcal{B}^0|-1}}\left(1-2\alpha_2
-\sqrt{\frac{2\alpha_2}{\log(|\mathfrak{X}_\mathcal{B}^0|-1)}}\right)\in(0,1).
$$
Let $\beta_c=\min\{\beta_a,\beta_b\}$ and $\theta_c=\min\{\theta_1,\theta_2\}.$
Combining (\ref{Aminmax}) and (\ref{Bminmax}), we deduce
$$
\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{64} \min\left\{\Delta b^2,\beta_c^2\nu^2\left(\frac{s+rn_1n_3}{m}\right)\right\}\right)\geq \theta_c.
$$
By Markov's inequality, we conclude
$$
\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\mathfrak{U}(r,b,s)}\frac{\mathbb{E}_{\Omega,\mathcal{Y}_\Omega}\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3}\geq
\frac{\theta_c}{64} \min\left\{\Delta b^2,\beta_c^2\nu^2\left(\frac{s+rn_1n_3}{m}\right)\right\}.
$$
This completes the proof.
\section{Proof of Proposition \ref{ProupbG}}\label{AppdeF}
By (\ref{KLGaussian}), we can choose $\nu=\sigma$.
The desired result then follows from Theorem \ref{lowbounMai}.
\section{Proof of Proposition \ref{lapUpb}}\label{AppdeG}
By (\ref{KLLappo}), we can choose $\nu=\tau$.
Then the conclusion can be obtained easily by Theorem \ref{lowbounMai}.
\section{Proof of Proposition \ref{Poisslow}}\label{ProoH}
Let \begin{equation}\label{PoscsubX1}
\mathfrak{X}_1:=\{\mathcal{X}=\mathcal{A}\diamond\mathcal{B}:
\mathcal{A}\in\mathfrak{C}_1,\mathcal{B}\in\mathfrak{B}_1\},
\end{equation}
where $\mathfrak{C}_1\subseteq\mathbb{R}^{n_1\times r\times n_3}$ is defined as
\begin{equation}\label{CXZ}
\mathfrak{C}_1:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}:\
\mathcal{A}_{ijk}\in\{0,1,\varsigma,a_0\}\right\} \ \ \text{with} \ \ a_0=\min\left\{1-\varsigma, \ \frac{\beta_a\sqrt{\zeta}}{b}\sqrt{\frac{rn_1n_3}{m}}\right\},
\end{equation}
and $\mathfrak{B}_1$ is defined as
\begin{equation}\label{PoscsubX1B1}
\mathfrak{B}_1:=\left\{\mathcal{B}\in\mathbb{R}^{r\times n_2\times n_3}:
\mathcal{B}_{ijk}\in\{0,\zeta, b,b_0\}, \|\mathcal{B}\|_0\leq s\right\}
\ \ \text{with} \ \ b_0=\min\left\{b, \ \beta_b\sqrt{\frac{\zeta}{\Delta_1}}\sqrt{\frac{s-n_2n_3}{m}}\right\}.
\end{equation}
Let
\begin{equation}\label{PoissXA1A}
\widetilde{\mathfrak{X}}_\mathcal{A}:=\left\{\mathcal{X}
:=(\mathcal{A}+\mathcal{A}_\varsigma)\diamond\mathcal{B}: \ \mathcal{A}\in\widetilde{\mathfrak{C}}_1, \
\mathcal{B}=b(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I} )\in\mathfrak{B}_1\right\},
\end{equation}
where $\mathcal{I}_r\in\mathbb{R}^{r\times r\times n_3}$ is the identity tensor,
there are $\lfloor\frac{n_2}{r}\rfloor$ blocks $\mathcal{I}_r$
in $\mathcal{B}$, $\mathcal{B}_\mathcal{I}=\begin{pmatrix} \mathcal{I}_{\mathcal{B}} \\ \mathbf{0} \end{pmatrix}$,
$\mathcal{I}_{\mathcal{B}}\in\mathbb{R}^{(n_2-r\lfloor\frac{n_2}{r}\rfloor)\times (n_2-r\lfloor\frac{n_2}{r}\rfloor)\times n_3}$ is the identity tensor,
$\mathcal{A}_\varsigma\in \mathbb{R}^{n_1\times r\times n_3}$
with $(\mathcal{A}_\varsigma)_{ijk}=\varsigma$, and
\begin{equation}\label{PoisXAsubC1}
\widetilde{\mathfrak{C}}_1:=\left\{\mathcal{A}\in\mathbb{R}^{n_1\times r\times n_3}: \
\mathcal{A}_{ijk}\in\{0,a_0\}\right\}\subseteq \mathfrak{C}_1.
\end{equation}
From the construction of $\mathcal{B}$, we know that $\|\mathcal{B}\|_0=n_2< s$.
For any $\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{A}$, we obtain that
\begin{equation}\label{XDCL}
\mathcal{X}=(\mathcal{A}+\mathcal{A}_\varsigma)\diamond\mathcal{B}=\zeta\mathbb{I}\diamond(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I} )+\mathcal{A}\diamond\mathcal{B},
\end{equation}
where $\mathbb{I}\in\mathbb{R}^{n_1\times r\times n_3}$
denotes a tensor with all entries being $1$.
By the definition of tensor-tensor product, we have that
\[
\begin{split}
\mathbb{I}\diamond(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I})& =
\textup{Fold}\left(\begin{pmatrix}
\mathbf{E}_{n_1r} & \mathbf{E}_{n_1r} & \cdots & \mathbf{E}_{n_1r} \\
\mathbf{E}_{n_1r} &\mathbf{E}_{n_1r} & \cdots & \mathbf{E}_{n_1r} \\
\vdots & \vdots & & \vdots \\
\mathbf{E}_{n_1r} &\mathbf{E}_{n_1r} &\cdots & \mathbf{E}_{n_1r} \end{pmatrix}\cdot
\begin{pmatrix}
\mathbf{I}_{r} & \mathbf{I}_r & \cdots & \mathbf{I}_r & \mathbf{I}_{B0} \\
\mathbf{0} &\mathbf{0} & \cdots &\mathbf{0} & \mathbf{0}\\
\vdots & \vdots & & \vdots & \vdots \\
\mathbf{0} &\mathbf{0} &\cdots & \mathbf{0} &\mathbf{0} \end{pmatrix}\right)\\
& =
\textup{Fold}
\begin{pmatrix} \mathbf{E}_{n_1n_2} \\ \mathbf{E}_{n_1n_2}
\\ \vdots \\ \mathbf{E}_{n_1n_2} \end{pmatrix}=\mathbb{I}_{n_1n_2},
\end{split}
\]
where $\mathbf{E}_{n_1r}\in \mathbb{R}^{n_1\times r}$ is an $n_1\times r$ matrix with all entries being $1$, $\mathbf{I}_{r}$
is the $r\times r$ identity matrix,
$\mathbf{I}_{B0}=\begin{pmatrix} \mathbf{I}_{B} \\ \mathbf{0} \end{pmatrix}$
with $\mathbf{I}_{B}\in\mathbb{R}^{(n_2-r\lfloor\frac{n_2}{r}\rfloor)\times (n_2-r\lfloor\frac{n_2}{r}\rfloor)}$
being the identity matrix,
and $\mathbb{I}_{n_1n_2}\in\mathbb{R}^{n_1\times n_2 \times n_3}$ is a tensor with all entries being $1$.
Therefore, we have $\widetilde{\mathfrak{X}}_\mathcal{A}\subseteq\widetilde{\mathfrak{U}}(r,b,s,\zeta)$.
By applying the Varshamov-Gilbert
bound \cite[Lemma 2.9]{tsybakov2009} to the last term of (\ref{XDCL}), there is a subset
$\widetilde{\mathfrak{X}}_\mathcal{A}^0\subseteq\widetilde{\mathfrak{X}}_\mathcal{A}$
such that for any $\mathcal{X}_1,\mathcal{X}_2\in\widetilde{\mathfrak{X}}_\mathcal{A}^0$,
$$
\|\mathcal{X}_1-\mathcal{X}_2\|_F^2\geq \frac{rn_1n_3}{8}\left\lfloor\frac{n_2}{r}\right\rfloor a_0^2b^2\geq
\frac{n_1n_2n_3}{16}\min\left\{(1-\varsigma)^2b^2, \ \frac{\beta_a^2\zeta r n_1n_3}{m}\right\}
$$
and $|\widetilde{\mathfrak{X}}_\mathcal{A}^0|\geq 2^{\frac{rn_1n_3}{8}}+1$.
Let $\mathcal{X}_0=\zeta\mathbb{I}\diamond(\mathcal{I}_r \ \cdots \ \mathcal{I}_r \ \mathcal{B}_\mathcal{I} )$.
For any $\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{A}^0$, the KL divergence of $p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})$ from $p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega})$ is
\[
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega}))
&=\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{(\mathcal{X}_0)_{ijk}}(\mathcal{Y}_{ijk})) \\
&\leq \frac{m}{n_1n_2n_3}\sum_{i,j,k}\frac{(\mathcal{X}_{ijk}-\zeta)^2}{\zeta}\\
&\leq \frac{m(a_0b)^2}{\zeta}\leq \beta_a^2rn_1n_3,
\end{split}
\]
where the first inequality follows from (\ref{DKLPoi}),
the second inequality follows from (\ref{XDCL}),
and the last inequality follows from (\ref{CXZ}).
Note that $rn_1n_3\leq \frac{8\log(|\widetilde{\mathfrak{X}}_\mathcal{A}^0|-1)}{\log(2)}$.
Then, by choosing $0<\beta_a
\leq \frac{\sqrt{\widetilde{\alpha}_1\log(2)}}{2\sqrt{2}}$ and $0<\widetilde{\alpha}_1<\frac{1}{8}$,
we get that
$$
\frac{1}{|\widetilde{\mathfrak{X}}_\mathcal{A}^0|-1}\sum_{\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{A}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_\Omega)||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_\Omega))\leq \widetilde{\alpha}_1\log(|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1).
$$
Therefore, by \cite[Theorem 2.5]{tsybakov2009}, we have that
\begin{equation}\label{PXKL}
\begin{split}
&\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{A}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq
\frac{1}{32}\min\left\{(1-\varsigma)^2b^2, \ \frac{\beta_a^2\zeta r n_1n_3}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{A}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32} \min\left\{(1-\varsigma)^2b^2, \ \frac{\beta_a^2\zeta r n_1n_3}{m}\right\}\right)\geq \widetilde{\theta}_1,
\end{split}
\end{equation}
where
$$
\widetilde{\theta}_1=\frac{\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1}}
{1+\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1}}\left(1-2\widetilde{\alpha}_1
-\sqrt{\frac{2\widetilde{\alpha}_1}{\log(|\widetilde{\mathfrak{X}}_{\mathcal{A}}^0|-1)}}\right)\in(0,1).
$$
Similar to the previous discussion, we define a subset $\widetilde{\mathfrak{X}}_\mathcal{B}\subseteq\mathbb{R}^{n_1\times n_2\times n_3}$ as
\begin{equation}\label{PoissXB1B}
\widetilde{\mathfrak{X}}_\mathcal{B}:=\left\{\mathcal{X}
=(\mathcal{A}_0+\mathcal{A}_1)\diamond\mathcal{B}: \mathcal{B}\in\widetilde{\mathfrak{B}}_1\right\},
\end{equation}
where
$$
\mathcal{A}_0:=(\mathcal{M}_1 \ \mathbf{0})\in\mathbb{R}^{n_1\times r\times n_3} \ \text{with} \ \mathcal{M}_1\in\mathbb{R}^{n_1\times 1\times n_3},
$$
and
$$
\mathcal{A}_1:=
\begin{pmatrix}
\mathbf{0}_{r'1} & \mathcal{I}_{r'} & \mathbf{0} \\
\vdots & \vdots & \vdots \\
\mathbf{0}_{r'1} & \mathcal{I}_{r'} & \mathbf{0} \\
\mathbf{0}_{r'1} & \mathbf{0} & \mathbf{0} \end{pmatrix}\in\mathbb{R}^{n_1\times r\times n_3}.
$$
Here $\mathbf{0}_{r'1}\in\mathbb{R}^{r'\times 1\times n_3}$ is a zero tensor,
$\mathcal{I}_{r'}\in\mathbb{R}^{r'\times r'\times n_3}$ is the identity tensor,
and $\mathcal{M}_1\in\mathbb{R}^{n_1\times 1 \times n_3}$ denotes a tensor whose first frontal slice is all ones and whose other frontal slices are zero.
The set
$
\widetilde{\mathfrak{B}}_1\subseteq\mathfrak{B}_1
$
is defined as
\begin{equation}\label{PoisXBsubB1}
\widetilde{\mathfrak{B}}_1:=\left\{\mathcal{B}=
\begin{pmatrix} \zeta\mathbb{I}_1 \\ \mathcal{B}_{r'}\\ \mathbf{0} \end{pmatrix},
\mathbb{I}_1\in\mathbb{R}^{1\times n_2\times n_3}, \mathcal{B}_{r'}\in \mathbb{R}^{r'\times n_2\times n_3},
(\mathcal{B}_{r'})_{ijk}\in\{0,b_0\},\|\mathcal{B}_{r'}\|_0\leq s-n_2n_3 \right\},
\end{equation}
where $r'=\lceil\frac{s}{n_2n_3}\rceil-1$ and $\mathbb{I}_1$ represents a tensor with all entries being ones.
By the definition of tensor-tensor product and the structure of $\mathcal{A}_1$,
we get that
$\mathcal{A}_1\diamond\mathcal{B}=\mathcal{A}_1\diamond\mathcal{B}'$, where $\mathcal{B}'$ is obtained from $\mathcal{B}$ by setting its first horizontal slice to zero and is given explicitly below.
For any $\mathcal{X}\in\widetilde{\mathfrak{X}}_\mathcal{B}$, we have
\begin{equation}\label{EXXp}
\begin{split}
\mathcal{X}& =\mathcal{A}_0\diamond\mathcal{B}+\mathcal{A}_1\diamond\mathcal{B}\\
& = \textup{Fold}\left(\begin{pmatrix}
\mathbf{N}_{n_1r} & \mathbf{0} & \cdots & \mathbf{0} \\
\mathbf{0} & \mathbf{N}_{n_1r} & \cdots & \mathbf{0} \\
\vdots & \vdots & & \vdots \\
\mathbf{0} &\mathbf{0} &\cdots & \mathbf{N}_{n_1r} \end{pmatrix}\cdot
\begin{pmatrix}
\mathbf{B}^{(1)} \\
\mathbf{B}^{(2)} \\
\vdots \\
\mathbf{B}^{(n_3)}
\end{pmatrix} \right) + \mathcal{A}_1\diamond\mathcal{B}'\\
& =
\textup{Fold}
\begin{pmatrix} \zeta \mathbf{E}_{n_1n_2} \\ \zeta \mathbf{E}_{n_1n_2}
\\ \vdots \\ \zeta \mathbf{E}_{n_1n_2} \end{pmatrix} +\mathcal{A}_1\diamond\mathcal{B}' \\
& =\zeta \mathbb{I}_n+
\mathcal{A}_1\diamond\mathcal{B}',
\end{split}
\end{equation}
where $\mathbf{N}_{n_1r}= (\mathbf{E}_{n_11} \ \mathbf{0}_{n_1(r-1)})\in\mathbb{R}^{n_1\times r}$ with $\mathbf{E}_{n_11} \in\mathbb{R}^{n_1\times 1}$ being a column vector (all $1$),
$$
\mathbf{B}^{(i)}=
\begin{pmatrix} \zeta \mathbf{E}_{1n_2} \\ \mathbf{B}_{r'}^{(i)}\\ \mathbf{0} \end{pmatrix}
$$ with $\mathbf{E}_{1n_2} \in\mathbb{R}^{1\times n_2}$ being a row vector of all ones and $\mathbf{B}_{r'}^{(i)}$ being the $i$th frontal slice of $\mathcal{B}_{r'}$,
$\mathbb{I}_n\in\mathbb{R}^{n_1\times n_2\times n_3}$ is a tensor with all entries being $1$, and
$$
\mathcal{B}'=
\begin{pmatrix} \mathbf{0}_1 \\ \mathcal{B}_{r'}\\ \mathbf{0} \end{pmatrix}
$$
with $\mathbf{0}_1\in\mathbb{R}^{1\times n_2\times n_3}$ being a zero tensor.
Therefore, $\mathcal{X}\in \widetilde{\mathfrak{U}}(r,b,s,\zeta)$, which implies that
$\widetilde{\mathfrak{X}}_\mathcal{B}\subseteq\widetilde{\mathfrak{U}}(r,b,s,\zeta)$.
Therefore, by applying the Varshamov-Gilbert
bound \cite[Lemma 2.9]{tsybakov2009} to the last term of (\ref{EXXp}),
there exists a subset
$\widetilde{\mathfrak{X}}_{\mathcal{B}}^0\subseteq\widetilde{\mathfrak{X}}_{\mathcal{B}}$
such that $|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|\geq2^{\frac{s-n_2n_3}{8}}+1$
and, for any $\mathcal{X}_1,\mathcal{X}_2\in\widetilde{\mathfrak{X}}_{\mathcal{B}}^0$,
\[
\begin{split}
\|\mathcal{X}_1-\mathcal{X}_2\|_F^2&\geq \left(\frac{s-n_2n_3}{8}\right)\left\lfloor\frac{n_1}{r'}\right\rfloor b_0^2\\
&\geq \left(\frac{s-n_2n_3}{16}\right)\cdot \frac{n_1}{r'}\cdot
\min\left\{b^2, \ \beta_b^2\frac{\zeta}{\Delta_1}\frac{s-n_2n_3}{m}\right\}\\
& \geq \frac{n_1n_2n_3}{32}\Delta_1
\min\left\{b^2, \ \beta_b^2\frac{\zeta}{\Delta_1}\frac{s-n_2n_3}{m}\right\}\\
& = \frac{n_1n_2n_3}{32}
\min\left\{\Delta_1 b^2, \ \beta_b^2\zeta\frac{s-n_2n_3}{m}\right\},
\end{split}
\]
where the third inequality holds by the fact that $\frac{x}{\lceil x\rceil}\geq \min\{\frac{1}{2},x\}$ for any $x>0$.
The KL divergence of $p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})$ from
$p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega})$ is
\[
\begin{split}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega}))
&=\frac{m}{n_1n_2n_3}\sum_{i,j,k}D(p_{\mathcal{X}_{ijk}}(\mathcal{Y}_{ijk})||p_{(\mathcal{X}_0)_{ijk}}(\mathcal{Y}_{ijk})) \\
&\leq \frac{m}{n_1n_2n_3}\sum_{i,j,k}\frac{(\mathcal{X}_{ijk}-\zeta)^2}{\zeta}\\
&\leq m\frac{b_0^2}{\zeta}\Delta_1\leq \beta_b^2(s-n_2n_3)\leq \frac{8\beta_b^2\log(|\widetilde{\mathfrak{X}}_\mathcal{B}^0|-1)}{\log(2)},
\end{split}
\]
where the second inequality follows from
$\|\mathcal{A}_1\diamond\mathcal{B}'\|_0\leq \lfloor\frac{n_1}{r'}\rfloor(s-n_2n_3)\leq n_1n_2n_3\Delta_1$
and the last inequality follows from $|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|\geq2^{\frac{s-n_2n_3}{8}}+1$.
Therefore,
by choosing $0<\beta_b\leq \frac{\sqrt{\widetilde{\alpha}_2\log(2)}}{2\sqrt{2}}$ with $0<\widetilde{\alpha}_2<1/8$, we have
$$
\frac{1}{|\widetilde{\mathfrak{X}}_\mathcal{B}^0|-1}\sum_{\mathcal{X}\in \widetilde{\mathfrak{X}}_\mathcal{B}^0}
D(p_{\mathcal{X}_\Omega}(\mathcal{Y}_{\Omega})||p_{(\mathcal{X}_0)_\Omega}(\mathcal{Y}_{\Omega}))
\leq \widetilde{\alpha}_2\log(|\widetilde{\mathfrak{X}}_\mathcal{B}^0|-1).
$$
By \cite[Theorem 2.5]{tsybakov2009}, we obtain that
\begin{equation}\label{PXKL2}
\begin{split}
& \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{B}}}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq
\frac{1}{32}\min\left\{\Delta_1 b^2, \ \beta_b^2\zeta\frac{s-n_2n_3}{m}\right\}\right)\\
\geq & \inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{X}}_{\mathcal{B}}^0}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{32}\min\left\{\Delta_1 b^2, \ \beta_b^2\zeta\frac{s-n_2n_3}{m}\right\}\right)\geq \widetilde{\theta}_2,
\end{split}
\end{equation}
where
$$
\widetilde{\theta}_2=\frac{\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|-1}}
{1+\sqrt{|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|-1}}\left(1-2\widetilde{\alpha}_2
-\sqrt{\frac{2\widetilde{\alpha}_2}{\log(|\widetilde{\mathfrak{X}}_{\mathcal{B}}^0|-1)}}\right)\in(0,1).
$$
By combining (\ref{PXKL}) and (\ref{PXKL2}), we deduce
$$
\inf_{\widetilde{X}}\sup_{\mathcal{X}^*
\in\widetilde{\mathfrak{U}}(r,b,s,\zeta)}\mathbb{P}\left(\frac{\|\widetilde{\mathcal{X}}-\mathcal{X}^*\|_F^2}{n_1n_2n_3} \geq\frac{1}{64} \min\left\{\widetilde{\Delta} b^2,\widetilde{\beta}_c^2\zeta\left(\frac{s-n_2n_3+rn_1n_3}{m}\right)\right\}\right)\geq \widetilde{\theta}_c,
$$
where $\widetilde{\Delta}:=\min\{(1-\varsigma)^2, \Delta_1\}$,
$\widetilde{\beta}_c:=\min\{\beta_a,\beta_b\}$,
and $\widetilde{\theta}_c=\min\{ \widetilde{\theta}_1, \widetilde{\theta}_2\}$.
By Markov's inequality, the desired conclusion is obtained easily.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
The part of human intelligence essential for understanding a story and predicting unobserved facts largely depends on the ability to memorize the past and to reason over relational information based on pieces of memory.
In this context, research on artificial intelligence has focused on designing a human-like associative memory network that can easily store and recall both events and relational information from only a part of the information.
In neural network research, many approaches generally model sequential data with memory systems, such as Long Short Term Memory~(LSTM)~\cite{hochreiter1997long} or memory augmented neural networks~(MANN).
In particular, a recent approach in MANN constructs an associative memory with a content-based addressing mechanism and stores both input data and their relational information in a single external memory.
MANN has already proven to be an essential component on many tasks which need long-term context understanding~\cite{weston2014memory, sukhbaatar2015end, graves2014neural, graves2016hybrid, gulcehre2018dynamic}.
Also, compared to recurrent neural networks, it can store more information from sequential input data and correctly recall desired information from memory with a given cue.
However, even with its promising performance on a wide range of tasks, MANN still has difficulties in solving complex relational reasoning problems~\cite{weston2015towards}.
Since the content-based addressing model implicitly encodes data items and their relational information into one vector representation, it often results in a lossy representation of relational information that is not rich enough for solving relational reasoning tasks.
To address such weakness, some researchers find relational information by leveraging interaction between memory entities with multi-head attention~\cite{palm2018recurrent, santoro2018relational}. Others focus on long sequence memorization performance of memory~\cite{trinh2018learning, le2019learning, munkhdalai2019metalearned}.
Another line of work applies self-attention to memory contents and explicitly encodes relational information in a separate external memory~\cite{le2020self}.
However, all those models need to explicitly find relational information among memory entities with a computationally expensive attention mechanism, and they have to repeatedly recompute it on every memory update.
\begin{figure}[t]
\centering\includegraphics[width=\linewidth]{proposed_method.pdf}
\caption{(a) The DAM with $K$ sub-memory blocks (DAM-K) and attentive interpolation,~$g_t^{at}$. (b) Memory Refreshing Loss.}~\label{fig.proposed.method}
\end{figure}
In this research, we approach the same problem in a much simpler and more efficient way, inspired by how our brain represents and retrieves information. We hypothesize that if we can encode input data into richer representations, MANN can provide enhanced relation modeling performance without exhaustive self-attention based relation searching. Based on this assumption, we identify the weaknesses of the MANN model and remedy them with human brain-like mechanisms. One of the main weaknesses of conventional MANN is its lossy representation of relational information~\cite{le2020self}. In content-based addressing memory, this can be caused both by a single memory-based representation and by limited long-range temporal data association performance.
Although MANN learns to correlate sequential events across time, its representation is not rich enough to reflect complex relational information existing in input data.
Therefore, for enhanced relation learning, we focus on the richness of representations that implicitly embed the associations existing in the input data.
For this purpose, we introduce a novel Distributed Associative Memory (DAM) architecture which is inspired by how the information is represented in our brains~\cite{lashley1950search, doi:10.1076/jhin.10.3.308.9086}.
In DAM, we replace the single external memory with multiple smaller sub-memory blocks and update those memory blocks simultaneously and independently.
The basic operations for each associative memory block are based on the content-based addressing mechanism of MANN, but the parallel memory architecture allows each sub-memory system to evolve over time independently. Through this procedure, the input information is encoded and stored in distributed representations. Distributed representation is a concept that stems from how the brain stores information in its neural networks and is well known for its efficiency and powerful representational diversity.
Furthermore, similar to the underlying insight of multi-head attention~\cite{vaswani2017attention}, our DAM model can jointly attend to information from different representation subspaces at different sub-memory blocks and is able to provide a richer representation of the same common input data.
To retrieve rich information for relational reasoning, we apply a soft-attention based interpolation to the diverse representations distributed across multiple memories.
Moreover, to enrich long-term relational information in the memory, we introduce a novel Memory Refreshing Loss (MRL) which fortifies the relational modeling ability of the memory and generally enhances the long-term memorization performance of MANN. The MRL forces the memory network to learn to reproduce a number of stochastically sampled input data items based only on the stored memory contents.
Just as other associated pieces of memory come to mind whenever a person recalls a certain event, the data reproducing task enables MANN to have better association and memorization ability for input data. A similar concept in brain mechanisms is maintenance rehearsal, which is repeatedly verbalizing or thinking about a piece of information.
MRL is designed to reproduce a predefined percentage of input representations in the memory matrix on average and, while optimizing two different tasks at the same time, keep the balance between MRL and target objective loss by dynamically re-weighting each task~\cite{liu2006influence, cui2019class}.
By combining the above two approaches, DAM and MRL, our architecture provides rich representation which can be successfully used for tasks requiring both memorization and relational reasoning.
We apply our architecture to the Differentiable Neural Computer (DNC)~\cite{graves2016hybrid}, one of the representative content-based addressing memory models, to construct a novel distributed associative memory architecture with MRL. DNC has shown promising performance on diverse tasks but is also known to be poor at complex relational reasoning tasks. In experiments, we show that our architecture greatly enhances both the memorization and relational reasoning performance of DNC, and even achieves state-of-the-art results.
\section{Related Works}
\subsection{Biological Brain Mechanism}
Our memory architecture is mainly inspired by the information processing mechanisms of the human brain. In our brain, forging new memories for facts and events, or retrieving information to serve the current task, all depend on how information is represented and processed throughout the brain. In many studies~\cite{lashley1950search, doi:10.1076/jhin.10.3.308.9086, crick1983function, thompson1991memory}, there is already a broad consensus that distributed neural representations play a vital role in constructing and retrieving memories.
For example, the first indications of the distributed character of memory in the cerebral cortex were provided in Lashley’s~\cite{lashley1950search} neuropsychological experiments~\cite{fuster1998distributed}.
Also, this distributed representation concept is frequently applied to content-addressable memory, automatic generalization, and adaptive rule selection~\cite{hinton1984distributed}. Our model adopts this distributed representation concept with a multiple content-addressable memory network architecture to enrich the input data representation.
Moreover, in psychology research, it is well known that human memory can be enhanced by a repetitive rehearsal process of past information. Whether for short-term or long-term memory, rehearsal can improve recall performance and working memory for the current task~\cite{rundus1980maintenance, greene1987effects}. Based on those studies, we design a new auxiliary loss function that resembles the rehearsal process in our brain. Our loss function repetitively reconstructs some amount of previous input data based on the contents of memory while training for a target task. Since the two different objectives are trained simultaneously in a multi-task learning setting, this procedure is comparable to the psychological case in which a person intentionally rehearses some information while recognizing its use for a different task.
In our research, it is demonstrated that these biologically inspired contributions can effectively improve relational data modeling performance of memory augmented neural networks as in the human brain.
\subsection{Neural Networks with Memory Augmentation}
There are several MANN approaches that have focused on relation reasoning or long-term memorization enhancement of network. They can be categorized as follows.
\paragraph*{Multiple Memory based MANN}
In memory slot-based MANNs, the content-based addressing is implemented with a dynamic long-term memory which is composed of multiple memory slots~\cite{santoro2018relational, danihelka2016associative, henaff2016tracking, goyal2019recurrent}.
For multiple memory matrix-based models, researchers improve a single memory architecture by adding task-relevant information, asynchronous data input, and relational information to an additional memory matrix (e.g. dual memory)~\cite{le2020self, munkhdalai2017neural, le2018dual}.
Our DAM adopts multiple memory matrices of the same type for distributed representation. Compared to other approaches, the distributed memory architecture is much simpler and shows better performance on the same problems.
\paragraph*{Memory Networks for Relational Reasoning}
For relational reasoning, some MANN models explicitly find relational information by comparing their memory entities. Relational Memory Core~(RMC)~\cite{santoro2018relational} leverages interaction mechanisms among memory entities to update memory with relational information.
Self-attentive Associative Memory~(STM)~\cite{le2020self} adopts self-attention for memory contents and store relational information to separate relation memory.
Compared to those methods, DAM provides relational information through diverse representations of input data and long-term association performance of memory.
\paragraph*{Losses for Long-term Dependency}
For long-term memorization of the input pattern, \cite{munkhdalai2019metalearned} used a meta objective loss which forces a model to memorize input patterns in the meta-learning framework.
Also, for longer sequence modeling, \cite{trinh2018learning} adopted unsupervised auxiliary loss which
reconstructs or predicts a sub-sequence of past input data.
Compared to \cite{trinh2018learning}, MRL does not rely on a random anchor point or sub-sequence reconstruction; rather, it enforces memorization of all past input data that are associated with a target task.
MRL focuses on enhancing data association while reproducing input representations, while also maintaining a balance with the target objective loss by applying a
dynamic weighting method for dual-task optimization.
\subsection{Differentiable Neural Computer}\label{sec:dnc}
We first briefly summarize DNC architecture which is a baseline model for our approaches.
DNC~\cite{graves2016hybrid} is a memory augmented neural network inspired by conventional computer architecture and mainly consists of two parts, a controller and an external memory.
When input data are provided to the controller, usually an LSTM, it generates a collection of memory operators called an interface vector,~$\bm\xi_t$, for accessing an external memory. It consists of several \textit{keys} and \textit{values} for read/write operations and is constructed from the controller's internal state~$\bm{h}_{t}$ as $\bm\xi_t = W_{\xi} \bm{h}_{t}$ at each time step $t$.
Based on these memory operators, every read/write operation on DNC is performed.
During the writing process, DNC finds a writing address,~$\bm{w}_t^w \in [0,1]^A$, where $A$ is the memory address size, using writing memory operators (e.g., the write-in \textit{key}) and built-in addressing functions.
Then it updates write-in \textit{values},~$\bm{v}_t \in \mathbb{R}^L$, in the external memory,~$\bm{M}_{t-1} \in \mathbb{R}^{A \times L}$, along with erasing value,~$\bm{e}_t \in [0,1]^L$, where $L$ is a memory length size as follows:
\begin{equation}
\label{eq.original.write}
\bm{M}_t=\bm{M}_{t-1}\circ(\bm{E}-\bm{w}_t^{w}\bm{e}_t^\top)+\bm{w}_t^{w}\bm{v}_t^\top
\end{equation}
where $\circ$ denotes element-wise multiplication and $\bm{E}$ is $\bm{1}^{A \times L}$.
In the reading process, DNC searches for a reading address,~$\bm{w}_t^{r,i} \in [0,1]^A$, for each of $R$ read heads, using read memory operators (e.g., the read-out \textit{key}).
Then, it reads out information from the external memory:
\begin{equation}
\label{eq.original.read}
\bm{r}_t^i = \bm{M}_t{\bm{w}_t^{r,i}}^\top
\end{equation}
Finally, the output is computed as $\bm{y}_t=W_y[\bm{h}_t;\bm{r}_t] \in \mathbb{R}^{d_o}$, where $\bm{r}_{t} = \{\bm{r}^i_{t} \in \mathbb{R}^L;1 \leq i \leq R\}$.
Through these operations, DNC can learn how to store input data and utilize stored information to solve a given task.
These mechanisms make DNC suitable as a general-purpose memory augmented neural network.
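For concreteness, the two operations in Eqs.~(\ref{eq.original.write}) and (\ref{eq.original.read}) can be sketched in a few lines of PyTorch-style Python. This is an illustrative sketch with our own function names, assuming the addressing weights have already been produced by the content-based addressing mechanism (omitted here); it is not the released DNC implementation.
\begin{verbatim}
import torch

def dnc_write(M, w_w, e, v):
    # Eq. (1): erase-then-add update of the (A x L) memory M, with
    # write address w_w (A,), erase vector e (L,) in [0,1],
    # and write-in values v (L,).
    E = torch.ones_like(M)
    return M * (E - torch.outer(w_w, e)) + torch.outer(w_w, v)

def dnc_read(M, w_r):
    # Eq. (2): read-out r = M^T w_r for one read head.
    return M.t() @ w_r
\end{verbatim}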
\section{Proposed Method} \label{sec:proposed}
In this section, we introduce two methods that improve both memorization and relational reasoning ability of conventional DNC, a distributed associative memory architecture, and an MRL function.
For a clear explanation, we illustrate DAM mechanism with a single read head case. For $R$ read head cases of DAM, the details are in the Appendix.
\subsection{Distributed Associative Memory Architecture}
\label{mdma}
The distributed associative memory architecture consists of a controller network and $K$ associative memory blocks where each memory block is a content addressable memory similar to the original DNC~\cite{graves2016hybrid}.
Figure~\ref{fig.proposed.method}(a) shows the overall read/write process of the proposed DAM.
For the writing operation, the controller of DAM produces multiple writing operator vectors for multiple memory blocks.
Each writing operator vector is used for the content-based addressing of one of the multiple memory blocks, and it is independent of other memory blocks.
Since it is produced based on the current input and previous hidden states of the controller,
it can independently store its own representation of the same input contents.
This writing process enables DAM to store the diverse representations of the same input data to multiple memory blocks with much flexibility.
Furthermore, for the reading process, all memory blocks are read at the same time and read values are interpolated with soft attention to produce single read-out information.
Through this attention-based reading process, DAM retrieves the most suitable information for the current task from representations distributed in the multiple memory blocks.
Based on these read/write operations, DAM learns how to store and retrieve the diverse representations of input data for different purposed tasks.
The following sections detail the main operations.
\subsubsection{Controller for Multiple Associative Memory Blocks}
\label{controller}
At each time step $t$, the controller receives an external input,~$\bm{i}_{t}$, read-out of the previous time step,~$\bm{r}_{t-1}$, and previous hidden state of controller,~$\bm{h}_{t-1}$,
to update its current hidden state,~$\bm{h}_t$. After layer normalization, it produces an interface vector,~$\bm\xi_t \in \mathbb{R}^{K*(L*R+3L+3R+3)}$, which includes read and write parameters for multiple memory access.
\subsubsection{Write into Multiple Sub-Memory Blocks}
\label{write}
The multiple memory writing processes in our architecture are based on the content-based memory accessing mechanism of DNC. A single memory block is addressed and updated with the same procedure of DNC, and such single memory block updating is applied to all blocks independently at the same time.
As shown in Eq.~(\ref{eq.our.controller}), each memory block has its own interface-vector weight,~$W_{\xi,k}$, where $k \in \{1,\cdots,K\}$. These weights are multiplied with the controller hidden state vector, $\bm{h}_{t}$, and used for the memory operations of each independent memory block as follows.
\begin{equation}
\openup 1ex
\label{eq.our.controller}
\bm\xi_t=[\bm\xi_{t,1},\cdots,\bm\xi_{t,K},\hat{g}_{t}^{at}] =[W_{\xi,1},\cdots,W_{\xi,K},W_{\xi,at}]\bm{h}_{t}
\end{equation}
where $\bm\xi_{t,k}$ is an interface vector for each memory block and $\hat{g}_{t}^{at}$ is an attentive gate at time~$t$.
Based on a writing operator obtained from $\bm\xi_{t,k}$, DAM updates input information into each memory block,~$\bm{M}_{t-1,k}$, independently and simultaneously, following Eq.~(\ref{eq.original.write}).
These independent and simultaneous writing procedures across sub-memory blocks allow our DAM to learn to construct diverse representations of the same common input data.
The following attention-based reading process is designed to integrate the representations distributed across sub-memory blocks, and it contributes to enriching the representation for relational reasoning tasks.
\subsubsection{Read from Multiple Sub-Memory Blocks}
\label{read}
As in the writing process, DAM obtains a reading operator from $\bm\xi_{t,k}$, and computes reading address, ~$\bm{w}_{t,k}^{r} \in [0,1]^A$, for each memory block.
Based on those addresses, DAM reads values from each memory block and derives read-out value,~$\bm{r}_t \in \mathbb{R}^L$, from them, using a processed attentive gate,~$g_{t}^{at} \in [0,1]^K$, as follows:
\begin{equation}
\label{eq.read.out}
\bm{r}_{t} = \sum_{k=1}^{K} g_{t,k}^{at} \bm{M}_{t,k}^\top{\bm{w}_{t,k}^{r}}
\end{equation}
where $g_{t,k}^{at} = Softmax(\hat{g}_{t,k}^{at})$ for $k=1,\cdots,K$.
Compared to Eq.~(\ref{eq.original.read}) of DNC, this reading process integrates representations stored in multiple memory blocks with the attentive gate and enables DAM to learn to provide the most appropriate distributed representation for a target task.
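A minimal sketch of this gated read in Eq.~(\ref{eq.read.out}), under the same caveats as before (illustrative names of our own; per-block read weights assumed already computed by content-based addressing):
\begin{verbatim}
import torch

def dam_read(memories, read_weights, gate_logits):
    # memories: (K, A, L) sub-memory blocks; read_weights: (K, A);
    # gate_logits: (K,) raw attentive gate values.
    g = torch.softmax(gate_logits, dim=0)                  # g_t^{at}
    reads = torch.einsum('kal,ka->kl', memories, read_weights)
    return (g.unsqueeze(1) * reads).sum(dim=0)             # (L,) read-out
\end{verbatim}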
\subsection{Memory Refreshing Loss}
\label{memory.refreshing.loss}
To enhance the relation modeling performance of a memory network, we design a novel auxiliary task, Memory Refreshing Loss (MRL), which can further improve the memorization performance of any given MANN.
Our MRL function is inspired by the psychological case study on the rehearsal process of the human brain.
In the study, if a person repeatedly rehearses given words or numbers while knowing its use for the following task, the overall memory performance is enhanced~\cite{rundus1980maintenance, greene1987effects, souza2015refreshing, camos2017maintenance}.
Similarly, the main role of the MRL task is to force a memory network to reproduce sampled input data based on its memory contents during training. When the MRL task is trained with the model's main target task in a multi-task learning setting, main-task-related representations and their encoded associations can be further emphasized during training~\cite{caruana1997promoting, ben2003exploiting, alonso2016multitask, rei2017semi}.
First, we define a task-specific target objective function,~$\mathcal{L}^{task}$, of conventional MANN and MRL~$\mathcal{L}^{mr}_t$ as follows:
\begin{equation}
\label{eq.original.loss}
\mathcal{L}^{task} = \sum_{t=1}^{T} A(t) \ell_{task}(\bm{o}_t,\bm{y}_t)
\end{equation}
where $T$ is the whole sequence length and $A(t)$ is an indicator function whose value is 1 if time step $t$ is in the answer phase and 0 otherwise. $\bm{o}_t$ is a target answer and $\ell_{task}(\cdot,\cdot)$ is a task-dependent loss function.
\begin{equation}
\label{eq.association.reinforcing.loss}
\mathcal{L}^{mr}_t = \ell_{mr}(\bm{i}_t,\bm{y}_t)
\end{equation}
where $\ell_{mr}(\cdot,\cdot)$ is an input-sequence-dependent loss function, and $\bm{i}_t$ and $\bm{y}_t$ are the input and output at time step~$t$, respectively.
Our MRL function is defined to use a sampled input sequence as its target data as shown in Eq.~(\ref{eq.association.reinforcing.loss}), and this procedure leads the model to refresh given input information while it is learning the given task.
The error measure for MRL is adopted based on the input item type or main task characteristic. In this research, we use cross-entropy loss or $L2$ loss depending on a given task.
As shown in Fig.~\ref{fig.proposed.method}(b), MRL forces a model to learn to reproduce sampled input sequences from stored representations in memory.
When sampling input data, each item of the input sequence is sampled via a Bernoulli trial with probability~$p$, which we call the reproducing probability, defined as follows:
\begin{equation}
\label{eq.sample}
P(\alpha(t)=1) = 1 - P(\alpha(t)=0) = p
\end{equation}
where $\alpha(t)$ is an indicator function that represents sampling status at time $t$.
For an input sequence of length $n$, the series of Bernoulli trial-based samplings is equivalent to Binomial sampling of the input sequence. Therefore, for any input sequence, on average $np$ samples are reconstructed by MRL, because the expected value of a Binomial distribution is the product of the trial probability, $p$, and the number of trials, $n$.
This random sampling policy prevents the model from learning to simply redirect given input to the output of the model at every time step.
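The sampling in Eq.~(\ref{eq.sample}) amounts to one Bernoulli trial per input step; a minimal sketch (our own illustrative code, not the released implementation):
\begin{verbatim}
import torch

def sample_mrl_mask(seq_len, p):
    # One Bernoulli(p) trial per input step: alpha(t) in {0, 1}.
    # Over a length-n sequence this is Binomial, so on average
    # n*p items are selected as MRL reproduction targets.
    return torch.bernoulli(torch.full((seq_len,), p))
\end{verbatim}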
When adding MRL to the task-specific target objective for multi-task learning, we also need a new strategy that can control the balance between MRL and the original target task loss,
since, as the number of story inputs increases, the MRL can overwhelm the total loss of the model.
To prevent this loss imbalance problem, we apply a re-weighting method~\cite{cui2019class, liu2006influence}, which dynamically keeps the balance between the target task objective~$\mathcal{L}^{task}$ and MRL~$\mathcal{L}^{mr}$.
Moreover, we also introduce a scaling factor, $\gamma$, to ensure that the main portion of the training loss remains the original target objective function.
\begin{equation}
\label{eq.weight}
\gamma =
\left\{
\begin{array}{ll}
\hat{\gamma} & \mbox{if }~\hat{\gamma} \geq 1, \\
1 & \mbox{otherwise}.
\end{array}
\right.
\end{equation}
where $\hat{\gamma} = \frac{\sum_{t=1}^{T}S(t)\alpha(t)}{\sum_{t=1}^{T}A(t)}$ and $S(t)$ is an indicator function that represents whether the current time step $t$ is in the story phase or not.
Finally, the total loss for the training of proposed model follows:
\begin{equation}
\begin{gathered}
\label{eq.our.loss}
\mathcal{L} = \gamma\mathcal{L}^{task} + \sum_{t=1}^{T}\alpha(t)\mathcal{L}^{mr}_t
\end{gathered}
\end{equation}
Through the above two memory-related tasks, $\mathcal{L}^{task}$ and $\mathcal{L}^{mr}$, while a model learns to reproduce input representations, target-task-related representations and their associations are further emphasized at the same time. As a result, MRL works as an auxiliary task that reinforces data association for a target objective.
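Putting Eqs.~(\ref{eq.original.loss})--(\ref{eq.our.loss}) together, one possible implementation of the balanced total loss is sketched below; the guard against an empty answer phase is our own safeguard, not part of the formulation above.
\begin{verbatim}
import torch

def total_loss(task_losses, mrl_losses, answer_mask, story_mask, alpha):
    # task_losses, mrl_losses: per-step losses, shape (T,);
    # answer_mask = A(t), story_mask = S(t), alpha = alpha(t): (T,) in {0,1}.
    gamma_hat = (story_mask * alpha).sum() / answer_mask.sum().clamp(min=1)
    gamma = torch.clamp(gamma_hat, min=1.0)     # Eq. (7)
    L_task = (answer_mask * task_losses).sum()  # Eq. (5)
    L_mr = (alpha * mrl_losses).sum()
    return gamma * L_task + L_mr                # Eq. (8)
\end{verbatim}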
\section{Experiments and Results} \label{sec:experiments}
We evaluate each of our main contributions, the Distributed Associative Memory architecture~(DAM) and MRL, separately as an ablation study, and show the performance of DAM-MR on complex relational reasoning tasks, such as the bAbI, $N^{th}$ Farthest, and Convex hull tasks.
In all experiments, we adopt well-known neural network generalization techniques that are used in \cite{franke2018robust} for our baseline DNC model. The detailed parameter settings and adopted generalization techniques are shown in the Appendix.
\subsection{Distributed Associative Memory Architecture Evaluation}
The distributed memory architecture is evaluated in three aspects. First, we verify the basic memory network capability of DAM with algorithmic tasks. Second, to evaluate memory efficiency in data association performance, DAM is configured to have a total memory size similar to that of a single memory model and evaluated with the Representation Recall task. Third, scalability experiments of DAM show the effect of the number of sub-memory blocks on relational reasoning performance. In these experiments, we denote the DAM architecture with $K$ sub-memory blocks as DAM-$K$. The scalability experiments are performed in two settings: one iteratively divides a fixed total memory size to obtain multiple sub-memory blocks, and the other uses a fixed sub-memory block size and adds sub-memory blocks while increasing the total memory size.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{algorithmic_copy.pdf}
\caption{}
\label{fig.algorithimic.copy}
\end{subfigure}
\hspace{12pt}
\begin{subfigure}[t]{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{algorithmic_ar.pdf}
\caption{}
\label{fig.algorithimic.ar}
\end{subfigure}
\caption{Mean training curves on the algorithmic tasks which are (a) the copy task and (b) the associative recall task. The shadowed area shows a standard deviation of 10 trials.}
\label{fig.algorithmic}
\end{figure*}
\subsubsection{Algorithmic Tasks}
We show the effect of DAM on the basic memory network performance with the copy and the associative recall tasks from \cite{graves2014neural}.
The copy task is designed to show whether a model can store and recall arbitrarily long sequential data correctly, and the associative recall task is intended to show whether a model can recall the information associated with a given cue by remembering the temporal relations between input data. As shown in Fig.~\ref{fig.algorithmic}, simply adopting the DAM architecture enhances the relation recall performance of the memory model. We can obtain more benefit by adding sub-memory blocks to the DAM architecture (by increasing $K$ from 2 to 3); however, for the copy task, as shown in Fig.~\ref{fig.algorithmic}(a), the effect of the number of memory blocks is small because the task is designed to evaluate simple memorization performance rather than relational reasoning.
\begin{figure*}
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{8segment.pdf}
\caption{}
\label{fig.rr.8segment}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{16segment.pdf}
\caption{}
\label{fig.rr.segment}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.33\textwidth}
\centering
\includegraphics[width=\linewidth]{rr_scalability.pdf}
\caption{}
\label{fig.rr.scalability}
\end{subfigure}
\caption{Mean training curves for (a) 8 segment and (b) 16 segment on the Representation Recall task. (c) Mean accuracy of DAM models on the Representation Recall task. The shadowed area shows a standard deviation of 10 trials.}
\label{fig.rr}
\end{figure*}
\subsubsection{Representation Recall Task}
We design a new algorithmic task, called the Representation Recall (RR) task, which evaluates how much representational detail a memory model can remember and recall from memory.
This task uses randomly generated binary vectors as input sequences.
From the sequence, a binary vector is randomly selected and divided into $2N$ sub-parts. Among them, $N$ sub-parts are provided as a cue for the model to predict the remaining sub-parts. To solve this task, the model is required to remember $\binom{2N}{N}=\frac{(2N)!}{N!\,N!}$ combinations of relations existing in each input, and therefore the task complexity increases with $N$.
To show the efficiency of a model with a fair comparison, we configure DAM by dividing the original single external memory into groups of $1/2$-, $1/4$-, and $1/8$-sized sub-memory blocks.
The mean training curves of DAM-$K$ ($K$=2, 4, and 8) are compared with the original DNC while increasing the task complexity $N$, as shown in Figs.~\ref{fig.rr}(a) and (b).
The result demonstrates that our proposed architecture learns the task much faster than other DNC based models, and also shows better accuracy and learning stability (smaller standard deviation in learning curve).
Furthermore, we compared the final accuracy of DAM-$K$ ($K$=2, 4, and 8) on the RR task while increasing the task complexity from 2 to 16 segments.
As shown in Fig.~\ref{fig.rr}(c), as the task complexity increases, the final accuracy inevitably degrades. However, DAM with more sub-memory blocks suffers less performance degradation. Although all of the models have the same total memory size, a DAM model with more sub-memory blocks provides richer representations that include more details of the input.
\begin{figure*}[t]
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{ar_scalability.pdf}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.31\textwidth}
\centering
\includegraphics[width=\linewidth]{ar_scalability_same_convergence.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering
\includegraphics[width=\linewidth]{ar_scalability_same.pdf}
\caption{}
\end{subfigure}
\hfill
\caption{Scalability Experiments with Associative Recall task. (a) Scalability test while increasing single memory size. (b) Training curve while iteratively dividing a fixed memory size. (c) Scalability test while iteratively dividing a fixed memory size.}
\label{fig.ar.scalability}
\end{figure*}
\subsubsection{Scalability Experiments}
\paragraph*{Associative Recall task}
We perform a scalability experiment on the associative recall task with a fixed memory address size $A=16$. We set the memory length $L$ of a sub-memory block to $108$, $54$, and $36$, and compare the performance of DAM according to the number of sub-memory blocks, $K$, in Figure~\ref{fig.ar.scalability}.
The result shown in Figure~\ref{fig.ar.scalability}(a) corresponds to the case when a single memory size of DAM-1 is linearly increased and the same total memory size is divided to obtain multiple smaller memory blocks for DAM-2 and 3.
As shown in the results, the model with a smaller memory block size achieves higher accuracy. Figure~\ref{fig.ar.scalability}(c) shows the case when we adopt a single fixed total memory size for DAM-1, 2, and 3, where DAM-2 and 3 differ in the number of sub-memory blocks under the same condition.
Similar to Fig.~\ref{fig.ar.scalability}(a), as we divide the memory to obtain more sub-memory blocks, better accuracy is obtained as long as there is no information loss at a single sub-memory block. Figure~\ref{fig.ar.scalability}(b) shows the training curves for the case of Fig.~\ref{fig.ar.scalability}(c). It shows that the DAM architecture can expedite the training of the model even with a smaller number of memory slots.
\begin{figure}[!t]
\centering\includegraphics[width=.4\linewidth]{scalability.pdf}
\caption{Mean error rate of DAM models on the bAbI task.}~\label{scalability}
\end{figure}
\paragraph*{bAbI task}
To evaluate the scalability of the distributed associative memory architecture without the effect of information loss at a sub-memory block,
we adopt a fixed-size sub-memory block whose length is larger than half of the input size and then increase the number of
sub-memory blocks to produce several models, DAM-2, 3, and 4.
We evaluate all models' performance on a complex reasoning task, the bAbI task, to show the effect of $K$ (representation diversity) on relational reasoning performance.
The bAbI task~\cite{weston2015towards} is a set of 20 different tasks for evaluating text understanding and reasoning, such as basic induction and deduction.
In Fig.~\ref{scalability}, the DAM-1 represents a baseline model that has a single external memory and includes modifications from~\cite{franke2018robust} for the generalization performance enhancement. For the comparison with DAM-$K$ ($K$=2, 3, and 4), we linearly increase its single external memory size.
The overall graph shows that, as the degree of distribution increases, the performance on bAbI tasks is enhanced accordingly, in terms of both the mean error rate and the standard deviation of the results.
If we use more sub-memory blocks to further increase $K$, we can obtain gradual performance enhancement, which clearly shows the benefits of distributed associative memory architecture.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{copy_mr_dnc.pdf}
\caption{}
\label{fig.mr.copy.dnc}
\end{subfigure}
\hspace{12pt}
\begin{subfigure}[t]{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{copy_mr_dam3.pdf}
\caption{}
\label{fig.mr.copy.dam3}
\end{subfigure}
\medskip
\begin{subfigure}[t]{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{ar_mr_dnc.pdf}
\caption{}
\label{fig.mr.ar.dnc}
\end{subfigure}
\hspace{12pt}
\begin{subfigure}[t]{.4\textwidth}
\centering
\includegraphics[width=\linewidth]{ar_mr_dam3.pdf}
\caption{}
\label{fig.mr.ar.dam3}
\end{subfigure}
\caption{Mean training curves for different reproducing probability values at (a) DNC and (b) DAM-3 on the copy task. Mean training curves for different reproducing probability values at (c) DNC and (d) DAM-3 on the associative recall task. The shadowed area shows a standard deviation of 10 trials.}
\label{fig.mr.algorithmic}
\end{figure*}
\subsection{Memory Refreshing Loss Evaluation}
To show the effect of MRL on MANN, we apply it to Algorithmic Tasks (Copy and Associative Recall task).
In Fig.~\ref{fig.mr.algorithmic}, we show the mean training curves according to the reproducing probability,~$p$, on the copy task, and the associative recall task, respectively. For DAM-MR, although we show only DAM3-MR, other configurations (DAM2-MR, DAM4-MR) have similar results.
As shown in Fig.~\ref{fig.mr.algorithmic}, the MRL function expedites the learning speed of models in most cases.
For the original DNC,
MRL makes training much faster, and the speed-up further increases with a higher reproducing probability on both tasks.
For DAM-MR, the MRL enhances the training speed of the models, but DAM is not sensitive to changes in the reproducing probability. From those results, we can see that the effect of MRL depends on the properties of a given task.
\subsection{DAM-MR Evaluation on Relational Reasoning Tasks}
As shown in the ablation study, each component of the proposed architecture has a significant impact on the original DNC performance. To show the performance of the whole combined model, DAM-MR, we compare our architecture with other DNC variants and attention-based MANNs on the following relational reasoning tasks. To support our argument about retrieving relational information, we adopt as counterparts recent memory network models that apply extensive self-attention~\cite{le2020self} or multi-head attention for encoding relational information~\cite{santoro2018relational}.
Specifically, Self-attentive Associative Memory (STM)~\cite{le2020self}, Relational Memory Core (RMC)~\cite{santoro2018relational} and Universal Transformer (UT)~\cite{dehghani2018universal} use self-attention and multi-head attention in their memory architecture.
\begin{table*}[t]
\caption{
Test accuracy [\%] on $N^{th}$ Farthest task.}
\begin{center}
\begin{tabular}{lccr}
\toprule
\large{\bf{Model}} & \large{\bf{Accuracy}} \\
\midrule
DNC~\cite{santoro2018relational} & 25 \\
RMC~\cite{santoro2018relational} & 91 \\
TPR~\cite{le2020self} & 13 \\
STM~\cite{le2020self} & 98 \\
\midrule
RMC-MR ($p=0.3$) & 94 \\
DARMC4-MR ($p=0.3$) & \textbf{98.2} \\
\midrule
DAM6-MR ($p=0.3$) & 97.8 \\
\bottomrule
\label{table.nfar}
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[t]
\caption{Test accuracy [\%] on Convex hull task.}
\label{table.convexhull}
\begin{center}
\begin{small}
\begin{tabular}{lcc}
\toprule
\bf{Model} & \bf{$N=5$} & \bf{$N=10$} \\
\midrule
LSTM~\cite{le2020self} & 89.15 & 82.24 \\
ALSTM~\cite{le2020self} & 89.92 & 85.22 \\
DNC~\cite{le2020self} & 89.42 & 79.47 \\
RMC~\cite{le2020self} & 93.72 & 81.23 \\
STM~\cite{le2020self} & 96.85 & 91.88 \\
\midrule
\textbf{DAM6-MR ($p=0.3$)} & 95.6 & 89.8 \\
\textbf{DAM8-MR ($p=0.3$)} & 95.4 & 90.5 \\
\midrule
\textbf{DARMC8-MR ($p=0.3$)} & \textbf{97.2} & \textbf{92.3} \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\end{table*}
\subsubsection{$N^{th}$ Farthest}
This task evaluates a model's capacity for relational reasoning across time. It asks a model to find the
$N^{th}$ farthest vector from a given query vector, which requires memorizing relational information between vectors, such as distances, and a sorting mechanism.
With this task, the long temporal relation modeling performance of a model can be demonstrated.
Table~\ref{table.nfar} shows a comparison of the $N^{th}$ Farthest task results between our model and other MANN models designed for relational reasoning tasks. In the results, even though the original DNC cannot solve the task at all, our DAM-MR shows surprisingly high performance on the task. Even compared to the Relational Memory Core~(RMC)~\cite{santoro2018relational}, which explicitly finds relational information from memory based on a multi-head attention mechanism, DAM6-MR shows superior performance. Compared to STM~\cite{le2020self}, our model shows slightly lower accuracy. However, considering STM's self-attention computations for finding every possible relation with outer products, our DAM-MR is a simple and efficient architecture that does not introduce any explicit relation-seeking operations or higher-order storage. Therefore, in terms of model efficiency, DAM-MR demonstrates a novel way of modeling relational information that is a promising alternative to self-attention based approaches. Moreover, if we apply our DAM architecture and MRL to RMC, it shows even
better performance than STM. Although RMC already has its own way of searching for relational information in its memory, which overlaps with DAM's purpose, our DAM-MR provides a further performance improvement on the task.
This result demonstrates the additional benefit of DAM architecture as a generally applicable design choice for MANN.
\begin{table*}[t]
\caption{The mean word error rate~[\%] for 10 runs of different DNC based models trained jointly on all 20 bAbI tasks.}
\begin{center}
\begin{small}
\begin{tabular}{lcc}
\toprule
\bf{Model} & \bf{Mean} & \bf{Best} \\
\midrule
DNC~\cite{graves2016hybrid} & 16.7 \scriptsize{$\pm$ 7.6} & 3.8 \\
SDNC~\cite{rae2016scaling} & 6.4 \scriptsize{$\pm$ 2.5} & 2.9 \\
rsDNC~\cite{franke2018robust} & 6.3 \scriptsize{$\pm$ 2.7} & 3.6 \\
DNC-MD~\cite{csordas2019improving} & 9.5 \scriptsize{$\pm$ 1.6} & n/a \\
NUTM~\cite{Le2020Neural} & 5.6 \scriptsize{$\pm$ 1.9} & 3.3 \\
\midrule
\textbf{DAM2-MR ($p=0.1$)} & \textbf{1.5} \scriptsize{$\pm$ 1.3} & 0.16 \\
\textbf{DAM2-MR ($p=0.3$)} & 2.5 \scriptsize{$\pm$ 1.0} & \textbf{0.14} \\
\bottomrule
\label{table.babi}
\end{tabular}
\end{small}
\end{center}
\end{table*}
\subsubsection{Convex hull task}
The convex hull task~\cite{vinyals2015pointer} is to predict the list of points that forms the convex hull, sorted by coordinates.
The input list consists of $N$ points with 2D coordinates.
In this experiment,
we train the model with $N \in [5,20]$ and test with $N =5,10$ cases.
The output is a sequence of 20-dimensional one-hot vectors representing
the features of the solution points in the convex hull. As shown in Table~\ref{table.convexhull}, DAM-MR shows better accuracy than RMC~\cite{santoro2018relational} and performance similar to STM~\cite{le2020self}. Moreover, DARMC-MR, which is RMC with DAM applied, shows even better performance than STM, which also demonstrates the effectiveness and generality of our DAM architecture.
\begin{table*}[t]
\caption{The mean word error rate [\%] for the best run of MANN models trained jointly on all 20 bAbI tasks.}
\begin{center}
\begin{small}
\begin{tabular}{lc}
\toprule
\bf{Model} & \bf{Best} \\
\midrule
Transformer~\cite{dehghani2018universal} & 22.1 \\
UT~\cite{dehghani2018universal} & 0.29 \\
MNM-p~\cite{munkhdalai2019metalearned} & 0.175 \\
MEMO~\cite{Banino2020MEMO} & 0.21 \\
STM~\cite{le2020self} & 0.15 \\
\midrule
\textbf{DAM2-MR ($p=0.1$)} & 0.16 \\
\textbf{DAM2-MR ($p=0.3$)} & \textbf{0.14} \\
\bottomrule
\label{table.babi.best}
\end{tabular}
\end{small}
\end{center}
\end{table*}
\subsubsection{bAbI QA task}
The bAbI task~\cite{weston2015towards} is a set of 20 different tasks for evaluating text understanding and reasoning, such as basic induction and deduction.
Each task consists of stories for questions and correct answers for the questions, e.g.
$\mathit{Daniel~travelled~to~the~bathroom.}$
$\mathit{Mary}$
$\mathit{moved~to~the~office.}$
$\mathit{Where~is~Daniel?}$
$\mathit{bathroom}$.
In evaluation, a model is supposed to remember the story and recall related information to provide the correct answers to the given questions.
Table~\ref{table.babi} shows experimental results on the bAbI task.
In this experimental result, our proposed model, DAM2-MR with $p=0.1$, shows the best mean performance on the bAbI task among all DNC based approaches.
These results demonstrate that our proposed architecture efficiently learns the bAbI task by using distributed associative memory architecture and memory refreshing loss.
Particularly, in Table~\ref{table.babi.best}, the best result of DAM2-MR records the state-of-the-art performance on the bAbI task, even compared to other types of recent MANN models.
\section{Conclusion} \label{sec:conclusion}
In this paper, we present a novel DAM architecture and an MRL function to enhance the data association performance of memory augmented neural networks.
The proposed distributed associative memory architecture stores input contents to the multiple sub-memory blocks with diverse representations and retrieves required information with soft-attention based interpolation over multiple distributed memories.
We introduce a novel MRL to explicitly improve the long-term data association performance of MANN.
Our MRL is designed to reproduce the contents of associative memory with sampled input data and also provides a dynamic task balancing with respect to the target objective loss.
We implement our novel architecture with DNC and test its performance with challenging relational reasoning tasks.
The evaluation results demonstrate that our DAM-MR correctly stores input information and robustly recalls the stored information based on the purpose of the given tasks.
Also, it shows that our model not only improves the learning speed of DNC, but also reinforces the relational reasoning performance of the model.
Eventually, our DAM-MR significantly outperforms all other variations of DNC and shows the state-of-the-art performance on complex relation reasoning tasks, bAbI, even compared to other types of memory augmented network models.
As future work, we will optimize the task balancing strategy between the MRL and the target loss, and further study how MRL affects multiple sub-memory blocks while optimizing the memory model.
\section*{Acknowledgment}
This work was partly supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C3011169) (50\%). It was also supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government [21ZS1100, Core Technology Research for Self-Improving Integrated Artificial Intelligence System] (50\%).
\section{Introduction}
The phenomenal spread of the COVID-19 pandemic will have unprecedented consequences for human life and livelihood. In the absence of a treatment or vaccine to develop immunity against the disease, governments around the world have used non-pharmaceutical, risk mitigation strategies such as lockdowns, shelter-in-place, school and business closures, travel bans or restrictions to limit movement and prevent contagion. The magnitude and effectiveness of such mitigation strategies in preventing contagion and reducing the number of deaths is shown in Europe where such mitigation strategies have reduced the reproduction number over time $(R_t)$ below 1, which means that the virus will gradually stop spreading. Since the beginning of the epidemic, an estimated 3.1 million deaths were averted across 11 European countries attributable to these risk mitigation strategies \cite{flaxman2020estimating}.
In the United States, the adoption and enforcement of non-pharmaceutical risk mitigation strategies have varied by state and across time. The first confirmed COVID-19 case was reported on January 21, 2020, in Washington State \cite{ghinai2020first}. While transmissions were documented in the interim, a national emergency was not declared until March 13 \cite{house2020proclamation}. At that time, international travel restrictions were enforced. By March 16, six bay area counties had declared shelter-in-place orders, and on March 19, California became the first state to issue a state-wide order. Since then, several communities and states implemented stay-at-home orders and social distancing measures. As of March 30, there were 162,600 confirmed COVID-19 cases in the U.S. \cite{house2020proclamation} and 30 states had announced shelter-in-place orders. On April 1, two additional states and the District of Columbia issued statewide shelter-in-place orders, followed by 7 more states by April 6.
Historically, among the U.S. cities that were hit by the 1918 Spanish flu, social distancing played a pivotal role in flattening the pandemic curve. In fact, the cities which delayed enforcing social distancing saw the highest peaks in new cases of the disease. Policies aimed at reducing human transmission of COVID-19 included lockdown, travel restrictions, quarantine, curfew, cancellation and postponing events, and facility closures. Measuring the dynamic impact of these interventions is challenging \cite{adiga2020interplay,dascritical} and confounded by several factors such as differences in the specific modes and dates of the policy-driven measures adopted by or enforced across states, regions, and countries, and, of course, the actual diversity of human behaviors at these locations.
Given the current ubiquitous usage of mobile devices among the U.S. populations, social mobility as measured by aggregating the geospatial statistics of their daily movements could serve as a proxy measure to assess the impact of such policies as social distancing on human transmission. In the particular context of the current pandemic, human mobility data could be estimated using geolocation reports from user smartphones and other mobile devices that were made available by multiple providers including Google and Apple, among others. In this study, we obtained such data from Descartes Labs, which made anonymized location-specific time series data on mobility index freely available to researchers through their GitHub site: \url{https://github.com/descarteslabs/DL-COVID-19.} Thus, we obtained location-specific bivariate time series on daily mobility index and disease incidence, i.e., new cases of COVID-19 in the U.S.
In this study, we aim to (a) measure and compare the temporal dependencies between mobility ($M$) and new cases ($N$) across 151 cities in the U.S. with a relatively high incidence of COVID-19 by May 31, 2020. We believe that these dependency patterns vary not only over time but also across locations and populations. For this purpose, we propose a novel application of Optimal Transport to compute the distance between patterns of ($N$, mobility, time) and its variants for each pair of cities. This allows us to (b) group the cities into different hierarchical clusterings, and (c) compute the barycenter to describe the overall dynamic pattern of each identified cluster. Finally, we also use city-specific socioeconomic covariates to analyze the composition of each cluster. A pipeline for our analytical framework is described in the following section.
\begin{figure}
\centering
\includegraphics[scale=0.2]{heatmap_panel.pdf}
\caption{The dendrograms show 3 hierarchical clusterings of cities (a), (b) and (c) respectively based on ($N$, $M$, $t$), ($N$, $\Delta M$, $t$) and ($N$, $M'$, $t$) using Ward's linkage. Based on visual inspection of the seriated distance matrix, 10 clusters were identified in each case, as shown on the heatmaps.}
\label{fig:f1}
\end{figure}
\section{Data and Methods}
\subsection{Datasets}
\subsubsection{COVID-19 incidence and population data}
Based on cumulative COVID-19 cases data from the Johns Hopkins Coronavirus Resource Center (\url{https://coronavirus.jhu.edu/}), for this study, we compiled time series data on daily new cases of the disease for more than 300 U.S. counties from 32 states and the District of Columbia and matched them by five-digit FIPS code or county name to dynamic and static variables from additional data sources. Since a single county may consist of multiple individual cities, we include the list of all city labels within each aggregate group to represent a greater metropolitan area. A total of 151 such metropolitan areas that had at least 1,000 reported cases of COVID-19 by May 31, 2020, were selected for this study. Population covariates for these areas were collected from the online resources of the U.S. Census Bureau and the U.S. Centers for Disease Control and Prevention (CDC) (\url{https://www.census.gov/quickfacts/}, \url{https://svi.cdc.gov/}).
\subsubsection{Human mobility index data}
Anonymized geolocated mobile phone data from several providers including Google and Apple, timestamped with local time, were recently made available for analysis of human mobility patterns during the pandemic. Based on geolocation pings from a collection of mobile devices reporting consistently throughout the day, anonymous aggregated mobility indices were calculated for each county at Descartes Labs. The maximum distance moved by each node from the first reported location, after excluding outliers, was calculated. Using this value, the median across all devices in the sample is computed to generate a mobility metric for select locations at the county level. Descartes Labs further defines a normalized mobility index as the proportion of the median of the maximum-distance mobility to the ``normal'' median during an earlier time period, multiplied by a factor of 100. Thus, the mobility index provides a baseline comparison to evaluate relative changes in population behavior during the COVID-19 pandemic \cite{warren2020mobility}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{boxplot_final.pdf}
\caption{The boxplots show the differences across the identified 10 clusters of cities in terms of the values of the 8 most significant covariates: (a) Reaction Time (RT), (b) hispanic percent, (c) black percent, (d) population size, (e) senior percent, (f) population density 2010, (g) persons per household, and (h) SVI ses. We jittered the overlapping RT points for easy visualization.}
\label{fig:f6}
\end{figure}
\subsection{Methods}
Below we list the steps of the overall workflow of our framework, and briefly describe the same in the following paragraphs of this section.
\begin{algorithm}
\caption{The workflow of the analytical framework}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Steps of the Analysis:}}
\REQUIRE {For each of $k (=151)$ given cities, a bivariate time series: mobility ($M$) and new cases ($N$) for each date ($t$) over a fixed time-interval (March 1 -- May 31, 2020).}
\ENSURE .
\STATE As measures of mobility, along with $M$, also consider its variants $\Delta M$ and $M'$ computed with equations \ref{eq1} and \ref{eq2}.
\STATE Perform normalized ranking of the variables ($M$/$\Delta M$/$M'$, $N$, and $t$) to represent each city as a discrete set of ranked points in the unit cube ($[0, 1]^3$).
\STATE Compute optimal transport (OT) distance between the pointsets representing each pair of cities.
\STATE Cluster the cities based on the OT distance matrix. Three different hierarchical clusterings $HC1$, $HC2$ and $HC3$ were obtained based on Ward's linkage method and 3 variants of mobility: $M$, $\Delta M$, and $M'$ respectively.
\STATE Apply HCMapper to compare the dendrograms of different clusterings ($HC1$, $HC2$ and $HC3$). Select the clustering ($HC3$) that yields the most spatially homogeneous clusters.
\STATE Compute Wasserstein barycenter for each cluster of the selected clustering ($HC3$).
\STATE Analyze the composition of the clusters by applying a random forest classifier to 15 city-specific covariates as the feature set. Identify the contributions of the covariates to discriminating among the clusters.
\end{algorithmic}
\end{algorithm}
\subsubsection{Temporal patterns of mobility}
To better understand the temporal patterns of mobility, in addition to the given non-negative mobility index $M$, we also use two variants: delta mobility $\Delta M$ and $M'$ defined as follows:
\begin{equation}
\Delta M(t)= M(t)-M(t-1)\\
\label{eq1}
\end{equation}
and
\begin{equation}
M'(t)=\frac{\left(M(t)-M(t-1)\right)+\frac{1}{2}\left(M(t+1)-M(t-1)\right)}{2}.
\label{eq2}
\end{equation}
Here $\Delta M$ is the first difference, and $M'$ approximately the local derivative \cite{keogh2001derivative}, of the time series $M$, and yet, unlike $M$, these are not restricted to be non-negative.
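Both variants are straightforward to compute from the daily series; a minimal sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def mobility_variants(M):
    # M: 1-D array of daily mobility indices.
    dM = np.diff(M)                # Delta M(t) = M(t) - M(t-1), Eq. (1)
    # Eq. (2), defined on interior points t = 1, ..., T-2:
    Mp = ((M[1:-1] - M[:-2]) + 0.5 * (M[2:] - M[:-2])) / 2.0
    return dM, Mp
\end{verbatim}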
\subsubsection{Representing a city as a discrete set of points} With the above definitions, the temporal relationship between mobility (and its variants) and new cases for each city in our data can be depicted as tuples ($M/\Delta M/M'$, $N$, $t$). We perform a normalized ranking of the variables so as to represent each city by a discrete set of points in the unit cube $[0, 1]^3$. This normalized ranking is frequently used as an estimator for empirical copulas, with good convergence properties \cite{deheuvels1980non}. The cities can have different representations under the three definitions of the mobility metric, and in each case we can obtain different groupings of cities. A comparative analysis of all groupings can provide a correlation structure between groups of cities from different perspectives.
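The normalized-rank representation can be computed as follows (a sketch with our own function names):
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def to_unit_cube(mobility, new_cases):
    # Rank-normalize (mobility, new cases, time) so each city becomes
    # a discrete set of T points in the unit cube [0, 1]^3.
    T = len(mobility)
    r = lambda x: rankdata(x) / T
    return np.column_stack([r(mobility), r(new_cases),
                            r(np.arange(1, T + 1))])
\end{verbatim}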
\subsubsection{Comparing cities using optimal transport}
To compare the temporal dependence between mobility and new cases across two cities, we use the Wasserstein distance from optimal transport theory. We compute the Wasserstein distance between the two discrete sets of points in the unit cube, corresponding to the two cities, as the minimum cost of transforming the discrete distribution of one set of points into the other. It can be computed without steps such as fitting kernel densities or arbitrary binning that can introduce noise into the data. The Wasserstein distance between two distributions on a given metric space $\mathcal{M}$ is conceptualized by the minimum ``cost'' to transport or morph one pile of dirt into another -- the so-called `earth mover's distance'. This ``global'' minimization over all possible ways to morph takes into consideration the ``local'' cost of morphing each grain of dirt across the piles \cite{peyre2019computational}.
Given a metric space $\mathcal{M}$, the distance optimally transports the probability $\mu$ defined over $\mathcal{M}$ to turn it into $\nu$:
\begin{equation}
W_p(\mu,\nu)=\left(\inf_{\lambda \in \tau(\mu,\nu)} \int _{\mathcal{M} \times \mathcal{M}} d(x,y)^p d\lambda (x,y)\right)^{1/p},
\end{equation} where $p \ge 1$ and $\tau(\mu,\nu)$ denotes the collection of all measures on $\mathcal{M}\times \mathcal{M}$ with marginals $\mu$ and $\nu$. The intuition and motivation for this metric come from the optimal transport problem, a classical problem in mathematics first introduced by the French mathematician Gaspard Monge in 1781 and later formalized in a relaxed form by L.~Kantorovich in 1942.
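With each city encoded as a uniform discrete measure on its point set, the pairwise distance can be computed exactly with the POT library that we also use later for barycenters; the sketch below is illustrative, and the choice of the exact network-simplex solver is ours:
\begin{verbatim}
import numpy as np
import ot  # POT: Python Optimal Transport

def wasserstein_distance(X, Y, p=2):
    # X: (n, 3) and Y: (m, 3) point sets in the unit cube.
    a = np.full(len(X), 1.0 / len(X))     # uniform weights
    b = np.full(len(Y), 1.0 / len(Y))
    C = ot.dist(X, Y, metric='euclidean') ** p   # pairwise cost matrix
    return ot.emd2(a, b, C) ** (1.0 / p)         # exact OT cost
\end{verbatim}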
\subsubsection{Clustering the cities}
Upon computing optimal transport based distances for each pair of cities, hierarchical clustering of the cities is performed using Ward's minimum variance method \cite{inbook}. For the 3 variants of mobility ($M/\Delta M/M'$), we obtained 3 different hierarchical clusterings: $\mathrm{HC}{}1$, $\mathrm{HC}{}2$ and $\mathrm{HC}{}3$, respectively. Based on visual inspection of the distance matrix seriated by the hierarchical clustering, and looping over the number of clusters, we take a relevant flat cut of the dendrogram. In each case, we obtained 10 clusters, each consisting of cities that are similar with respect to the dependence between their mobility and new cases.
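Given the full pairwise OT distance matrix, the flat clusters can be extracted as sketched below; note that applying Ward's linkage to non-Euclidean distances is a pragmatic choice, and the function names are ours:
\begin{verbatim}
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_cities(D, n_clusters=10):
    # D: symmetric pairwise OT distance matrix (k x k).
    Z = linkage(squareform(D, checks=False), method='ward')
    return fcluster(Z, t=n_clusters, criterion='maxclust')
\end{verbatim}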
\subsubsection{Comparing the clusterings}
The resulting clusterings are compared using a visualization tool called HCMapper \cite{marti2015hcmapper}. HCMapper compares a pair of dendrograms from two different hierarchical clusterings computed on the same dataset. It aims to find clustering singularities between two models by displaying multiscale partition-based layered structures. The three different clustering results are compared with HCMapper to seek out structural instabilities of the clustering hierarchies. In particular, the display graph of HCMapper has $n$ columns, where $n$ is the number of hierarchies we want to compare (here $n=3$). Each column consists of the same number of flat clusters, depicted as rectangles within the column. Rectangle size is proportional to the number of cities within the cluster, while an edge between two clusters gives the number of shared cities between them. Thus, a one-to-one mapping between the clusters of two columns indicates two nearly identical clusterings, while many edges crossing between two columns indicate dissimilar structures.
We also checked the spatial homogeneity of each clustering in terms of the average number of distinct clusters to which the cities of each state were assigned, taken over all states represented in our data. Moran's $I$ was also computed to assess the spatial correlation among the cluster labels.
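For reference, Moran's $I$ for site values $x_1,\dots,x_n$ under a spatial weight matrix $W$ can be computed as below (a sketch; building $W$ from county locations, and handling categorical labels via per-cluster indicators, are our choices):
\begin{verbatim}
import numpy as np

def morans_I(x, W):
    # x: values at n sites; W: (n, n) spatial weights, zero diagonal.
    z = np.asarray(x, dtype=float) - np.mean(x)
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# For cluster labels, apply per one-hot indicator column, e.g.
# I_c = morans_I(labels == c, W) for each cluster c.
\end{verbatim}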
\subsubsection{Summarizing the distinctive cluster patterns}
We summarize the overall pattern of each identified cluster by computing its barycenter in Wasserstein space. It efficiently describes the underlying temporal dependence between the measures of mobility (here we use $M'$) and incidence within each cluster.
Wasserstein distances have several important theoretical and practical properties \cite{villani2008optimal, pele2009fast}. Among these, the barycenter in Wasserstein space is an appealing concept which has already shown high potential in applications in artificial intelligence, machine learning and statistics \cite{carlier2015numerical,le2017existence,benamou2015iterative,cuturi2014fast}.
A Wasserstein barycenter \cite{agueh2011barycenters, cuturi2014fast} of $N$ measures $\nu_1, \ldots, \nu_N$ in $P(\mathcal{M})$ is defined as a minimizer over $\mu\in P(\mathcal{M})$ of the function
\begin{equation}
f(\mu)=\frac{1}{N}\sum_{i=1}^N W_p^p(\nu_i,\mu).
\end{equation}
A fast algorithm \cite{cuturi2014fast} minimizes the sum of optimal transport distances from one variable measure to a set of fixed measures using gradient descent, with the gradients computed by matrix scaling algorithms at considerably lower computational cost. We used the method proposed in \cite{cuturi2014fast}, as implemented in the POT library (\url{https://pythonot.github.io/}), to compute the barycenter of each cluster.
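A sketch of this step; for illustration we use POT's free-support barycenter solver, with the support size and initialization as our own choices:
\begin{verbatim}
import numpy as np
import ot

def cluster_barycenter(point_sets, n_support=60, seed=0):
    # point_sets: list of (T_i, 3) arrays, one per city in a cluster.
    weights = [np.full(len(X), 1.0 / len(X)) for X in point_sets]
    X_init = np.random.default_rng(seed).uniform(size=(n_support, 3))
    return ot.lp.free_support_barycenter(point_sets, weights, X_init)
\end{verbatim}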
\subsubsection{Analysis of the clusters using static covariates}
To understand the composition of the identified clusters, i.e., what could explain the similarity in the temporal dependence between mobility and new cases among the cities that belong to a cluster, we used different city-specific population covariates, checking their relative contributions to discriminating the clusters. These covariates include (a) date of stay-at-home order, (b) population size, (c) persons per household, (d) senior percentage, (e) black percentage, (f) hispanic percentage, (g) poor percentage, (h) population density 2010, (i) SVI ses, (j) SVI minority, (k) SVI overall, and (l) Gini index. Here SVI stands for the Social Vulnerability Index of the CDC, and ``ses'' for socioeconomic status. In addition, we also computed the `reaction time' (RT) of each city as the number of days between the stay-at-home order of the city and a common reference starting date (taken as 15 March, 2020).
This step also provides a form of external validation of the clustering results as none of the above covariates were used for clustering. To demonstrate, we conducted this step with the clustering $\mathrm{HC}{}3$ obtained from the time series $M'$.
Using the covariates as features of the cities, a random forest classifier is trained to learn the cluster labels. The aim is to see how well the clustering can be explained by the covariates. To find which features contribute most to discriminating the clusters of cities, we computed mean Shapley values \cite{NIPS2017_7062}. A Shapley value quantifies the magnitude of the impact of a feature on the classification task. The ranking of the covariates/features based on their mean Shapley values determines the most relevant features in this regard.
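A sketch of this step, assuming a covariate matrix with one row per city and the cluster labels as targets (the \texttt{shap} package implements \cite{NIPS2017_7062}; hyperparameters and names are our choices):
\begin{verbatim}
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def rank_covariates(X, labels, names, seed=0):
    # X: (n_cities, n_features) covariates; labels: cluster per city.
    rf = RandomForestClassifier(n_estimators=500, random_state=seed)
    rf.fit(X, labels)
    # Classic API: a list with one (n_cities, n_features) array per class.
    sv = shap.TreeExplainer(rf).shap_values(X)
    importance = np.abs(np.array(sv)).mean(axis=(0, 1))
    order = np.argsort(importance)[::-1]
    return [(names[i], importance[i]) for i in order]
\end{verbatim}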
\section{Results}
In this study, we used bivariate time series of daily values of the mobility index and COVID-19 incidence over a 3-month period (March 1 -- May 31, 2020) for 151 U.S. cities that had reported at least 1,000 cases by the end of this period. By transforming the data for each city to a corresponding discrete set of ranked points in the unit cube, we computed the optimal transport distance, as a measure of the temporal dependency between mobility and new cases, for each pair of cities. The three definitions of mobility ($M$/$\Delta M$/$M'$) allowed us to generate 3 hierarchical clusterings: $\mathrm{HC}{}1$, $\mathrm{HC}{}2$ and $\mathrm{HC}{}3$, as shown in Figure \ref{fig:f1} and Table \ref{longtab}. Each of the clusterings yielded 10 clusters of cities, which were compared for their sizes, singularities and divergences using the tool HCMapper, as shown in Figure \ref{fig:f2}.
\begin{figure}
\centering
\includegraphics[scale=0.6]{hcmapper.pdf}
\caption{HCMapper is used for comparison of 3 hierarchical clusterings of cities based on $\mathrm{HC}{}1$($N$, $M$, $t$), $\mathrm{HC}{}2$($N$, $\Delta M$, $t$) and $\mathrm{HC}{}3$($N$, $M'$, $t$). The cluster sizes and divergences across the clusterings are shown with blue rectangles and grey edges respectively.}
\label{fig:f2}
\end{figure}
Among the clusterings, $\mathrm{HC}{}3$ appeared to have clusters of the most consistent sizes, as well as the fewest singularities and divergences. Further, when we mapped the counties representing the cities with cluster-specific colors, as shown in Figure \ref{fig:f3}, we observed that the $\mathrm{HC}{}3$ clusters showed high spatial correlation (Moran's $I$ $p$-value of $0$). They also showed the fewest disagreements among the cluster assignments of cities within each state, although some states like California and Florida contained cities from more than one cluster (see Table \ref{longtab}). We looked into possible explanations of such cluster-specific differences using local covariates, as described below.
\begin{figure}
\centering
\includegraphics[scale=2.3]{map_mobdash_new1.png}
\caption{The geographic distribution of the 10 clusters identified by $\mathrm{HC}{}3$ is shown. The county corresponding to each city is mapped in its cluster-specific color.}
\label{fig:f3}
\end{figure}
Given that the assumption of this study is that the relationships between mobility and COVID-19 incidence are dynamic, changing not only over time but also across locations and populations, we computed Wasserstein barycenters of the 10 identified clusters, as shown in Figure \ref{fig:f4}, to describe the overall dependency structure specific to each cluster. The temporal changes in the dependencies are shown in 3-dimensional plots, with the shading changing from light (early points) to dark green (later points) along the $z$-axis (time).
\begin{figure}
\centering
\includegraphics[scale=0.25]{panel_cluster.pdf}
\caption{The overall temporal pattern of dependency between normalized measures of mobility and COVID-19 incidence for each identified cluster of cities is shown along 3-dimensions ($N$, $M'$, $t$). The Wasserstein barycenters of the 10 clusters are depicted within the unit cube with the darker dots representing later points in time (z-axis).}
\label{fig:f4}
\end{figure}
Finally, we sought to understand the factors that possibly underlie the dynamic pattern of each cluster described above. Towards this, a Random Forest classification identified socioeconomic characteristics (covariates) of the cities that could discriminate among the assigned cluster labels. The 8 most discriminating covariates are shown in Figure \ref{fig:f5}, along with their cluster-specific contributions measured by mean Shapley values. Notably, none of these covariates was used for clustering, and yet they are able to discriminate among the clusters. Figure \ref{fig:f6} shows the distinctive distributions of these covariates across the 10 identified clusters as boxplots. Reaction time is robustly the first and major contributor, which is indicative of the effect of stay-at-home orders on the different patterns of COVID-19 dynamics.
\begin{figure}
\centering
\includegraphics[scale=0.6]{shaple_val.pdf}
\caption{The relative contributions of the 8 most significant static city-specific covariates in discriminating the 10 clusters identified by $\mathrm{HC}{}3$, shown with different colors. The contribution towards each cluster is measured by the mean Shapley value of each covariate.}
\label{fig:f5}
\end{figure}
\vspace{0.1in}
{
\topcaption{Table of 151 cities with their respective date (mm.dd.2020) of stay-at-home order, Reaction Time (RT), and cluster labels assigned by HC1, HC2 and HC3. The absence of a stay-at-home order is denoted by NA.
\label{longtab}}
\centering
\begin{supertabular}{|p{1.3in}|p{0.5in}|p{0.2in}|p{0.2in}|p{0.2in}|p{0.2in}|}\hline
County & Date & RT & HC1 & HC2 & HC3 \\ \hline
Jefferson, AL & 4.4 & 20 & 1 & 1 & 1 \\ \hline
Mobile, AL & 4.4 & 20 & 1 & 1 & 1 \\ \hline
Montgomery, AL & 4.4 & 20 & 1 & 1 & 1 \\ \hline
Maricopa, AZ & 3.31 & 16 & 1 & 1 & 1 \\ \hline
Pima, AZ & 3.31 & 16 & 3 & 1 & 1 \\ \hline
Yuma, AZ & 3.31 & 16 & 3 & 1 & 1 \\ \hline
Alameda, CA & 3.19 & 4 & 3 & 1 & 1 \\ \hline
Contra Costa, CA & 3.19 & 4 & 3 & 2 & 1 \\ \hline
Fresno, CA & 3.19 & 4 & 3 & 2 & 1 \\ \hline
Kern, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Los Angeles, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Orange, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Riverside, CA & 3.19 & 4 & 3 & 2 & 3 \\ \hline
Sacramento, CA & 3.19 & 4 & 2 & 2 & 3 \\ \hline
San Bernardino, CA & 3.19 & 4 & 2 & 2 & 3 \\ \hline
San Diego, CA & 3.19 & 4 & 2 & 2 & 3 \\ \hline
San Francisco, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
San Mateo, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Santa Barbara, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Santa Clara, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Tulare, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Ventura, CA & 3.19 & 4 & 2 & 10 & 3 \\ \hline
Adams, CO & 3.26 & 11 & 2 & 9 & 3 \\ \hline
Arapahoe, CO & 3.26 & 11 & 2 & 9 & 2 \\ \hline
Denver, CO & 3.26 & 11 & 2 & 9 & 2 \\ \hline
El Paso, CO & 3.26 & 11 & 2 & 9 & 2 \\ \hline
Jefferson, CO & 3.26 & 11 & 10 & 9 & 2 \\ \hline
Weld, CO & 3.26 & 11 & 10 & 9 & 2 \\ \hline
Fairfield, CT & 3.23 & 8 & 10 & 9 & 2 \\ \hline
Hartford, CT & 3.23 & 8 & 10 & 9 & 2 \\ \hline
New Haven, CT & 3.23 & 8 & 10 & 9 & 2 \\ \hline
New Castle, DE & 3.24 & 9 & 10 & 9 & 2 \\ \hline
Washington, DC & 4.1 & 17 & 10 & 9 & 2 \\ \hline
Broward, FL & 4.3 & 19 & 10 & 9 & 2 \\ \hline
Duval, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Hillsborough, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Lee, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Miami-Dade, FL & 4.3 & 19 & 10 & 8 & 9 \\ \hline
Orange, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Palm Beach, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Pinellas, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Polk, FL & 4.3 & 19 & 9 & 8 & 9 \\ \hline
DeKalb, GA & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Dougherty, GA & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Fulton, GA & 4.3 & 19 & 9 & 8 & 9 \\ \hline
Cook, IL & 3.21 & 6 & 9 & 8 & 9 \\ \hline
DuPage, IL & 3.21 & 6 & 9 & 8 & 9 \\ \hline
Kane, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Lake, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Will, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Winnebago, IL & 3.21 & 6 & 9 & 8 & 10 \\ \hline
Allen, IN & 3.24 & 9 & 9 & 8 & 10 \\ \hline
Hamilton, IN & 3.24 & 9 & 9 & 8 & 10 \\ \hline
Lake, IN & 3.24 & 9 & 8 & 8 & 10 \\ \hline
Marion, IN & 3.24 & 9 & 8 & 8 & 10 \\ \hline
St. Joseph, IN & 3.24 & 9 & 8 & 5 & 10 \\ \hline
Black Hawk, IA & NA & 85 & 8 & 5 & 10 \\ \hline
Polk, IA & NA & 85 & 8 & 5 & 10 \\ \hline
Woodbury, IA & NA & 85 & 8 & 5 & 10 \\ \hline
Wyandotte, KS & 3.3 & 15 & 8 & 5 & 10 \\ \hline
Jefferson, KY & 3.26 & 11 & 8 & 5 & 10 \\ \hline
Caddo, LA & 3.23 & 8 & 8 & 5 & 10 \\ \hline
East Baton Rouge, LA & 3.23 & 8 & 8 & 5 & 10 \\ \hline
Jefferson, LA & 3.23 & 8 & 8 & 5 & 10 \\ \hline
Orleans, LA & 3.23 & 8 & 8 & 5 & 7 \\ \hline
Cumberland, ME & 4.2 & 18 & 8 & 5 & 7 \\ \hline
Baltimore City, MD & 3.3 & 15 & 8 & 5 & 7 \\ \hline
Bristol, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Essex, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Hampden, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Middlesex, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Norfolk, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Plymouth, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Suffolk, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Worcester, MA & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Genesee, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Kent, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Macomb, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Oakland, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Washtenaw, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Wayne, MI & 3.24 & 9 & 8 & 5 & 7 \\ \hline
Hennepin, MN & 3.27 & 12 & 8 & 5 & 7 \\ \hline
Ramsey, MN & 3.27 & 12 & 8 & 5 & 7 \\ \hline
Hinds, MS & 4.3 & 19 & 8 & 5 & 8 \\ \hline
St. Louis City, MO & 4.6 & 22 & 8 & 5 & 8 \\ \hline
Douglas, NE & NA & 85 & 8 & 5 & 8 \\ \hline
Lancaster, NE & NA & 85 & 6 & 5 & 8 \\ \hline
Clark, NV & 4.1 & 17 & 6 & 5 & 8 \\ \hline
Washoe, NV & 4.1 & 17 & 6 & 5 & 8 \\ \hline
Hillsborough, NH & 3.27 & 12 & 6 & 5 & 8 \\ \hline
Camden, NJ & 3.21 & 6 & 6 & 5 & 8 \\ \hline
Essex, NJ & 3.21 & 6 & 6 & 5 & 8 \\ \hline
Hudson, NJ & 3.21 & 6 & 6 & 5 & 8 \\ \hline
Mercer, NJ & 3.21 & 6 & 6 & 6 & 6 \\ \hline
Passaic, NJ & 3.21 & 6 & 6 & 6 & 6 \\ \hline
Union, NJ & 3.21 & 6 & 7 & 6 & 6 \\ \hline
Bernalillo, NM & 3.24 & 9 & 7 & 6 & 6 \\ \hline
Albany, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Erie, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
New York City, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Onondaga, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Westchester, NY & 3.22 & 7 & 7 & 6 & 6 \\ \hline
Durham, NC & 3.3 & 15 & 7 & 6 & 6 \\ \hline
Forsyth, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Guilford, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Mecklenburg, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Wake, NC & 3.3 & 15 & 7 & 7 & 6 \\ \hline
Cass, ND & NA & 85 & 4 & 7 & 6 \\ \hline
Cuyahoga, OH & 3.23 & 8 & 4 & 7 & 6 \\ \hline
Franklin, OH & 3.23 & 8 & 4 & 7 & 5 \\ \hline
Hamilton, OH & 3.23 & 8 & 4 & 7 & 5 \\ \hline
Lucas, OH & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Mahoning, OH & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Summit, OH & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Oklahoma, OK & NA & 85 & 4 & 3 & 5 \\ \hline
Multnomah, OR & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Allegheny, PA & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Berks, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Lackawanna, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Lehigh, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Northampton, PA & 4.1 & 17 & 4 & 3 & 5 \\ \hline
Philadelphia, PA & 3.23 & 8 & 4 & 3 & 5 \\ \hline
Kent, RI & 3.28 & 13 & 5 & 3 & 5 \\ \hline
Providence, RI & 3.28 & 13 & 5 & 3 & 5 \\ \hline
Richland, SC & NA & 85 & 5 & 3 & 5 \\ \hline
Minnehaha, SD & NA & 85 & 5 & 4 & 5 \\ \hline
Davidson, TN & 3.31 & 16 & 5 & 4 & 5 \\ \hline
Rutherford, TN & 3.31 & 16 & 5 & 4 & 5 \\ \hline
Shelby, TN & 3.31 & 16 & 5 & 4 & 5 \\ \hline
Bexar, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Collin, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Dallas, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Denton, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
El Paso, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Fort Bend, TX & 4.2 & 18 & 5 & 4 & 5 \\ \hline
Harris, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Potter, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Tarrant, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Travis, TX & 4.2 & 18 & 5 & 4 & 4 \\ \hline
Salt Lake, UT & NA & 85 & 5 & 4 & 4 \\ \hline
Utah, UT & NA & 85 & 5 & 4 & 4 \\ \hline
Alexandria, VA & 3.3 & 15 & 5 & 4 & 4 \\ \hline
Richmond City, VA & 3.3 & 15 & 5 & 4 & 4 \\ \hline
King, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Pierce, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Snohomish, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Yakima, WA & 3.23 & 8 & 5 & 4 & 4 \\ \hline
Brown, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
Kenosha, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
Milwaukee, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
Racine, WI & 3.25 & 10 & 5 & 4 & 4 \\ \hline
\end{supertabular}
}
\section{Discussion}
The U.S. stands alone among the countries of the industrialized world in that the expected ``flattening of the curve'' has not yet taken place. By May 31, 2020, there were 1.8 million confirmed COVID-19 cases and 99,800 deaths; 45 states were in various phases of re-opening and 5 states did not have shelter-in-place orders. By mid-June, cases had started to rise, and as of June 26, there were 2.5 million confirmed cases and over 120,000 deaths. Some states that had begun to re-open parts of their economy have paused or delayed opening in the face of a surge of new cases.
Estimating the impact of mitigation strategies on cases and deaths in the U.S. is challenging, particularly due to the lack of uniformity in timing, implementation, enforcement, and adherence across states. Nevertheless, early observations point to the utility of such measures, particularly shelter-in-place orders, in reducing infection spread and deaths (per data from California and Washington State) \cite{washin}. Counties implementing shelter-in-place orders saw a 30.2\% reduction in weekly cases after 1 week, a 40\% reduction after 2 weeks, and a 48.6\% reduction after 3 weeks \cite{fowler2020effect}. Conversely, model projections estimate a steady rise in cases, and over 181,000 deaths, if such mitigation strategies were to be eased and not re-enforced before October 1 \cite{washin1}.
Many researchers worldwide are currently investigating the changes in social and individual behaviors in response to the sudden yet prolonged outbreaks of COVID-19, e.g., \cite{adiga2020interplay,dascritical}. As the pandemic progresses, and until medical treatments or vaccination are available, new and diverse patterns of mobility, be they voluntary or via interventions, may emerge in each society. It is, therefore, of great importance to epidemiologists and policy-makers to understand the dynamic patterns of dependency between human mobility and COVID-19 incidence in order to precisely evaluate the impact of such measures. In this study, we have shown that such dependencies not only change over time but across locations and populations, and are possibly determined by underlying socioeconomic characteristics. Our analytical approach is particularly relevant considering the high socioeconomic costs of such measures.
We understand that our study has some limitations. We note that each step of our framework could be improved in isolation or as a pipeline, which we aim to do in our future work. We have also developed a prototype of an interactive tool to run online the steps of our analytical pipeline. It will be made publicly available shortly upon completion.
Here it is important to note the so-called ecological fallacy of inferring individual health outcomes from data or results obtained at the city or county level. Such inference may suffer from incorrect assumptions and biases, which, however unintentional, must be avoided. Any views reflected in the analysis or results of our study are those of the authors only, and not of the organizations they are associated with.
\section{Introduction}
A \emph{$2$-$(v,k,\lambda)$ design} $\D$ is a pair $(\mathcal{P},\mathcal{B})$ with a set $\mathcal{P}$ of $v$ \emph{points} and a set $\mathcal{B}$ of \emph{blocks}
such that each block is a $k$-subset of $\mathcal{P}$ and each pair of distinct points is contained in $\lambda$ blocks.
We say $\D$ is \emph{nontrivial} if $2<k<v$, and \emph{symmetric} if $v=|\mathcal{B}|$.
All $2$-$(v,k,\lambda)$ designs in this paper are assumed to be nontrivial.
An automorphism of $\D$ is a permutation of the point set $\mathcal{P}$
which preserves the block set.
The set of all automorphisms of $\D$ under composition of permutations forms a
group, denoted by ${\rm Aut}(\D)$.
A subgroup $G$ of ${\rm Aut}(\D)$ leaves invariant a partition $\mathcal{C}$ of $\mathcal{P}$ if each element
of $G$ permutes the parts of $\mathcal{C}$ setwise. A partition $\mathcal{C}$ is trivial
if either $\mathcal{C}$ consists of singleton sets, or $\mathcal{C}=\{\mathcal{P}\}$; and $G$ is
\emph{point-primitive} if the only $G$-invariant partitions of $\mathcal{P}$ are the
trivial ones. Otherwise $G$ is said to be \emph{point-imprimitive}.
A \emph{flag} of $\D$ is a
pair $(\alpha,B)$ where $\alpha\in\mathcal{P}$, $B\in\mathcal{B}$, and $B$ contains $\alpha$.
A subgroup $G$ of ${\rm Aut}(\D)$ is said to be \emph{flag-transitive} if $G$ acts transitively on the set of flags of $\D$.
A seminal result of Higman and McLaughlin \cite{HM} in 1961 showed that, in the case where $\lambda=1$, a flag-transitive subgroup of automorphisms is point-primitive. This breakthrough spurred others to discover whether this implication might hold more generally.
In particular Dembowski \cite[2.3.7(a)]{Dem} proved (in his 1968 book) that the same conclusion holds if $\lambda$ is coprime to the number $r$ of blocks containing a given point. However it does not hold in general.
As pointed out by Davies \cite{Dav1}, Cameron and Kantor \cite[Theorem III]{CK79} showed that the design whose points are the $2^{n+1}-1$ points of the projective space ${\rm PG}(n,2)$, $n$ odd, and whose blocks are the hyperplane-complements, with natural incidence, admits $ {\rm P\Gamma L}((n+1)/2,4)$ as a flag-transitive group of automorphisms that is point-imprimitive. For these designs $\lambda = 2^{n-1}$ grows exponentially with $n$.
On the other hand, in \cite{Dav1}, Davies also
established that, for fixed $\lambda$, there are only finitely many flag-transitive, point-imprimitive 2-designs, by showing that the block-size $k$ and the number $v$ of points are both bounded in terms of $\lambda$. However he did not give explicit upper bounds. Some years later Cameron and the second author \cite[Proposition 4.1]{CP93} showed that also $v\leq (k-2)^2$, that is, $v$ is bounded above in terms of $k$, for flag-transitive, point-imprimitive designs, and that
the smallest possible block size is $k= 6$. Recently Zhan and Zhou \cite{ZZ18} found that there are exactly 14 examples with $k=6$, all with $v= (k-2)^2=16$.
Davies' examples above from projective geometry are all \emph{symmetric} designs, and indeed much progress has been made studying flag-transitive symmetric $2$-designs.
In \cite{Reg05}, O'Reilly-Regueiro showed that a flag-transitive, point-imprimitive, symmetric design must have $k\leq\lambda(\lambda+1)$, and further work (see \cite{LPR09, P07, PZ06}) refined this bound and classified all examples with $\lambda$ up to $4$.
In this paper we find explicit bounds for $k$, and hence for $v$, in terms of $\lambda$, without assuming that the design is symmetric.
\begin{theorem}\label{main}
Let $\cal D =(\mathcal{P}, \cal B)$ be a $2$-$(v,k,\lambda)$ non-trivial design admitting
a flag-transitive point-imprimitive group of automorphisms. Then $k\leq 2\lambda^2(\lambda-1)$ and $v\leq \left(2\lambda^2(\lambda-1)-2\right)^2$.
\end{theorem}
Recall the result of Higman and McLaughlin \cite{HM} that for $\lambda=1$, all flag-transitive $2$-designs are point-primitive. A recent result of the authors and colleagues in \cite[Theorem 1.1]{DLPX} shows that there are up to isomorphism just two designs which prevent this conclusion holding also for $\lambda=2$: namely there are exactly two flag-transitive, point-imprimitive $2$-$(v,k,2)$ designs, and both of them are $2-(16,6,2)$ designs.
For the cases $\lambda=3,4$, we list in Proposition~\ref{la=3} all `numerically feasible' parameter sets for flag-transitive, point-imprimitive $2$-designs, that is to say, parameter sets which satisfy all the conditions imposed by our preliminary results in Section~\ref{prelim}.
We specify not only the parameters $v, k, \lambda$, but also the number $d$ of parts and the part-size $c$ of a nontrivial invariant point-partition, and the (constant) size $\ell$ of a non-empty intersection between a part of this partition and a block of the design (see Lemma~\ref{le}). Although we have not managed to complete the classification of all examples with $\lambda\leq 4$, which is given in \cite{LPR09, P07, PZ06} in the symmetric case, we have been able to classify all examples with fewer than $100$ points, and in so doing, we constructed a design on $36$ points for which the full automorphism group is flag-regular (so that no proper subgroup is flag-transitive). We thought this design, which has the parameters in line 5 of Table~$\ref{tableintro}$, was new (we asked a few design experts and none had seen it before), but after finishing our analysis we discovered that the design was identified by Zhang and Zhou in \cite[Theorem 1.3]{ZZ}. However, no construction of the design is given in \cite{ZZ}; see Remark~\ref{rem:shenglin}.
In Section~\ref{sec:36}, we give several constructions and discuss this interesting design further.
\begin{theorem}\label{class}
There are exactly eleven $2$-$(v,k,\lambda)$ non-trivial designs admitting
a flag-transitive point-imprimitive group $G$ of automorphisms, with $\lambda\leq 4$ and $v<100$, with two of them admitting two partitions of different sizes. If $G$ preserves a partition into $d$ parts of size $c$, then $(\lambda,v, k,r,c,d)$ are as in one of the lines of Table $\ref{tableintro}$, the penultimate column gives the number of designs with these parameters (up to isomorphism), and the last column gives a reference when possible.
\end{theorem}
\begin{table}[h]
\centering
\caption{\label{tableintro}The eleven designs for $\lambda\leq 4$ and $v<100$}
\begin{tabular}{|cccccc|c|c|}
\hline
$\lambda$&$v$&$k$&$r$&$c$&$d$&Number &Reference\\
\hline
$2$&$16$&$6$&$6$&$4$&$4$&$2$&{\rm \cite{AS, Burau, Huss}}\\
$3$&$45$&$12$&$12$&$9$&$5$&$1$&{\rm \cite{P07}}\\
$4$&$15$&$8$&$8$&$3$&$5$&$1$&{\rm \cite{CK79, Dav1}}\\
$4$&$16$&$6$&$12$&$4$&$4$&$2$&{\rm \cite{ZZ18}}\\
$4$&$36$&$8$&$20$&$6$&$6$&$1$&Construction~{\rm\ref{con1}}, \cite{ZZ}\\
$4$&$96$&$20$&$20$&$6$&$16$&$2$&{\rm \cite{LPR09}}\\
$4$&$96$&$20$&$20$&$16$&$6$&$4$&{\rm \cite{LPR09}}\\
\hline
\end{tabular}
\end{table}
\begin{remark}
(a) We note that the two designs with $(\lambda,v, k,r,c,d)=(4,96,20,20,6,16)$ in Table~\ref{tableintro}
are among the four flag-transitive designs for $(4,96,20,20,16,6)$ (see the classification in \cite{LPR09}). Thus there are exactly eleven designs satisfying the conditions of Theorem~\ref{class}, two of which admit two nontrivial partitions with different parameters $(c,d)$. See Remark~\ref{rem:t2} for more details.
(b)
The smallest value of $\lambda$ for which flag-transitive, point-imprimitive designs may exist is $\lambda=2$ and, as we mentioned above, in this case it follows from \cite[Theorem 1.1]{DLPX} that $\cal D$ is one of two known $2-(16,6,2)$ designs. Thus the upper bounds on $(k, v)$ in Theorem~\ref{main} when $\lambda=2$, namely $(8,36)$, are far from tight. Also, for $\lambda=4$, it follows from Proposition~\ref{la=3} that the bounds on both $k$ and $v$ in Theorem~\ref{main} are definitely not tight. If $\lambda=3$,
then the value of $k$ could possibly meet the bound $k=36$ of Theorem~\ref{main}, with the remaining parameters as in one of three lines of the table in Proposition~\ref{la=3}.
Thus we ask in general:
\end{remark}
\begin{question}\label{q1}
Can the functions of $\lambda$ bounding $k$ and $v$ in Theorem~\ref{main} be improved?
\end{question}
We think the answer to Question~\ref{q1} is `yes' (with the possible exception of $\lambda=3$) and would like to see improved polynomial bounds.
If $\lambda=3$, then an answer to the next question would settle the tightness of the bounds in Theorem~\ref{main} for that case.
\begin{question}\label{q2}
Does there exist a flag-transitive, point-imprimitive design with parameter set
\[
(\lambda,v, k,r,c,d) = (3,561, 36, 48, 17, 33),\ (3,561, 36, 48, 33, 17), \text{ or }
(3, 1156, 36, 99, 34, 34)?
\]
\end{question}
When $\lambda=3$, there are seven lines of the table in Proposition~\ref{la=3} which have not been treated in Theorem~\ref{class}, that is to say, four lines in addition to the parameter sets in Question~\ref{q2}. Also, for $\lambda=4$, there are ten lines of the table in Proposition~\ref{la=3} which have not been treated in Theorem~\ref{class}.
\begin{problem}\label{prob}
Classify all the flag-transitive, point-imprimitive $2-(v,k,\lambda)$ designs with parameter sets $(\lambda,v, k,r,c,d)$ as in one of the 17 lines of the tables in Proposition~\ref{la=3} with $v\geq 100$.
\end{problem}
A complete answer to Problem~\ref{prob} would finish the classification of the flag-transitive, point-imprimitive $2-(v,k,\lambda)$ designs with $\lambda\leq 4$.
A partial answer to Problem~\ref{prob} is given in \cite[Theorem 1.3]{ZZ} under the additional assumption that the flag-transitive, point-imprimitive group is also point-quasiprimitive (that is, all nontrivial normal subgroups are point-transitive). Thus when attacking Problem~\ref{prob}, one may assume that the group is not point-quasiprimitive.
In Section \ref{prelim} we list some well-known facts about $2$-designs and prove some numerical conditions for flag-transitive point-imprimitive $2$-designs.
In Section \ref{sec:main} we prove Theorem \ref{main}. In Section \ref{sec:small} we determine all numerically feasible parameters sets for $\lambda=3, 4$. In Section \ref{sec:36} we give several constructions for a $2$-design on $36$ points, and we show that up to isomorphism this design is the unique flag-transitive, point-imprimitive $2-(36,8,4)$ design (Proposition~\ref{lem:36unique}).
Finally, in Section \ref{sec:class}, we classify all flag-transitive point-imprimitive $2$-designs with $\lambda\leq 4$ and $v<100$, providing detailed information on their automorphism groups and on how to construct them with Magma \cite{magma}.
\section{Preliminary results on designs}\label{prelim}
We first collect some useful results on flag-transitive designs.
\begin{lemma} \label{condition 1}
Let $\D =(\mathcal{P}, \cal B)$ be a $2$-$(v,k,\lambda)$ design and let $b=|\cal B|$.
Then the number of blocks of $\D$ containing each point of $\D$ is a constant $r$ satisfying the following:
\begin{enumerate}
\item[\rm(i)] $r(k-1)=\lambda(v-1)$;
\item[\rm(ii)] $bk=vr$;
\item[\rm(iii)] $b\geq v$ and $r\geq k$;
\item[\rm(iv)] $r^2>\lambda v$.
\end{enumerate}
In particular, if $\D$ is not symmetric then $b>v$ and $r>k$.
\end{lemma}
\par\noindent{\sc Proof~}
Parts~(i) and~(ii) follow immediately by simple counting.
Part~(iii) is Fisher's Inequality \cite[p.99]{Ryser}.
By~(i) and~(iii) we have
\[
r(r-1)\geq r(k-1)=\lambda(v-1)
\]
and so $r^2\geq\lambda v+r-\lambda$.
Since $\D$ is nontrivial, we deduce from (i) that $r>\lambda$.
Hence $r^2>\lambda v$, as stated in part~(iv).
\qed
We now prove the following important technical proposition.
\begin{lemma}
\label{le} Let $\cal D =(\mathcal{P}, \cal B)$ be a nontrivial $2$-$(v,k,\lambda)$ design admitting
a flag-transitive point-imprimitive group $G$ of automorphisms, which leaves invariant a nontrivial point-partition $\mathcal{C}$ into $d$ parts of size $c$. Then the non-empty intersections $B\cap \Delta$, for $B\in\cal B$ and $\Delta\in\mathcal{C}$, have a constant size $\ell$, say. Moreover, the integer $x=k-1-d(\ell-1)$ is positive, and the following equalities, inequalities and divisibility conditions hold:
\begin{enumerate}[(i)]
\item $\lambda\geq 2$;
\item $\ell\mid k\quad \mbox{and}\quad 1<\ell < k$;
\item $\lambda(c-1)=r(\ell-1)$;
\item $k=xc+\ell$;
\item $rx=\lambda (d-1)$;
\item $k\mid \frac{\lambda c(c-1)(k-(x+1))}{(\ell-1)^2}$; in particular, if $\ell=2$ then $k\mid \lambda c(c-1)(x+1)$;
\item $k\mid \lambda\ell(x+1)(x+\ell)$;
\item $x(\ell-1)\leq \lambda-1$;
\item $c\geq \frac{\lambda+\ell(\ell-1)}{\lambda-x(\ell-1)}$;
\item $k\geq \frac{\lambda(x+\ell)}{\lambda-x(\ell-1)}$.
\end{enumerate}
\end{lemma}
\par\noindent{\sc Proof~}
By the celebrated result of Higman and McLaughlin \cite{HM} mentioned above, if a $2-(v,k,1)$ design (linear space) is flag-transitive, then it is point-primitive. Thus $\lambda\geq 2$, proving (i).
Let
$\mathcal{C}=\{\Delta_1,\Delta_2,\dots,\Delta_d\}$, with $1<d<v$ and $|\Delta_i|=c>1$ for each $i$,
so that
\begin{equation}\label{Eq1}
v=cd.
\end{equation}
Let $B, B'\in \mathcal{B}$ and $\Delta, \Delta'\in \mathcal{C}$ such that $B\cap\Delta$ and $B'\cap\Delta'$ are non-empty, and choose $\alpha\in B\cap\Delta$ and $\alpha'\in B'\cap\Delta'$. Since $G$ is flag-transitive, there exists $g\in G$ such that $(B,\alpha)^g=(B',\alpha')$. As $\alpha^g=\alpha'$ we have $\Delta^g=\Delta'$, and hence $(B\cap\Delta)^g= B'\cap\Delta'$. Thus $\ell=|B\cap\Delta|$ is independent of $B$ and $\Delta$, and so $\ell\mid k$. Since $G$ is block-transitive and $\cal D$ is a $2$-design, it follows that each block contains a pair of points in the same part of $\mathcal{C}$, and a pair of points from different parts of $\mathcal{C}$. Thus $ 1<\ell < k$, and this proves (ii).
Fix a point $\alpha$, a block $B$ containing $\alpha$, and let $\Delta$ be the part of $\mathcal{C}$ containing $\alpha$.
Counting the point-block pairs $(\alpha',B')$ with $\alpha'\in\Delta\setminus\{\alpha\}$ and $B'$ containing $\alpha$ and $\alpha'$, we obtain
$
\lambda(c-1)=r(\ell-1),
$
proving (iii).
Multiplying both sides of this equation by $k-1$, and using Lemma~\ref{condition 1}(i) and equation \eqref{Eq1}, we find that $\lambda(k-1)(c-1)=r(k-1)(\ell-1) = \lambda (v-1)(\ell-1)$, and hence that
$$
(cd-1)(\ell-1)=(k-1)(c-1).
$$
Thus $cd(\ell-1)-(\ell-1)=c(k-1)-(k-1)$, from which we deduce that $k-\ell=c\left(k-1-d(\ell-1)\right)$.
Since $x=k-1-d(\ell-1)$ and since $\ell<k$, this implies that $x$ is a positive integer. Also it follows from this equation that
$
k=xc+\ell,$ proving (iv).
Using Lemma~\ref{condition 1}(i), part (iii) and \eqref{Eq1}, we get that
$$
rx=r(k-1)-dr(\ell-1)=\lambda(v-1)-d\lambda(c-1)=\lambda (d-1),
$$ proving (v).
By part (v) we have $d= 1+(rx/\lambda)$, and by (iii), $r = \lambda(c-1)/(\ell-1)$, so that $d=1+ x(c-1)/(\ell-1)$. Then part (iv) and \eqref{Eq1} imply that
\begin{align*}
vr&=cdr=c\left(\frac{c-1}{\ell-1}\cdot x+1\right)\left( \frac{\lambda(c-1)}{\ell-1}\right)\\
&=\frac{\lambda c(c-1)(cx-x+\ell-1)}{(\ell-1)^2}=\frac{\lambda c(c-1)(k-(x+1))}{(\ell-1)^2}.
\end{align*}
By Lemma~\ref{condition 1}(ii), $bk=vr$, so
$$
k\mid \frac{\lambda c(c-1)(k-(x+1))}{(\ell-1)^2}.
$$ In particular, for $\ell=2$, $k\mid\lambda c(c-1)(k-(x+1))$ and thus also $k\mid\lambda c(c-1)(x+1)$, proving (vi).
It follows that
$k$ divides $\lambda c(c-1)(x+1)$, and hence $k$ also divides $\lambda (xc)(xc-x)(x+1)$, which by part (iv) is equal to $\lambda (k-\ell)(k-\ell-x)(x+1)$. Thus
$$k\mid \lambda \ell(\ell+x)(x+1),$$ proving (vii).
On the other hand, since $r\geq k$ (by Lemma~\ref{condition 1}(iii)), and using part (iv) and part (iii), we have
\[
\lambda k>\lambda(k-\ell-x)=\lambda x(c-1)=rx(\ell-1)\geq kx(\ell-1),
\]
and so $x(\ell-1)<\lambda$. Since all these parameters are integers, (viii) follows.
Now using part (iii), the inequality $r\geq k$, and
part (iv), we find
$$
\lambda(c-1)=r(\ell-1) \geq k(\ell-1) = (xc+\ell)(\ell-1).
$$
Rearranging this inequality gives $c(\lambda - x(\ell-1))\geq \lambda +\ell(\ell-1)$, and since $\lambda-x(\ell-1)>0$ by part (viii), we have
$$
c\geq\frac{\lambda+\ell(\ell-1)}{\lambda-x(\ell-1)},
$$
proving (ix). Finally, using (iv) and (ix),
$$
k=xc+\ell\geq x\cdot\frac{\lambda+\ell(\ell-1)}{\lambda-x(\ell-1)}+\ell=\frac{\lambda(x+\ell)}{\lambda-x(\ell-1)},
$$
proving (x).
\qed
\medskip
We will need the following technical lemma.
\begin{lemma}\label{le2}
Let $z$ be a real number greater than $1$.
The function $g(x,y)=(x+1)(y+1)(x+y+1)$ from $\mathbb{R}^2$ to $\mathbb{R}$, restricted to the hyperbola $xy=z$ with $x,y\geq 1$ decreases as $x$ increases between $1$ and $\sqrt{z}$, increases as $x$ increases between $\sqrt{z}$ and $z$, and has a maximum of $2(z+1)(z+2)$ at $(x,y)=(1,z)$ and $(z,1)$.
\end{lemma}
\par\noindent{\sc Proof~}
On the hyperbola $xy=z$, the function $g$ becomes
\begin{align*}
g(x,z/x)&=(x+1) (z/x+1)(x+z/x+1)\\
&=\frac{(x+1)(x+z)(x^2+x+z)}{x^2} \\
&=\frac{x^4+(z+2)x^3+(3z+1)x^2+z(z+2)x+z^2}{x^2} \\
&=x^2+(z+2)x+(3z+1)+z(z+2)x^{-1}+z^2 x^{-2}.
\end{align*}
We can now compute the derivative
\begin{align*}
g'(x,z/x)&=2x+(z+2)-z(z+2)x^{-2}-2z^2 x^{-3} \\
&=\frac{2x^4+(z+2)x^3-z(z+2)x-2z^2 }{x^3}\\
&=\frac{2(x^2-z)(x^2+z)+(z+2)x(x^2-z) }{x^3}\\
&=\frac{(x^2-z)(2(x^2+z)+(z+2)x) }{x^3}.
\end{align*}
Since $x\geq 1$ and $z>1$,
the denominator and second factor of the numerator are obviously positive, while the first factor of the numerator is negative when $x<\sqrt{z}$ and positive when $x>\sqrt{z}$. Therefore the maximum of $g(x,y)$ on the hyperbola is $g(1,z)=g(z,1)=2(z+1)(z+2)$.
\qed
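The factorization of the derivative can also be verified symbolically; a short SymPy check (ours):
\begin{verbatim}
import sympy as sp

x, z = sp.symbols('x z', positive=True)
g = (x + 1) * (z / x + 1) * (x + z / x + 1)
claimed = (x**2 - z) * (2 * (x**2 + z) + (z + 2) * x) / x**3
print(sp.simplify(sp.diff(g, x) - claimed) == 0)  # True
\end{verbatim}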
\medskip
\section{Proof of Theorem~\ref{main}}\label{sec:main}
The preparatory results from Section \ref{prelim} allow us to obtain our first bound.
\begin{proposition}\label{firstbound}
Let $\cal D =(\mathcal{P}, \cal B)$ be a $2$-$(v,k,\lambda)$ non-trivial design admitting
a flag-transitive point-imprimitive group of automorphisms. Then $k\leq 2\lambda^2(\lambda+1)$.
\end{proposition}
\par\noindent{\sc Proof~} Let
all parameters be as in Lemma \ref{le}.
Then
$
k\leq \lambda \ell(\ell+x)(x+1),
$ by Lemma \ref{le}(vii).
At this point, it is convenient to change variables: let $y=\ell-1$ and $\mu=\lambda-1$, so that $1\leq y\leq \mu$, $\mu\geq 1$ and $xy\leq \mu$ by Lemma \ref{le}(viii).
Let $$g(x,y)=(x+1)(y+1)(x+y+1),$$
so that $k\leq (\mu+1) g(x,y)$. We wish to find the maximum of the function $g(x,y)$ on the domain $x,y\geq 1$, $xy\leq \mu$.
Since $g(x,y)$ increases with $y$, for a fixed $x$, the maximum of this function must be on the hyperbola $xy=\mu$.
By Lemma \ref{le2}, the maximum of $g(x,y)$ on that hyperbola is $2(\mu+1)(\mu+2)$ obtained at $(x,y)=(1,\mu)$ and $(\mu,1)$.
%
Therefore $k\leq 2(\mu+1)^2(\mu+2)= 2\lambda^2(\lambda+1)$.
\qed
\medskip
For $\lambda=2$ the bound in Proposition~\ref{firstbound} gives $k\leq 24$. Together with Liang and
Xia, the authors showed in \cite{DLPX} that there are only two imprimitive flag-transitive 2-designs, both of which are $2-(16,6,2)$ designs. Thus this bound is definitely not tight for all $\lambda$.
For $\lambda=3$ the bound in Proposition~\ref{firstbound} gives $k\leq 72$. That is also not the best possible, as looking at $\lambda=3$ in detail (splitting up into cases for possible $(\ell,x)$) we can show $k\leq 36$, see Proposition \ref{la=3} (note our list below matches with the cases listed in \cite{Dav1}).
For $\lambda=4$ the bound in Proposition~\ref{firstbound} gives $k\leq 160$ (which is better than what is stated in \cite{Dav1}), but we improve this in Proposition \ref{la=3} to $k\leq 80$.
Now we prove the main theorem. Note that we use here Proposition \ref{la=3} which is in the next section. That proposition only relies on results from Section \ref{prelim} so our argument is not circular.
\medskip
\par\noindent{\sc Proof~}[Theorem \ref{main}] Let $G$ be the flag-transitive automorphism group, and let $\mathcal{C}$ be the $G$-invariant non-trivial partition. Let
all parameters be as in Lemma \ref{le}.
If $\lambda=2$, we showed in \cite{DLPX} (by group-theoretic arguments) that $k=6<2\lambda^2(\lambda-1)$.
The statement is clearly true for $3\leq \lambda\leq 4$ by Proposition \ref{la=3} below, so assume $\lambda\geq 5$.
We first claim that $k$ cannot be equal to the bound found in Proposition \ref{firstbound}. Assume to the contrary that $k=2\lambda^2(\lambda+1)$. Looking at the proof of Proposition \ref{firstbound}, this implies that $k=\lambda \ell(\ell+x)(x+1)$, $x(\ell-1)=\lambda-1$, and $\ell=2$ or $\lambda$.
If $\ell=2$, then $x=\lambda-1$, and $x$ divides $k-\ell=2(\lambda^3+\lambda^2-1)$ by Lemma \ref{le}(iv). It follows that $\lambda-1$ divides $2$, so $\lambda=2$ or $3$, a contradiction since $\lambda\geq 5$.
If $\ell=\lambda$, then $x=1$, $c=k-\ell=2\lambda^3+2\lambda^2-\lambda$ by Lemma \ref{le}(iv), and $\lambda-1$ divides $\lambda(c-1)=\lambda(2\lambda^3+2\lambda^2-\lambda-1)$ by Lemma \ref{le}(iii). It also follows that $\lambda-1$ divides $2$, so $\lambda=2$ or $3$, contradicting $\lambda\geq 5$.
Thus the claim is proved.
For convenience, we now use the notation $y=\ell-1$ and $\mu=\lambda-1\geq 4$, as in the proof of Proposition \ref{firstbound}. Recall that $k$ divides $\lambda \ell(x+1)(x+\ell)=(\mu+1)g(x,y)$ by Lemma~\ref{le}(vii), and we have just shown that $k<2\lambda^2(\lambda+1)$.
We claim that $k\leq \max\{\lambda^2(\lambda+1), (\mu+1) X\}$, where $X$ is the second largest value of $g(x,y)$ on the domain $x,y\geq 1$, $xy\leq \mu$ with $x,y$ integers. We see this as follows. In Proposition \ref{firstbound}, it was shown that the maximum value of $g(x,y)$ on this domain is $2(\mu+1)(\mu+2)=2\lambda(\lambda+1)$. If $g(x,y)$ takes this value, that is, if $(\mu+1)g(x,y)=2\lambda^2(\lambda+1)$, then $k$ must be a proper divisor and hence $k\leq \lambda^2(\lambda+1)$. On the other hand, if
$(\mu+1)g(x,y)<2\lambda^2(\lambda+1)$, then $k\leq (\mu+1)g(x,y)\leq (\mu+1) X$. This proves the claim.
We now determine $X$, the second largest value of $g(x,y)$. Note that, if $xy<\mu$, then by Lemma \ref{le2}, $g(x,y)\leq g(1,xy)\leq g(1,\mu-1)=2\mu(\mu+1)$.
Hence $X$ is either $2\mu(\mu+1)$, or $g(x,\mu/x)$ for some integer $x\neq 1$ properly dividing $\mu$. If $\mu$ is prime then such an $x$ does not exist, and $X=2\mu(\mu+1)$.
Assume $\mu$ is not a prime, and let $p$ be the smallest prime factor of $\mu$, so that $\mu=pq$ where $q\geq p$. Note that $p\leq \sqrt{\mu}$. It follows from Lemma \ref{le2} that the largest value for $g(x,\mu/x)$, for some integer $x\neq 1$ dividing $\mu$, is $g(p,q)=(p+1)(q+1)(p+q+1)$.
We claim that $2\mu(\mu+1)>(p+1)(q+1)(p+q+1)$, with the unique exception of $\mu=4$.
Using that $\mu=pq$, the above inequality can be rewritten as
$$
q^2(2p^2-p-1)-q(p^2+p+2)-(p+1)^2>0.
$$
Suppose first that $p$ is odd. Then $2p^2-p-1\geq p^2+p+2,$ and it is elementary (taking the derivative with respect to $q$) to see that the left-hand side grows with $q$, and so
$$
q^2(2p^2-p-1)-q(p^2+p+2)-(p+1)^2\geq p^2(2p^2-p-1)-p(p^2+p+2)-(p+1)^2=2p^4-2p^3-3p^2-4p-1,
$$
which is positive, so the claimed inequality holds. Suppose now that $p=2$. If $p=2$ and $q=3$ then
$$
q^2(2p^2-p-1)-q(p^2+p+2)-(p+1)^2=12,
$$
and so $q^2(2p^2-p-1)-q(p^2+p+2)-(p+1)^2>0$ for $p=2$ and $q\geq 3$, but
$$
q^2(2p^2-p-1)-q(p^2+p+2)-(p+1)^2<0
$$
for $p=q=2$, that is for $\mu=\lambda-1=4.$ This proves the claim.
To summarise,
either $X=2\mu(\mu+1)$, or $\mu=4$ in which case $X=g(2,2)=45$.
Recall from above that $k\leq \max\{\lambda^2(\lambda+1),(\mu+1)X\}$.
If $\lambda>5$, then $(\mu+1)X=2\mu(\mu+1)^2=2\lambda^2(\lambda-1),$ which is larger than $\lambda^2(\lambda+1)$, so $k\leq 2\lambda^2(\lambda-1)$.
Assume now that $\lambda=5$. Then $(\mu+1)X=225,$ which is larger than $\lambda^2(\lambda+1)=150$, so $k\leq 225$. We claim that $k\leq 2\lambda^2(\lambda-1)=200$. Assume for a contradiction that $200<k\leq 225$. If $x$ or $y$ is equal to $1$ then $k$ divides $60,120,200$ or $300$ by Lemma \ref{le}(vii), but we have seen above that $k\neq 300$, so in all those cases we have $k\leq 200$. Thus we must have that $x=y=2$ in which case $k$ divides $225$. This implies that $k=225$. Using Lemma \ref{le} parts (iv), (iii), (v) to get $c$, $r$, $d$ respectively, we deduce that $(c,d,k,r,\ell)=(111,111,225,275,3)$. Let $\Delta\in\mathcal{C}$, $D=G^\mathcal{C}$, and $L=(G_{\Delta})^{\Delta}$. By \cite[Theorem 5.5]{PS} we may assume that $G\leq L\wr D\leq \Sym_{111}\wr \Sym_{111}$, acting imprimitively. Also, by the argument above, there cannot be any other values of $c,d$ yielding a flag-transitive example, and hence both $L$ and $D$ are primitive of degree $111$. The only such groups are $\Alt_{111}$ and $\Sym_{111}$.
Let $\alpha, \beta$ be distinct points of $\Delta$, and let $B_1, \dots, B_5\in\mathcal{B}$ be the $\lambda=5$ blocks containing $\{\alpha,\, \beta\}$. Then $G_{\{\alpha, \beta\}}$, which is a subgroup of $G_\Delta$, fixes $X:=\cup_{i=1}^5(B_i\cap\Delta)$ setwise, and since each $B_i\cap\Delta$ has size $3$, the set $X$ has size $s$ where $3\leq s\leq 7$. On the other hand, $(G_{\{\alpha,\beta\}})^\Delta$ is the setwise stabiliser in $L$ of the pair $\{\alpha,\beta\}$. Since $L$ is $\Alt_{111}$ or $\Sym_{111}$, $G_{\{\alpha,\beta\}}$ has orbits in $\Delta$ of lengths $2$ and $109$, whereas $X\setminus\{\alpha,\beta\}$, of size between $1$ and $5$, would have to be a union of such orbits, which is a contradiction.
Therefore, for $\lambda=5$, we also have that $k\leq 2\lambda^2(\lambda-1)$.
This proves the upper bound for $k$ in all cases. By \cite[Proposition 4.1]{CP93}, $v\leq (k-2)^2$, and the claimed upper bound on $v$ follows.
This finishes the proof.
\qed
\section{Numerically feasible parameter tuples for small \texorpdfstring{$\lambda$}{lambda}}\label{sec:small}
Recall that by the theorem of Higman and McLaughlin~\cite{HM}, $\lambda\neq 1$.
By \cite[Theorem 1.1]{DLPX}, when $\lambda=2$, the only values for $v, k, r$ are $v=16$, $k=r=6$. Lemma \ref{le}(viii) yields $\ell=2$, and by Lemma \ref{le}(iv), $c=k-2=4$ and hence $d=v/c=4$ also. For this reason, we only need to consider $\lambda\geq 3$.
For specific small $\lambda$, we can list all the pairs $(\ell,x)$ satisfying Lemma \ref{le}(viii), and do a more refined investigation leading to all the possible tuples $(\lambda,v,k,r,c,d,\ell)$ that are \emph{numerically feasible}, in the sense that they satisfy all of the restrictions from Lemmas~\ref{condition 1} and \ref{le}.
\begin{proposition}\label{la=3}
Suppose that $\lambda\in\{3,4\}$.
Then the numerically feasible parameters $(\lambda,v,k,r,c,d,\ell)$ for a $2$-$(v,k,\lambda)$ non-trivial design admitting
a flag-transitive point-imprimitive group of automorphisms are as in one of the rows of the following tables, where $c,d,\ell$ are as in Lemma \ref{le} and $r$ is the number of blocks through a point.
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{|ccccccc|}
\hline
$\lambda$&$v$&$k$&$r$&$c$&$d$&$\ell$\\
\cline{1-7}
$3$&$16$&$6$&$9$&$4$&$4$&$2$\\
$3$&$45$&$12$&$12$&$5$&$9$&$2$\\
$3$&$45$&$12$&$12$&$9$&$5$&$3$\\
$3$&$100$&$12$&$27$&$10$&$10$&$2$\\
$3$&$120$&$18$&$21$&$8$&$15$&$2$\\
$3$&$120$&$18$&$21$&$15$&$8$&$3$\\
$3$&$256$&$18$&$45$&$16$&$16$&$2$\\
$3$&$561$&$36$&$48$&$17$&$33$&$2$\\
$3$&$561$&$36$&$48$&$33$&$17$&$3$\\
$3$&$1156$&$36$&$99$&$34$&$34$&$2$\\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}{|ccccccc|}
\hline
$\lambda$&$v$&$k$&$r$&$c$&$d$&$\ell$\\
\hline
$4$&$15$&$8$&$8$&$3$&$5$&$2$\\
$4$&$16$&$6$&$12$&$4$&$4$&$2$\\
$4$&$36$&$8$&$20$&$6$&$6$&$2$\\
$4$&$45$&$12$&$16$&$5$&$9$&$2$\\
$4$&$45$&$12$&$16$&$9$&$5$&$3$\\
$4$&$96$&$20$&$20$&$6$&$16$&$2$\\
$4$&$96$&$20$&$20$&$16$&$6$&$4$\\
$4$&$100$&$12$&$36$&$10$&$10$&$2$\\
$4$&$196$&$16$&$42$&$14$&$14$&$2$\\
$4$&$231$&$24$&$40$&$11$&$21$&$2$\\
$4$&$231$&$24$&$40$&$21$&$11$&$3$\\
$4$&$280$&$32$&$36$&$10$&$28$&$2$\\
$4$&$280$&$32$&$36$&$28$&$10$&$4$\\
$4$&$484$&$24$&$84$&$22$&$22$&$2$\\
$4$&$1976$&$80$&$100$&$26$&$76$&$2$\\
$4$&$1976$&$80$&$100$&$76$&$26$&$4$\\
$4$&$2116$&$48$&$180$&$46$&$46$&$2$\\
\hline
\end{tabular}
\end{minipage}
\end{proposition}
Note that in each case the number of blocks can be determined using the formula given by Lemma~\ref{condition 1}(ii).
\medskip
\par\noindent{\sc Proof~} It is easy to check that all parameter sets in the tables satisfy all conditions from Lemmas~\ref{condition 1} and \ref{le}, with $x=k-1-d(\ell-1)$.
We now show that for each $\lambda=3,4$, the parameters must be as in one of the tables.
\medskip\noindent
\fbox{The case $\lambda=3$}\quad
Lemma \ref{le}(viii) yields three possibilities for $(\ell,x)$, namely $(2,1)$, $(2,2)$, $(3,1)$.
We now split the analysis into these 3 cases.
\begin{enumerate}[(i)]
\item $(\ell,x)=(2,1)$. By Lemma \ref{le}(vii) $k\mid 36$ and by Lemma \ref{le}(x) $k\geq {9/2}$. Moreover $k$ is even by Lemma \ref{le}(ii).
Thus $k\in\{6,12,18,36\}$. By Lemma \ref{le}(iv) $c=k-2$, and Lemma \ref{le}(vi) yields $k\mid {6 c(c-1)}$ which is satisfied in each case. Combining Lemma \ref{le}(iii) and (v) yields $d=c$. By Lemma \ref{le}(iii) $r=3(c-1)$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(4,4,6,9,2)$, $(10,10,12,27,2)$, $(16,16,18,45,2)$, $(34,34,36,99,2)$.
\item $(\ell,x)=(2,2)$.
By Lemma \ref{le}(vii) $k\mid 72$ and by Lemma \ref{le}(x) $k\geq 12$.
Thus $k\in\{12,18,24,36,72\}$. By Lemma \ref{le}(iv) $c=k/2-1$, and Lemma \ref{le}(vi) yields $k\mid {9 c(c-1)}$ which is not satisfied for $k=24$ and $k=72$. Combining Lemma \ref{le}(iii) and (v) yields $d=2c-1$. By Lemma \ref{le}(iii) $r=3(c-1)$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(5,9,12,12,2)$, $(8,15,18,21,2)$, $(17,33,36,48,2)$.
\item $(\ell,x)=(3,1)$.
By Lemma \ref{le}(vii) $k\mid 72$ and by Lemma \ref{le}(x) $k\geq 12$.
Thus $k\in\{12,18,24,36,72\}$. By Lemma \ref{le}(iv) $c=k-3$, and Lemma \ref{le}(vi) yields $k\mid \frac{3 c(c-1)(c+1)}{4}$ which is not satisfied for $k=24$ and $k=72$. Combining Lemma \ref{le}(iii) and (v) yields $d=(c+1)/2$. By Lemma \ref{le}(iii) $r=3(c-1)/2$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(9,5,12,12,3)$, $(15,8,18,21,3)$, $(33,17,36,48,3)$.
\end{enumerate}
\medskip\noindent
\fbox{The case $\lambda=4$}\quad
Lemma \ref{le}(viii) yields five possibilities for $(\ell,x)$, namely $(2,1)$, $(2,2)$, $(2,3)$, $(3,1)$, $(4,1)$.
We now split the study into these cases.
\begin{enumerate}[(i)]
\item $(\ell,x)=(2,1)$. By Lemma \ref{le}(vii) $k\mid 48$ and by Lemma \ref{le}(x) $k\geq 4$. Moreover $k$ is even by Lemma \ref{le}(ii).
Thus $k\in\{4, 6, 8, 12, 16, 24, 48\}$. By Lemma \ref{le}(iv) $c=k-2$, and Lemma \ref{le}(vi) yields $k\mid 8c(c-1)$ which is satisfied in each case. Combining Lemma \ref{le}(iii) and (v) yields $d=c$. By Lemma \ref{le}(iii) $r=4(c-1)$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(2,2,4,4,2)$, $(4,4,6,12,2)$, $(6,6,8,20,2)$, $(10,10,12,36,2)$, $(14,14,16,42,2)$, $(22,22,24,84,2)$, $(46,46,48,180,2)$. However, in the first case, $k=v$ so this design is trivial.
\item $(\ell,x)=(2,2)$.
By Lemma \ref{le}(vii) $k\mid 96$ and by Lemma \ref{le}(x) $k\geq 8$.
Thus $k\in\{8, 12, 16, 24, 32, 48, 96\}$. By Lemma \ref{le}(iv) $c=k/2-1$, and Lemma \ref{le}(vi) yields $k\mid 12 c(c-1)$, which is not satisfied for $k=16$, $k=32$, $k=48$ and $k=96$ (for instance, for $k=32$ we have $c=15$ and $12c(c-1)=2520$, which is not divisible by $32$). Combining Lemma \ref{le}(iii) and (v) yields $d=2c-1$. By Lemma \ref{le}(iii) $r=4(c-1)$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(3,5,8,8,2)$, $(5,9,12,16,2)$, $(11,21,24,40,2)$.
\item $(\ell,x)=(2,3)$.
By Lemma \ref{le}(vii) $k\mid 160$ and by Lemma \ref{le}(x) $k\geq 20$.
Thus $k\in\{20, 32, 40, 80, 160\}$. By Lemma \ref{le}(iv) $c=(k-2)/3$ so $k$ cannot be $40$ nor $160$. Lemma \ref{le}(vi) yields $k\mid 16 c(c-1)$ which is satisfied in each remaining case. Combining Lemma \ref{le}(iii) and (v) yields $d=3c-2$. By Lemma \ref{le}(iii) $r=4(c-1)$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(6,16,20,20,2)$, $(10,28,32,36,2)$, $(26,76,80,100,2)$.
\item $(\ell,x)=(3,1)$.
By Lemma \ref{le}(vii) $k\mid 96$ and by Lemma \ref{le}(x) $k\geq 8$. Moreover $k$ is divisible by $3$ by Lemma \ref{le}(ii).
Thus $k\in\{ 12, 24, 48, 96\}$. By Lemma \ref{le}(iv) $c=k-3$, and Lemma \ref{le}(vi) yields $k\mid c(c-1)(k-2)$ and thus also $k\mid2 c(c-1)$, which is not satisfied for $k=48$ and $k=96$. Combining Lemma \ref{le}(iii) and (v) yields $d=(c+1)/2$. By Lemma \ref{le}(iii) $r=2(c-1)$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(9,5,12,16,3)$, $(21,11,24,40,3)$.
\item $(\ell,x)=(4,1)$.
By Lemma \ref{le}(vii) $k\mid 160$ and by Lemma \ref{le}(x) $k\geq 20$.
Thus $k\in\{20, 32, 40, 80, 160\}$. By Lemma \ref{le}(iv) $c=k-4$. Combining Lemma \ref{le}(iii) and (v) yields $d=(c+2)/3=(k-2)/3$ so $k$ cannot be $40$ nor $160$.
Lemma \ref{le}(vi) yields $k\mid \frac{4 c(c-1)(c+2)}{9}$ which is satisfied in each remaining case. By Lemma \ref{le}(iii) $r=4(c-1)/3$.
Thus the possibilities for $(c,d,k,r,\ell)$ are $(16,6,20,20,4)$, $(28,10,32,36,4)$, $(76,26,80,100,4)$.
\end{enumerate}
\qed
However not all numerically feasible tuples listed above lead to an example, as we will see in Section \ref{sec:class}.
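The case analysis above is mechanical enough to automate. The following Python sketch (ours, not part of the original computations) enumerates all tuples satisfying the conditions of Lemmas~\ref{condition 1} and~\ref{le}, using Lemma~\ref{le}(vii) to bound the search; running it for $\lambda=3,4$ reproduces the rows of the two tables.
\begin{verbatim}
def feasible_tuples(lam):
    # Enumerate (lam, v, k, r, c, d, ell) passing all numerical tests.
    out = []
    for ell in range(2, lam + 1):          # (viii) forces ell <= lam
        for x in range(1, (lam - 1) // (ell - 1) + 1):
            c_hi = (lam * ell * (x + 1) * (x + ell) - ell) // x
            for c in range(2, c_hi + 1):   # c bounded via (iv), (vii)
                k = x * c + ell            # (iv)
                if k % ell or (lam * (c - 1)) % (ell - 1):
                    continue               # (ii), and r integral in (iii)
                r = lam * (c - 1) // (ell - 1)
                if r < k or (r * x) % lam: # Fisher r >= k, and (v)
                    continue
                d = r * x // lam + 1       # (v)
                v = c * d
                if not 2 < k < v:          # nontrivial design
                    continue
                m = lam * c * (c - 1) * (k - x - 1)
                if m % (ell - 1) ** 2 or (m // (ell - 1) ** 2) % k:
                    continue               # (vi), i.e. k divides vr
                if (lam * ell * (x + 1) * (x + ell)) % k:
                    continue               # (vii)
                out.append((lam, v, k, r, c, d, ell))
    return sorted(out)

for row in feasible_tuples(3) + feasible_tuples(4):
    print(row)
\end{verbatim}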
\section{A flag-regular, point-imprimitive design on \texorpdfstring{$36$}{36} points}\label{sec:36}
In this section we construct a flag-transitive, point-imprimitive design corresponding to the numerically feasible parameter tuple:
\begin{equation}\label{eq:36tuple}
(\lambda,v,k,r,c,d,\ell)=(4,36,8,20,6,6,2)
\end{equation}
from Proposition~\ref{la=3}. We also prove in Proposition~\ref{lem:36unique} that, up to isomorphism, this example is the unique flag-transitive design with the parameter set \eqref{eq:36tuple}. Moreover, the design satisfies equality in the bound $v\leq (k-2)^2$ of \cite[Proposition 4.1]{CP93}.
This design is hard to find in the literature, and we have sought advice from colleagues Alfred Wasserman, Patric \"{O}sterg\aa rd and Charles Colbourn. Collectively we were unable to find it. The Handbook of Combinatorial Designs \cite{Handbook} mentions references for two designs with parameters $(\lambda,v,k,r)=(4,36,8,20)$. Firstly in \cite{Abel} an example with these parameters is given with `repeated blocks', and secondly a construction in \cite{VT} produces an example which we were able to construct computationally; we found that its automorphism group has order $2$. Thus the design we present is neither of the ones listed in \cite{Handbook}. After completing the first draft of this paper we became aware of a new paper of Zhang and Zhou in which this new design occurs \cite[Theorem 1.3]{ZZ}. We make some comments about their work in Remark~\ref{rem:shenglin} below.
We give several constructions for this design based on the symmetric group $\Sym_6$ of degree $6$. The first description gives sufficient information for the design to be constructed computationally, see also Remark~\ref{rem:con1}. It is based on the transitive permutation representation of $\Sym_6$ on $36$ points, and relies on an explicit description of an outer automorphism $\sigma$ of $\Sym_6$, namely $\sigma$ is determined by its action on a standard generating set for $\Sym_6$ as follows:
\begin{align}\label{eq:sigma}
(1,2)^\sigma&=(1,4)(2,6)(3,5)\quad\text{and}\quad (1,2,3,4,5,6)^\sigma=(1,3)(2,6,5).
\end{align}
\begin{construction}\label{con1}
Let $\mathcal{P}=\{(i,j) \mid\ 1\leq i, j\leq 6\}$, and let $G=\{(g,g^\sigma)|g\in\Sym_6\}$ acting coordinate-wise on $\mathcal{P}$. Let
$$
B=\{(1,1),(1,2),(2,1),(2,3),(3,2),(3,4),(4,3),(4,4)\},
$$
and let $\mathcal{B}=\{B^g \mid\, g\in G\}$, the $G$-orbit of $B$ under $G$, and define the design $\mathcal{D}= (\mathcal{P}, \mathcal{B})$.
\end{construction}
\begin{lemma}\label{lem:con1}
The design $\mathcal{D}= (\mathcal{P}, \mathcal{B})$ of Construction~$\ref{con1}$ is a $2-(36,8,4)$ design
with full automorphism group $G\cong\Sym_6$ acting flag-regularly and point imprimitively. Moreover, $G$ leaves invariant two nontrivial point-partitions, each with $d=6$ parts of size $c=6$, namely the `rows' and the `columns' of the square array $\mathcal{P}$.
\end{lemma}
\par\noindent{\sc Proof~}
By definition, $G$ is admitted as an automorphism group of $\mathcal{D}$, and leaves invariant the two nontrivial point-partitions formed by the rows and the columns of $\mathcal{P}$. Also $\mathcal{D}$ has $v=36$ points and block size $k=8$. A computation using Magma~\cite{magma} yields that $\mathcal{D}$ is a $2$-design with $\lambda=4$ and that $G$ is the full automorphism group.
\qed
\begin{remark}\label{rem:con1}
Computationally, using Magma~\cite{magma}, the design $\mathcal{D}$ of Construction~\ref{con1} can be constructed using the unique smallest block-transitive subgroup of automorphisms (which we note is not flag-transitive on $\D$), namely the index $2$ subgroup $H=\Alt_6$ of ${\rm Aut}(\D)=\Sym_6$. The group $H$ can be constructed up to conjugacy in $\Sym_{36}$, using Magma, as $H =$ {\tt TransitiveGroup(36, 555)}. Then the block-set of $\D$ can be constructed as the set of images of the $8$-element subset $B=\{ 1, 2, 7, 8, 22, 23, 25, 26\}$ of $\{ 1, 2,\dots, 36\}$ under the action of $H$. See also Table~\ref{table1}.
\end{remark}
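Alternatively, the verification in Lemma~\ref{lem:con1} can be replicated without Magma; the following brute-force Python sketch (ours) closes the pair group of Construction~\ref{con1} from the generator images in \eqref{eq:sigma} and checks the design conditions directly.
\begin{verbatim}
from itertools import product

def compose(p, q):  # (p*q)(i) = p(q(i)); permutations of {0,...,5}
    return tuple(p[q[i]] for i in range(6))

# 0-indexed forms of (1,2), (1,2,3,4,5,6) and of their sigma-images
# (1,4)(2,6)(3,5), (1,3)(2,6,5), as given in the text.
a,  b  = (1, 0, 2, 3, 4, 5), (1, 2, 3, 4, 5, 0)
sa, sb = (3, 5, 4, 0, 2, 1), (2, 5, 0, 3, 1, 4)

# Close {(g, g^sigma)} under multiplication: all 720 pairs.
ident = (tuple(range(6)),) * 2
group, frontier = {ident}, [ident]
while frontier:
    g, h = frontier.pop()
    for u, w in ((a, sa), (b, sb)):
        nxt = (compose(u, g), compose(w, h))
        if nxt not in group:
            group.add(nxt)
            frontier.append(nxt)
assert len(group) == 720

# Orbit of the base block under the coordinate-wise action.
B = {(0, 0), (0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 3)}
blocks = {frozenset((g[i], h[j]) for i, j in B) for g, h in group}
assert len(blocks) == 90  # b = vr/k = 36*20/8

# Every pair of distinct points lies in exactly lambda = 4 blocks.
pts = list(product(range(6), repeat=2))
for n, p in enumerate(pts):
    for q in pts[n + 1:]:
        assert sum(p in blk and q in blk for blk in blocks) == 4
print("2-(36, 8, 4) design verified")
\end{verbatim}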
\begin{remark}\label{rem:con2}
Construction~\ref{con1} gives some insight into the set of points and the group action on them, but provides little understanding of the nature of the blocks. We now give a different construction for (a design isomorphic to) $\mathcal{D}$ which gives a better understanding of the blocks in terms of the standard action of $\Sym_6$ of degree $6$.
For this description we note that $G=\Sym_6$ has a unique conjugacy class of subgroups of index $36$, namely the class of Frobenius subgroups $F\cong F_{20}$ of order $20$. This means that we may identify the point set $\mathcal{P}$ with $\{ F^g\mid\,g\in G\}$, where $G$ acts by conjugation.
Now $G$ has two conjugacy classes of subgroups of index $6$, corresponding to $\Sym_5$ and ${\rm PGL}_2(5)$, which are interchanged by the outer automorphism $\sigma$ given in \eqref{eq:sigma}. Each Frobenius group in $\mathcal{P}$ is contained in a unique subgroup from each of these classes, giving two distinct $G$-invariant partitions of $\mathcal{P}$, each with $d=6$ parts of size $6$, see \cite[Lemma 2.14]{PS}.
For the construction, we use one of these partitions: let $X=\{1,2,3,4,5,6\}$ be the set on which $G$ acts naturally, and note that each $F\in\mathcal{P}$ fixes a unique element of $X$. For each $x\in X$, let $\Delta_x$ be the set of six Frobenius groups in $\mathcal{P}$ which fix $x$, so that $\mathcal{C}=\{\Delta_x \mid\,x\in X\}$ is one of the $G$-invariant partitions described in the previous paragraph. (The second point-partition is based in a similar fashion on the set $Y$ of six transitive subgroups ${\rm PGL}_2(5)$.)
The blocks of the design are labelled by triples of the form $(x, x', \pi)$, where $x, x'$ are distinct elements of $X$ and $\pi$ is a `bisection' of $X\setminus\{x,x'\}$, that is, a partition of $X\setminus\{x,x'\}$ into two parts of size $2$. For each pair $(x,x')$ there are three choices for $\pi$, and hence there are $6\times 5\times 3 = 90$ triples, and so $90$ blocks.
We need to define the $8$-subset of $\mathcal{P}$ forming
the block $B=B(x, x', \pi)$. We note that for each of the four elements $z\in X\setminus\{x,x'\}$, there are six Frobenius groups in $\Delta_z$, giving a set $\mathcal{P}(\pi)$ of $4\times 6=24$ points of $\mathcal{P}$.
Using Magma, we find that the setwise stabiliser $H:=G_{\mathcal{P}(\pi)}$ has three orbits of length $8$ in $\mathcal{P}(\pi)$. One of these orbits is the block $B(x,x',\pi)$, and one of the other orbits is the block $B'=B(x',x,\pi)$ (which has the same stabiliser $G_{B'}=G_B=G_{\mathcal{P}(\pi)}$). The normaliser $N_G(H)$ interchanges the triples $(x, x', \pi)$ and $(x', x, \pi)$, and we find (with Magma) that $N_G(H)$ interchanges two of the $H$-orbits in $\mathcal{P}(\pi)$ and leaves the third invariant. We choose one of the two $H$-orbits
moved by $N_G(H)$ and call it $B(x,x',\pi)$, and we take the block set $\mathcal{B}$ of $\mathcal{D}$ to be the set of $G$-images of this $8$-subset $B(x,x',\pi)$. Thus $\mathcal{D}=(\mathcal{P},\mathcal{B})$ is well defined.
It is clear from the construction that $\mathcal{D}$ is a flag-transitive point-imprimitive $1$-design. It may be checked using Magma that $\mathcal{D}$ is in fact a $2-(36,8,4)$ design.
We note, finally, that the outer automorphism $\sigma$ in \eqref{eq:sigma} is not an automorphism of $\mathcal{D}$, but rather $\sigma$ maps $\mathcal{B}$ to a different collection of ninety $8$-element subsets of $\mathcal{P}$ which forms a design isomorphic to $\mathcal{D}$.
\end{remark}
Finally we prove that there is, up to isomorphism, a unique flag-transitive point-imprim\-it\-ive design with parameters as in \eqref{eq:36tuple}, and it follows from this that the designs in Construction~\ref{con1} and Remark~\ref{rem:con2} are isomorphic.
\begin{proposition}\label{lem:36unique}
Up to isomorphism, the design $\mathcal{D}$ in Construction~$\ref{con1}$ is the unique flag-transitive point-imprimitive design $\D=(\mathcal{P},\mathcal{B})$ with parameter set as in $\eqref{eq:36tuple}$.
\end{proposition}
\par\noindent{\sc Proof~}
Suppose that $\cal D=(\mathcal{P},\mathcal{B})$ has parameters $(\lambda,v,k,r,c,d,\ell)=(4,36,8,20,6,6,2)$ and admits a flag-transitive point-imprimitive group $G$. By Proposition~\ref{la=3}, this is the only
parameter tuple with $(\lambda,v,k)=(4,36,8)$, and hence each nontrivial $G$-invariant point-partition has $6$ parts of size $6$. Let $\mathcal{C}=\{\Delta_1,\dots,\Delta_6\}$ be a $G$-invariant partition of $\mathcal{P}$ with each $|\Delta_i|=6$, and let $D=G^\mathcal{C}$ and $L=(G_{\Delta_1})^{\Delta_1}$. We may assume that $G\leq L\wr D\leq \Sym_6\wr \Sym_6$, by \cite[Theorem 5.5]{PS}. Moreover both $L, D$ are primitive of degree $6$, as otherwise there would be another parameter set in Proposition \ref{la=3} for $(\lambda,v,k)=(4,36,8)$. This implies that $D$ and $L$ are $2$-transitive, and each has socle ${\rm PSL}_2(5)$ or $\Alt_6$. Thus, for distinct $i, j$, $G_{\{\Delta_i, \Delta_j\}}$ has index $\binom{6}{2}=15$ in $G$.
Now each block $B\in\mathcal{B}$ meets each of four parts $\Delta_i\in\mathcal{C}$ in $\ell=2$ points and is disjoint from the remaining two parts, and by Lemma \ref{condition 1}, $b=|\mathcal{B}|=vr/k=90$.
Thus $G_B$ has index $90$ in $G$ and fixes setwise the two parts, say $\{\Delta_i, \Delta_j\}$, which $B$ intersects trivially. Hence $G_B<G_{\{\Delta_i, \Delta_j\}}<G$ and $|G_{\{\Delta_i,\Delta_j\}}:G_B|=90/15=6$. Since this index is coprime to $5$, the subgroup $G_B$ contains a Sylow $5$-subgroup of $G_{\{\Delta_i, \Delta_j\}}$; let $P$ be such a Sylow $5$-subgroup, so $P\leq G_B$. Since the group induced by $G_{\{\Delta_i, \Delta_j\}}$ on $\mathcal{C}$ is a subgroup of $\Sym_2\times\Sym_4$, the order of which is not divisible by $5$, it follows that $P$ is contained in $K=G_{(\mathcal{C})}$, the kernel of the $G$-action on $\mathcal{C}$. Note that $K\leq G_{\{\Delta_i, \Delta_j\}}$, so a Sylow $5$-subgroup of $G_{\{\Delta_i, \Delta_j\}}$ is also a Sylow $5$-subgroup of $K$ and vice-versa.
Suppose that $K\ne 1$. Since $K$ is normal in $G$, its orbits on points all have the same size. In particular, for each $\Delta\in\mathcal{C}$, $K^{\Delta}$ is a nontrivial normal subgroup of the primitive group $L=G_{\Delta}^{\Delta}$, and hence $K^{\Delta}$ contains the socle of $L$. Since this socle is ${\rm PSL}_2(5)$ or $\Alt_6$, it follows that $5$ divides $|K^\Delta|$, and hence for some choice of Sylow $5$-subgroup $P$ of $K$ we have $P^\Delta\ne 1$. Since all Sylow $5$-subgroups of $K$ are conjugate in $K$, and since $K$ fixes each $\Delta\in\mathcal{C}$ setwise, this implies that, for all $\Delta\in\mathcal{C}$, $P^\Delta\ne 1$ and has orbits of lengths $1, 5$ in $\Delta$. However, if $\Delta\not\in\{\Delta_i, \Delta_j\}$, then $P$ fixes setwise the $2$-subset $B\cap \Delta$ since $P\leq G_B$, which is a contradiction.
Hence $K=1$, so $G\cong G^\mathcal{C}\leq \Sym_6$. However also $|G|$ is divisible by the number of flags, which is $90\times 8 = |\Sym_6|$. Hence $G\cong \Sym_6$ and $G$ is regular on flags. Now $\Sym_6$ has a unique conjugacy class of subgroups of index $36$, namely the class of Frobenius groups $F_{20}$, and each such subgroup is contained in two distinct subgroups of index $6$ in $G$. Hence $G$ leaves invariant two distinct point-partitions with six parts of size six. This unique transitive action of $\Sym_6$ of degree $36$ can be found as {\tt TransitiveGroup(36,1252)} in Magma.
We checked with Magma~\cite{magma} that, up to isomorphism, the design $\D$ is as in Construction~\ref{con1}. To do this we searched in {\tt TransitiveGroup(36,1252)} for orbits of size $b=90$ on the set of $20250$ $8$-subsets which have $2$ points in each of four parts of both nontrivial invariant partitions. There are five such orbits, only two of which yield $2$-designs, and the designs are isomorphic.
\qed
\begin{remark}\label{rem:shenglin}
After completing our work, the paper \cite{ZZ} of Zhang and Zhou appeared, in which this unique flag-transitive $2-(36,8,4)$ design arises in their classification \cite[Theorem 1.3]{ZZ} of $2$-designs with $\lambda\leq 4$ admitting an automorphism group which is flag-transitive, and acts imprimitively and quasiprimitively on points. We make a few comments about this result. The analysis of this case in their proof \cite[pp. 431-432]{ZZ} does not give much detail. It is likely that their proof relies on extensive computation, but no references are made to this. Their proof proceeds by asserting (i) that there are seven conjugacy classes of subgroups of $G=S_6$ of index $90$ (the number of blocks), (ii) that five of these classes consist of subgroups $H$ which have an orbit $B$ on points of size $8$ such that the number of $G$-images of $B$ is $90$, and (iii) that ``it is easy to see that there are only two conjugacy classes of subgroups'' such that the set of $G$-images of the $H$-orbit $B$ is the block set of a $2$-design.
They also state that ``it is not hard to check that'' the two designs obtained are isomorphic. No details are given, either of the construction or of any computations they may have carried out to justify the assertions.
In \cite[Theorem 1.3(ii)]{ZZ} it is stated that there is a unique non-symmetric $2-(36,8,4)$ design. Our Proposition~\ref{lem:36unique} proves that there is a unique such design which is flag-transitive and point-imprimitive. However, as discussed at the beginning of this section, there exists at least one other $2$-design with these parameters obtained from a construction in \cite{VT}.
\end{remark}
\section{Classification for $v$ less than $100$ and $\lambda$ at most $4$ } \label{sec:class}
In Theorem \ref{class} we obtain a full classification of the designs for $v<100$ and $\lambda\leq 4$. This result follows from the citations and new results below, together with the previous section. Up to isomorphism, we determine exactly eleven designs (two of the four designs with $v=96$ admit two distinct $(c,d)$ possibilities), and these correspond to exactly seven of the twelve numerically feasible tuples $(\lambda, v, k, r, c, d, \ell)$ in Proposition~\ref{la=3} with $\lambda\leq 4, v<100$.
\begin{table}[ht]
\caption{\label{table1}Construction of all examples with $\lambda\leq 4$ and $v<100$; the groups $H$ listed are block-transitive of smallest possible order}
\begin{tabular}{|cccc|p{6.5cm}|p{2.3cm}|p{1.8cm}|cc|}
\hline
$\lambda$&$v$&$k$&$r$& Block $B$ & Group $H$ & $|{\rm Aut}(\cal D)|$&$c$&$d$\\
\hline
$2$&$16$&$6$&$6$& $\{ 1, 2, 7, 9, 12, 13\}$&{\tt TG(16,3)}&$11520$&$4$&$4$\\
\cline{5-9}
&&&&$\{ 1, 2, 3, 7, 11, 15\}$ & {\tt TG(16,5)}&$768$&$4$&$4$\\%H minimal
\hline
$3$&$45$&$12$&$12$&$\{1, 2, 3, 4, 8, 10, 20, 22, 25, 34, 39, 41\}$&{\tt TG(45,63)} &$19440$&$9$&$5$\\
\hline
$4$&$15$&$8$&$8$&$\{ 1, 2, 3, 4, 8, 11, 12, 14 \}$&{\tt TG(15,1)}&$20160$&$3$&$5$\\
\hline
$4$&$16$&$6$&$12$&$\{1,2,3,6,11,13\}$ & {\tt TG(16,27)}&$6144$&$4$&$4$\\
\cline{5-9}
&&&&$\{1,2, 5, 7, 13, 16\}$ & {\tt TG(16,46)}&$1920$&$4$&$4$\\
\hline
$4$&$36$&$8$&$20$&$\{ 1, 2, 7, 8, 22, 23, 25, 26\}$&{\tt TG(36,555)} &$720$&$6$&$6$\\%H minimal
\hline
$4$&$96$&$20$&$20$&$\{ 1, 2, 3, 11, 12, 14, 17, 29, 31, 32, $&$H_1$ &$552960$&$16$&$6$\\
&&&&$36, 41, 47, 51, 57, 63, 68, 85, 87, 93\}$&&&&\\
\cline{5-9}
&&&&$\{ 1, 2, 3, 11, 14, 17, 24, 29, 31, 35, 43,$&$H_1$ &$184320$&$16$&$6$\\
&&&&$ 44, 48, 56, 64, 65, 69, 90, 95, 96\}$&&&$6$&$16$\\
\cline{5-9}
&&&&$\{ 1, 2, 3, 5, 11, 15, 21, 22, 23, 27, 41, $&$H_2$ &$138240$&$16$&$6$\\
&&&&$ 43, 62, 68, 77, 80, 86, 90, 92, 95 \}$&&&&\\
\cline{5-9}
&&&&$\{1, 2, 3, 5, 11, 15, 21, 23, 31, 34, 46, $&$H_2$ &$7680$&$16$&$6$\\
&&&&$ 58, 66, 67, 69, 70, 72, 73, 89, 96\}$&&&$6$&$16$\\
\hline
\end{tabular}
\end{table}
\begin{remark}
In Table~\ref{table1} we list a block $B$ and a group $H$ which allow a quick construction of the corresponding design: $|H|$ is minimal such that $H$ is block-transitive (not necessarily flag-transitive), and the block-set is then $B^H = \{ B^h\mid h\in H\}$. We also list the order of the full automorphism group. In most cases the group $H$ is described as {\tt TG(v,i)}, which is an abbreviation of the name {\tt TransitiveGroup(v,i)} for the $i^{th}$ group of degree $v$ in the database of transitive groups of small degree in Magma~\cite{magma}.
Since the transitive groups in Magma are only given for degrees up to $47$, we cannot describe $H$ in this way for the four designs with $v=96$ points. In these cases we construct the designs using the method described in \cite{LPR09}: for each design we give generators for a group $H$ which is block-regular (hence not flag-transitive). In all four cases the group $H$ is one of the following two groups $H_1$, $H_2$.
\begin{align*}
H_1=\langle & (1, 37, 2, 31)(3, 23, 8, 19)(4, 21, 9, 18)(5, 27, 7, 14)(6, 11, 10, 26)(12, 39, 24, 50)(13, 17, 25, 29)\\&(15, 40, 22, 42)(16, 38, 28, 20)(30, 33, 49, 45)(32, 55, 53, 65)(34, 47, 46, 35)(36, 43, 48, 41)\\&(44, 60, 62, 51)(52, 79, 70, 90)(54, 66, 63, 57)(56, 64, 58, 59)(61,
87, 81, 94)(67, 91, 78, 95)\\&(68, 76, 80, 74)(69, 83, 89, 73)(71, 96, 75, 86)(72, 88, 82, 93)(77, 92, 84, 85),\\&
(1, 62, 42)(2, 44, 40)(3, 29, 32)(4, 63, 28)(5, 59, 26)(6, 56, 27)(7, 64, 11)(8, 17, 53)(9, 54, 16)\\&(10, 58, 14)(12, 69, 81)(13, 23, 55)(15, 51, 37)(18, 38, 66)(19, 65, 25)(20, 57, 21)(22, 60, 31)\\&(24, 89, 61)(30, 95, 75)(33, 86, 67)(34, 68, 52)(35, 90, 74)(36, 92,
72)(39, 94, 83)(41, 93, 77)\\&(43, 88, 84)(45, 96, 78)(46, 80, 70)(47, 79, 76)(48, 85, 82)(49, 91, 71)(50, 87, 73),\\&
(1, 73, 2, 83)(3, 85, 8, 92)(4, 52, 9, 70)(5, 67, 7, 78)(6, 61, 10, 81)(11, 94, 26, 87)(12, 41, 24, 43)\\&(13, 80, 25, 68)(14, 91, 27, 95)(15, 71, 22, 75)(16, 93, 28, 88)(17, 76, 29, 74)(18, 79, 21, 90)\\&(19, 77, 23, 84)(20, 72, 38, 82)(30, 46, 49, 34)(31, 69, 37, 89)(32,
64, 53, 59)(33, 47, 45, 35)\\&(36, 39, 48, 50)(40, 86, 42, 96)(44, 57, 62, 66)(51, 54, 60, 63)(55, 56, 65, 58) \rangle
\end{align*}
\begin{align*}
H_2=\langle&(1, 55, 18, 26)(2, 90, 24, 66)(3, 50, 38, 51)(4, 49, 5, 60)(6, 61, 35, 63)(7, 80, 10, 89)(8, 48, 47, 76)\\&(9, 83, 39, 84)(11, 96, 40, 86)(12, 53, 16, 57)(13, 93, 23, 94)(14, 65, 15, 44)(17, 85, 21, 87)\\&(19, 56, 20, 64)(22, 77, 34, 62)(25, 67, 33, 95)(27, 70, 58, 69)(28,
92, 29, 68)(30, 73, 59, 72)\\&(31, 71, 43, 82)(32, 91, 42, 88)(36, 52, 37, 74)(41, 78, 46, 79)(45, 81, 54, 75),\\&
(1, 75, 21)(2, 6, 79)(3, 48, 24)(4, 73, 40)(5, 69, 9)(7, 12, 81)(8, 61, 66)(10, 18, 71)(11, 37, 70)\\&(13, 19, 62)(14, 92, 23)(15, 77, 33)(16, 82, 17)(20, 68, 25)(22, 64, 95)(26, 80, 45)(27, 49, 96)\\&(28, 56, 94)(29, 44, 67)(30, 74, 86)(31, 53, 89)(32, 38, 78)(34, 65,
93)(35, 76, 42)(36, 72, 39)\\&(41, 63, 91)(43, 55, 85)(46, 50, 90)(47, 51, 88)(52, 83, 58)(54, 57, 87)(59, 60, 84),\\&
(1, 38)(2, 39)(3, 18)(4, 15)(5, 14)(6, 12)(7, 13)(8, 58)(9, 24)(10, 23)(11, 42)(16, 35)(17, 25)\\&(19, 36)(20, 37)(21, 33)(22, 54)(26, 60)(27, 47)(28, 31)(29, 43)(30, 41)(32, 40)(34, 45)(44, 51)\\&(46, 59)(48, 92)(49, 55)(50, 65)(52, 53)(56, 61)(57, 74)(62, 79)(63, 64)(66,
89)(67, 96)(68, 76)\\&(69, 71)(70, 82)(72, 81)(73, 75)(77, 78)(80, 90)(83, 94)(84, 93)(85, 88)(86, 95)(87, 91)\rangle
\end{align*}
\end{remark}
\begin{remark}\label{rem:t2}
In Table \ref{table2}, we give additional information about the groups of these designs, obtained using Magma~\cite{magma}. In column ${\rm Aut}(\cal D)$, we list the full automorphism group of the design: for $v\ne 15, 96$, we give the group in the form {\tt TransitiveGroup(v,i)} as well as its structure; for $v=15$, ${\rm Aut}(\cal D)$ is a well-known group given in its standard action; while for $v=96$ (which is not covered by the database in \cite{magma}) we give the structure of the group, but note that the full automorphism group can easily be found in Magma~\cite{magma} by constructing the design using the data in Table \ref{table1} and calling for its full automorphism group. In column \emph{Largest FT imp.}, we list, up to conjugacy, the largest flag-transitive subgroup of ${\rm Aut}(\cal D)$ which preserves a partition with $d$ parts of size $c$. If ${\rm Aut}(\cal D)$ itself preserves such a partition, then we just write ${\rm Aut}(\cal D)$. We draw attention to exceptional behaviour for two of the designs with $v=96$, namely the second and fourth designs in the last block of Table~\ref{table2}, which are the designs numbered $2$ and $4$, respectively, in \cite[Table 1]{LPR09}. For these designs, ${\rm Aut}(\cal D)$ preserves a partition with $6$ parts of size $16$ but not a partition with $16$ parts of size $6$, while a proper flag-transitive subgroup preserves both. In column \emph{Smallest FT imp.}, we list, up to conjugacy, the flag-transitive subgroups of ${\rm Aut}(\cal D)$ which preserve a partition with $d$ parts of size $c$ and are of smallest order. Note that there is not always a unique such subgroup, as shown in the table.
\end{remark}
\begin{table}[ht]
\caption{\label{table2}Full automorphism groups, largest flag-transitive subgroup preserving the partition (with the given $c$ and $d$), smallest flag-transitive subgroups preserving this partition }
\begin{tabular}{|p{.3cm}p{.3cm}p{.3cm}p{.3cm}p{.3cm}p{.3cm}|p{3.8cm}|p{3cm}|p{4.9cm}|}
\hline
$\lambda$&$v$&$k$&$r$&$c$&$d$& ${\rm Aut}(\cal D)$ & Largest FT imp.& Smallest FT imp.\\
\hline
$2$&$16$&$6$&$6$&$4$&$4$&{\tt TG(16,1753)}$\cong
2^4.\Sym_6$&{\tt TG(16,1063)}$\cong $&{\tt TG(16,183)}$\cong 2^4. 6$, \\
&&&&&&&$2^4.(\Sym_2\wr\Sym_3)\cong $&{\tt TG(16,184)}$\cong
2^3.\Alt_4$, \\
&&&&&&&$2^5.\Sym_4$&{\tt TG(16,185)}$\cong
2^3.\Alt_4$,\\
&&&&&&&& {\tt TG(16,194)}$\cong
2^4.\Sym_3$, \\
&&&&&&&&{\tt TG(16,195)}$\cong
2^2.\Sym_4$\\
\cline{7-9}
&&&&&&{\tt TG(16,1073)}$\cong 2^5.\Sym_4$&${\rm Aut}(\cal D)$&${\rm Aut}(\cal D)$\\ \hline
$3$&$45$&$12$&$12$&$9$&$5$&{\tt TG(45,628)}$\cong 3^4. 2.\Sym_5$&${\rm Aut}(\cal D)$&{\tt TG(45,314)}$\cong 3^4. 2.{\rm AGL}_1(5)$\\ \hline
$4$&$15$&$8$&$8$&$3$&$5$&${\rm PSL}_4(2)$&{\tt TG(15,21)}$\cong 3.{\rm P\Gamma L}_2(4) \cong (\Alt_5\times 3).2$&${\rm P\Gamma L}_2(4)\cong\Sym_5$\\ \hline
$4$&$16$&$6$&$12$&$4$&$4$&{\tt TG(16,1690)}$\cong 2^8.\Sym_4$&${\rm Aut}(\cal D)$& {\tt TG(16,419)}$\cong
2^4.\Alt_4$, \\
&&&&&&&&{\tt TG(16,420)}$\cong 2^4. \Alt_4$ (two classes), \\
&&&&&&&&{\tt TG(16,430)}$\cong
2^3.\Sym_4$, \\
&&&&&&&&{\tt TG(16,433)}$\cong
2^4.((2\times 3).2)$\\
\cline{7-9}
&&&&&&{\tt TG(16,1329)}$\cong 2^4.\Sym_5$&{\tt TG(16,776)}$\cong 2^4.\Sym_4$&{\tt TG(16,776)}$\cong 2^4.\Sym_4$\\ \hline
$4$&$36$&$8$&$20$&$6$&$6$&{\tt TG(36,1252)}$\cong \Sym_6$&${\rm Aut}(\cal D)$&${\rm Aut}(\cal D)$\\ \hline
$4$&$96$&$20$&$20$&$6$&$16$&$ 2^8.\Sym_6$&$ 2^4.\Sym_6$&$ 2^4.\Sym_5$\\
\cline{7-9}
&&&&&&$ 2^6.\Sym_5$&$ 2^4.\Sym_5$&$ 2^4.\Sym_5$\\
\hline
$4$&$96$&$20$&$20$&$16$&$6$&$ 2^8.(( 3\times\Alt_6). 2)$&${\rm Aut}(\cal D)$&$ 2^8.\Alt_5$\\
\cline{7-9}
&&&&&&$ 2^8.\Sym_6$&${\rm Aut}(\cal D)$&$ 2^4.\Sym_5$\\
\cline{7-9}
&&&&&&$ 2^6.(( 3.\Alt_6). 2)$&${\rm Aut}(\cal D)$& $ 2^5.\Sym_5$\\
\cline{7-9}
&&&&&&$ 2^6.\Sym_5$&${\rm Aut}(\cal D)$&$ 2^4.\Sym_5$ and $ 2^5.\Alt_5$\\
\hline
\end{tabular}
\end{table}
Suppose that $\D$ is a non-trivial $2$-$(v,k,\lambda)$ design admitting
a flag-transitive point-imprimitive group $G$ of automorphisms, with $\lambda\leq4$ and $v<100$.
As mentioned earlier, $\lambda\geq 2$, and the classification for $\lambda=2$ is given in \cite{DLPX}; we use that work to describe the designs for $\lambda=2$.
For $\lambda=3,4$,
the tuple $(\lambda,v,k,r,c,d,\ell)$ must be numerically feasible and so appears in the table in Proposition~\ref{la=3}. We consider the possibilities for $\lambda$ separately.
\medskip\noindent
\fbox{The case $\lambda=2$}
This case was analysed in \cite[Theorem 1.1]{DLPX}, where it was shown that the only possible tuple is $(v,k,r,c,d,\ell)=(16,6,6,4,4,2)$, yielding two non-isomorphic examples, as in lines 1--2 of Table~\ref{table1}. These two designs were first constructed by Hussain \cite{Huss} in 1945.
The first example admits a flag-regular imprimitive group; its full automorphism group is {\tt TransitiveGroup(16,1753)} of order $11520$, which is primitive, and contains flag-transitive imprimitive subgroups, the largest of which is isomorphic to {\tt TransitiveGroup(16,1063)} of order $768$.
The second example has full automorphism group {\tt TransitiveGroup(16,1073)}, which has order 768 and is imprimitive. This is also the unique flag-transitive subgroup of automorphisms.
\bigskip
\newpage\noindent
\fbox{The case $\lambda=3$}
In \cite{ZZ18}, Zhan and Zhou classify the flag-transitive imprimitive $2$-designs with $k=6$. It turns out that they all have $v=16$; moreover, there are two with $\lambda=2$ (as mentioned above), none with $\lambda=3$, and two with $\lambda=4$ (see below).
So the tuple $(v,k,r,c,d,\ell)=(16,6,9,4,4,2)$ is not possible.
By \cite[Corollary 1.2]{P07}, there is a unique flag-transitive, point-imprimitive $2-(45, 12, 3)$ design. It has (up to isomorphism) automorphism group {\tt TransitiveGroup(45,628)}, which is imprimitive,
preserving a partition with $d=5$ parts of size $c=9$, as in line 3 of Table~\ref{table1}. The smallest imprimitive flag-transitive subgroup is isomorphic to {\tt TransitiveGroup(45,314)}.
Moreover by \cite[Proposition 5.1]{P07}, the tuple $(v,k,r,c,d,\ell)=(45,12,12,5,9,2)$ is not
possible.
\medskip\noindent
\fbox{The case $\lambda=4$}
The projective example described by Huw Davies in \cite{Dav1} and mentioned in the introduction (whose blocks are the hyperplane complements) provides an example in the case $n=3$, for which $(v,k,r,c,d,\ell)=(15,8,8,3,5,2)$; in fact, by \cite[Proposition 1.5, see also Section 4.1]{PZ06}, it is the unique example, up to isomorphism. Its full automorphism group ${\rm PSL}(4,2)$ is point-primitive, while the largest flag-transitive point-imprimitive subgroup is isomorphic to {\tt TransitiveGroup(15,21)} of order $360$, and contains ${\rm P\Gamma L}(2,4)\cong \Sym_5$ (which is regular on flags).
By \cite[Main Theorem and Table 3]{ZZ18}, the tuple $(v,k,r,c,d,\ell)=(16,6,12,4,4,2)$ admits exactly two examples.
For the first example, {\tt TransitiveGroup(16,1690)} is the full automorphism group, of order $6144$; it is point-imprimitive and has a subgroup that is regular on flags.
The second example has full automorphism group {\tt TransitiveGroup(16,1329)} of order $1920$, which is point-primitive; it has a unique flag-transitive imprimitive subgroup, and this subgroup is isomorphic to {\tt TransitiveGroup(16,776)} with flag stabiliser of order~$2$.
In \cite[Theorem 1.1]{LPR09} it is shown that, up to isomorphism, there are exactly four flag-transitive $2-(96,20,4)$ designs, and for each of them the full automorphism group preserves a point-partition with $d=6$ parts of size $c=16$, see \cite[Subsections 1.2 and 3.1]{LPR09}. Thus the parameter tuple $(v,k,r,c,d,\ell)=(96,20,20,16,6,4)$ admits four examples. The full automorphism groups of these four designs have orders $552960$, $184320$, $138240$ and $7680$ respectively, and all flag-transitive subgroups of them have been determined, see \cite[Section 5]{LPR09}. Each of the flag-transitive subgroups is listed in \cite[Table 2]{LPR09} and is of the form $C_2^a\rtimes H$ where $a\in\{4,5,6,8\}$. Assume $\cal D$ is one of these four designs where the tuple $(v,k,r,c,d,\ell)=(96,20,20,6,16,2)$ is realised by a flag-transitive subgroup of automorphisms. Then, by \cite[Lemma 3.1]{LPR09}, ${\rm Aut}(\cal D)$ has a flag-transitive subgroup of the form $C_2^4\rtimes H$ (and a block-transitive subgroup $C_2^4\rtimes \Alt_5$), and hence by \cite[Table 2]{LPR09}, $\D$ is the design with $|{\rm Aut}(\cal D)|$ equal to either $7680$ or $184320$.
We checked with Magma that, in the first case, the flag-transitive subgroup isomorphic to $C_2^4\rtimes \Sym_5$ leaves invariant two point-partitions with $(c,d)=(6,16)$, while in the second case the flag-transitive subgroup isomorphic to $C_2^4\rtimes \Sym_5$ leaves invariant exactly one such partition.
\medskip
By Proposition~\ref{lem:36unique}, there is, up to isomorphism, a unique flag-transitive point-imprimitive design with parameter set $(v,k,r,c,d,\ell)=(36,8,20,6,6,2)$, namely the design in Construction~\ref{con1}, and by Lemma~\ref{lem:con1} and Remark~\ref{rem:con1}, the entry in Table~\ref{table2} is valid.
\medskip
The remaining parameter sets, both with $v=45$, are dealt with in the following proposition.
\begin{proposition}\label{le:design45}
There is no flag-transitive, point-imprimitive $4$-design with parameter set
$$
(v,k,r,c,d,\ell) \text{ equal to } (45,12,16,9,5,3), \text{ or } (45,12,16,5,9,2).
$$
\end{proposition}
\par\noindent{\sc Proof~}
Suppose that such a design $\D=(\mathcal{P},\mathcal{B})$ exists, admitting a flag-transitive,
point-imprimitive automorphism group $G$.
Then $b = |\mathcal{B}| = vr/k=60$ and the number of flags is $f=bk=vr =720=2^4.3^2.5$. Since $G$ is flag-transitive, $f$ divides $|G|$, and thus $|G|=fz$ for some integer $z\geq1$.
Let $\mathcal{C}=\{\Delta_1,\dots,\Delta_d\}$ be a $G$-invariant partition of the point-set $\mathcal{P}$ with each $|\Delta_i|=c$,
where $(c,d)$ is either $(9,5)$ or $(5,9)$. Let $D=G^\mathcal{C}$, and $L=(G_{\Delta_1})^{\Delta_1}$, so by \cite[Theorem 5.5]{PS} we may assume that $G\leq L\wr D\leq \Sym_c\wr \Sym_d$, acting imprimitively. By Lemma \ref{la=3}, there are no numerically feasible parameter sets with $(\lambda,v)=(4,45)$ and $c$ or $d$ equal to $3$, and hence both $L$ and $D$ are primitive of degree $c$ and $d$ respectively. We note that each primitive group $X$ of degree $9$ has socle $T=C_3^2$ (affine type), or $PSL_2(8)$ or $\Alt_9$, and in the affine case, $T=O_3(X)$ is the largest normal $3$-subgroup of $X$, and is the group of translations (see, for example \cite[Theorem 3.15]{PS}).
\medskip\noindent
\emph{Claim $1$:} If $(c,d)=(5,9)$, then there exists a second $G$-invariant partition of $\mathcal{P}$
with $5$ parts of size $9$, so that, without loss of generality we may assume that $(c,d)=(9,5)$.
\medskip\noindent
\emph{Proof of claim:} Suppose that $(c,d)=(5,9)$. In this case $\ell=2$, so a block meets each of six parts $\Delta_i$ in a $2$-subset and is disjoint from the remaining three parts. Now
\[
\#\{(B,\Delta) \mid B\in\mathcal{B}, \Delta\in\mathcal{C}, B\cap \Delta=\emptyset \} = b\times 3 = 9\times x,
\]
where $x$ is the number of blocks disjoint from a given class. Hence $x=b/3=20$, so each part $\Delta$ meets exactly $60-x=40$ blocks nontrivially. If $D$ has socle $PSL_2(8)$ or $\Alt_9$, then $|D|$, and hence also $|G|$, is divisible by $7$. Since $b=60$, $7$ also divides $|(G_B)^\mathcal{C}|$, which is a contradiction since $(G_B)^\mathcal{C}$ fixes setwise the set of three parts disjoint from $B$. It follows that $C_3^2\unlhd D=G^\mathcal{C}\leq {\rm AGL}(2,3)$, and in particular $5$ does not divide $|G^\mathcal{C}|$. Since $b=60$ divides $|G|$, this implies that $5$ divides the order of $K=G_{(\mathcal{C})}$, the kernel of the $G$-action on $\mathcal{C}$. In particular, $K\ne 1$.
Since $K$ is normal in $G$, its orbits on points all have the same size, and hence the $K$-orbits are the parts $\Delta_i$ of $\mathcal{C}$, and since $K^{\Delta_i}$ is a transitive group of prime degree $5$, it is primitive.
Next we show that $K$ acts faithfully on $\Delta_1$. If this is not the case then the kernel $K_{(\Delta_1)}$ of the action of $K$ on $\Delta_1$ is a nontrivial normal subgroup of $K$, and hence acts nontrivially on some part $\Delta\ne \Delta_1$. Thus $K_{(\Delta_1)}^\Delta$ is a nontrivial normal subgroup of the primitive group $K^\Delta$, and hence is transitive. This implies that a Sylow $5$-subgroup $P$ of $K_{(\Delta_1)}$ is nontrivial and acts transitively on $\Delta$.
For each point-pair $\pi\subset \Delta_1$, there are exactly $\lambda=4$ blocks containing $\pi$, and since $P$ fixes $\pi$ (pointwise) it follows that $P$ fixes this set of 4 blocks setwise, and in fact $P$ fixes each of these four blocks (since $P$ is a $5$-group). Since this holds for all pairs $\pi\subset \Delta_1$, and since each block meeting $\Delta_1$ intersects it in two points, it follows that $P$ fixes setwise each of the 40 blocks which intersect $\Delta_1$ nontrivially. For any such block, say $B$, $B$ meets six parts in a two-subset, and each of these part-intersections with $B$ must be fixed setwise by $P$. It follows that $P$ must fix each of these six parts pointwise, and the same argument yields that $P$ fixes setwise each block which intersects any of these six parts nontrivially. Since each block intersects nontrivially with six of the nine parts of $\mathcal{C}$, it follows that $P$ fixes each block of $\cal B$ setwise. This contradicts the fact that $P$ is transitive on $\Delta$. Thus $K_{(\Delta_1)}=1$, and so
$K\cong K^{\Delta_1}\leq \Sym_5$.
Now we consider the map $\phi:G\to {\rm Aut}(K)$ induced by conjugation, and let $N = \ker(\phi)=C_G(K)$.
By the previous paragraph, $K\leq \Sym_5$, and in fact either $K=\Alt_5$ or $\Sym_5$, or the largest normal $5$-subgroup $O_5(K)$ of $K$ is isomorphic to $C_5$. In all cases ${\rm Aut}(K)$ is isomorphic to a subgroup of $\Sym_5$, and in particular $|{\rm Aut}(K)|$ is not divisible by $9$. Since $9$ divides $v$ and hence $|G|$, we conclude that $3$ divides $|N|$. Further, $N\cap K =C_G(K)\cap K=Z(K)$, and either $Z(K)=1$ or $Z(K)=K=C_5<N$. In either case, $3$ divides the order of $N/(N\cap K)\cong N^\mathcal{C}$, which is a normal subgroup of the primitive group $G^\mathcal{C}=D$. Hence $N^\mathcal{C}$ contains the translation subgroup $T\cong C_3^2$ of $D\leq {\rm AGL}_2(3)$.
Let $N_0$ be the (uniquely determined) subgroup of $N$ such that $N\cap K < N_0$ and $N_0^\mathcal{C}=T$, and let $M$ be a Sylow $3$-subgroup of $N_0$. By definition $N_0$ is normal in $G$. Since $|N\cap K|=1$ or $5$, and since $N_0$ centralises $N\cap K$, it follows that $N_0\cong M\times (N\cap K)$ and in particular $M=O_3(N_0)\cong C_3^2$ is the unique Sylow $3$-subgroup of $N_0$. Thus $M$ is a characteristic subgroup of $N_0$, and hence is normal in $G$. Since $|M|=9$, the $M$-orbits in $\mathcal{P}$ form a $G$-invariant partition with $5$ parts of size $9$, which proves the claim. \qed
\medskip
Thus we may assume that $(c,d)=(9,5)$, so now $\ell=3$, and each block meets each of four parts $\Delta_i$ in a $3$-subset and is disjoint from the remaining part. This time
\[
\#\{(B,\Delta) \mid B\in\mathcal{B}, \Delta\in\mathcal{C}, B\cap \Delta=\emptyset \} = b\times 1 = 5\times x,
\]
where $x$ is the number of blocks disjoint from a given class, so $x=b/5=12$, and each part meets $60-x=48$ blocks nontrivially.
Let $K=G_{(\mathcal{C})}$, the kernel of the $G$-action on $\mathcal{C}$.
\medskip\noindent
\emph{Claim $2$:} $L=(G_{\Delta_1})^{\Delta_1}$ is of affine type, and $K^{\Delta_1}$ contains the translation group $O_3(L)\cong T$. Moreover, for $Q=O_3(K)$, the largest normal $3$-subgroup of $K$, the $Q$-orbits and the $K$-orbits in $\mathcal{P}$ are the parts of $\mathcal{C}$, and $Q^{\Delta_i}\cong O_3(L)$ for each $i$.
\medskip\noindent
\emph{Proof of claim:}
Note that $G/K\cong D\leq \Sym_5$, and so $|G:K|$ is not divisible by $9$. Since $|G|=fz$ is divisible by $9$, it follows that $3$ divides $|K|$ and so $K\ne 1$. Now $K$ is normal in $G$ and hence its orbits on points all have the same size. In particular $K^{\Delta_1}$ is nontrivial and normal in the primitive group $L$. Hence $K^{\Delta_1}$ is transitive, and the $K$-orbits are the parts in $\mathcal{C}$.
Let $B, B'$ be blocks which meet $\Delta_1$, say $\alpha\in B\cap \Delta_1, \alpha'\in B'\cap \Delta_1$. Since $G$ is flag-transitive, there exists $g\in G$ which maps the flag $(\alpha, B)$ to the flag $(\alpha', B')$ and hence $B^g=B'$ and $g$ fixes setwise the class $\Delta_1$ containing $\alpha$ and $\alpha'$. Thus $g\in G_{\Delta_1}$, and it follows that $G_{\Delta_1}$ is transitive on the set of $48$ blocks meeting $\Delta_1$ nontrivially. Thus $|G_{\Delta_1}:G_{\Delta_1, B}|=48$, and hence
$|L:G_{\Delta_1,B}^{\Delta_1}|$ divides $48$. If $L$ has socle $PSL_2(8)$ or $\Alt_9$ then $7$ divides $|L|$, and since $|L:G_{\Delta_1,B}^{\Delta_1}|$ divides $48$, it follows that $7$ also divides $|G_{\Delta_1, B}^{\Delta_1}|$. This is a contradiction since $G_{\Delta_1,B}^{\Delta_1}$ leaves invariant the $3$-subset $B\cap \Delta_1$. Thus $L$ is of affine type, and hence $K^{\Delta_i}$ contains the translation group $O_3((G_{\Delta_i})^{\Delta_i})\cong T$, for each $i$. It follows that $Q=O_3(K)$ induces $T$ on each part $\Delta_i$ and hence the $Q$-orbits are the parts of $\mathcal{C}$. \qed
\medskip
We may therefore view each $\Delta_i$ as the affine plane ${\rm AG}_2(3)$.
\medskip\noindent
\emph{Claim $3$:} The $Q$-orbits in $\mathcal{B}$ have size $3$, and if $B\cap \Delta_i\ne\emptyset$, then the $Q_B$-orbits in $\Delta_i$ form a parallel class of lines of the affine plane $\Delta_i$. Moreover, for each $i$, each line of the affine plane $\Delta_i$ occurs as the intersection with $\Delta_i$ of exactly four blocks, and each parallel class of lines of $\Delta_i$ corresponds to $12$ of these block-part intersections.
\medskip\noindent
\emph{Proof of claim:}
Since $Q$ is normal in $G$, the $Q$-orbits in $\mathcal{B}$ all have the same length, say $y$. So $y$ divides $b=60$, and $y$ is a power of $3$ since $y$ divides $|Q|$, whence $y=1$ or $3$. Since $Q^{\Delta_1}$ is transitive, it acts nontrivially on the blocks intersecting $\Delta_1$ in a $3$-subset, and hence $y=3$.
Since $G^\mathcal{C}$ is transitive, it is sufficient to prove the other assertions for $\Delta_1$. Let $B$ be a block such that $B\cap \Delta_1\ne\emptyset$. Then $Q_B$ has index $3$ in $Q$, and as $Q^{\Delta_1}$ is the translation group by Claim 2, it follows that the $Q_B$-orbits in $\Delta_1$ form a parallel class of lines of the affine plane.
Let $\alpha\in \Delta_1$. Then $\alpha$ lies in $r=16$ blocks, and also $\alpha$ lies in four lines of the affine plane $\Delta_1$. For each of these lines $m$, and each point $\beta\in m\setminus\{\alpha\}$, the pair $\{\alpha,\beta\}$ lies in $\lambda=4$ blocks, each intersecting $\Delta_1$ in the unique affine line $m$ containing $\{\alpha,\beta\}$. Thus each of the affine lines on $\alpha$ occurs as the intersection with $\Delta_1$ of exactly four blocks. This is true for all points of $\Delta_1$, so each line of the affine plane $\Delta_1$ is the intersection with $\Delta_1$ of exactly four blocks. Moreover each parallel class of lines of $\Delta_1$ corresponds to $3\times 4$ block intersections with $\Delta_1$. \qed
\medskip\noindent
\emph{Claim $4$:} $Q=T\cong C_3^2$ is faithful on each $\Delta_i\in\mathcal{C}$.
\medskip\noindent
\emph{Proof of claim:}
Since $Q^{\Delta_1}=T$ is the translation group, the subgroup $R=Q_{(\Delta_1)}$ fixes $\Delta_1$ pointwise, and is equal to $Q_\alpha$ for each $\alpha\in \Delta_1$. By Claim 3, for each of the 48 blocks $B$ that meet $\Delta_1$, the intersection $m=B\cap \Delta_1$ is a line of $\Delta_1$, and $Q_B=Q_m$ has index $3$ in $Q$. This implies that, for $\alpha\in m$, $Q_{m,\alpha}$ has index $3$ in $Q_m$ and hence index $9$ in $Q$, and we conclude that $Q_\alpha= Q_{m,\alpha} < Q_B < Q$. Thus $R=Q_\alpha$ fixes each of the 48 blocks which meet $\Delta_1$.
Let $\Delta_i$ be one of the other three parts meeting such a block $B$. Then $R$ fixes $B\cap \Delta_i$ setwise, and hence each $R$-orbit in $\Delta_i$ is contained in a line of $\Delta_i$ parallel to $B\cap \Delta_i$. By Claim 3, there are just 12 blocks which meet $\Delta_i$ in a line parallel to $B\cap \Delta_i$, while there are 48 blocks which meet $\Delta_i$ nontrivially, and at most 12 of these are disjoint from $\Delta_1$. Hence there exists a block $B'$ which meets both $\Delta_1$ and $\Delta_i$ and is such that the line $B'\cap \Delta_i$ is not parallel to $B\cap \Delta_i$. We have shown that $R$ fixes each of the (non-parallel lines) $B\cap \Delta_i, B'\cap \Delta_i$ setwise, and hence $R$ fixes their intersection, which is a single point $\alpha'\in \Delta_i$. It follows that $R=Q_{\alpha'}$ and so $R$ fixes $\Delta_i$ pointwise, and hence fixes setwise every block meeting $\Delta_i$. Since this holds for each part $\Delta_i$ meeting $B$, it follows that $R$ fixes setwise every block that meets any of these four parts, and this implies that $R$ fixes every block of $\mathcal{B}$. Hence $R=1$, proving the claim. \qed
\medskip\noindent
\emph{Claim $5$:} $K\cong K^{\Delta_i}$ is faithful, for each $\Delta_i\in\mathcal{C}$.
\medskip\noindent
\emph{Proof of claim:} As $G^\mathcal{C}$ is transitive it is sufficient to prove this for $\Delta_1$. Let $A=K_{(\Delta_1)}$, the pointwise stabiliser of $\Delta_1$ in $K$. By Claim 4, $A\cap Q=1$, and it follows that the normal subgroups $A, Q$ of $K$ centralise each other. Then for each $j$, $A^{\Delta_j}$ is contained in the centraliser of $Q^{\Delta_j}$ in $G_{\Delta_j}^{\Delta_j}\cong L$.
Since $Q^{\Delta_j}$ is self-centralising in $G_{\Delta_j}^{\Delta_j}$ it follows that $A^{\Delta_j}\leq Q^{\Delta_j}$, and in particular $A^{\Delta_j}$ is a $3$-group. Since $A$ is isomorphic to a subgroup of $\prod_{j=1}^5 A^{\Delta_j}$, it follows that $A$ is a $3$-group. Thus $A\leq O_3(K)=Q$, and hence $A=1$. \qed
\medskip
Since $D=G^\mathcal{C}\leq \Sym_5$,
it follows from Claims 2 and 5 that $|G|=|G^\mathcal{C}|.|K|$ divides $|\Sym_5|\times |{\rm AGL}_2(3)|=120\times 9\times 48$. Recall that $|G|=fz$ with $f=720=2^4.3^2.5$, the number of flags. Hence $z$ divides $72$.
To complete this analysis we performed the following check computationally, using Magma:
\begin{itemize}
\item We constructed the group $W={\rm AGL}_2(3)\wr \Sym_5$ in its natural imprimitive permutation action on $\mathcal{P}$ of degree $45$ leaving invariant a partition $\mathcal{C}=\{\Delta_1,\dots,\Delta_5\}$ with each $|\Delta_i|=9$;
\item for each subgroup $G$ of $W$ with order $fz$, for $z$ a divisor of $72$, we checked whether $G$ had an orbit $\mathcal{B}$ of size $b=60$ on $12$-subsets $B$ of $\mathcal{P}$ such that $B\cap \Delta_i$ is a line of the corresponding affine plane for exactly four parts $\Delta_i\in\mathcal{C}$;
\item for each such $G$ and $\mathcal{B}$, we checked whether $(\mathcal{P}, \mathcal{B})$ was a $2$-design with $\lambda=4$.
\end{itemize}
This computer search yielded no $2$-designs.
\qed
\section*{Acknowledgement}
The authors thank Charlie Colbourn, Patric \"{O}sterg\aa rd, and Alfred Wasserman for their advice about the $36$-point design.
\section{Method}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{img/SASSNet.png}
\caption{Overview of our method. Our network takes as input a 3D volume, and predicts a 3D SDM and a segmentation map. Our learning loss consists of a multi-task supervised term and an adversarial loss on the SDM predictions.
}
\label{fig:model}
\end{figure}
\subsection{Overview}
We aim to build a deep neural network for medical image segmentation in a semi-supervised setting in order to reduce annotation cost. Given only a small number of annotated images, our key challenge is to regularize the network learning effectively using the set of unlabeled images.
In this paper, we tackle this problem by utilizing the regularity in geometric shapes of the target object class, which provides an effective constraint for both segment prediction and network learning.
Specifically, we propose to incorporate a shape-aware representation of object segments into the deep network prediction. In particular, we develop a multi-task segmentation network that takes a 3D image as input and jointly predicts a segmentation map and a SDM of object segmentation. Based on this SDM representation, we then design a semi-supervised learning loss for training the segmentation network. Our loss mainly consists of two components, one for the network predictions on the labeled set while the other enforcing consistency between the SDM predictions on the labeled and unlabeled set. To achieve effective consistency constraint, we adopt an adversarial loss that encourages the segmentation network to produce segment predictions with similar distributions on both datasets. Figure \ref{fig:model} illustrates the overall pipeline of our semi-supervised segmentation network. Below we will introduce the detailed model design in Section~\ref{sec:model}, followed by the learning loss and network training in Section~\ref{sec:loss}.
\subsection{Segmentation Network}\label{sec:model}
In order to encode geometric shape of a target semantic class, we propose a multi-task segmentation network that jointly predicts a 3D object mask and its SDM for the input 3D volume. Our network has a V-Net \cite{milletari2016v} structure that consists of an encoder module and a decoder module with two output branches, one for the segmentation map and the other for the SDM. For notation clarity, we mainly focus on the single-class setting below\footnote{It is straightforward to generalize our formulation to the multi-class setting by treating each semantic class separately for SDMs.}.
Specifically, we employ a V-Net backbone as in~\cite{yu2019uncertainty}, and then add a light-weight SDM head in parallel with the original segmentation head. Our SDM head is composed of a 3D convolution block followed by the $\tanh$ activation.
Given an input image $\mathbf{X}\in \mathbb{R}^{H\times W\times D}$, the segmentation head generates a confidence score map $\mathbf{M}\in [0,1]^{H\times W\times D}$ and the SDM head predicts a SDM $\mathbf{S}\in [-1,1]^{H\times W\times D}$ as follows:
\begin{align}
\mathbf{M} = f_\text{seg}(\mathbf{X}; \theta), \quad \quad \mathbf{S} = f_\text{sdm}(\mathbf{X}; \theta)
\end{align}
where $\theta$ are the parameters of our segmentation network, and each element of $\mathbf{S}$ indicates the signed distance of a corresponding voxel to its closest surface point after normalization~\cite{xue2019shape}.
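As a concrete illustration, the two output heads can be realised in a few lines of PyTorch. The sketch below is our own and makes assumptions: \texttt{backbone} stands for any V-Net-style trunk returning a feature volume, and the channel width is illustrative rather than the exact V-Net configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class TwoHeadSegNet(nn.Module):
    """Sketch: a segmentation head plus a parallel SDM head.

    `backbone` is assumed to map (B, 1, H, W, D) volumes to
    (B, C, H, W, D) feature volumes, e.g. a V-Net trunk.
    """
    def __init__(self, backbone, feat_channels=16, num_classes=1):
        super().__init__()
        self.backbone = backbone
        self.seg_head = nn.Conv3d(feat_channels, num_classes, kernel_size=1)
        self.sdm_head = nn.Conv3d(feat_channels, num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)
        seg = torch.sigmoid(self.seg_head(feats))   # M in [0, 1]
        sdm = torch.tanh(self.sdm_head(feats))      # S in [-1, 1]
        return seg, sdm
\end{verbatim}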
\subsection{Shape-aware Semi-supervised Learning}\label{sec:loss}
We now introduce our semi-supervised learning strategy for the segmentation network. While prior methods typically rely on the segmentation output $\mathbf{M}$, we instead utilize the shape-aware representation $\mathbf{S}$ to regularize the network training. To this end, we develop a multi-task loss consisting of a supervised loss $\mathcal{L}_{s}$ on the labeled set and an adversarial loss $\mathcal{L}_{a}$ on the entire set to enforce consistency of the model predictions.
Formally, we assume a standard semi-supervised learning setting, in which the training set contains $N$ labeled data and $M$ unlabeled data, where $N\ll M$. We denote the labeled set as $\mathcal{D}^l=\{\mathbf{X}_n, \mathbf{Y}_n, \mathbf{Z}_n\}^N_{n=1}$ and unlabeled set as $\mathcal{D}^u=\{\mathbf{X}_m\}^{N+M}_{m=N+1}$, where $\mathbf{X}_n \in \mathbb{R} ^{H \times W \times D}$ are the input volumes, $\mathbf{Y}_n \in \{0, 1\}^{H \times W \times D} $ are the segmentation annotations and $\mathbf{Z}_n \in \mathbb{R} ^{H \times W \times D}$ are the groundtruth SDMs derived from $\mathbf{Y}_n$. Below we first describe the supervised loss on $\mathcal{D}^l$ followed by the adversarial loss that utilizes the unlabeled set $\mathcal{D}^u$.
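The groundtruth SDMs $\mathbf{Z}_n$ can be precomputed from the binary masks $\mathbf{Y}_n$ with a Euclidean distance transform. The sketch below (our own code, using SciPy) follows one common convention, positive outside the object and negative inside, normalised to $[-1,1]$; the exact normalisation used in practice may differ in detail.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Normalised SDM of a binary mask (sketch of one convention)."""
    mask = mask.astype(bool)
    if not mask.any() or mask.all():           # degenerate masks
        return np.zeros(mask.shape, dtype=np.float32)
    dist_out = distance_transform_edt(~mask)   # distance to the object
    dist_in = distance_transform_edt(mask)     # distance to the background
    sdm = dist_out / dist_out.max() - dist_in / dist_in.max()
    return sdm.astype(np.float32)
\end{verbatim}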
\subsubsection{Supervised Loss $\mathcal{L}_{s}$}
On the labeled set, we employ a dice loss $l_{dice}$ and a mean square loss $l_{mse}$ for the segmentation and SDM output of the multi-task segmentation network, respectively:
\begin{align}
\mathcal{L}_{s}(\theta) =& \mathcal{L}_{seg} + \alpha \mathcal{L}_{sdm} \label{eq:supervised_loss} \\
\mathcal{L}_{seg} = \frac{1}{N}\sum_{i=1}^{N} l_{dice}(f_{seg}(\mathbf{X}_{i} ; \theta),& \mathbf{Y}_{i});\quad
\mathcal{L}_{sdm} = \frac{1}{N}\sum_{i=1}^{N} l_{mse}(f_{sdm}(\mathbf{X}_{i} ; \theta), \mathbf{Z}_{i})
\end{align}
where $\mathcal{L}_{seg}$ denotes the segmentation loss and $\mathcal{L}_{sdm}$ is the SDM loss, and $\alpha$ is a weighting coefficient balancing two loss terms.
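In code, the supervised term reduces to a soft Dice loss on the segmentation output plus a mean-squared error on the SDM output. A minimal PyTorch sketch (the Dice formulation shown is one common variant, not necessarily the exact one used here):
\begin{verbatim}
import torch

def dice_loss(pred, target, eps=1e-5):
    """Soft Dice loss for probabilistic masks (one common variant)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def supervised_loss(seg_pred, sdm_pred, seg_gt, sdm_gt, alpha=0.3):
    """L_s = L_seg + alpha * L_sdm; alpha = 0.3 in our experiments."""
    l_seg = dice_loss(seg_pred, seg_gt)
    l_sdm = torch.mean((sdm_pred - sdm_gt) ** 2)
    return l_seg + alpha * l_sdm
\end{verbatim}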
\subsubsection{Adversarial Loss $\mathcal{L}_{a}$}
To regularize the model learning with the unlabeled data, we introduce an adversarial loss that enforces the consistency of SDM predictions on the labeled and unlabeled sets. To this end, we propose a discriminator network to distinguish the predicted SDMs of the labeled set, which should be of high quality due to the supervision, from the ones of the unlabeled set. Minimizing the adversarial loss induced by this discriminator enables us to learn effective shape-aware features that generalize well to the unlabeled dataset.
Specifically, we adopt a discriminator network $D$ similar to that of~\cite{radford2015unsupervised}, which consists of 5 convolution layers followed by an MLP. The network takes an SDM and the input volume as input, fuses them through convolution layers, and predicts the probability that the pair comes from the labeled data. Given the discriminator $D$, we denote its parameters as $\zeta$ and define the adversarial loss as follows,
\begin{align}
\mathcal{L}_{a}(\theta,\zeta) = \frac{1}{N}\sum _{n=1}^{N} \log D(\mathbf{X}_n, \mathbf{S}_n; \zeta) +
\frac{1}{M}\sum_{m=N+1}^{N+M}\log\big(1-D(\mathbf{X}_m,\mathbf{S}_m;\zeta)\big) \label{eq:adv_loss}
\end{align}
where $\mathbf{S}_n=f_{sdm}(\mathbf{X}_n; \theta)$ and $\mathbf{S}_m=f_{sdm}(\mathbf{X}_m; \theta)$ are the predicted SDMs.
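A sketch of the discriminator and its training loss is given below. The depth (five strided convolution layers followed by an MLP) follows the description above, but the channel widths, the MLP size and the use of \texttt{LazyLinear} are our assumptions rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class SDMDiscriminator(nn.Module):
    """Sketch: fuses the input volume with a predicted SDM and outputs
    a logit for 'this pair comes from the labeled set'.  Widths and
    MLP size are illustrative assumptions."""
    def __init__(self, in_channels=2, width=16):
        super().__init__()
        layers, c = [], in_channels
        for i in range(5):                       # five strided conv layers
            layers += [nn.Conv3d(c, width * 2 ** i, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            c = width * 2 ** i
        self.features = nn.Sequential(*layers)
        self.mlp = nn.Sequential(nn.Flatten(), nn.LazyLinear(64),
                                 nn.LeakyReLU(0.2, inplace=True),
                                 nn.Linear(64, 1))

    def forward(self, x, sdm):
        return self.mlp(self.features(torch.cat([x, sdm], dim=1)))

def discriminator_loss(logit_labeled, logit_unlabeled):
    """Maximising the adversarial loss over zeta equals minimising the
    binary cross-entropy with labeled = 1 and unlabeled = 0."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return (bce(logit_labeled, torch.ones_like(logit_labeled)) +
            bce(logit_unlabeled, torch.zeros_like(logit_unlabeled)))
\end{verbatim}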
\subsubsection{Overall Training Pipeline}
Our overall training objective $\mathcal{V}(\theta, \zeta)$ combines the supervised and the adversarial loss defined above and the learning task can be written as,
\begin{align}
\min_{\theta}\max_{\zeta}\mathcal{V}(\theta, \zeta) = \mathcal{L}_{s}(\theta) + \beta \mathcal{L}_{a}(\theta,\zeta)
\label{eq:total_loss}
\end{align}
where $\beta$ is a weight coefficient that balances two loss terms. We adopt a standard alternating procedure to train the entire network, which includes the following two subproblems.
Given a fixed discriminator $D(\cdot;\zeta)$, we minimize the overall loss w.r.t.\ the segmentation network parameters $\theta$. To speed up model learning, we simplify the loss in two steps: firstly, we ignore the first loss term in Eqn~\eqref{eq:adv_loss}, since the SDM predictions on the labeled set are of high quality, i.e., $\mathbf{S}_n\approx \mathbf{Z}_n$; secondly, we adopt a surrogate loss for the generator, as in~\cite{Ian2014gan}. Hence the learning problem for the segmentation network can be written as,
\begin{align}
\min_\theta \mathcal{L}_{s}(\theta) - \frac{\beta}{M}\sum_{m=N+1}^{N+M}\log(D(\mathbf{X}_m, f_{sdm}(\mathbf{X}_m; \theta);\zeta)) \label{eq:segmentation_network_loss}
\end{align}
On the other hand, given a fixed segmentation network, we simply minimize the binary cross entropy loss induced by Eqn~\eqref{eq:total_loss} to train the discriminator, i.e.,
$\min_\zeta -\mathcal{V}(\theta,\zeta)$, or $\max_\zeta \mathcal{L}_a(\theta,\zeta)$.
To stabilize the overall training, we use an annealing strategy based on a time-dependent Gaussian warm-up function to slowly increase the loss weight $\beta$ (see Sec.~\ref{sec:exp} for details).
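One alternating iteration can then be sketched as follows, reusing the \texttt{supervised\_loss} and \texttt{discriminator\_loss} helpers sketched above; \texttt{beta\_warmup} is the Gaussian ramp described in Sec.~\ref{sec:exp}, and the generator step uses the surrogate adversarial term of Eqn~\eqref{eq:segmentation_network_loss}.
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def beta_warmup(t, t_max, beta_max=1e-3):
    """Time-dependent Gaussian warm-up for the adversarial weight."""
    return beta_max * math.exp(-5.0 * (1.0 - t / t_max) ** 2)

def train_step(model, disc, opt_g, opt_d, labeled, unlabeled, t, t_max):
    (x_l, y_l, z_l), x_u = labeled, unlabeled   # masks y_l, GT SDMs z_l
    beta = beta_warmup(t, t_max)

    # 1) segmentation-network (generator) step: supervised loss plus
    #    the surrogate adversarial term -log D(x_u, S_u).
    seg_l, sdm_l = model(x_l)
    _, sdm_u = model(x_u)
    adv = F.softplus(-disc(x_u, sdm_u)).mean()  # = -log sigmoid(logit)
    loss_g = supervised_loss(seg_l, sdm_l, y_l, z_l) + beta * adv
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

    # 2) discriminator step: labeled SDM predictions are "real",
    #    unlabeled ones are "fake"; the generator is frozen here.
    with torch.no_grad():
        _, sdm_l = model(x_l)
        _, sdm_u = model(x_u)
    loss_d = discriminator_loss(disc(x_l, sdm_l), disc(x_u, sdm_u))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_g.item(), loss_d.item()
\end{verbatim}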
\section{Introduction}
Semantic object segmentation is a fundamental task in medical image analysis and has been widely used in automatic delineation of regions of interest in 3D medical images, such as cells, tissues or organs. Recently, tremendous progress has been made in medical semantic segmentation~\cite{taghanaki2019deep} thanks to modern deep convolutional networks, which achieve state-of-the-art performance in many real-world tasks. However, training deep neural networks often requires a large amount of annotated data, which is particularly expensive to obtain in medical segmentation problems.
In order to reduce labeling cost, a promising approach is to adopt a semi-supervised learning~\cite{bai2017semi,baur2017semi} framework that typically utilizes a small labeled dataset and many unlabeled images for effective model training.
Recent efforts in semi-supervised segmentation have been focused on incorporating unlabeled data into convolutional network training, and can be largely categorized into two groups. The first group of methods mainly considers the generic setting of semi-supervised segmentation~\cite{zhang2017deep,hung2019adversarial,nie2018asdnet,zheng2019semi,laine2016temporal, tarvainen2017mean, yu2019uncertainty,bortsova2019semi,li2018semi}. Most of them adopt adversarial learning or a consistency loss as regularization in order to leverage unlabeled data for model learning. The adversarial learning methods~\cite{zhang2017deep, hung2019adversarial,nie2018asdnet,zheng2019semi} enforce the distributions of segmentations of unlabeled and labeled images to be close, while the consistency loss approaches~\cite{laine2016temporal,tarvainen2017mean,yu2019uncertainty,bortsova2019semi,li2018semi} utilize a teacher-student network design and require their outputs to be consistent under random perturbations or transformations of input images. To cope with difficult regions, Nie et al.~\cite{nie2018asdnet} utilize adversarial learning to select regions of unlabeled images with high confidence to train the segmentation network.
Yu et al.~\cite{yu2019uncertainty} introduce an uncertainty map based on the mean-teacher framework~\cite{tarvainen2017mean} to guide student network learning. Despite their promising results, those methods lack explicit modeling of the geometric prior of semantic objects, often leading to poor object coverage and/or boundary prediction.
The second group of semi-supervised methods attempts to address the above drawback by incorporating a strong anatomical prior on the object of interest in the model learning~\cite{zheng2019semi, he2019dpa}. For instance, Zheng et al.~\cite{zheng2019semi} introduce the Deep Atlas Prior (DAP) model, which encodes a probabilistic shape prior in its loss design. He et al.~\cite{he2019dpa} propose an auto-encoder to learn a priori anatomical features from the unlabeled dataset. However, such a prior typically assumes properly aligned input images, which is difficult to achieve in practice for objects with large variation in pose or shape.
In this work, we propose a novel shape-aware semi-supervised segmentation strategy to address the aforementioned limitations. Our main idea is to incorporate a more flexible geometric representation in the network so that we are able to enforce a global shape constraint on the segmentation output, and meanwhile to handle objects with varying poses or shapes. Such a ``shape-aware" representation enables us to capture the global shape of each object class more effectively. Moreover, by exploiting consistency of the geometric representations between labeled and unlabeled images, we aim to design a simple and yet effective semi-supervised learning strategy for deep segmentation networks.
To achieve this, we develop a multi-task deep network that jointly predicts the semantic segmentation and a signed distance map (SDM)~\cite{perera2015motion, dangi2019distance, park2019deepsdf, xue2019shape} with a shared backbone network module. The SDM assigns each pixel a value indicating its signed distance to the nearest boundary of the target object, which provides a shape-aware representation encoding richer features of object shape and surface. To utilize the unlabeled data, we then introduce an adversarial loss between the predicted SDMs of labeled and unlabeled data for semi-supervised learning. This allows the model to learn shape-aware features more effectively by enforcing similar distance map distributions on the entire dataset. In addition, the SDM naturally imposes more weight on the interior region of each semantic class, which can be viewed as a proxy for a confidence measure. In essence, we introduce an implicit shape prior and its regularization based on an adversarial loss for semi-supervised volumetric segmentation.
We evaluate our approach on the Atrial Segmentation Challenge dataset with extensive comparisons to prior arts. The results demonstrate that our segmentation network outperforms the state-of-the-art methods and generates object segmentation with high-quality global shapes.
Our main contributions are threefold: (1) We propose a novel shape-aware semi-supervised segmentation approach by enforcing geometric constraints on labeled and unlabeled data. (2) We develop a multi-task loss on segmentation and SDM predictions, and impose global consistency in object shapes through adversarial learning. (3) Our method achieves strong performance on the Atrial Segmentation Challenge dataset with only a small number of labeled data.
\section{Conclusion}
In this paper, we proposed a shape-aware semi-supervised segmentation approach for 3D medical scans. In contrast to previous methods, our method exploits the regularity in geometric shapes of the target object class for effective segment prediction and network learning. We developed a multi-task segmentation network that jointly predicts semantic segmentation and SDM of object surfaces, and a semi-supervised learning loss enforcing consistency between the predicted SDMs of labeled and unlabeled data. We validated our approach on the Atrial Segmentation Challenge dataset, which demonstrates that our segmentation network outperforms the state-of-the-art methods and generates object segmentation with high-quality global shapes.
\section{Experiments and Results}\label{sec:exp}
\begin{table}[t]
\caption{Quantitative comparisons of semi-supervised segmentation models on the LA dataset. All models use the V-Net as the backbone network. Results on two different data partition settings show that our SASSNet consistently outperforms the state-of-the-art methods.}
\label{tab:la_16}
\centering
\resizebox{.95\textwidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{\# \textbf{scans used}} & \multicolumn{4}{c}{\textbf{Metrics}} \\ \cline{2-7}
& Labeled & Unlabeled & Dice{[}\%{]} & Jaccard{[}\%{]} & ASD{[}voxel{]} & 95HD{[}voxel{]} \\ \hline
V-Net & 80 & 0 & 91.14 & 83.82 & 1.52 & 5.75 \\ \hline \hline
V-Net & 16 & 0 & 86.03 & 76.06 & 3.51 & 14.26 \\ \hline
DAP \cite{zheng2019semi} & 16 & 64 &87.89 &78.72 &2.74 &9.29 \\ \hline
ASDNet \cite{nie2018asdnet} & 16 & 64 & 87.90 & 78.85 & \textbf{2.08} & 9.24 \\ \hline
TCSE \cite{li2018semi} & 16 & 64 & 88.15 & 79.20 & 2.44 & 9.57 \\ \hline
UA-MT \cite{yu2019uncertainty} & 16 & 64 & 88.88 & 80.21 & 2.26 & 7.32 \\ \hline
UA-MT(+NMS) & 16 & 64 & 89.11 & 80.62 & 2.21 & \textbf{7.30} \\ \hline
SASSNet(ours) & 16 & 64 & 89.27 & 80.82 & 3.13 & 8.83 \\ \hline
SASSNet(+NMS) & 16 & 64 & \textbf{89.54} & \textbf{81.24} & {2.20} & 8.24 \\ \hline \hline
V-Net & 8 & 0 & 79.99 & 68.12 & 5.48 & 21.11 \\ \hline
DAP \cite{zheng2019semi} & 8 & 72 &81.89 &71.23 &3.80 &15.81 \\ \hline
UA-MT \cite{yu2019uncertainty} & 8 & 72 & 84.25 & 73.48 & 3.36 & 13.84 \\ \hline
UA-MT(+NMS) & 8 & 72 & 84.57 & 73.96 & 2.90 & 12.51 \\ \hline
SASSNet(ours) & 8 & 72 & 86.81 & 76.92 & 3.94 & 12.54 \\ \hline
SASSNet(+NMS) & 8 & 72 & \textbf{87.32} & \textbf{77.72} & \textbf{2.55} & \textbf{9.62} \\ \hline \hline
\end{tabular}}
\end{table}
We validate our method on the Left Atrium (LA) dataset from the Atrial Segmentation Challenge\footnote{http://atriaseg2018.cardiacatlas.org/} with detailed comparisons to prior methods.
The dataset contains 100 3D gadolinium-enhanced MR imaging scans (GE-MRIs) and LA segmentation masks, with an isotropic resolution of $0.625 \times 0.625 \times 0.625 mm^3$.
Following \cite{yu2019uncertainty}, we split them into 80 scans for training and 20 scans for validation, and apply the same pre-processing methods.
\subsubsection{Implementation Details and Metrics.}
The segmentation network is trained by an SGD optimizer for 6000 iterations, with an initial learning rate (lr) of 0.01, decayed by 0.1 every 2500 iterations. The discriminator uses $4\times4\times4$ kernels with stride 2 in its convolutional layers and an Adam optimizer with a constant lr of 0.0001.
We use a batch size of 4 images and a single GPU with 12GB RAM for the model training.
In all our experiments, we set $\alpha=0.3$ and let $\beta$ follow a time-dependent Gaussian warm-up function $\beta(t)=0.001\cdot e^{-5(1-t/t_{\max})^2}$, where $t$ indicates the number of iterations.
During testing, we take the segmentation map output $\mathbf{M}$ for evaluation. In addition, a non-maximum suppression (NMS) step is applied as post-processing in order to remove isolated extraneous regions. We use the standard evaluation metrics, including the Dice coefficient (Dice), Jaccard index (Jaccard), 95\% Hausdorff Distance (95HD) and Average Symmetric Surface Distance (ASD).
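For reference, the two overlap metrics reduce to a few lines of NumPy (a sketch; 95HD and ASD require surface-distance computations and are omitted here):
\begin{verbatim}
import numpy as np

def dice_jaccard(pred, gt):
    """Dice and Jaccard (in percent) between two binary volumes."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jacc = inter / np.logical_or(pred, gt).sum()
    return 100.0 * dice, 100.0 * jacc
\end{verbatim}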
\begin{figure}[t]
\centering
\subfigure[2D comparison]{
\begin{minipage}[t]{\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{img/la_v2d.png}
\end{minipage}%
}%
\quad
\subfigure[3D comparison]{
\begin{minipage}[t]{\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{img/la_v3d.png}
\end{minipage}%
}
\caption{2D and 3D visualization of the segmentations produced by UA-MT \cite{yu2019uncertainty} and our method, where GT denotes the ground-truth segmentation.
}
\label{fig:la_vis2d}
\end{figure}
\subsubsection{Quantitative Evaluation and Comparison.}
We evaluate our method in two different settings with comparisons to several recent semi-supervised segmentation approaches, including DAP \cite{zheng2019semi}, ASDNet \cite{nie2018asdnet}, TCSE \cite{li2018semi} and UA-MT \cite{yu2019uncertainty}. Table \ref{tab:la_16} presents a summary of the quantitative results, in which we first show the upper-bound performance achieved by a fully-supervised network, followed by two individual settings.
The first setting follows the work of \cite{yu2019uncertainty}, which takes 20\% of the training data as labeled data (16 labeled scans) and the rest as unlabeled data for semi-supervised training. We can see that this setting is relatively easy, as the model trained with 20\% of the data already achieves good performance (86.03\% in Dice). Among the semi-supervised methods, DAP performs worst, indicating the limitation of an atlas-based prior, while UA-MT achieves the top performance among the previous methods. Our method outperforms all the other semi-supervised networks in both Dice (89.54\%) and Jaccard (81.24\%), and achieves competitive results on the other metrics. In particular, our SASSNet surpasses UA-MT in Dice without resorting to a complex multi-network architecture.
To validate the robustness of our method, we also consider a more challenging setting in which only 8 labeled images are available for training. The second half of Table \ref{tab:la_16} shows the comparison results, where SASSNet outperforms UA-MT by a large margin (Dice: +2.56\% without NMS and +3.07\% with NMS). Without NMS, our SASSNet tends to generate more foreground regions, which leads to slightly worse performance on ASD and 95HD. However, it also produces better segmentations that preserve the original object shape. By contrast, UA-MT often misses inner regions of target objects and generates irregular shapes. Figure~\ref{fig:la_vis2d} provides several qualitative results for visual comparison.
\begin{table}[t]
\caption{Effectiveness of our proposed modules on the LA dataset. All the models use the same V-Net as the backbone, and we conduct an ablation study to show the contribution of each component module.}
\label{tab:la_ablation}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{\# \textbf{scans used}} & \multicolumn{4}{c|}{\textbf{Metrics}} & \multicolumn{1}{c}{\textbf{Cost}} \\ \cline{2-8}
& Labeled & Unlabeled & Dice{[}\%{]} & Jaccard{[}\%{]} & ASD{[}voxel{]} & 95HD{[}voxel{]} & Params{[}M{]} \\ \hline
V-Net & 8 & 0 & 79.99 & 68.12 & 5.48 & 21.11 & 187.7 \\ \hline
V-Net +SDM & 8 & 0 & 81.12 & 69.75 & 6.93 & 25.58 & 187.9 \\ \hline
V-Net +SDM +GAN & 8 & 72 & 86.81 & 76.92 & 3.94 & 12.54 & 249.7 \\ \hline
UA-MT \cite{yu2019uncertainty} & 8 & 72 & 84.25 & 73.48 & 3.36 & 13.84 & 375.5 \\ \hline
V-Net +SDM +MT & 8 & 72 & 84.97 & 74.14 & 6.12 & 22.20 & 375.8 \\ \hline
\hline
\end{tabular}
}
\end{table}
\subsubsection{Ablation Study.} We conduct several detailed experimental studies to examine the effectiveness of our proposed SDM head and the adversarial loss (GAN). Table \ref{tab:la_ablation} shows the quantitative results of the different model settings.
The first row is a V-Net trained with only the labeled data, which is our base model. We first add an SDM head, denoted as V-Net+SDM; as shown in the second row, such joint learning improves the segmentation results by 1.1\% in Dice. We then add the unlabeled data and our adversarial loss, denoted as V-Net+SDM+GAN, which significantly improves the performance (by 5.7\% in Dice).
We also compare our semi-supervised learning strategy with two methods in the mean-teacher (MT) framework (last two rows). One is the original UA-MT and the other is our segmentation network with the MT consistency loss. Our SASSNet outperforms both methods with higher Dice and Jaccard scores, which indicates the advantage of our representation and loss design. Moreover, our network has a much simpler architecture than those two networks.
\subsection{Open Data}
The Open Data movement aims to make data free to use, reuse, and redistribute by anyone.
In recent years, Open Data portals have evolved from offering data only in text formats (e.g., CSV, XML) towards web-based formats, such as Linked Data~\cite{bizer2011linked} and Web APIs, that facilitate the reuse and integration of Open Data sources by external web applications.
In this subsection, we briefly describe the most common Web API technologies for Open Data, based on their popularity in governmental Open Data portals.
\vspace{\mysep}
\noindent \textsc{Socrata}. Promoted by Tyler Technologies, the \textsc{Socrata} data platform provides an integrated solution to create and publish Open Data catalogs.
\textsc{Socrata} supports predefined web-based visualizations of the data, the exporting of datasets in text formats and data queries via its own API that provides rich query functionalities through a SQL-like language called \textsc{SoQL}.
\textsc{Socrata} has been adopted by several governments around the world (e.g., Chicago\footnote{\url{https://data.cityofchicago.org}} or Catalonia\footnote{\url{http://governobert.gencat.cat/es/dades\_obertes/index.html}}).
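As an illustration, a \textsc{SoQL} query can be issued through \textsc{Socrata}'s standard resource endpoint; the sketch below targets the air-quality dataset of our running example, although the field name used in the filter is an assumption for illustration purposes.
\begin{verbatim}
import requests

DOMAIN = "analisi.transparenciacatalunya.cat"
DATASET = "uy6k-2s8r"  # air-quality dataset of the running example

# $where and $limit are standard SoQL query parameters;
# the "municipi" field name is an illustrative assumption.
resp = requests.get(
    f"https://{DOMAIN}/resource/{DATASET}.json",
    params={"$where": "municipi = 'Barcelona'", "$limit": "5"},
)
for row in resp.json():
    print(row)
\end{verbatim}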
\vspace{\mysep}
\noindent \textsc{CKAN}. Created by the Open Knowledge Foundation, \textsc{CKAN} is an Open Source solution for creating Open Data portals and publishing datasets in them.
As an example, the European Data Portal relies on CKAN.
Similar to \textsc{Socrata}, CKAN allows viewing the data in Web pages, downloading it, and querying it using a Web API.
The CKAN DataStore API can be used for reading, searching, and filtering data in a classical Web style using query parameters or by writing SQL statements directly in the URL.
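The two query styles can be sketched as follows; the portal URL and resource identifier are placeholders, while the \texttt{datastore\_search} and \texttt{datastore\_search\_sql} actions are part of the standard CKAN DataStore API.
\begin{verbatim}
import requests

PORTAL = "https://data.example.org"      # placeholder CKAN portal
RESOURCE = "hypothetical-resource-id"    # placeholder resource id

# Classical Web style with query parameters ...
r = requests.get(f"{PORTAL}/api/3/action/datastore_search",
                 params={"resource_id": RESOURCE, "limit": 5})
print(r.json()["result"]["records"])

# ... or SQL written directly in the URL.
r = requests.get(f"{PORTAL}/api/3/action/datastore_search_sql",
                 params={"sql": f'SELECT * FROM "{RESOURCE}" LIMIT 5'})
print(r.json()["result"]["records"])
\end{verbatim}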
\vspace{\mysep}
\noindent \textsc{OData}. Initially created by Microsoft, \textsc{OData} is a protocol for creating data-oriented REST APIs with query and update capabilities.
\textsc{OData} is now also an OASIS standard.
It is especially adapted to expose and access information from a variety of data sources such as relational databases, file systems, and content management systems.
\textsc{OData} allows creating resources that are defined according to a data model and can be queried by Web clients using a URL-based query language in a SQL-like style.
Many service providers adopted and integrated \textsc{OData} in their solutions (e.g., \textsc{SAP} or \textsc{IBM WebSphere}).
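For instance, a typical \textsc{OData} request combines the standard system query options \texttt{\$filter}, \texttt{\$select} and \texttt{\$top}; the service root and entity set below are hypothetical.
\begin{verbatim}
import requests

SERVICE = "https://services.example.org/odata/AirQuality"  # hypothetical

resp = requests.get(SERVICE, params={
    "$filter": "Municipality eq 'Barcelona'",  # SQL-like predicate
    "$select": "Municipality,Pollutant",
    "$top": "5",
})
# OData (v4) JSON responses wrap collections in a "value" array.
print(resp.json().get("value", []))
\end{verbatim}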
\vspace{\mysep}
\noindent \textsc{OpenAPI}. Evolving from Swagger, the \textsc{OpenAPI} specification has become the \emph{de facto} standard to describe REST APIs.
Though not specific for Open Data, \textsc{OpenAPI} is commonly used to specify all kinds of Web APIs, including Open Data ones (e.g., Deutsche Bahn\footnote{\url{https://developer.deutschebahn.com/store}}).
\vspace{\mysep}
In our approach, we target Open Data Web APIs described by any of the previous solutions.
We rely on model-driven techniques to cope with the variety of data schema and operation representations, as described in the next sections.
\subsection{Chatbots}
Chatbots are conversational interfaces able to employ Natural Language Processing (NLP) techniques to ``understand'' user requests and reply accordingly, either by providing a textual answer and/or executing additional external/internal services as part of the fulfillment of the request.
NLP covers a broad range of techniques that may combine parsing, pattern matching strategies and/or Machine Learning (ML) to represent the chatbot knowledge base.
The latter is the dominant one at the moment thanks to the popularization of libraries and Cloud-based services like \textsc{DialogFlow}\footnote{\url{https://dialogflow.com}} or \textsc{IBM Watson Assistant}\footnote{\url{https://www.ibm.com/cloud/watson-assistant}}, which rely on neural networks to match user intents.
However, chatbot applications are much more than raw language processing components~\cite{chatbot-lessons-2018}.
Indeed, the conversational component of the application is usually the front-end of a larger system that involves data storage and service integration and execution as part of the chatbot reaction to the user intent.
Thus, we define a chatbot as an application embedding a \emph{recognition engine} to extract \emph{intentions} from user inputs, and an \emph{execution component} performing complex event processing represented as a set of \emph{actions}.
\emph{Intentions} are named entities that can be matched by the recognition engine.
They are defined through a set of \emph{training sentences}, which are input examples used by the recognition engine's ML/NLP framework to derive the potential ways a user could express the intention\footnote{In this article we focus on ML/NLP-based chatbots, but the approach can be applied to alternative recognition techniques.}.
Matched intentions usually carry \emph{contextual information} computed by additional extraction rules (e.g., a typed attribute such as a city name or a date) that is available to the underlying application.
In our approach, \emph{Actions} are used to represent simple responses such as sending a message back to the user, as well as advanced features required by complex chatbots like database querying or external service calling (e.g. API queries in this paper).
Finally, we define a \emph{conversation path} as a particular sequence of received user \emph{intentions} and associated \emph{actions} (including non-messaging actions) that can be executed by the chatbot application.
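To make these notions concrete, the following framework-agnostic sketch declares an intention with its training sentences and a contextual parameter, plus an action that queries an Open Data API; all names, including the stub helper, are illustrative assumptions rather than the interface of any particular chatbot platform.
\begin{verbatim}
# Illustrative, framework-agnostic intention definition.
intent = {
    "name": "GetAirQuality",
    "training_sentences": [
        "what is the air quality in Barcelona?",
        "show me the pollution data of my town",
    ],
    "parameters": {"city": "location"},  # contextual information
}

def query_open_data_api(city):
    # Stub standing in for a real Web API call (see the previous
    # subsection on Open Data Web APIs).
    return [{"city": city, "pollutant": "NO2", "value": 42}]

def action(params):
    # Non-messaging action executed when the intention is matched.
    rows = query_open_data_api(params["city"])
    return f"Found {len(rows)} measurements for {params['city']}."

print(action({"city": "Barcelona"}))
\end{verbatim}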
\subsection{Modeling Open Data APIs}
To model Open Data APIs, we propose employing UML class diagrams plus two UML profiles required to optimize and customize the bot generation.
\subsubsection{Core Open Data representation as a UML Class Diagram}
Concepts, properties and operations of Open Data APIs are represented using standard elements of UML structural models (classes, properties and operations, respectively).
Figure \ref{fig:example} shows an excerpt of the UML model for the running example\footnote{Full model at \url{http://hdl.handle.net/20.500.12004/1/C/ER/2020/575}}.
As can be seen, the model includes the core concept of the API, called \emph{AirQualityData}, plus two more classes to represent data structures (i.e., \emph{Address} and \emph{Location}).
Note that some elements include the stereotypes that we present later.
\begin{figure}[t]
\centering
\includegraphics{examplePollution}
\caption{UML model for the running example (our editor can show/hide the stereotypes to show a simplified representation of the diagram).}
\label{fig:example}
\vspace{0.5em}
\end{figure}
It is worth noting that most Open Data APIs revolve around a single core data element composed of a rich set of properties, which can be split (i.e., ``normalized'') into separate UML classes following good design practices, also facilitating the understanding of the model.
This is what we have done for the UML diagram shown in Figure \ref{fig:example}.
\subsubsection{The \textsc{Bot} profile}
To be able to generate more complete bots, and in particular to expand on aspects important for the quality of the conversation, the \textsc{Bot} profile adds a set of stereotypes for UML model elements that cover
(1) which data the chatbot should expose,
(2) how to refer to model elements (instead of some obscure internal API identifiers),
and (3) synonyms for model elements that citizens may employ when attempting to alternatively name the concept as part of a sentence.
Figure~\ref{fig:botProfile} shows the specification of the \textsc{Bot} profile.
It comprises three stereotypes, namely, \emph{ClassConfig}, \emph{PropertyConfig} and \emph{BotVocabulary}, extending the \emph{Class}, \emph{Property} and \emph{NamedElement} UML metaclasses, respectively.
The \emph{ClassConfig} stereotype includes the \emph{toExpose} property, in charge of defining if the annotated Class element has to be made visible to end-users via the chatbot.
The \emph{PropertyConfig} stereotype also includes the \emph{toExpose} property, with the same purpose; plus the \emph{toFilterWith} property, which indicates if the corresponding annotated property can be used to filter results as part of a conversation iteration.
For instance, in our running example, pollution data could be filtered via date.
Finally, the \emph{BotVocabulary} stereotype can annotate almost any UML model element and allows specifying a more ``readable'' name to be used when printing concept information and a set of synonyms for the element. \looseness=-1
\begin{figure}[t]
\centering
\includegraphics{botProfile}
\caption{\textsc{Bot} profile.}
\label{fig:botProfile}
\end{figure}
In Figure~\ref{fig:example} we see the \textsc{Bot} profile applied on the running example. Note, for instance, how we define that \emph{town} and \emph{city} could be used as synonyms of \emph{Municipality} and that this attribute can be used to filter \emph{AirQuality} results.
\subsubsection{The \textsc{OpenData} profile}
While the previous profile is more oriented towards improving the communication between the chatbot and the user, this \textsc{OpenData} profile is specially aimed at defining the technical details the chatbot needs to know in order to communicate with the Open Data API backend.
The profile defines a set of stereotypes that cover how to access the information of the model elements via the Web API.
The access method depends on the specification followed by the Open Data API, which can be \textsc{Socrata}, \textsc{CKAN}, \textsc{OData} or \textsc{OpenAPI}.
Figure~\ref{fig:opendataProfile} shows the \textsc{OpenData} profile.
As can be seen, we have defined three stereotypes, namely, \emph{OpenDataAPIDetails}, \emph{OpenDataField} and \emph{OpenDataFieldType}, which extend \emph{Class}, \emph{Property} and \emph{Type} UML metaclasses, respectively.
The \emph{OpenDataAPIDetails} stereotype includes a set of properties to enable the API query of the annotated UML Class.
For instance, it includes the \emph{domain} and \emph{webUri} to specify the host and route parameters to build the query.
It also includes the \emph{APIType} property, which sets the kind of Open Data API (see values of the \emph{OpenDataAPIType} enumeration).
The \emph{OpenDataField} stereotype annotates properties with additional information depending on the type of Open Data API used.
For instance, the \emph{SocrataField} stereotype indicates the name of the field (see \emph{fieldName}) that has to be queried to retrieve the annotated property.
Finally, the \emph{OpenDataFieldType} stereotype includes additional information regarding the types of the properties used by the Open Data APIs.
\begin{figure}[t]
\centering
\includegraphics{opendataProfile-v3}
\caption{\textsc{OpenData} profile.}
\label{fig:opendataProfile}
\end{figure}
Figure~\ref{fig:opendataProfile} also includes stereotypes prefixed with \emph{CKAN}, \emph{OData} and \emph{Adhoc} (in grey) to cover the information required for \textsc{CKAN}, \textsc{OData} and \textsc{OpenAPI} specifications.
We do not fully detail them due to the lack of space but they are available online\footnote{\url{http://hdl.handle.net/20.500.12004/1/C/ER/2020/822}}.
Besides, the \emph{Adhoc} annotations also use the \textsc{OpenAPI} profile~\cite{DBLP:conf/models/Ed-DouibiIBC19}.\looseness=-1
As an example, this profile is also used to annotate Figure~\ref{fig:example}. While the profile is rather exhaustive and comprises plenty of detailed, technical information, note that it is automatically applied during the injection process.
\subsection{Injection of Open Data Models}
Injectors collect specific data items from the API descriptions in order to generate a model representation of the API.
In a nutshell, regardless of the API specification used, the injector always collects information about the API metadata, its concepts and properties.
This information is used to generate a UML model annotated with the \textsc{OpenData} profile.
Additionally, injectors also initialize the annotations corresponding to the \textsc{Bot} profile with default values
which will later be tuned during the refinement step (see next subsection).
In our running example, the injector takes as input the \textsc{Socrata} description of the data source\footnote{\url{https://analisi.transparenciacatalunya.cat/api/views/metadata/v1/uy6k-2s8r.json}} to create the UML model classes and stereotypes.
To complement the definition of the data fields and their types, the injector also calls the \textsc{Views API}\footnote{\url{https://analisi.transparenciacatalunya.cat/api/views.json?id=uy6k-2s8r}}, an API provided by \textsc{Socrata} to retrieve metainformation about the data fields of datasets.
\subsection{Refinement of Open Data Models}
Once the injection process creates a UML schema annotated with stereotypes, the bot designer can revise and complete it to generate a more effective chatbot.
The main refinement tasks cover:
(a) providing default names and synonyms for model elements, which enriches the way the chatbot (and the user) can refer to such elements;
and (b) setting the visibility of data elements, thus enabling the designer to hide some elements of the API in the conversation.
During the refinement step, the bot designer can also revise the \textsc{OpenData} profile values if the API description is not fully aligned with the actual API behavior, as the specification (the input of the process) is unfortunately sometimes not completely up-to-date with the deployed API implementation (e.g., type mismatches).
\section{Introduction}
\label{sec:introduction}
\input{2020-ER-Introduction}
\section{Background}
\label{sec:background}
\input{2020-ER-Background}
\section{Overview}
\label{sec:overview}
\input{2020-ER-Overview}
\section{Importing Open Data APIs as Models}
\label{sec:importing}
\input{2020-ER-Importing}
\section{Generating the Bot}
\label{sec:generator}
\input{2020-ER-Generator}
\section{Tool support}
\label{sec:toolSupport}
\input{2020-ER-ToolSupport}
\section{Related Work}
\label{sec:related}
\input{2020-ER-Related}
\section{Conclusion}
\label{sec:conclusion}
\input{2020-ER-Conclusion}
\bibliographystyle{splncs04}
\section{Introduction}
Based on only a few sample images of a certain object in different poses, humans have the strong ability to infer and depict 2D images of the same object in arbitrary poses \cite{shepard1971mental}. This paper focuses on a similar task, known as novel view synthesis, which aims to make the computer render a novel target view image of an object given its current source view input. Obviously, this task requires the computer to understand the relationship between the 3D object and its pose. It has many potential applications in computer vision and graphics, such as action recognition \cite{wang2014cross}, 3D object recognition \cite{savarese2008view}, and modeling and editing \cite{massa2016deep}.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{main0.pdf}
\caption{
We use unpaired data to realize view synthesis. In (a), given the
first source view image, the chair rotates with a span of 360$^\circ$. In (b), faces
are synthesized into existing predefined views in the dataset. In (c), we interpolate the face into views unseen in the training data. Details are given in the result sections \ref{subsec:abl} and \ref{subsec:rel}.}
\label{fig:main0}
\end{figure}
Traditional approaches \cite{avidan1997novel,kholgade20143d} for this task are mainly based on 3D projection geometry. They first construct a 3D shape model of the object from cues in the image. The model is then projected onto the 2D image plane of the target view. If the 3D model can be built perfectly, the object can be rendered precisely in arbitrary poses. However, building a 3D object model from a single 2D image is an ill-posed problem. Therefore, it needs a large number of close-viewpoint images to capture the full object structure. Since the structures of various objects are quite different, a 3D geometry model for a particular object may not generalize to others. Moreover, rendering a high quality image depends not only on the object model, but also on other conditions such as the lighting and the background, which need to be modeled independently.
Learning based approaches \cite{rematas2014image,zhou2016view} begin to show their advantages with the help of deep convolutional neural networks (CNNs). This type of method directly learns the mapping network from the source view to the target without building the 3D model or knowing the camera pose. The mapping network is modeled by a huge number of parameters determined in a data-driven manner. Hence it is large enough to accommodate not just the geometric projection function, but also the background and lighting conditions.
Recently, employing image generation techniques like the generative adversarial network (GAN) has drawn researchers' attention.
\emph{E.g.}, novel view synthesis can be modeled by a conditional GAN (cGAN) just like image-to-image translation \cite{isola2017image}.
Disadvantages of such methods lie in two aspects. First, the model does not consider the prior knowledge about the projection geometry, although previous works \cite{tran2017disentangled} already achieve promising results given both the pose and identity labels as conditions. The works in \cite{nguyen2019hologan,sun2018multi,xu2019view} improve this by either designing a differentiable 3D-to-2D projection unit \cite{nguyen2019hologan}, predicting the warping flow between two different views \cite{sun2018multi}, or using a specific pose matrix rather than a one-hot vector as the input condition \cite{xu2019view}.
Second, training such a view translation model often requires paired data, with one image used as the source view and the other as the target. The paired data essentially provide the important constraining loss functions for minimization. Nonetheless, ground truth data from the target view are not easy to obtain in real applications. Lately, with recent synthesis techniques \cite{zhu2017unpaired,bao2017cvae}, building a translation model from unpaired data has become possible, which can greatly release the constraint on novel view synthesis.
This paper proposes a novel view synthesis algorithm using a conditional deformable flow in the cVAE-GAN framework, designed for training with unpaired data, although it still achieves better results if the target view image can be further exploited in the loss functions. The key idea is to perform the view translation by deforming the latent feature map with optical flows, computed from the image feature and the view condition vectors together.
We find that cVAE is able to disentangle the view-relevant and irrelevant factors, by mapping different source view images into posteriors, and making them close to a common prior.
It greatly increases the performance on the unpaired data. To further improve the synthesis results, we incorporate the adversarial training in the pixel and latent feature domain, and the reconstruction loss on the sampling code from the view-irrelevant posterior.
Specifically, we built the generator with a pair of connected encoder and decoder.
The source and target view conditions are added into them by our proposed conditional deformable module (CDM), in which the one-hot view vector is first mapped into two latent codes, and then they are used as two filters to convolve the features, giving the displacements on $x$ and $y$ directions. Note that instead of one flows, we actually get $3\times3$ flows for each location like in \cite{dai2017deformable}. To achieve this, the features are divided into $9$ channel groups and the two filters convolve each group to output a pair of displacement maps. Each $3\times3$ results then deform the corresponding location in its $3\times3$ neighbourhood, naturally followed by an ordinary conv layer to refine feature maps after the deformation. Rather than directly giving the deformed features into the later layers, we also design a deformed feature based normalization module (DFNM), which learns the scale and offset
given the deformed feature as its input. With the help of the CDM and DFNM, the encoder maps the source into a posterior, while the decoder transforms the code, sampled from either the posterior or the prior, back into a target view image. Besides the reconstructed and prior-sampled image in traditional cVAE-GAN, our model also synthesizes a view-translated image to guide the generator for the view synthesis task.
The contributions of this paper lie in the following aspects. First, we build a model in the cVAE-GAN framework for novel view synthesis based on unpaired data. With the traditional and the extra added constraining losses, the model maps the source image into a latent code which does not reflect the view conditions. The target view then complements the code in the decoder. Second, we propose two modules, named the CDM and DFNM, for view translation. They fit in our model to improve the synthesis results. Third, extensive experiments are performed on two datasets to validate the effectiveness of the proposed method.
\section{Related Works}
\textbf{Image generation by VAE and GAN.} GAN \cite{goodfellow2014generative} and the Variational Auto-Encoder (VAE) \cite{kingma2013auto} are two powerful tools for generating high dimensional structured data. Both of them map a random code drawn from the prior into image domain data. GAN introduces a discriminator $D$ to evaluate the results from the generator $G$. $D$ and $G$ are trained in an adversarial manner, and finally $G$ is able to synthesize high quality images. However, GAN training is unstable, and mode collapse often happens. Therefore, extra tricks are often added to limit the ability of $D$ \cite{gulrajani2017improved,heusel2017gans}. VAE has a pair of encoder and decoder. In VAE, the input image is first mapped into the latent probabilistic space by the encoder. The decoder takes a random code drawn from the posterior to reconstruct the input image. VAE can be easily trained by the reconstruction loss together with the KL loss as its regularization. But it tends to give blurry images, so it usually works with a discriminator to form a GAN \cite{larsen2015autoencoding}. Originally, both GAN and VAE perform unconditional generation. To better control the generated results, cGAN \cite{mirza2014conditional,isola2017image,miyato2018cgans} and cVAE \cite{sohn2015learning,bao2017cvae} were proposed. In these works, the conditional label is given to the network as an input, so that the generated results fulfill the required condition. $D$ in cGAN evaluates not only the image quality, but also the condition conformity.
GAN and VAE have become popular tools in novel view synthesis. Particularly, the latent code can be disentangled into different dimensions in an unsupervised way \cite{higgins2017beta,nguyen2019hologan}, with some of them naturally controlling the pose, which shows their great potential for view synthesis.
\textbf{Novel view synthesis.}
Novel view synthesis is a classical topic in both computer vision and graphics. Traditional approaches are built by the 3D projection geometry \cite{avidan1997novel,savarese2008view,kholgade20143d,zhang2015meshstereo,rematas2016novel}. These approaches estimate the 3D representation of the object, including the depth and camera pose \cite{avidan1997novel}, 3D meshes \cite{zhang2015meshstereo} and 3D model parameters \cite{savarese2008view,kholgade20143d,rematas2016novel}. Learning based method becomes increasingly popular with the help of CNN. Since all types of 3D representations can now be estimated by CNN, it is the main building blocks of the view synthesis algorithm. Dosovitskiy \emph{et al.} \cite{dosovitskiy2015learning} learn a CNN which takes the low dimensional code including the shape and camera pose as the input, and maps it into a high dimensional image. Zhou \emph{et al.} \cite{zhou2016view} employ a CNN to predict the appearance flow to warp source view pixels directly. However, without the adversarial training, these works tend to give low quality images.
Since GAN and VAE are able to generate high quality images, GAN-based methods have become dominant recently \cite{park2017transformation,tran2017disentangled,sun2018multi,tian2018cr,xu2019view}. Park \emph{et al.} \cite{park2017transformation} predict a flow and an occlusion map to warp pixels first, and then the deformed image is given to the following network for refinement. The work \cite{sun2018multi} fully exploits a sequence of source images by giving them to an RNN-based network, which predicts a series of warping flows from the sources to the current target view. In DR-GAN \cite{tran2017disentangled}, a connected encoder-decoder based generator is proposed. The encoder transforms the image into a latent code. Together with the target view condition, the code is applied by the decoder to synthesize the image. The discriminator in DR-GAN takes advantage of the ID labels to ensure the view translation does not change the source ID. CR-GAN \cite{tian2018cr} extends the encoder-decoder based structure by adding an extra path beginning from the decoder, which gives an extra reconstruction constraint in the image domain. VI-GAN \cite{xu2019view} employs the estimated camera pose matrix as the input condition for both the source and target views, which replaces the one-hot condition vector. It also feeds the view-translated image back into the encoder, and requires its latent code to be close to the code from the source view, hence building a view-independent space. Note that most of the above works \cite{park2017transformation,sun2018multi,tian2018cr,xu2019view} require paired data to form the loss function. Although DR-GAN does not have this constraint, it still requires the ID label for training the discriminator. Our work is totally based on unpaired data and does not need any ID label during training.
\section{Method}
\subsection{Overview framework}
This paper regards novel view synthesis as a condition translation task in cVAE-GAN. To achieve view translation based on unpaired data, we propose a conditional deformable module (CDM) and a deformed feature based normalization module (DFNM) in our designed network. To enhance the separation between the view-relevant and irrelevant factors, a disentanglement adversarial classifier (DAC) is also incorporated. As is shown in Figure \ref{fig:fig1}, our network consists of three major components: an encoder $E$, a decoder $G$ and a discriminator $D$. $\Psi_{EX}$, $\Psi_{EY}$ and $\Psi_{GX}$, $\Psi_{GY}$ are four different MLPs in $E$ and $G$, respectively. These MLPs map the view label into conv filters, which are responsible for generating the optical flows. Given a source input image $X_a$ and its view label $Y_a$, the algorithm synthesizes a view-translated image $\bar{X}_b$ under the target view $Y_b$. Note that we do not have the ground truth $X_b$ to constrain the model during training.
In Figure \ref{fig:fig1},
$E$ maps $X$ into a posterior $E(Z|X,Y)=N(\mu, \Sigma)$, from which a random code $Z\sim E(Z|X,Y)$ can be sampled. With $Z$ as its input, $G$ renders the fake images, which are given to $D$ to evaluate their realness and view conformity.
cVAE constrains $E(Z|X,Y)$ for all $X$ with the common prior $N(0, I)$ by reducing the KL divergence between them.
In cVAE, $E$ removes $Y_a$ from the source $X_a$, while $G$ adds $Y_b$ into the synthesized image. To fit the task of novel view synthesis, $G$ generates three kinds of images: the reconstructed, the prior-sampled and the view-translated image. Note that our model employs $Y$ as an input for both $E$ and $G$. Instead of direct concatenation, we propose the modules CDM and DFNM, which make the whole network suitable for view translation.
Moreover, we follow the idea of BicycleGAN \cite{zhu2017toward} to reconstruct $Z$ from the prior-sampled image, and it ensures $G$ to take effective information from the code $Z$.
\begin{figure}
\centering
\includegraphics[height=7.0cm]{main1.pdf}
\caption{Overview framework of the proposed network structure. (a) the source image $X_a$ with its label viewpoint $Y_a$ is translated into $\bar{X}_b$ in the target view $Y_b$. $\bar{X}_a$ is the reconstructed image with the same $Y_a$ given at both $E$ and $G$. (b) demonstrates that the code $Z\sim N(0,I)$ is synthesizing into a prior-sampled image, which is given back to $E$ to reconstruct the code $Z$.}
\label{fig:fig1}
\end{figure}
\subsection{Conditional Deformable Module (CDM)}
We now give the details about the proposed CDM, applied in both $E$ and $G$. Our motivation is to change the source view $Y_a$ to the target $Y_b$ by warping $X_a$ with the optical flow. Therefore, the CDM actually learns to generate the 2D flows for the features. Note that the warping is particularly useful when $Y_a$ and $Y_b$ are close. However, if they are far from each other, the deformed feature needs to be refined and complemented by the later layers.
Here, we argue that the flows are mainly determined by $Y$, but they are also influenced by the content in $X$. Therefore, they should be computed from both of them. As the view label $Y$ has no spatial dimensions,
$Y$ is first mapped into a latent code, and then the code convolves the feature to get the offsets.
Specifically, two sets of MLPs, $\Psi_{EX}$, $\Psi_{EY}$ and $\Psi_{GX}$, $\Psi_{GY}$, first map $Y_a$ and $Y_b$ to the latent codes $W$ ($W_{EX}$, $W_{EY}$ in $E$ and $W_{GX}$, $W_{GY}$ in $G$). Here, we separate the filters for the $x$ and $y$ directions, and for $E$ and $G$. Detailed discussions are given in the experiments. Then, $W$ are used as filters to convolve the feature maps, resulting in several pairs of feature maps indicating the displacements $dx$ and $dy$ in the $x$ and $y$ directions.
Figure \ref{fig:fig2} shows the details of the CDM. It is mainly composed of the conditional flow computation (CFC) and the deformable conv module, as shown in Figure \ref{fig:fig2} (a). Given the input $F^i\in \mathbb{R}^{H\times W\times C}$ of the $i$th layer, the CDM outputs the deformed $F_d^i$ of the same size. The two latent vectors $W$, computed from the view condition label $Y$ by the MLPs, are also inputs.
Particularly, $F^i$ is given to a conv layer with $C'$ filters to produce ${F'}\in \mathbb{R}^{H\times W\times C'}$. ${F'}$ is split into different groups along the channel dimension and then given to the CFC. Figure \ref{fig:fig2} (b) and (c) are two options for the CFC. In practice, we choose the design in Figure \ref{fig:fig2} (b), in which the Kernel Given convolution ($KGconv$) layer uses $W_X, W_Y\in\mathbb{R}^{1 \times 1\times \frac{C'}{9}}$ as a pair of filters to convolve each $\frac{C'}{9}$-channel interval, leading to a pair of ${dx, dy}\in\mathbb{R}^{H\times W\times 9}$. Note that $dx, dy$ are composed of 9 groups of flows.
Using 9 groups of flows was proposed in \cite{dai2017deformable} to introduce adaptive receptive fields in the conv layer: the 9 sets of flows correspond to the offsets of a $3\times3$ conv kernel, and they finally give the deformed feature ${F}_d^i$. We follow this design, although the flows are redundant and correlated to some extent, since they are the offsets of adjacent $3\times3$ elements. Nevertheless, the 9 sets of flows can differ from each other, depending on the data.
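The following PyTorch sketch shows our reading of this design; the class and argument names are ours, the refinement weights stand in for the ordinary conv layer after the deformation, and the $(dy, dx)$ offset layout follows the convention of \texttt{torchvision}'s \texttt{deform\_conv2d}.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class CDM(nn.Module):
    # Sketch of the conditional deformable module (assumed names).
    def __init__(self, c_in, n_views, c_mid=90):
        super().__init__()
        assert c_mid % 9 == 0
        self.c_g = c_mid // 9                      # channels per group
        self.to_fp = nn.Conv2d(c_in, c_mid, 3, padding=1)  # F -> F'
        self.psi_x = nn.Linear(n_views, self.c_g)  # Psi_X: Y -> W_X
        self.psi_y = nn.Linear(n_views, self.c_g)  # Psi_Y: Y -> W_Y
        # Ordinary 3x3 conv refining the feature after deformation.
        self.w_ref = nn.Parameter(0.02 * torch.randn(c_in, c_in, 3, 3))

    def forward(self, f, y_onehot):
        n, _, h, w = f.shape
        fp = self.to_fp(f).view(n, 9, self.c_g, h, w)
        w_x = self.psi_x(y_onehot).view(n, 1, self.c_g, 1, 1)
        w_y = self.psi_y(y_onehot).view(n, 1, self.c_g, 1, 1)
        dx = (fp * w_x).sum(dim=2)  # KGconv: 1x1 conv per group -> (n,9,h,w)
        dy = (fp * w_y).sum(dim=2)
        offset = torch.stack([dy, dx], dim=2).reshape(n, 18, h, w)
        return deform_conv2d(f, offset, self.w_ref, padding=1)

cdm = CDM(c_in=64, n_views=13)
out = cdm(torch.randn(2, 64, 32, 32), torch.eye(13)[:2])  # (2,64,32,32)
\end{verbatim}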
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{sub2.pdf}
\caption{The details of the CDM. (a) Given the feature $F\in\mathbb{R}^{H\times W\times C}$ before the deformation, the output $F_d$ is the deformed feature with the same size as $F$. (b) The CFC takes two separate input latent codes $W_X$ and $W_Y$, which are used as filters to convolve a number (usually 9) of groups in $F'$.
(c) An alternative design for the CFC: only one filter is provided, and it convolves 18 groups. }
\label{fig:fig2}
\end{figure}
\subsection{Deformed Feature based Normalization Module (DFNM)}
The deformed feature maps ${F}_d^i$ need to be further processed by the $(i+1)$th layers in $E$ and $G$. One intuitive way is to directly use ${F}_d^i$ as their input. However, recent advances in GAN and cGAN show the advantage of conditional normalization layers like AdaIN \cite{huang2017arbitrary} and SPADE \cite{park2019semantic}. Different from BN or IN, such layers do not learn the scale $\gamma$ and offset $\beta$ as trainable model parameters. Instead, these are computed from the features of a side branch. In other words, the conditional adaptive normalization module learns to scale and offset based on its conditional input.
Inspired by SPADE, we propose a new conditional normalization scheme named DFNM, which uses ${F}_d^i$ as the conditional input from the side branch. DFNM performs the de-normalization, which means determining the appropriate values of $\beta$ and $\gamma$. To be specific, it employs ${F}_d^i$ as its input, and specifies $\beta$ and $\gamma$ by two conv layers. Note that DFNM has distinct internal parameters for different layers, hence it progressively adjusts the features in the main branch based on its current input. In practice, we can make different choices on the dimensions of $\beta$ and $\gamma$. Here we simply follow the setting in SPADE, which outputs a unique $\gamma^i_{y,x,c}$ and $\beta^i_{y,x,c}$ at each 3D site, where the subscripts index the height, width and channel dimensions, respectively. Before the de-normalization, the features in the main branch are first normalized by subtracting $\mu$ and dividing by $\sigma$.
Here we follow the way in BN to compute per-channel statistics
$\mu^i_c$ and $\sigma^i_c$ from $h^i_{n,y,x,c}$ in the batch.
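A minimal SPADE-style sketch of this module is given below; the hidden width and layer names are our assumptions, while the parameter-free batch normalization corresponds to the per-channel statistics $\mu^i_c$ and $\sigma^i_c$ above.
\begin{verbatim}
import torch
import torch.nn as nn

class DFNM(nn.Module):
    # Sketch of the deformed-feature-based normalization module.
    def __init__(self, c_main, c_cond, c_hidden=128):
        super().__init__()
        # Parameter-free BN computes per-channel mu and sigma.
        self.bn = nn.BatchNorm2d(c_main, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(c_cond, c_hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(c_hidden, c_main, 3, padding=1)
        self.to_beta = nn.Conv2d(c_hidden, c_main, 3, padding=1)

    def forward(self, h, f_d):
        # gamma/beta vary per 3D site, predicted from the deformed feature.
        a = self.shared(f_d)
        return self.bn(h) * (1 + self.to_gamma(a)) + self.to_beta(a)

dfnm = DFNM(c_main=64, c_cond=64)
out = dfnm(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
\end{verbatim}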
\subsection{Overall Optimization Objective}
The loss functions used in this paper consist of three parts, namely disentangling losses, reconstruction losses and adversarial losses.
\subsubsection{Disentangling loss}
The disentangling loss constrains the encoder $E$ and prevents it from extracting source view-relevant features, so that the target view $Y_b$ can be easily added into the view-translated image. The KL constraint penalizes the posterior distribution $E(Z|X_a, Y_a)$ for being far from the standard Gaussian $N(0,I)$, which to some extent makes the random code $Z\sim E(Z|X_a, Y_a)$ not carry information related to $Y_a$.
The KL loss $L_{KL}$, shown in Eq.\ref{eq:kl}, can be easily computed in closed form since both the prior and the posterior are assumed to be Gaussian.
\begin{equation}
\label{eq:kl}
L_{KL} = D_\text{KL}[E(Z|X_a, Y_a)||{N}({0},{ I})]
\end{equation}
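For a diagonal posterior with mean $\mu$ and covariance $\Sigma=\mathrm{diag}(\sigma_1^2,\dots,\sigma_d^2)$, this takes the standard closed form
\begin{equation*}
L_{KL} = \frac{1}{2}\sum_{j=1}^{d}\left(\mu_j^2+\sigma_j^2-\log\sigma_j^2-1\right).
\end{equation*}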
However, this loss also constrains the view-irrelevant factors, so this kind of information in $Z$ may be lost because of its penalty. To cope with this issue, we propose the DAC, which mainly aims to reduce the view-relevant factors in $Z$. With the help of the DAC, the KL loss weight can be reduced, so that the view-irrelevant factors remain in $Z$ to a greater extent. In practice, we implement the DAC as two FC layers with the purpose of classifying the view based on $Z$. The DAC is trained in an adversarial manner, hence it has two training stages, the $D$ and $G$ stages.
In the $D$ stage, the DAC is provided with the output $Z$ from $E$ and the correct source view label, while in the $G$ stage, the DAC is fixed and $E$ is trained with the adversarial loss from the DAC. In this stage, we give the DAC a uniform target label
with the same degree of confidence on each view.
The cross entropy losses are defined in Eq.\ref{eq:advE} and Eq.\ref{eq:advDAC}, respectively.
\begin{equation}
\label{eq:advE}
L_{E}^{cls} = - \mathbb{E}_{Z\sim E(Z|X_a,Y_a)}\sum_c \frac{1}{C} \log DAC(c | Z)
\end{equation}
\begin{equation}
\label{eq:advDAC}
L_{DAC}^{cls} = - \mathbb{E}_{Z\sim E(Z|X_a, Y_a)}\sum_c \mathbb{I}(c=Y_a) \log DAC(c | Z)
\end{equation}
where $\mathbb{I}(c=Y_a)$ is the indicator function, and $DAC(c | Z)$ is the softmax probability output by the disentanglement adversarial classifier.
\subsubsection{Reconstruction losses}
Reconstruction losses are important regularizations which also
ensure that
the view-irrelevant factors remain unchanged during view translation.
Without extra supervision, cVAE requires the synthesized image $\bar{X}_a$ to be close to the input when $E$ and $G$ are provided with the same view label $Y_a$. In addition,
constraints on the middle-layer features of a pre-trained classification network are also employed in our work.
As shown in Eq.\ref{eq:L1 pixel} and Eq.\ref{eq:L1 gram}, $\phi^i$ indicates the $i$th layer of a pre-trained VGG network, and $Gram$ denotes the Gram matrix, a typical second-order feature.
\begin{equation}
\label{eq:L1 pixel}
\begin{aligned}
L_{E, G}^{pixel} =& ||X_a -\bar{X_a}||_1, \quad
L_{E, G}^{content} =& ||\phi^i({X_a}) -\phi^i({\bar{X_a}})||_1
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:L1 gram}
\begin{aligned}
L_{E, G}^{style} =& ||Gram(\phi^i({X_a})) -Gram(\phi^i({\bar{X_a}}))||_1 \\
\end{aligned}
\end{equation}
When $Z \sim {N} (0, I)$ for the prior-sampled image $G(Z, Y_a)$, we cannot constrain it directly in the image domain, so we extract the feature from the image $G(Z, Y_a)$ with $E$ and reconstruct $Z$,
so that the information in $Z$ is kept. This reconstruction loss is expressed in Eq.\ref{eq:L1 z}.
\begin{equation}
\label{eq:L1 z}
L^{rec_{z}}_{G} = \mathbb{E}_{Z\sim N(0, I)}||Z - E(G(Z, Y_a), Y_a)||_1
\end{equation}
\subsubsection{Adversarial loss}
In this paper, the projection discriminator \cite{miyato2018cgans} is adopted.
Given the real image $X_a$, constraints are imposed on three types of fake images: the reconstructed $G(E(X_a, Y_a), Y_a)$, the view-translated $G(E(X_a, Y_a), Y_b)$ and the prior-sampled image $G(Z, Y_a)$, as shown in Eq.\ref{eq:adv D} and Eq.\ref{eq:adv EG}.
\begin{equation}
\label{eq:adv D}
\begin{aligned}
L_{D}^{adv} =& \mathbb{E}_{{X}\sim p_{\rm{data}}}[\max(0,1-D(X,Y_a))]\\
&+\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1+D(G(Z, Y_a),Y_a))]\\
&+\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1+D(G(Z, Y_b),Y_b))]\\&+\mathbb{E}_{Z\sim N(0, I)}[\max(0,1+D(G(Z, Y_a),Y_a))]
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:adv EG}
\begin{aligned}
L_{E, G}^{adv} =&\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1-D(G(Z, Y_a),Y_a))]\\
&+\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1-D(G(Z, Y_b),Y_b))]\\
&+\mathbb{E}_{Z\sim N(0, I)}[\max(0,1-D(G(Z, Y_a),Y_a))]
\end{aligned}
\end{equation}
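These hinge losses can be sketched directly; the helper names are ours, and each element of the list stands for the projection discriminator's logits on one of the three fake image types.
\begin{verbatim}
import torch
import torch.nn.functional as F

def d_hinge(real_logits, fake_logits_list):
    # Discriminator side of Eq. (adv D): hinge on real and all fakes.
    loss = F.relu(1.0 - real_logits).mean()
    for fake in fake_logits_list:   # reconstructed / translated / prior
        loss = loss + F.relu(1.0 + fake).mean()
    return loss

def eg_hinge(fake_logits_list):
    # E,G side of Eq. (adv EG), written with the same hinge form.
    return sum(F.relu(1.0 - fake).mean() for fake in fake_logits_list)

real = torch.randn(4)
fakes = [torch.randn(4) for _ in range(3)]
print(d_hinge(real, fakes).item(), eg_hinge(fakes).item())
\end{verbatim}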
The total loss for $E$, $G$, $D$ and DAC can be written as following.
\begin{equation}
\label{eq:E all}
\begin{aligned}
L_{E, G} = L_{KL} + L_{E, G}^{adv} + \alpha_1L_{E, G}^{style} + \alpha_2L_{E, G}^{content} + \alpha_3L_{E, G}^{pixel} + L_{E}^{cls} + L^{rec_{z}}_{G}
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:D all}
\begin{aligned}
L_{D} = L_{D}^{adv},\quad
L_{DAC} = L_{DAC}^{cls}
\end{aligned}
\end{equation}
We set the loss weight $\alpha_1= 0.001$, $\alpha_2= 10$, $\alpha_3= 100$ for all experiments.
\section{Experiments}\label{sec:exp}
\subsection{Dataset and implementation details}
\textbf{Dataset.} We validate the proposed method on the 3D chair \cite{aubry2014seeing} and the MultiPIE face \cite{gross2010multi} datasets. The 3D chair dataset contains $86,304$ images with spans of $360^\circ$ in azimuth and $30^\circ$ in pitch, covering a total of 62 angles. There are 1,392 different types of chairs. MultiPIE contains about 130,000 images, with a total span of $180^\circ$ at a spacing of $15^\circ$ in the azimuth dimension; a total of 13 angles are used for training and testing. Meanwhile, it also contains images of 250 identities under different lighting conditions. For both datasets, 80\% of the images are used for model training and the remaining 20\% for testing.
\begin{figure}
\centering
\includegraphics[height=8.0cm]{chair_ablation.pdf}
\caption{Ablation study on 3D chair dataset.}
\label{fig:ablation}
\end{figure}
\textbf{Implementation details.}
In $E$ and $G$, all layers adopt instance normalization, except those replaced by DFNM. The spectral norm \cite{miyato2018spectral} is applied to all layers in $D$. All learning rates are set to 0.0002. We use the ADAM \cite{kingma2014adam} and set $\beta_1$ = 0, $\beta_2$ = 0.9.
Details are given in the supplementary materials.
\subsection{Results and ablation studies on 3D chair and MultiPIE}\label{subsec:abl}
Extensive ablation studies are conducted to verify the effectiveness of each module, using 6 different settings. The view-translated images for the different settings are presented in the corresponding rows of Figure \ref{fig:ablation}, and the quantitative metrics are given in Table \ref{Table1}.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{multipie_ablation.pdf}
\caption{Ablation study on multiPIE dataset.}
\label{fig:ablation multiPIE}
\end{figure}
\textbf{Baseline.}
To verify the effectiveness of our proposed method, we use the general cVAE-GAN framework \cite{bao2017cvae} as the baseline.
To make the comparison fair, we introduce the view-translated image into it, and use all the loss functions presented above. The results are indicated as "A: baseline" in Table \ref{Table1} and Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE}.
\textbf{Validity of CDM.}
To validate the CDM, setting B is modified from A. The only difference is that we introduce the label through the CDM; the setting is thus indicated by "B: A+CDM" in Table \ref{Table1} and Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE}.
Comparing the results of A and B in Figure \ref{fig:ablation}, we find that both can translate images to the given view. But when the difference between the target and input views is large, it is difficult for A to maintain the attributes and local details of the source image, while the CDM in B
has the advantage of
maintaining the representative details.
In both visual fidelity and similarity, B improves considerably over A.
\textbf{Validity of DFNM.}
We validate the DFNM in setting C, built on B. The only difference between B and C is that we apply the DFNM in C, whereas in B the deformed features are directly given to the later layers of the main branch. This setting is written as "C: B+DFNM" in Table \ref{Table1} and Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE}.
As shown in Figure \ref{fig:ablation}, for some of the complex chair types, the synthesized images keep the chair style, indicating that the DFNM helps capture the detailed features of the source image. The quantitative results in Table \ref{Table1} indicate that the DFNM refines the results compared with setting B.
\textbf{Validity of DAC. } To demonstrate the effectiveness of the DAC loss, we experiment with setting D, built on C. In setting D, the DAC is employed to provide the loss for the encoder via Eq.\ref{eq:advE}.
Introducing the DAC enables $G$ to obtain more view-irrelevant information. In Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE}, we can clearly see that although setting C basically maintains the details, the DAC in setting D gives a clearer representation. The results in Table \ref{Table1} give further proof: all metrics improve on 3D chair, while the L1 error and FID show only a negligible degradation on MultiPIE.
\textbf{Necessity of separating MLPs for the $x$ and $y$ directions.}
We are also interested in the way the CFC is implemented in the CDM. There are at least two options for the filters $W$ from the MLPs. One possible way is to employ the same $W$ to generate both $dx$ and $dy$, as shown in Figure \ref{fig:fig2}(c). The other way is illustrated in the conditional flow computation sub-module in Figure \ref{fig:fig2}(b). The results of the first option are denoted "E: D-XYS" in Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE} and Table \ref{Table1}.
We can see that
the resulting images are defective. The decline in the quantitative metrics further illustrates the necessity of our design in the CDM.
\textbf{Necessity of separating the MLPs in $E$ and $G$.} $E$ and $G$ both use the CDM to warp the features. But considering the different purposes of $E$ and $G$, the input conditional filters are different, coming from $\Psi_{EX}$, $\Psi_{EY}$, and $\Psi_{GX}$, $\Psi_{GY}$, as shown in Figure \ref{fig:fig1}.
We wonder whether separating the MLPs in $E$ and $G$ is necessary, hence we implement a network in which $\Psi_X$ and $\Psi_Y$ are shared between $E$ and $G$. The results are presented as "F: D-EDS",
and they are worse than D, as shown in Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE} and Table \ref{Table1}. This shows the necessity of separating the MLPs.
\begin{table}[ht]
\begin{center}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{ l l l c c c c c c c}
\toprule
& method && \multicolumn{3}{c}{MultiPIE}&&\multicolumn{3}{c }{3D chair} \\
\cline{4-6}\cline{8-10}
&&&L1 &SSIM &FID& &L1 &SSIM &FID \\
\midrule
A:& Baseline & & $31.37$ & $0.49$ &$44.84$& & $8.39$ & $0.86$ &$104.78$ \\
B:& CDM & & $23.43$ & $0.55$ &$26.79$& & $7.88$ & $0.87$ &$88.23$ \\
C:& B + DFNM & &\boldmath{$21.53$} & $0.56$ &\boldmath{$23.59$}& &$6.68$ & $0.88$ &$93.11$ \\
D:& C + DAC& & $21.90$ & \boldmath{$0.57$} &$23.95$& & \boldmath{$6.37$} &\boldmath{ $0.89$} &\boldmath{$86.34$} \\
\midrule
E:& D - XYS & & $24.48$ & $0.54$ &$31.02$& & $7.18$ & $0.88$ &$90.31$ \\
F:& D - EDS & & $23.59$ & $0.54$ &$28.40$& & $6.94$ & $0.88$ &$89.56$ \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Quantitative ablation study on the MultiPIE and the 3D chair dataset. The pixel-wise mean L1 error and the structural similarity index measure (SSIM) \cite{wang2004image} are computed between the view-translated images and the ground truths. Besides, the FID is also reported.}
\label{Table1}
\end{table}
\subsection{Results and analysis on MultiPIE. }\label{subsec:rel}
\textbf{View-translation among discrete angles. }
Qualitative comparisons are performed between our proposed method and existing works such as cVAE-GAN \cite{bao2017cvae}, VI-GAN \cite{xu2019view} and CR-GAN \cite{tian2018cr}. The results are listed in Figure \ref{fig:comp}. Note that in this study, no paired data are used in any experiment during training.
The quantitative metrics for each method are shown in Table \ref{Table2}. After removing the constraint from paired data, CR-GAN can hardly realize the view translation. The image quality of VI-GAN deteriorates significantly under large-angle translation. Although cVAE-GAN still works, the converted images cannot keep the view-irrelevant details from the source.
\begin{table}[]
\begin{center}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{ l l l c c c c c c c}
\toprule
& method && \multicolumn{3}{c}{MultiPIE}&&\multicolumn{3}{c }{3D chair} \\
\cline{4-6}\cline{8-10}
&&&L1 &SSIM &FID& &L1 &SSIM &FID \\
\midrule
& CR-GAN\cite{tian2018cr}& & $39.80$ & $0.397$ &$48.87$& &$13.45$ & $0.696$ &$111.34$ \\
&VI-GAN\cite{xu2019view}& & $38.18$ & $0.464$ &$47.02$& & $10.54$ & $0.802$ &$105.78$ \\
&cVAE-GAN\cite{bao2017cvae}& & $31.37$ & $0.493$ &$44.84$& & $8.39$ & $0.859$ &$104.78$ \\
& Ours& & \boldmath{ $21.90$ } & \boldmath{$0.571$} &\boldmath{$23.95$}& & \boldmath{$6.37$} &\boldmath{ $0.885$} &\boldmath{$86.34$} \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Quantitative metrics comparisons. Results from CR-GAN, VI-GAN and cVAE-GAN are provided on MultiPIE and the 3D chair datasets, respectively.}
\vspace{-1cm}
\label{Table2}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[height=7.5cm]{multipie_com.pdf}
\caption{Comparison on MultiPIE. For each image, the top row is the ground truth while the second row is generated by our method. The third, fourth and fifth rows are the outputs of cVAE-GAN \cite{bao2017cvae}, VI-GAN \cite{xu2019view} and CR-GAN \cite{tian2018cr}, respectively.}
\label{fig:comp}
\vspace{-0.5cm}
\end{figure}
\textbf{Continuous view synthesis by interpolation.
}
Synthesizing images at continuously varying angles is important in real applications. In our implementation, this can be achieved by interpolating between two adjacent labels. Meanwhile, we realize that the filters $W$, computed from the discrete view labels through the MLPs $\Psi$, can help synthesize images at unseen angles. Therefore, we can also directly interpolate on $W$.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{interpolation.pdf}
\caption{Interpolating $W$ to synthesis unseen view images.}
\label{fig:interpolation}
\vspace{-0.5cm}
\end{figure}
The minimum angle interval in MultiPIE is $15^\circ$, and we choose to interpolate at every $7.5^\circ$. As shown in Figure \ref{fig:interpolation}, we visualize all the images obtained by interpolating $W$ from $0^\circ$ to $90^\circ$ and find that the face undergoes a smooth transformation.
For comparison, zooming-in results by interpolating on both $W$ and $Y$ are given in Figure \ref{fig:interpolation sub}.
Note that all these images are the outputs from our model with the source view at $0^\circ$. The image marked with the red box is obtained by interpolating $W$, while the green box shows the result of interpolating $Y$. The results show that interpolation on $W$ gives more accurate images. This also demonstrates that we have learned a good representation $W$ for the angle, since it directly relates to the optical flow on the features.
The above results are verified by the quantitative FID metric: by interpolating on $W$, the FID reaches $30.70$, while it is $32.04$ if the interpolation is performed on $Y$.
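The two interpolation schemes differ only in where the mixing happens; the following sketch, with an untrained stand-in for the MLP $\Psi$, illustrates interpolating $W$ rather than $Y$ to target an unseen $7.5^\circ$ view.
\begin{verbatim}
import torch
import torch.nn as nn

n_views = 13
# Stand-in for the trained (nonlinear) MLP Psi mapping Y to W.
psi = nn.Sequential(nn.Linear(n_views, 64), nn.ReLU(),
                    nn.Linear(64, 64))

def onehot(i):
    y = torch.zeros(1, n_views)
    y[0, i] = 1.0
    return y

alpha = 0.5   # halfway between the 0- and 15-degree labels
w_mid = (1 - alpha) * psi(onehot(0)) + alpha * psi(onehot(1))  # on W
y_mid = (1 - alpha) * onehot(0) + alpha * onehot(1)            # on Y
w_from_y = psi(y_mid)
\end{verbatim}
Since $\Psi$ is nonlinear, \texttt{w\_mid} and \texttt{w\_from\_y} generally differ, which is why the two schemes produce different images.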
\begin{figure}
\vspace{-1cm}
\centering
\includegraphics[width=0.8\textwidth]{interpolation_sub.pdf}
\caption{Comparisons on different interpolation schemes for synthesizing an unseen view image on MultiPIE.}
\label{fig:interpolation sub}
\vspace{-1cm}
\end{figure}
\section{Conclusions}\label{sec:con}
This paper proposes a conditional deformable VAE for novel view synthesis based on unpaired training data. We design the CDM and DFNM, which are utilized in both the encoder and the decoder. The CDM employs the latent codes mapped from the conditional view label as filters to convolve the feature, so that a set of optical flows can be obtained to deform the features. The outputs of the CDM are not directly given to the later layers; instead, they take effect through the DFNM, which performs a conditional normalization according to its input. The experiments on 3D chair and MultiPIE show the effectiveness of our method, particularly for unpaired training.
\section{Introduction}
Based on only a few sample images of a certain object with different poses, humans have the strong ability to infer and depict 2D images of the same object in arbitrary poses \cite{shepard1971mental}. This paper focuses on a similar task, known as the novel view synthesis, which aims to make computer render a novel target view image of an object given its current source view input. Obviously, this task requires the computer to understand the relationship between the 3D object and its pose. It has many potential applications in computer vision and graphic such as action recognition \cite{wang2014cross}, 3D object recognition \cite{savarese2008view}, modeling and editing \cite{massa2016deep} \emph{etc.}.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{eccv2020-deform/main0.pdf}
\caption{
We use unpaired data to realize view synthesis. On the left, given the
first source view image, the chair rotates with a span of 360$^\circ$. On the top right, faces
are synthesized into existing predefined views in the training data. On the bottom
right, we are able to interpolate the face into views unseen in the dataset.
}
\label{fig:main0}
\end{figure}
Traditional approaches \cite{avidan1997novel,kholgade20143d} for this task are mainly based on 3D projection geometry. They first construct a 3D shape model of the object from cues in the image. Then the model is projected onto the 2D image plane of the target view. If the 3D model could be built perfectly, the object could be rendered precisely in arbitrary poses. However, building a 3D object model from a single 2D image is an ill-posed problem, so a large number of close-viewpoint images is needed to capture the full object structure. Since the structures of various objects are quite different, a 3D geometry model for a particular object may not generalize to others. Moreover, rendering a high quality image depends not only on the object model, but also on other conditions such as the lighting and the background, which need to be modeled independently.
Learning based approaches \cite{rematas2014image,zhou2016view} begin to show their advantages with the help of deep convolutional neural networks (CNNs). These methods directly learn a mapping network from the source view to the target without building a 3D model or knowing the camera pose. The mapping network is modeled by a huge number of parameters determined in a data-driven manner. Hence it is flexible enough to accommodate not just the geometric projection function, but also the background and lighting conditions.
Recently, employing image generation techniques like the generative adversarial network (GAN) has drawn researchers' attention.
\emph{E.g.}, novel view synthesis can be modeled by a conditional GAN (cGAN), just like image-to-image translation \cite{isola2017image}.
Disadvantages of such methods lie in two aspects. First, the model does not consider prior knowledge about the projection geometry, even though previous works \cite{tran2017disentangled} already achieve promising results given both the pose and identity labels as conditions. The works in \cite{nguyen2019hologan,sun2018multi,xu2019view} improve on this by either designing a differentiable 3D-to-2D projection unit \cite{nguyen2019hologan}, predicting the warping flow between two different views \cite{sun2018multi}, or using a specific pose matrix rather than a one-hot vector as the input condition \cite{xu2019view}.
Second, such methods depend on a large amount of training data under different poses. Particularly, training a view translation model often requires paired data, with one image used as the source view and the other as the target. The paired data essentially provide an important constraining loss function for minimization. Nonetheless, ground truth data from the target view are not easy to obtain in real applications. Lately, with the progress of unpaired synthesis techniques \cite{zhu2017unpaired,bao2017cvae}, building a translation model from unpaired data has become possible, which greatly relaxes the data requirements of novel view synthesis.
This paper proposes a novel view synthesis algorithm using conditional deformable flow in the cVAE-GAN framework. Our method is mainly intended for training with unpaired data, although it achieves better results if the target view image can be further exploited in the loss functions. The key idea is to perform the view translation by deforming the latent feature maps with optical flows, specified by the image feature and the source (or target) view conditions together. Moreover, we find that the cVAE is useful for mapping the input image into a view-irrelevant posterior, which greatly increases the performance on unpaired data. To further improve the synthesis results, we incorporate adversarial training in the pixel and latent feature domains, and a reconstruction loss on the view-irrelevant code.
Specifically, we build the generator with a pair of connected encoder and decoder.
We apply the source and target views in them via our proposed conditional deformable module (CDM), in which the one-hot view vector is first mapped into two latent codes, which are then used as two filters to convolve the feature maps and obtain the flow fields, i.e., two displacement maps in the $x$ and $y$ directions. Note that instead of one flow, we actually obtain $3\times3$ flows for each location, as in \cite{dai2017deformable}. To achieve this, the feature maps are divided into $9$ channel groups, and the two filters convolve each group to output a pair of displacement maps. Each group of $3\times3$ flows then deforms the corresponding location within its $3\times3$ neighbourhood, naturally followed by an ordinary conv layer to refine the feature maps after the deformation. Rather than directly feeding the deformed features into the later layers, we also design a deformed feature based normalization module (DFNM), which learns the scale and offset given the deformed feature as its input. With the help of the CDM and DFNM, the encoder maps the source into a posterior, while the decoder transforms the code, sampled from either the posterior or the prior, back into a target view image. Besides the reconstructed and prior-sampled images in a traditional cVAE-GAN, our model also synthesizes a view-translated image to guide the generator for the view synthesis task.
The contributions of this paper lie in the following aspects. First, we build a model in the cVAE-GAN framework for novel view synthesis based on unpaired data. With the traditional and the extra constraining losses, the model maps the input into a code reflecting factors other than the view of the source image. The target view then complements the code in the decoder. Second, we propose two modules, named CDM and DFNM, for view translation. They fit into our model and improve the synthesis results. Third, extensive experiments are performed on two datasets to validate the effectiveness of the proposed method.
\section{Related Works}
\subsubsection{Image generation by VAE and GAN.} GAN \cite{goodfellow2014generative} and the Variational Auto-Encoder (VAE) \cite{kingma2013auto} are two powerful tools for generating high dimensional structured data. Both of them map a random code drawn from the prior into the image domain. GAN introduces a discriminator $D$ to evaluate the results from the generator $G$. $D$ and $G$ are trained in an adversarial manner, and finally $G$ is able to synthesize high quality images. However, GAN training is unstable, and mode collapse often happens. Therefore, extra tricks are often added to limit the ability of $D$ \cite{gulrajani2017improved,heusel2017gans}. VAE has a pair of encoder and decoder. In VAE, the input image is first mapped into the latent probabilistic space by the encoder. The decoder takes a random code drawn from the posterior to reconstruct the input image. VAE can be easily trained by the reconstruction loss together with the KL loss as its regularization, but it tends to give blurry images. So it usually works with a discriminator to form a GAN \cite{larsen2015autoencoding}. Originally, both GAN and VAE perform unconditional generation. To better control the generated results, cGAN \cite{mirza2014conditional,isola2017image,miyato2018cgans} and cVAE \cite{sohn2015learning,bao2017cvae} were proposed. In these works, the conditional label is given to the network as an input, so it controls the generated results to fulfill the required condition. $D$ in cGAN evaluates not only the image quality, but also the condition conformity.
GAN and VAE have become popular tools in novel view synthesis. Particularly, the latent code can be disentangled into different dimensions in an unsupervised way \cite{higgins2017beta,nguyen2019hologan}, with some of them naturally controlling the pose, which shows their great potential for view synthesis.
\subsubsection{Novel view synthesis.}
Novel view synthesis is a classical topic in both computer vision and graphics. Traditional approaches are built on 3D projection geometry \cite{avidan1997novel,savarese2008view,kholgade20143d,zhang2015meshstereo,rematas2016novel}. These approaches estimate a 3D representation of the object, including the depth and camera pose \cite{avidan1997novel}, 3D meshes \cite{zhang2015meshstereo} and 3D model parameters \cite{savarese2008view,kholgade20143d,rematas2016novel}. Learning based methods become increasingly popular with the help of CNNs. Since all types of 3D representations can now be estimated by CNNs, they are the main building blocks of view synthesis algorithms. Dosovitskiy \emph{et al.} \cite{dosovitskiy2015learning} learn a CNN which takes a low dimensional code, including the shape and camera pose, as the input, and maps it into a high dimensional image. Zhou \emph{et al.} \cite{zhou2016view} employ a CNN to predict the appearance flow to warp source view pixels directly. However, without adversarial training, these works tend to give low quality images.
Since GAN and VAE are able to generate high quality images, GAN-based methods have become dominant recently \cite{park2017transformation,tran2017disentangled,sun2018multi,tian2018cr,xu2019view}. Park \emph{et al.} \cite{park2017transformation} predict the flow and the occlusion map to warp pixels first, and then the deformed image is given to the following network for refinement. The work in \cite{sun2018multi} fully exploits a sequence of source images by giving them to an RNN-based network, which predicts a series of warping flows from the sources to the current target view. In DR-GAN \cite{tran2017disentangled}, a connected encoder-decoder based generator is proposed. The encoder transforms the image into a latent code. Together with the target view condition, the code is used by the decoder to synthesize the image. The discriminator in DR-GAN takes advantage of ID labels to ensure that the view translation does not change the source ID. CR-GAN \cite{tian2018cr} extends the encoder-decoder based structure by adding an extra path beginning from the decoder, which gives an extra reconstruction constraint in the image domain. VI-GAN \cite{xu2019view} employs the estimated camera pose matrix as the input condition for both source and target views, which replaces the one-hot condition vector. It also feeds the view-translated image back into the encoder, and requires its latent code to be close to the code from the source view, hence building a view-independent space. Note that most of the above works \cite{park2017transformation,sun2018multi,tian2018cr,xu2019view} require paired data to form the loss function. Although DR-GAN does not have this constraint, it still requires ID labels for training the discriminator. Our work is entirely based on unpaired data and does not need any ID label during training.
\section{Method}
\subsection{Overview framework}
This paper regards novel view synthesis as a condition translation task in cVAE-GAN. To achieve view translation on unpaired data, we propose a conditional deformable module (CDM) and a deformed feature based normalization module (DFNM) in our designed network. To enhance the disentanglement between the view and its irrelevant factors, a disentanglement adversarial classifier (DAC) is also incorporated. As shown in Figure \ref{fig:fig1}, our network consists of three major components, an encoder $E$, a decoder $G$ and a discriminator $D$. $\Psi_{EX}$, $\Psi_{EY}$ and $\Psi_{GX}$, $\Psi_{GY}$ are four different MLPs in $E$ and $G$, respectively. These MLPs map the conditional label into conv filters, which are responsible for generating the optical flows. Given a source input image $X_a$ and its view condition label $Y_a$, the algorithm synthesizes a novel view image with its view changed to $Y_b$. Note that we do not use the ground truth $X_b$ to constrain the model during training.
In Figure \ref{fig:fig1}, $E$ maps $X$ (either $X_a$ or $X_b$) into a posterior $E(Z|X,Y)=N(\mu, \Sigma)$ in Gaussian form, from which a random code $Z\sim E(Z|X,Y)$ is sampled. With $Z$ as its input, $G$ renders the fake images, and they are given to $D$ to evaluate the realness and view conformity.
cVAE constrains $E(Z|X,Y)$ for all $X$ with the common view-irrelevant prior $N(0, I)$ through the KL divergence, so $E(Z|X,Y)$ and the code $Z$ tend to become independent of the input view. Hence $Z$ is expected to extract factors, other than the view condition $Y$, from $X$. In cVAE, $E$ removes $Y_a$ from the source $X_a$, while $G$ adds $Y_b$ into the synthesized image. To fit the task of novel view synthesis, $G$ generates not only the reconstructed and prior-sampled images, but also the view-translated image. Note that, similar to cVAE, our model employs $Y_a$ and $Y_b$ as the input for $E$ and $G$. Instead of concatenating them at the beginning, we propose the CDM and DFNM, which make the whole network suitable for view translation.
Moreover, we follow the idea of BicycleGAN \cite{zhu2017toward} to reconstruct $Z$ from the prior-sampled image, which ensures that $G$ takes effective information from the code $Z$.
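For concreteness, the sampling step $Z\sim E(Z|X,Y)$ can be implemented with the usual reparameterization trick. The following is a minimal PyTorch sketch under the assumption of a diagonal Gaussian posterior; the variable names are ours, not the paper's.

\begin{verbatim}
# Minimal sketch (PyTorch): reparameterized sampling Z ~ E(Z|X, Y).
# The encoder is assumed to output the mean `mu` and log-variance
# `logvar` of a diagonal Gaussian posterior.
import torch

def sample_posterior(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # Z = mu + sigma * eps keeps the sampling differentiable w.r.t. E.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps
\end{verbatim}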
\begin{figure}
\centering
\includegraphics[height=7.0cm]{eccv2020-deform/main1.pdf}
\caption{Overview framework of the proposed network structure. (a) shows that the source input $X_a$ with its view condition $Y_a$ is translated into $\bar{X}_b$ in the target view $Y_b$. $\bar{X}_a$ is the reconstructed image with the same $Y_a$ given at both $E$ and $G$. (b) demonstrates that the code $Z\sim N(0,I)$ is synthesized into an image, which is given back to $E$ to reconstruct the code.}
\label{fig:fig1}
\end{figure}
\subsection{Conditional Deformable Module (CDM)}
We now give the details of the proposed CDM, applied in both $E$ and $G$. One observation is that changing from the source view $Y_a$ to the target $Y_b$ can be accomplished by deforming $X_a$ with optical flows, particularly when $Y_a$ and $Y_b$ are close. Therefore, the CDM learns to generate 2D optical flows for the feature maps.
Here, we argue that the flows are mainly determined by $Y_a$ and $Y_b$, but they are also influenced by the content of $X_a$. Therefore, they should be computed from both. Considering the full combinations of $Y_a$, $Y_b$, and $X_a$, we choose to learn two sets of flows, in $E$ and $G$ respectively. Specifically, two sets of separate MLPs, $\Psi_{EX}$, $\Psi_{EY}$ and $\Psi_{GX}$, $\Psi_{GY}$, first map $Y_a$ and $Y_b$ to the latent codes $W$ ($W_{EX}$, $W_{EY}$ in $E$ and $W_{GX}$, $W_{GY}$ in $G$). Here, we separate the filters for the $x$ and $y$ directions, and for $E$ and $G$; detailed discussions are given in the experiments. Then, the codes $W$ are used as filters to convolve the feature maps, resulting in several pairs of feature maps indicating the displacements $dx$ and $dy$ in the $x$ and $y$ directions.
Figure \ref{fig:fig2} shows the details of the CDM. It is mainly composed of the conditional flow computation (CFC) and the deformable conv module, as shown in Figure \ref{fig:fig2} (a). Suppose the input of the $i$th layer is $F^i\in \mathbb{R}^{H\times W\times C}$; the CDM outputs the deformed $F_d^i$ of the same size. $W_X$ and $W_Y$ are also inputs of the CDM; they are two latent vectors, computed from the view condition label $Y$ by two different MLPs. Particularly, $F^i$ is given to a conv layer with $C'$ filters to produce ${F'}\in \mathbb{R}^{H\times W\times C'}$. ${F'}$ is split into different groups along the channel dimension, then given to the CFC. Figure \ref{fig:fig2} (b) and (c) show two options for the CFC. In practice, we choose the design in Figure \ref{fig:fig2} (b), in which the Kernel Given convolution ($KGconv$) layer uses $W_X, W_Y\in\mathbb{R}^{1 \times 1\times \frac{C'}{9}}$ as a pair of filters to convolve each interval of $\frac{C'}{9}$ channels, leading to a pair of ${dx, dy}\in\mathbb{R}^{H\times W\times 9}$. Note that $dx, dy$ are composed of 9 groups of flows.
They are used to warp a $3\times3$ neighbourhood at the corresponding location in ${F}$, in the same way as deformable conv \cite{dai2017deformable}, which finally gives the deformed feature ${F}_d^i$.
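To make the KGconv step concrete, the following PyTorch sketch reproduces the grouped $1\times1$ convolution with label-derived filters and the subsequent deformable warping. The tensor names and the use of torchvision's deformable convolution (with its interleaved $(dy, dx)$ offset layout) are our assumptions, not details fixed by the paper.

\begin{verbatim}
import torch
import torch.nn.functional as F
from torchvision.ops import deform_conv2d

def cfc_flows(feat, w_x, w_y):
    # feat: (N, Cp, H, W) with Cp divisible by 9 (the tensor F');
    # w_x, w_y: (Cp // 9,) latent codes produced by the MLPs.
    g = feat.shape[1] // 9
    # KGconv: the same 1x1 filter convolves each of the 9 groups.
    dx = F.conv2d(feat, w_x.view(1, g, 1, 1).repeat(9, 1, 1, 1), groups=9)
    dy = F.conv2d(feat, w_y.view(1, g, 1, 1).repeat(9, 1, 1, 1), groups=9)
    return dx, dy        # each (N, 9, H, W): one flow per 3x3 kernel tap

def cdm_deform(feat, dx, dy, conv_weight):
    # conv_weight: (C_out, C_in, 3, 3), the refining conv after the warp.
    # Interleave (dy, dx) pairs into the 18 offset channels expected by
    # a 3x3 deformable convolution.
    offsets = torch.stack((dy, dx), dim=2).flatten(1, 2)  # (N, 18, H, W)
    return deform_conv2d(feat, offsets, conv_weight, padding=1)
\end{verbatim}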
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{eccv2020-deform/sub2.pdf}
\caption{The details of the CDM. (a) Given $F\in\mathbb{R}^{H\times W\times C}$ before the deformation, the output $F_d$ is the deformed feature with the same size as $F$. (b) The CFC has two separate input latent codes $W_X$ and $W_Y$, which are used as filters to convolve a number (usually 9) of groups in $F'$.
(c) An alternative design for the CFC. Only one filter is provided, and it convolves 18 groups. }
\label{fig:fig2}
\end{figure}
\subsection{Deformed Feature based Normalization Module (DFNM)}
The deformed feature maps ${F}_d^i$ need to be further processed by the $(i+1)$th layers in $E$ and $G$. One intuitive way is to directly use ${F}_d^i$ as the input. However, recent advances in GAN and cGAN show the advantage of conditional normalization layers like AdaIN \cite{huang2017arbitrary} and SPADE \cite{park2019semantic}. Different from BN or IN, such layers do not learn the scale $\gamma$ and offset $\beta$ as trainable model parameters. Instead, these are computed from the features of a side branch. In other words, the conditional adaptive normalization module learns to scale and offset based on the conditional input.
Inspired by SPADE, we propose a new conditional normalization scheme named DFNM, which uses ${F}_d^i$ as the conditional input from the side branch. DFNM performs the de-normalization, which means determining the appropriate values of $\beta$ and $\gamma$. To be specific, it employs ${F}_d^i$ as its input, and specifies $\beta$ and $\gamma$ by two conv layers. Note that DFNM has distinct internal parameters for different layers, hence it progressively adjusts the features in the main branch based on its current input. In practice, we can make different choices for the dimensions of $\beta$ and $\gamma$. Here we simply follow the setting in SPADE, which outputs a unique $\gamma^i_{y,x,c}$ and $\beta^i_{y,x,c}$ at each 3D site, where the subscripts are the indexes along the height, width and channel dimensions, respectively. Before the de-normalization, the features in the main branch are first normalized by subtracting $\mu$ and dividing by $\sigma$. Here we follow BN and compute the per-channel statistics $\mu^i_c$ and $\sigma^i_c$ from $h^i_{n,y,x,c}$.
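A minimal sketch of DFNM as a SPADE-style conditional normalization layer follows. The hidden width, kernel sizes and the $(1+\gamma)$ parameterization are our assumptions carried over from SPADE, not values specified in the paper.

\begin{verbatim}
import torch
import torch.nn as nn

class DFNM(nn.Module):
    def __init__(self, channels: int, side_channels: int, hidden: int = 128):
        super().__init__()
        # Parameter-free normalization of the main branch (BN statistics).
        self.norm = nn.BatchNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(side_channels, hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, h, f_d):
        # h: main-branch feature; f_d: deformed feature from the CDM.
        x = self.shared(f_d)
        return self.norm(h) * (1 + self.to_gamma(x)) + self.to_beta(x)
\end{verbatim}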
\subsection{Overall Optimization Objective}
The loss functions used in this paper consist of three parts, namely disentangling losses, reconstruction losses and adversarial losses.
\subsubsection{Disentangling loss}
The disentangling loss constrains the encoder $E$, and prevents it from extracting source view-relevant features, so that the target view $Y_b$ can easily be added into the translated image. The KL constraint penalizes the posterior distribution $E (Z|X_a, Y_a)$ for being far from the standard Gaussian $N(0,I)$, which to some extent prevents the random code $Z\sim E(Z|X_a, Y_a)$ from carrying information related to $Y_a$.
The KL loss $L_{KL}$, shown in Eq. \ref{eq:kl}, can easily be computed in closed form since both the prior and the posterior are assumed to be Gaussian.
\begin{equation}
\label{eq:kl}
L_{KL} = D_\text{KL}[E(Z|X_a, Y_a)||{N}({0},{ I})]
\end{equation}
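For reference, with a diagonal posterior this is the standard VAE expression; a minimal PyTorch sketch (assuming the encoder outputs a mean and log-variance) is:

\begin{verbatim}
# Closed-form KL divergence between N(mu, diag(exp(logvar)))
# and the prior N(0, I), summed over latent dimensions and
# averaged over the batch.
import torch

def kl_loss(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return kl.mean()
\end{verbatim}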
However, this loss also constrains the view-irrelevant factors, so other information in $Z$ may be lost because of its penalty. To cope with this issue, we propose the DAC, which specifically aims to reduce the view-relevant factors in $Z$. With the help of the DAC, the KL loss weight can be reduced so that the view-irrelevant factors remain in $Z$. In practice, we implement the DAC as two FC-layers whose purpose is to classify the view based on $Z$. The DAC is trained in an adversarial manner, hence it has two training stages, the $D$ and $G$ stages.
In the $D$ stage, the DAC is provided with the output $Z$ from $E$ together with the correct source view label, while in the $G$ stage, the DAC is fixed and $E$ is trained with the adversarial loss from the DAC. In this stage, we give the DAC a uniform label with the same degree of confidence on each view.
The cross-entropy losses are defined in Eq.~\ref{eq:advE} and Eq.~\ref{eq:advDAC}, respectively.
\begin{equation}
\label{eq:advE}
L_{E}^{cls} = - \mathbb{E}_{Z\sim E(Z|X_a,Y_a)}\sum_c \frac{1}{C} \log DAC(c | Z)
\end{equation}
\begin{equation}
\label{eq:advDAC}
L_{DAC}^{cls} = - \mathbb{E}_{Z\sim E(Z|X_a, Y_a)}\sum_c \mathbb{I}(c=Y_a) \log DAC(c | Z)
\end{equation}
where $\mathbb{I}(c=Y_a)$ is the indicator function, and $DAC(c | Z)$ is the softmax probability output by the disentanglement adversarial classifier.
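A minimal sketch of the two stages in PyTorch, assuming the classifier returns raw logits over the $C$ views (names are illustrative):

\begin{verbatim}
import torch
import torch.nn.functional as F

def dac_loss_d(dac, z, view_labels):
    # D stage (Eq. eq:advDAC): cross-entropy with the true source view;
    # detach z so only the classifier is updated.
    return F.cross_entropy(dac(z.detach()), view_labels)

def dac_loss_e(dac, z):
    # G stage (Eq. eq:advE): cross-entropy against the uniform
    # distribution 1/C over the views, pushing E to hide the view.
    log_probs = F.log_softmax(dac(z), dim=1)
    return -log_probs.mean(dim=1).mean()
\end{verbatim}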
\subsubsection{Reconstruction losses}
Reconstruction losses are important regularizations which also ensure that the view-irrelevant factors remain unchanged during view translation.
Without extra supervision, cVAE encourages the synthesized image $\bar{X}_a$ to be close to the input when $E$ and $G$ are provided with the same $Y_a$. In addition, constraints on the middle-layer features of a pre-trained classification network are also employed in our work.
As shown in Eq. \ref{eq:L1 pixel} and Eq. \ref{eq:L1 gram}, $\phi^i$ denotes the $i$th layer of a pre-trained VGG network, and $Gram$ means computing the Gram matrix, a typical second-order feature.
\begin{equation}
\label{eq:L1 pixel}
\begin{aligned}
L_{E, G}^{pixel} =& ||X_a -\bar{X_a}||_1, \quad
L_{E, G}^{content} =& ||\phi^i({X_a}) -\phi^i({\bar{X_a}})||_1
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:L1 gram}
\begin{aligned}
L_{E, G}^{style} =& ||Gram(\phi^i({X_a})) -Gram(\phi^i({\bar{X_a}}))||_1 \\
\end{aligned}
\end{equation}
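A sketch of these feature-level terms follows, with the Gram matrix computed per batch element; the choice of VGG layer and the normalization constant in the Gram matrix are our assumptions (the latter is a common convention), not values fixed by the paper.

\begin{verbatim}
import torch

def gram(f: torch.Tensor) -> torch.Tensor:
    # f: (N, C, H, W) -> (N, C, C) second-order feature statistics.
    n, c, h, w = f.shape
    f = f.view(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def content_loss(phi, x, x_rec):
    # phi: a chosen layer of a pre-trained VGG (placeholder).
    return (phi(x) - phi(x_rec)).abs().mean()

def style_loss(phi, x, x_rec):
    return (gram(phi(x)) - gram(phi(x_rec))).abs().mean()
\end{verbatim}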
For the prior-sampled image $G(Z, Y_a)$ with $Z \sim {N} (0, I)$, we cannot constrain it directly in the image domain, so we extract the feature from the image $G(Z, Y_a)$ with $E$ and reconstruct $Z$, so that the information in $Z$ is kept. This reconstruction loss is expressed in Eq.~\ref{eq:L1 z}:
\begin{equation}
\label{eq:L1 z}
L^{rec_{z}}_{G} = \mathbb{E}_{Z\sim N(0, I)}||Z - E(G(Z, Y_a), Y_a)||_1
\end{equation}
\subsubsection{Adversarial loss}
In this paper, the projection discriminator \cite{miyato2018cgans} is adopted.
Given the real image $X_a$, constraints are imposed on three types of fake images: the reconstructed $G(E(X_a, Y_a), Y_a)$, the view-translated $G(E(X_a, Y_a), Y_b)$ and the prior-sampled image $G(Z, Y_a)$, as shown in Eq. \ref{eq:adv D} and Eq. \ref{eq:adv EG}.
\begin{equation}
\label{eq:adv D}
\begin{aligned}
L_{D}^{adv} =& \mathbb{E}_{{X}\sim p_{\rm{data}}}[\max(0,1-D(X,Y_a))]\\
&+\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1+D(G(Z, Y_a), Y_a))]\\
&+\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1+D(G(Z, Y_b), Y_b))]\\&+\mathbb{E}_{Z\sim N(0, I)}[\max(0,1+D(G(Z, Y_a), Y_a))]
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:adv EG}
\begin{aligned}
L_{E, G}^{adv} =&\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1-D(G(Z, Y_a), Y_a))]\\
&+\mathbb{E}_{Z\sim E(Z|X_a, Y_a)}[\max(0,1-D(G(Z, Y_b), Y_b))]\\
&+\mathbb{E}_{Z\sim N(0, I)}[\max(0,1-D(G(Z, Y_a), Y_a))]
\end{aligned}
\end{equation}
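A sketch of the hinge terms for a single fake image follows; the full objectives above sum this over the reconstructed, view-translated and prior-sampled images. Here $d$ stands for the projection discriminator, taking an image and its view label.

\begin{verbatim}
import torch
import torch.nn.functional as F

def d_hinge(d, real, y_real, fake, y_fake):
    # Discriminator side of Eq. eq:adv D for one fake image.
    loss_real = F.relu(1.0 - d(real, y_real)).mean()
    loss_fake = F.relu(1.0 + d(fake.detach(), y_fake)).mean()
    return loss_real + loss_fake

def g_hinge(d, fake, y_fake):
    # Generator side, following the form written in Eq. eq:adv EG.
    return F.relu(1.0 - d(fake, y_fake)).mean()
\end{verbatim}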
The total losses for $E$, $G$, $D$ and the DAC can be written as follows.
\begin{equation}
\label{eq:E all}
\begin{aligned}
L_{E, G} = L_{KL} + L_{E, G}^{adv} + \alpha_1L_{E, G}^{style} + \alpha_2L_{E, G}^{content} + \alpha_3L_{E, G}^{pixel} + L_{E}^{cls} + L^{rec_{z}}_{G}
\end{aligned}
\end{equation}
\begin{equation}
\label{eq:D all}
\begin{aligned}
L_{D} = L_{D}^{adv},\quad
L_{DAC} = L_{DAC}^{cls}
\end{aligned}
\end{equation}
We set the loss weights $\alpha_1= 0.001$, $\alpha_2= 10$, $\alpha_3= 100$ for all experiments.
\section{Experiments}
\subsection{Dataset and implementation details}
\textbf{Dataset.} We validate the proposed method on the face dataset MultiPIE \cite{gross2010multi} and the 3D chair \cite{aubry2014seeing} object dataset. MultiPIE contains about 130,000 images, with a total span of $180^\circ$ and a spacing of $15^\circ$ in the azimuth dimension, so a total of 13 angles are used for training and testing. The dataset also contains images of 250 identities under different lighting conditions. The 3D chair dataset contains 86,304 images with spans of $360^\circ$ in azimuth and $30^\circ$ in pitch, covering a total of 62 angles. There are 1,392 different types of chairs. For both datasets, 80\% of the images are used for model training and the remaining 20\% for testing.
\textbf{Implementation details.}
In $E$ and $G$, all layers adopt instance normalization, except those equipped with DFNM. The spectral norm \cite{miyato2018spectral} is applied to all layers in $D$. All learning rates are set to 0.0002. We use ADAM \cite{kingma2014adam} and set $\beta_1$ = 0, $\beta_2$ = 0.9.
Specific details are given in the supplementary materials.
\subsection{Results and ablation studies on 3D chair and MultiPIE}
Extensive ablation studies are conducted to verify the effectiveness of each module. We consider 6 different settings. View-translated images for the different settings are presented in the corresponding rows of Figure \ref{fig:ablation}, and the quantitative metrics are given in Table \ref{Table1}.
\begin{figure}
\centering
\includegraphics[height=8.0cm]{eccv2020-deform/chair_ablation.pdf}
\caption{Ablation study on 3D chair dataset.}
\label{fig:ablation}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{eccv2020-deform/multipie_ablation.pdf}
\caption{Ablation study on multiPIE dataset.}
\label{fig:ablation multiPIE}
\end{figure}
\textbf{Baseline.}
To verify the effectiveness of our proposed method, we use the general cVAE-GAN framework \cite{bao2017cvae} as the baseline.
To make the comparison fair, we introduce the view-translated image in it, and use all the loss functions that are presented. The result is indicated as "A: baseline" in Table \ref{Table1} and Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE}.
\textbf{Validity of CDM.}
To validate the CDM, setting B is modified from A. The only difference is that we introduce the label through the CDM; thus the setting is indicated by "B: A+CDM" in Table \ref{Table1} and Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE}.
Comparing the results of A and B in Figure \ref{fig:ablation}, we find that both can translate images to the given view. But when the difference between the target view and the input view is large, it is difficult for A to maintain the attributes and local details of the source image, while the CDM in B has the advantage of maintaining the representative details of the source images. In terms of both visual fidelity and similarity, B improves considerably over A.
\textbf{Validity of DFNM.}
We validate the DFNM in setting C based on B. The only difference between B and C is that we apply DFNM in C, while the deformed features are directly given to the later layers in the main branch in B. This setting is written as "C: B+DFNM" in Table \ref{Table1} and Figure \ref{fig:ablation}, \ref{fig:ablation multiPIE}.
As shown in Figure \ref{fig:ablation}, for some of the complex chair types, the synthesized images keep the chair style, indicating that DFNM helps to capture the detailed features of the source image. The quantitative results in Table \ref{Table1} indicate that DFNM refines the results compared with setting B.
\textbf{Validity of DAC. } To demonstrate the effectiveness of the DAC loss, we experiment with setting D, based on C, in which the DAC provides the loss for the encoder through Eq. \ref{eq:advE}.
Introducing the DAC enables $G$ to obtain more view-irrelevant information. In Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE}, we can clearly see that although setting C basically maintains the details, the DAC in setting D gives a clearer representation. The results in Table \ref{Table1} give further proof: all metrics are improved on 3D chair, while L1 error and FID show only a negligible degradation on MultiPIE.
\textbf{Necessity of separating MLPs for $x$ and $y$ directions.}
We are also interested in the way the CFC is implemented in the CDM. There are at least two options for the filters $W$ from the MLPs. One possible way is to employ the same $W$ to generate both $dx$ and $dy$, as shown in Figure \ref{fig:fig2}(c). The other way is illustrated in the conditional flow computation sub-module in Figure \ref{fig:fig2}(b). The results of the first option are denoted "E: D-XYS" in Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE} and Table \ref{Table1}.
We can see that the images are defective. The decline in the quantitative metrics further illustrates the necessity of our design in the CDM.
\textbf{Necessity of separating the MLPs in $E$ and $G$.} $E$ and $G$ both use CDM to warp the features. But considering the different purposes of $E$ and $G$, the input conditional filters are different, coming from $\Psi_{EX}$, $\Psi_{EY}$, and $\Psi_{GX}$, $\Psi_{GY}$, as is shown in Figure \ref{fig:fig1}.
We wonder whether separating the MLPs in $E$ and $G$ is necessary, hence we implement a network in which $\Psi_X$ and $\Psi_Y$ are shared between $E$ and $G$. The results, presented as "F: D-EDS" in Figures \ref{fig:ablation}, \ref{fig:ablation multiPIE} and Table \ref{Table1}, are worse than D, which demonstrates the necessity of separating the MLPs in $E$ and $G$.
\begin{table}[ht]
\begin{center}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{ l l l c c c c c c c}
\toprule
& method && \multicolumn{3}{c}{MultiPIE}&&\multicolumn{3}{c }{3D chair} \\
\cline{4-6}\cline{8-10}
&&&L1 &SSIM &FID& &L1 &SSIM &FID \\
\midrule
A:& Baseline & & $31.37$ & $0.49$ &$44.84$& & $8.39$ & $0.86$ &$104.78$ \\
B:& CDM & & $23.43$ & $0.55$ &$26.79$& & $7.88$ & $0.87$ &$88.23$ \\
C:& B + DFNM & &\boldmath{$21.53$} & $0.56$ &\boldmath{$23.59$}& &$6.68$ & $0.88$ &$93.11$ \\
D:& C + DAC& & $21.90$ & \boldmath{$0.57$} &$23.95$& & \boldmath{$6.37$} &\boldmath{ $0.89$} &\boldmath{$86.34$} \\
\midrule
E:& D - XYS & & $24.48$ & $0.54$ &$31.02$& & $7.18$ & $0.88$ &$90.31$ \\
F:& D - EDS & & $23.59$ & $0.54$ &$28.40$& & $6.94$ & $0.88$ &$89.56$ \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Quantitative ablation study on the MultiPIE and the 3D chair dataset. The pixel-wise mean L1 error and the structural similarity index measure (SSIM) \cite{wang2004image} are computed between the view-translated images and the ground truths. Besides, the FID is also reported.}
\label{Table1}
\end{table}
\subsection{Results and analysis on MultiPIE}
\textbf{View-translation among discrete angles. }
Qualitative comparisons are performed between our proposed method and existing works, namely cVAE-GAN \cite{bao2017cvae}, VI-GAN \cite{xu2019view} and CR-GAN \cite{tian2018cr}. The results are listed in Figure \ref{fig:comp}. Note that in this study, we do not use paired data in any experiment during training.
The quantitative metrics for each method are shown in Table \ref{Table2}. After removing the constraint from paired data, CR-GAN can hardly realize the view translation. The image quality of VI-GAN deteriorates significantly under large angle translations. Although cVAE-GAN can realize the view synthesis, the converted images cannot keep the view-irrelevant information from the source.
\begin{table}[]
\begin{center}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{ l l l c c c c c c c}
\toprule
& method && \multicolumn{3}{c}{MultiPIE}&&\multicolumn{3}{c }{3D chair} \\
\cline{4-6}\cline{8-10}
&&&L1 &SSIM &FID& &L1 &SSIM &FID \\
\midrule
& CR-GAN\cite{tian2018cr}& & $39.80$ & $0.397$ &$48.87$& &$13.45$ & $0.696$ &$111.34$ \\
&VI-GAN\cite{xu2019view}& & $38.18$ & $0.464$ &$47.02$& & $10.54$ & $0.802$ &$105.78$ \\
&cVAE-GAN\cite{bao2017cvae}& & $31.37$ & $0.493$ &$44.84$& & $8.39$ & $0.859$ &$104.78$ \\
& Ours& & \boldmath{ $21.90$ } & \boldmath{$0.571$} &\boldmath{$23.95$}& & \boldmath{$6.37$} &\boldmath{ $0.885$} &\boldmath{$86.34$} \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{Quantitative comparison with CR-GAN, VI-GAN and cVAE-GAN on the MultiPIE and 3D chair datasets.}
\vspace{-1cm}
\label{Table2}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[height=7.5cm]{eccv2020-deform/multipie_com.pdf}
\caption{Comparison on MultiPIE. For each image, the top row is the ground truth, while the second row is generated by our method. The third, fourth and fifth rows are the outputs of cVAE-GAN \cite{bao2017cvae}, VI-GAN \cite{xu2019view} and CR-GAN \cite{tian2018cr}, respectively.}
\label{fig:comp}
\vspace{-0.5cm}
\end{figure}
\textbf{Continuous view synthesis by interpolation.}
Synthesizing images at continuously varying views is important in real applications. In our implementation, this can be achieved by interpolating between two adjacent labels. Meanwhile, we realize that the filter $W$, computed from the discrete view labels through the MLPs $\Psi$, can help synthesize the image at an unseen angle. Therefore, we can also directly interpolate on $W$.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{eccv2020-deform/interpolation.pdf}
\caption{Interpolating $W$ to synthesize unseen view images.}
\label{fig:interpolation}
\vspace{-0.5cm}
\end{figure}
The minimum angle interval in MultiPIE is $15^\circ$, and we choose to interpolate at every $7.5^\circ$. As shown in Figure \ref{fig:interpolation}, we visualize all the images obtained by interpolating $W$ from $0^\circ$ to $90^\circ$ and find that the face undergoes a smooth transformation.
For comparison, zooming-in results by interpolating on both $W$ and $Y$ are given in Figure \ref{fig:interpolation sub}.
Note that all these images are outputs of our model with the source view at $0^\circ$. The image marked with the red box is obtained by interpolating $W$, while the green box is the result of interpolating $Y$. The results show that interpolation on $W$ gives more accurate images. This also demonstrates that we have learned a good representation $W$ for the angle, since it directly relates to the optical flow on the features.
The above results are confirmed by the quantitative FID metric: interpolating on $W$ achieves an FID of $30.70$, while it is $32.04$ if the interpolation is performed on $Y$.
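A minimal sketch of this interpolation, where mlp stands for a trained $\Psi$ network (names are illustrative):

\begin{verbatim}
import torch

def interpolate_w(mlp, y_a, y_b, t: float):
    # Linearly blend the filters computed from two adjacent one-hot
    # view labels, e.g. t = 0.5 to render the view halfway between
    # them (7.5 degrees apart on MultiPIE).
    w_a, w_b = mlp(y_a), mlp(y_b)
    return (1.0 - t) * w_a + t * w_b
\end{verbatim}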
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{eccv2020-deform/interpolation_sub.pdf}
\caption{Comparisons on different interpolation schemes for synthesizing an unseen view image on MultiPIE.}
\label{fig:interpolation sub}
\vspace{-1cm}
\end{figure}
\section{Conclusions}
This paper proposes a conditional deformable VAE for novel view synthesis based on unpaired training data. We design the CDM and DFNM, which are applied in both the encoder and decoder. The CDM employs the latent codes mapped from the conditional view label as filters to convolve the feature maps, so that a set of optical flows can be obtained to deform the features. The outputs of the CDM are not directly given to the later layers; instead, they take effect through the DFNM, which performs conditional normalization according to its input. The experiments on 3D chair and MultiPIE show the effectiveness of our method, particularly for unpaired training.
\par\vfill\par
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
Two-dimensional periodic tilings play a role in the decorative arts, prominent examples being the euclidean tilings
that cover the Alhambra palace in Granada, Spain, and
M.C.~Escher's illustrations of euclidean, hyperbolic and spherical tilings involving reptiles, birds and other shapes \citep{Schattschneider2010}.
Two-dimensional euclidean tilings are used in the construction of floors, walls and roofs. Hyperbolic tilings are used in the analysis of minimal surfaces of three-dimensional crystal structures \citep{Ramsden2009,kolbe2018isotopic}.
A two-dimensional periodic tiling $(\mathcal{T},\Gamma)$ of the euclidean plane, sphere or hyperbolic plane, consists of a set of tiles $\mathcal{T}$
and a discrete group $\Gamma$ of symmetries of $\mathcal{T}$ with compact fundamental domain, see Figure~\ref{fig:examples}.
Combinatorial tiling theory, based on the encoding of periodic tilings
as ``Delaney-Dress symbols'' \citep{Dress84,Dress87,DressHuson87}, can be used
to systematically enumerate all possible (equivariant) types of two-dimensional
tilings by their curvature and increasing number of equivalence classes of tiles \citep{Huson93a}.
\begin{figure}
\hfil
{
\begin{tabular}{ccc}
\includegraphics[height=0.3\textwidth]{figs/greens-3.png} &
\includegraphics[height=0.3\textwidth]{figs/greens-2.png} &
\includegraphics[height=0.3\textwidth]{figs/greens-1.png} \\
$\delta=18$, $\Gamma=\mbox{\tt *532}$ & $\delta=18$, $\Gamma=\mbox{\tt *632}$ & $\delta=18$, $\Gamma=\mbox{\tt *642}$\\
\end{tabular}
}
\hfil
\caption{Periodic tilings of the sphere, plane and hyperbolic plane. Each is labeled by
its Dress complexity $\delta=\delta(\mathcal{T},\Gamma)$
and the orbifold name of the symmetry group $\Gamma$.}
\label{fig:examples}
\end{figure}
In this paper, we introduce the term {\em Dress complexity} of a periodic tiling, which is simply the size of the corresponding Delaney-Dress symbol.
We discuss how to systematically enumerate all Delaney-Dress symbols up to
a given Dress complexity, in the case of two-dimensional periodic tilings.
Using this, we have enumerated all two-dimensional periodic tilings of
complexity $\leq 24$. There are $2,395,220,319$ such tilings.
We refer to this collection as a ``galaxy of periodic tilings''
in the title of this paper because, first, the number of tilings is very large (although not as large as the number of stars in a typical galaxy), and second, when viewing these tilings, the impression is that many look very similar to each other, much like stars in the sky.
Each such tiling is represented by its Delaney-Dress symbol and we provide
these in a SQLITE database.
We provide a new program called {\em Tegula} that allows the user to explore and
query the database, and to visualize the corresponding tilings in all three geometries.
Tegula and the database of periodic tilings are open source and freely available.
\section{Conway's orbifold notation}
In this section, we briefly recall results on the classification of surfaces \citep{SeifertThrefall1934,ZIPProof} and their Conway names \citep{ConwayHuson99}. Any orientable, closed, connected surface is homeomorphic to either the sphere, denoted by {\tt 1},
or a sphere with $h>0$ handles attached, denoted by
$$\underbrace{\mbox{\tt o o \dots~o}}_h.$$
Any non-orientable, closed, connected surface is homeomorphic to a sphere with $k\geq 1$ crosscaps attached, denoted
by
$$\underbrace{\mbox{\tt x x \dots~x}}_k.$$
The surface {\tt x} is a projective plane, the surface {\tt xx} is a Klein bottle
and the surface {\tt xxx} is called Dyck's surface.
Note that the classification of closed surfaces does not mention combining both handles and crosscaps. This is because, if a crosscap is present, then any given handle can be replaced by two crosscaps \citep{Dyck1888}.
A connected surface with boundary is obtained from a closed connected surface by removing $b>0$ disks from the interior
of the surface. In Conway's notation, a sphere with $b$ boundary components is written as
$$\underbrace{\mbox{\tt * * \dots *}}_b,$$
a sphere with $h>0$ handles and $b>0$ boundary components is written as
$$\underbrace{\mbox{\tt o o \dots~o}}_h\underbrace{\mbox{\tt * * \dots~*}}_b,$$
and
a sphere with $k>0$ crosscaps and $b>0$ boundary components is written as
$$\underbrace{\mbox{\tt * * \dots~*}}_b\underbrace{\mbox{\tt x x \dots~x}}_k.$$
The surface {\tt *x} is a M\"obius strip.
For the purposes of this paper, a {\em two-dimensional orbifold} \citep{Thurston80,ConwayHuson99} consists of a connected surface $S$,
either orientable or non-orientable, with boundary or without, together with a finite set of
points $P=\{p_1,\dots,p_t\}$ in $S$, where each such point $p_i$ is labeled with an integer $v_i\geq 2$ that we call its {\em order}.
Any such point is called a {\em cone}, if it is contained in the interior of $S$, or
a {\em corner}, if it is contained in the boundary of $S$.
For example, in Figure~\ref{fig:orbifold} we depict an orbifold obtained by adding three cones and four corners to the surface {\tt o**}. Note that the set of added points can be empty, so any surface, such
as {\tt o**}, is also an orbifold.
Conway's notation, which we introduced above for the naming of surfaces, also covers orbifolds and has
this form
\begin{equation}\label{eqn:orbifold}
\underbrace{\mbox{\tt o o \dots~o}}_{h\mbox{~handles}}~
\underbrace{\mbox{\it A B C \dots}}_{\mbox{cones}}~
\underbrace{
*\underbrace{\mbox{\it a b c \dots}}_{\mbox{corners}}~
*\underbrace{\mbox{\it r p q \dots}}_{\mbox{corners}}~
* \dots~}_{b~\mbox{boundary components}}
\underbrace{\mbox{\tt x x \dots~x}}_{k~\mbox{crosscaps}}.
\end{equation}
Note that the cone degrees $A, B, C, \dots$ are unordered, whereas the corner degrees associated with
any given boundary component have a cyclic ordering given by the order in which they are encountered along the
boundary component. One can flip the direction in which the corners of an individual boundary component are listed
if there are no other boundary components with corners, or if the surface is non-orientable.
\begin{figure}
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{figs/orbifold-1.png} & &
\includegraphics[width=0.3\textwidth]{figs/orbifold-2.png}\\
Surface {\tt o**} & & Orbifold {\tt o227**2424}\\
\end{tabular}
\caption{On the left we show the orientable surface {\tt o**}.
On the right we show the orbifold obtained by adding three cones and four corners.
(Torus image: Oleg Alexandrov, Wikipedia.)
}
\label{fig:orbifold}
\end{figure}
\section{Equivariant tilings}
Through this paper, let $\mathcal{X}$ be one of the three two-dimensional geometries, namely either the
sphere $\mathbb{S}^2$, the euclidean plane $\mathbb{E}^2$, or the hyperbolic plane $\mathbb{H}^2$.
We use $(\mathcal{T},\Gamma)$ to denote an {\em equivariant tiling} of $\mathcal{X}$, defined in the usual way \citep{DressHuson87}. That is, any such tiling $(\mathcal{T},\Gamma)$ consists
of a set of tiles $\mathcal{T}$ that is ``well-behaved'' (i.e.\ a locally finite tiling whose tiles have simply-connected interiors), and a group $\Gamma$ of isometries of $\mathcal{X}$ that map
the set of tiles $\mathcal{T}$ onto itself.
We emphasize that the word {\em equivariant} indicates that
the symmetry group is prescribed and thus may be only a subgroup
of the group of all automorphisms of the tiling.
Such a tiling $(\mathcal{T},\Gamma)$ is called a {\em periodic} tiling,
if its symmetry group $\Gamma$ is a discrete group with a compact fundamental domain.
Examples of such tilings are shown in Figure~\ref{fig:examples}.
Let $\Gamma$ be the symmetry group of a two-dimensional
periodic tiling.
The {\em orbifold} $\O(\Gamma)$ of such a group is ``the surface divided by the group'', that is, the orbit-manifold given by
the quotient topological space whose points are the orbits under the group \citep{ConwayHuson99}. Here, reflections are mapped onto boundary segments,
rotational centers are mapped onto cones,
dihedral centers are mapped onto corners, and glide-reflections give rise to crosscaps. The order of such a point is given by
the largest order of any rotation in the symmetry group that fixes the center.
This is illustrated in Figure~\ref{fig:group}, and in Figure~\ref{fig:examples} we
list the orbifold names for the displayed tilings.
Given the drawing of a periodic tiling, or other periodic pattern,
one can easily
determine the orbifold name of the corresponding symmetry group, as
discussed in \citep{ConwayHuson99}.
Do all orbifold names correspond to symmetry groups?
Any two-dimensional orbifold with name $\O$, of
the form shown above in Equation~\ref{eqn:orbifold}, can be obtained
as either
\begin{enumerate}
\item $\mathbb{S}^2~/$ an orthogonal group,
\item $\mathbb{E}^2~/$ a crystallographic group, or
\item $\mathbb{H}^2~/$ a ``non-euclidean crystallographic group'',
\end{enumerate}
except for the ``bad orbifolds'' with names
{\tt p}, {\tt pq}, {\tt *p} and {\tt *pq} with
$p,q \geq 2$ and $p\neq q$, see \citep{ConwayHuson99}.
\begin{figure}
\centering
\begin{tabular}{ccc}
\begin{tabular}[c]{c}
\includegraphics[width=0.3\textwidth]{figs/tiling.png}
\end{tabular}
&
\begin{tabular}[c]{c}
\includegraphics[width=0.3\textwidth]{figs/fund-on-tiling.png}
\end{tabular}
&
\begin{tabular}[c]{c}
\includegraphics[width=0.15\textwidth]{figs/orbifold-from-tiling.png}\\
\\
{\tt 3*3}\\
\end{tabular}
\\
(a) Periodic tiling $(\mathcal{T},\Gamma)$ & (b) Fundamental domain
\& symmetries & (c) Orbifold\\
\end{tabular}
\caption{(a) A periodic tiling of the plane.
(b) Here we highlight a fundamental domain. Reflectional axes are shown as thin lines.
The boundary of the fundamental domain that gives rise to the boundary of the
orbifold is shown as a solid thick line, whereas the two dotted thick lines are identified
with each other. There are two rotational centers on the boundary of the fundamental domain, labeled $3_1$ and $3_2$, which give rise to a corner and cone, respectively.
(c) The corresponding orbifold and orbifold name.
(Sphere image: Darkdadaah, Wikipedia.)
}
\label{fig:group}
\end{figure}
\section{Combinatorial tiling theory}
In combinatorial tiling theory, every periodic tiling
$(\mathcal{T},\Gamma)$ is represented by a Delaney-Dress symbol $(\mathcal{D},m)$,
defined as a finite set $\mathcal{D}$, together with
the action of a certain free group $\Sigma$, and with
maps $m_{01}, m_{12}, m_{02}:\mathcal{D} \to \mathbb{N}$, fulfilling
certain conditions, see \citep{Dress84,DressHuson87}.
A key result is that the Delaney-Dress symbol describes a periodic tiling up to equivariant equivalence. In more detail, two
periodic tilings $(\mathcal{T},\Gamma)$ and $(\mathcal{T}',\Gamma')$ are
equivariantly equivalent, if and only if their corresponding
Delaney-Dress symbols $(\mathcal{D},m)$ and $(\mathcal{D}',m')$ are isomorphic \citep{Dress84,Dress87}.
Based on this, all two-dimensional periodic tilings can
be systematically enumerated \citep{Huson93a}.
Delaney-Dress symbols can be assigned to higher-dimensional tilings,
and have been used to address classification problems for
three-dimensional euclidean tilings \citep{Molnar1997,DelgadoHuson99a,delgado2005isohedral,dutour2010space} and
as a useful data-structure in the context of developing the
system of orbifold names for three-dimensional euclidean space groups \citep{DelgadoHuson96,ConwayDelgadoHusonThurston2001}.
Rather than repeat the details of the definition of a Delaney-Dress symbol here,
we illustrate its construction using an example.
Consider the periodic tiling shown in Figure~\ref{fig:group}(a).
To construct its Delaney-Dress symbol, start by triangulating the
tiling using a barycentric subdivision, as shown in Figure~\ref{fig:delaney}(a).
Note that each triangle corresponds
to a flag $(v,e,t)$ consisting of a vertex $v$ contained in
an edge $e$, which is contained in a tile $t$. Every triangle
has exactly three neighbors, which we call its $0$-, $1$-
$2$-neighbor, whose flags differs only in their $0$-, $1$- or $2$-component, respectively.
The second step is to partition the set of triangles into
equivalence classes, considering any two triangles to be equivalent,
if their exists an symmetry of the tiling that maps the one triangle onto the other. In this example we obtain eight such classes and label them 1--8.
These eight equivalence classes define the Delaney-Dress set $\mathcal{D}$;
they are represented by nodes in the graph shown
in Figure~\ref{fig:delaney}(b).
Two such nodes are connected by an edge with label $i$,
if the $i$-neighbor of any triangle in the one equivalence class
is contained in the other equivalence class.
For example, nodes $1$ and $2$ are connected by an edge labeled $2$,
because neighboring triangles in classes $1$ and $2$
are incident to the same vertex and edge, but are contained in different tiles.
The final step is to label each node $D$ by two numbers, $p,q$;
these record the tile-degree (number of edges of the tile) and vertex-degree associated with the given equivalence class of triangles. More formally, the two numbers are denoted by
$m_{01}(D)$ and $m_{12}(D)$.
For example, node $1$ is labeled $3,4$, because all triangles
in equivalence class $1$ are contained in a tile of degree $3$ and are incident to a vertex of degree $4$, whereas node $4$ is labeled $4,6$, because
the corresponding triangles are contained in tiles of degree $4$
and are incident to vertices of degree $6$.
So, we can view a two-dimensional Delaney-Dress symbol as a {\em Delaney-Dress graph},
that is, a connected (multi-)graph in which each node is incident to exactly one edge of each color $0$, $1$, and $2$, together with a labelling of its
nodes by two maps $m_{01}$ and $m_{12}$, fulfilling certain conditions.
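To make the computations discussed below concrete, such a symbol can be encoded in a few lines of Python; the representation (three involutions plus the two maps, all as dicts over the node set) is our own illustrative choice, not a format prescribed by the cited literature.

\begin{verbatim}
# A minimal encoding of a two-dimensional Delaney-Dress symbol:
# three involutions s0, s1, s2 on the node set (the edges of
# colors 0, 1 and 2) plus the maps m01 and m12.
def make_symbol(s0, s1, s2, m01, m12):
    return {"s": {0: s0, 1: s1, 2: s2}, "m01": m01, "m12": m12}

# Example: the symbol of Dress complexity 1 with p = q = 4
# (the regular square tiling); the single node is its own
# 0-, 1- and 2-neighbor.
square = make_symbol({1: 1}, {1: 1}, {1: 1}, {1: 4}, {1: 4})
\end{verbatim}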
\begin{figure}
\centering
\begin{tabular}{cc}
\begin{tabular}[c]{c}
\includegraphics[width=0.4\textwidth]{figs/chambers.png} \\
\end{tabular}
&
\begin{tabular}[c]{c}
\includegraphics[width=0.4\textwidth]{figs/symbol.png}\\
\end{tabular}
\\
(a) Periodic tiling with chamber system & (b) Delaney-Dress symbol\\
\end{tabular}
\caption{(a) A periodic tiling, triangulated into chambers,
all symmetry-equivalent chambers labeled with the same number 1--8.
The highlighted fundamental domain contains exactly one chamber for each of the eight numbers. (b) The associated
Delaney-Dress symbol has eight corresponding nodes (labeled
here 1--8).
There are three types of edges, labeled 0--2,
indicating neighbor relationships between chambers.
Each node is labeled with two bold numbers, indicating
the degree of the containing tile and the degree of the
incident vertex, respectively.
}
\label{fig:delaney}
\end{figure}
Let an {\em $i,j$-component} $Z$ be a maximal set of nodes $Z\subseteq \mathcal{D}$ that
is connected by edges of colors $i$ and $j$, with $0\leq i<j\leq 2$.
By the properties of an edge coloring,
$Z$ will always be a cycle or a chain, and
we define the $i,j$-length of $Z$ to be $\frac{|Z|}{2}$, in the former case,
and $|Z|$, in the latter; equivalently, the $i,j$-length is the order of the
permutation $\sigma_i\sigma_j$ restricted to $Z$. For any node $D\in \mathcal{D}$,
we define $r_{ij}(D)$ to be the $i,j$-length of the $i,j$-component containing $D$.
A natural interpretation of the Delaney symbol is as a triangulation
of the associated orbifold, where each triangle is the image
of one equivalence class of triangles of the original tiling.
By construction, any cone or corner of the orbifold
will lie on a vertex of this triangulation and will be
surrounded by triangles that belong to the same $i,j$-component
$Z$, for some choice of $0\leq i < j\leq 2$.
We call this a {\em $k$-vertex}, with $k$ such that $\{i,j,k\}=\{0,1,2\}$.
For all nodes $D\in Z$, we define $v_{ij}(D)$ to be the
order of the associated cone or corner, that is, the highest
order of any rotation in the symmetry group about the vertex.
For each $i,j$-orbit $Z$ whose triangles are not incident to a cone or corner, we set $v_{ij}(D)=1$ for all $D\in Z$.
Note that we have $m_{ij}(D)=v_{ij}(D)r_{ij}(D)$, linking
combinatorial features of the tiling, such as vertex degrees, etc,
with rotational degrees in the symmetry group.
In particular, all equivalence classes of rotational centers
and dihedral centers of the symmetry group can be obtained
from the Delaney symbol of a tiling by
enumerating all $i,j$-components $Z$ in $\mathcal{D}$ for which
$v_{ij}(D)=\frac{m_{ij}(D)}{r_{ij}(D)}>1$ holds for
$D\in Z$.
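As a sketch of how this is computed in practice, the following Python functions (over the dict encoding introduced above) determine $r_{ij}(D)$ as the order of $\sigma_i\sigma_j$ on the component of $D$, and from it the branching number $v_{ij}(D)$:

\begin{verbatim}
# r_ij(D): smallest r >= 1 with (s_i s_j)^r(D) = D, i.e. the
# i,j-length of the component containing D.
def r_ij(sym, i, j, d):
    r, e = 1, sym["s"][j][sym["s"][i][d]]
    while e != d:
        e = sym["s"][j][sym["s"][i][e]]
        r += 1
    return r

# v_ij(D) = m_ij(D) / r_ij(D); a value > 1 marks a rotational
# or dihedral center of that order.
def v_01(sym, d):
    return sym["m01"][d] // r_ij(sym, 0, 1, d)

# For the size-1 square-tiling symbol: r_01 = 1, so v_01 = 4,
# the order-4 dihedral center at each tile center (group *442).
\end{verbatim}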
The number of symmetry-equivalence classes
of vertices, edges and tiles in a periodic tiling is given by
the number of $1,2$-, $0,2$- and $0,1$-components in its Delaney symbol, respectively.
Other properties of a tiling require more involved analysis
of the corresponding Delaney symbol, such
as the Euler characteristic, curvature, geometry and
the corresponding orbifold name \citep{BalkeHuson94a}.
For example, the curvature is given by the following calculation:
\[
{\mathcal K}(\mathcal{D},m)=\sum_{D\in\mathcal{D}}\left(
\frac{1}{m_{01}(D)}+\frac{1}{m_{12}(D)}-\frac{1}{2}\right),
\]
and this, in turn, defines the geometry associated with the tiling,
namely spherical, euclidean or hyperbolic, depending
on whether the curvature is positive, 0 or negative, respectively.
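A direct transcription of this formula, again over the dict encoding used above, with exact rational arithmetic so that the sign test is reliable:

\begin{verbatim}
from fractions import Fraction

def curvature(sym):
    return sum(Fraction(1, sym["m01"][d]) + Fraction(1, sym["m12"][d])
               - Fraction(1, 2) for d in sym["s"][0])

def geometry(sym):
    k = curvature(sym)
    if k > 0:
        return "spherical"
    return "euclidean" if k == 0 else "hyperbolic"

# The square-tiling symbol above: 1/4 + 1/4 - 1/2 = 0, euclidean.
\end{verbatim}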
A more difficult tiling property to obtain from an analysis of the corresponding Delaney symbol is whether the tiling is {\em pseudo convex}, that is, whether the intersection of any two tiles is always either empty or simply connected.
The size of the Delaney-Dress symbol $(\mathcal{D},m)$ is an important
invariant for the corresponding tiling $(\mathcal{T},\Gamma)$, albeit
the simplest property to obtain, and
we propose to call this the {\em Dress complexity} of the tiling,
denoted by $\delta(\mathcal{T},\Gamma)$.
\section{Enumeration}
A main goal of this paper is to enumerate all periodic tilings
of low Dress complexity.
We start with Dress complexity $1$, that is, Delaney-Dress
symbols of size one, as displayed in Figure~\ref{fig:symbol1}.
In this case, the curvature is given by
\[{\mathcal K}(\mathcal{D},m)=\frac{1}{p}+\frac{1}{q}-\frac{1}{2}.\]
For $p=3$ and $q=3,4,5$, this value is positive, and
thus the corresponding tilings are spherical.
The same is true for $p=3,4,5$ and $q=3$.
For $p=3$ and $q=6$, or $p=6$ and $q=3$, or $p=q=4$, the curvature is
$0$ and thus the corresponding tilings are euclidean.
In all other cases, for example $p=4$ and $q=5$,
the curvature is negative and thus the corresponding tiling is hyperbolic.
If we allow tiles to be digons, that is, to have only two edges,
then $p=2$ and for any value of $q\geq 3$ the curvature is positive
and so all such tilings of Dress complexity $1$ are tilings of the sphere.
To reduce the number of resulting classes, in this paper
we only enumerate tilings for which all tiles have at least 3 edges,
in contrast to some of our previous work \citep{DelgadoHusonZamorzaeva92,Huson93a}.
Already for Dress complexity $1$ we encounter infinite families
of non-equivalent periodic tilings.
To address this, we say that a periodic tiling $(\mathcal{T},\Gamma)$
is {\em geometry minimal} if one of the following three cases holds:
\begin{enumerate}
\item the tiling is spherical and either the corresponding orbifold is one of $\tt 532$ and $\tt *532$, or all rotational degrees are $\leq 4$,
\item the tiling is euclidean, or
\item the tiling is hyperbolic and one cannot reduce
the rotational order of any tile or vertex
without changing the sign of the curvature of the symmetry group,
or without reducing the degree of that tile or vertex, respectively,
to below $3$.
\end{enumerate}
This property is easily inferred from the corresponding
Delaney-Dress symbol. For a spherical tiling, determine whether
the value of $v_{ij}$ is $\leq 5$ on all $0,1$- and $1,2$-orbits.
For a hyperbolic tiling, reduce the value of $v_{ij}$
on each $0,1$- and $1,2$-orbit in turn, and check whether the
modified Delaney-Dress symbol has negative curvature and that
the resulting value for $m_{ij}$ is $\geq 3$.
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{figs/symbol1.pdf}
\caption{Any Delaney-Dress symbol $(\mathcal{D},m)$ of Dress complexity $1$
consists of a single node $D$, three self-edges of colors $0,1,2$, and two numbers $p\geq 3$ and $q \geq 3$.}
\label{fig:symbol1}
\end{figure}
We can now formulate our first result: there exist exactly 12 different
equivariant types of geometry minimal, periodic two-dimensional tilings
with Dress complexity 1, see Figure~\ref{fig:size1}.
Note that here we only consider tilings whose tiles have degree 3 or more.
If digons are admitted, there are two additional types of geometry-minimal
tilings, with parameters $(p,q)=(2,3)$ and $(2,4)$.
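This count can be checked computationally. The sketch below enumerates all pairs $(p,q)$ with $p,q\geq 3$ and tests geometry minimality directly; note that for a symbol of size one we have $r_{ij}=1$ and hence $v_{ij}=m_{ij}$, so reducing a rotational order amounts to reducing $p$ or $q$, and testing a reduction by one step suffices, because further reductions only increase the curvature:
\begin{verbatim}
# Illustrative sketch: the 12 geometry-minimal types of
# Dress complexity 1.
from fractions import Fraction

def K(p, q):
    return Fraction(1, p) + Fraction(1, q) - Fraction(1, 2)

def minimal(p, q):
    k = K(p, q)
    if k > 0:   # spherical: orbifold 532/*532, or all degrees <= 4
        return (p, q) in ((3, 5), (5, 3)) or (p <= 4 and q <= 4)
    if k == 0:  # euclidean tilings are always geometry minimal
        return True
    # hyperbolic: each reduction must flip the sign of the
    # curvature or push a degree below 3
    return (p == 3 or K(p - 1, q) >= 0) and \
           (q == 3 or K(p, q - 1) >= 0)

types = [(p, q) for p in range(3, 20) for q in range(3, 20)
         if minimal(p, q)]
print(len(types))   # -> 12: 5 spherical, 3 euclidean, 4 hyperbolic
\end{verbatim}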
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/size1.pdf}
\caption{The 12 geometry-minimal types of periodic two-dimensional tilings with Dress complexity 1. Each is labeled with $p,q$, that is, its tile and vertex
degrees, and the orbifold name of its symmetry group.}
\label{fig:size1}
\end{figure}
An enumeration of all possible equivariant types of geometry minimal, periodic two-dimensional tilings, up to a given maximal Dress complexity
$D$, can be obtained by systematically enumerating all non-isomorphic
Delaney-Dress symbols of size $\leq D$ that are geometry minimal.
Our second result is: there exist exactly 50
equivariant types of geometry minimal, periodic two-dimensional tilings with Dress complexity 2.
More generally, we have solved this classification up to Dress complexity $D=24$ and obtain the following main result:
There exist exactly $2,395,220,319$ equivariant types of geometry minimal, periodic two-dimensional tilings with Dress complexity $\leq 24$.
Summary statistics are provided in Table~\ref{tab:stats}.
We have generated and saved all Delaney-Dress symbols for these tilings,
and make them available as described below.
\begin{table}[]
\centering
\begin{tabular}{r|rrrr}
$\delta$ & $\#$ Spherical & $\#$ Euclidean & $\#$ Hyperbolic & Total \\
\hline
1 & 5 & 3 & 4 & 12 \\
2 & 13 & 15 & 22 & 50 \\
3 & 15 & 8 & 13 & 36 \\
4 & 30 & 37 & 71 & 138 \\
5 & 26 & 15 & 41 & 82 \\
6 & 119 & 86 & 221 & 426 \\
7 & 104 & 64 & 201 & 369 \\
8 & 252 & 217 & 796 & 1,265 \\
9 & 296 & 185 & 858 & 1,339 \\
10 & 697 & 527 & 2,974 & 4,198 \\
11 & 771 & 506 & 3,993 & 5,270 \\
12 & 2,014 & 1,573 & 13,987 & 17,574 \\
13 & 2,364 & 1,575 & 22,162 & 26,101 \\
14 & 5,428 & 4,227 & 75,270 & 84,925 \\
15 & 6,627 & 4,528 & 140,024 & 151,179 \\
16 & 15,103 & 12,078 & 475,445 & 502,626 \\
17 & 18,622 & 13,105 & 982,726 & 1,014,453 \\
18 & 42,881 & 34,242 & 3,327,350 & 3,404,473 \\
19 & 53,588 & 38,470 & 7,419,771 & 7,511,829 \\
20 & 120,496 & 98,076 & 25,029,758 & 25,248,330 \\
21 & 151,234 & 111,145 & 58,815,127 & 59,077,506 \\
22 & 340,744 & 280,574 & 197,482,678 & 198,103,996 \\
23 & 428,769 & 322,102 & 482,898,722 & 483,649,593 \\
24 & 965,620 & 805,130 & 1,614,643,799 & 1,616,414,549 \\
\hline
Total & 2,155,818 & 1,728,488 & 2,391,336,013 & 2,395,220,319\\
\end{tabular}
\caption{For Dress-complexity $\delta=1,\dots,24$, we list the number
of different geometry-minimal periodic tilings of the sphere, euclidean plane and hyperbolic plane.}
\label{tab:stats}
\end{table}
\section{Visualization}
In Table~\ref{tab:stats} we count billions of Delaney-Dress symbols
that correspond to two-dimensional periodic tilings.
To enable the exploration of these, we require an algorithm for calculating a drawing of the tiling associated with any given Delaney-Dress symbol $(\mathcal{D},m)$.
Figure~\ref{fig:delaney} illustrates
that each node of a Delaney-Dress symbol corresponds to a different equivalence class of triangles in the barycentric subdivision of the corresponding periodic tiling, and that a fundamental domain for the
symmetry group can be obtained by selecting a suitable set of representatives
of the different classes of triangles.
To construct a tiling associated with a given two-dimensional Delaney-Dress symbol $(\mathcal{D},m)$, we proceed in three stages:
\begin{enumerate}
\item Compute coordinates for a barycentric triangulation
of the tiling for a fundamental domain of the symmetry group.
\item
Compute a set of isometric transformations that generate the symmetry group.
\item Apply the generators to copies of
the triangulation of the fundamental domain so as to cover
a desired region of the tiling.
\end{enumerate}
Stage (1) uses an algorithm and code developed by Klaus Westphal \citep{Westphal1991}.
The algorithm assigns a triangle to each node or chamber of the given
Delaney-Dress symbol $(\mathcal{D},m)$. Triangles of adjacent nodes are then identified along the corresponding side, in an iterative manner.
This is done in such a way that the resulting triangulated region is a topological disc $B$ and all vertices of the triangulation that
correspond to a cone or corner point are located on the boundary.
Let $Z$ be an $i,j$-component.
We use $s(Z)$ to denote
the number of vertices of the triangulation that are associated with $Z$.
This will be one if the vertex associated with $Z$ lies in the interior of $B$.
If $s(Z)>1$, then the vertices must all lie on the boundary of $B$ and we say that $Z$ is {\em split}.
For example, in Figure~\ref{fig:delaney}, the $1,2$-component containing chambers $1,2,3,5$ is split and is represented {\em twice} by vertices on the boundary of the fundamental domain, once involving the chambers labeled $1-3$ and the other time involving $5$.
Taking splitting into account, we assign an interior angle to $Z$ as
$\alpha(Z)=\frac{360^\circ}{s(Z)\times v_{ij}(D)}$, if $Z$ is an $i,j$-cycle, and
$\alpha(Z)=\frac{180^\circ}{s(Z)\times v_{ij}(D)}$, otherwise.
With this, we set up a polygon whose corners are given by the vertices assigned to the boundary of $B$, using the calculated interior angles. The polygon is heuristically fitted around an incircle and vertices with interior angle $180^\circ$ are then placed equally-spaced along the sides of the polygon.
Then all triangulation vertices that are assigned to the interior of $B$ are
iteratively moved to the centroid of their adjacent vertices so as to obtain useful coordinates.
To address stage (2), a set of generators for the symmetry group is obtained
as follows. For a given chamber $D$ whose $i$-th edge lies on the boundary of $B$,
let $D'$ be its $i$-neighbor. Then a generator of the symmetry group can be obtained by calculating the
isometry that maps the $0$-, $1$- and $2$-vertices of
the triangle representing $D$ onto the $0$-, $1$- and $2$-vertices of
the triangle representing $D'$, respectively. This is performed on all boundary chambers.
In stage (3), we repeatedly concatenate generators and keep
the transformed copy of the fundamental domain, if it will be visible.
The key practical challenge is to avoid placing more than one copy of the fundamental domain at the same location. To address this, we select a reference point within the interior of the fundamental domain and use either a quad-tree (in the case of euclidean tilings), or
an oct-tree (for spherical and hyperbolic tilings), to determine whether
the current transformation applied to the reference point gives rise to a point that has already been seen.
All hyperbolic calculations are performed using the Minkowski hyperboloid model. Visualizations of the Poincare and Klein models are implemented by observing the hyperboloid model using a perspective
camera at locations $(0,0,-1)$ and $(0,0,0)$, respectively.
To speed up the visualization of translations, copies of the fundamental domain that disappear from view on one side of the tiling are reused and reappear on the other side of the tiling.
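The following sketch illustrates the two projections behind this camera construction, using the convention $z^2-x^2-y^2=1$, $z>0$, for the hyperboloid (the formulas are standard; their packaging here is for illustration only): central projection from $(0,0,-1)$ onto the plane $z=0$ yields Poincare coordinates, while central projection from the origin yields Klein coordinates.
\begin{verbatim}
# Illustrative sketch of the two disk models obtained from the
# Minkowski hyperboloid z^2 - x^2 - y^2 = 1, z > 0:
import math

def poincare(x, y, z):      # project from (0, 0, -1) onto z = 0
    return (x / (1 + z), y / (1 + z))

def klein(x, y, z):         # project from (0, 0, 0) onto z = 1
    return (x / z, y / z)

t = 1.5                     # hyperbolic distance from the apex
x, y, z = math.sinh(t), 0.0, math.cosh(t)
print(poincare(x, y, z))    # (0.635..., 0.0), inside the unit disk
print(klein(x, y, z))       # (0.905..., 0.0), closer to the rim
\end{verbatim}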
\section{Enumeration and visualization software}
We have implemented the enumeration of Delaney-Dress symbols in a program called
{\em genDSyms} using the programming language Julia \citep{bezanson2017julia}.
For performance purposes, we use the principle of {\em orderly
generation} \citep{read1978every} to ensure that every symbol is produced exactly
once and no additional effort is required to identify and remove duplicates.
The process has two stages.
In the first stage we enumerate all possible Delaney-Dress {\em graphs} up to the required size.
Note that, for any given Delaney-Dress graph and choice of an initial node,
there exists a unique {\em ordered traversal},
that is, a breadth-first graph traversal, in which at each node
the incident edges are visited in the order of their labels \citep{delgado2003data}.
We use this to assign numbers to the nodes in the order in which they are encountered, and represent the traversal as a linear string of numbers by
listing the 0-, 1- and 2-neighbors of all the vertices in that same order.
As an example, consider the Delaney-Dress graph
in Figure~\ref{fig:delaney}(b). Beginning at the node on the left labelled
$1$, we see that it is its own 0- and 1-neighbor. Its 2-neighbor is thus labelled $2$, its
1-neighbor in turn $3$, and so on. Continuing in this fashion, we see that in fact the vertices are
already numbered in accordance with this traversal, which is then represented by the list
$1,1,2;\, 2,3,1;\, 4,2,5;\, 3,6,7;\, 7,5,3;\, 6,4,8;\, 5,8,4;\, 8,7,6$.
Of all the possible
traversals for this graph, this one turns out to be the lexicographically
smallest, because the leftmost node is the only one with both a 0- and a 1-loop, and thus the
only one that can give rise to a traversal representation starting with two ones.
To perform an orderly generation of Delaney-Dress graphs up to a given
size, we generate all possible ordered traversals and keep only those that are lexicographically
smallest for the graph they represent.
To speed up the enumeration process, we prune the enumeration tree
whenever we identify a partial ordered traversal that cannot be completed to a
lexicographically smallest one.
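The following sketch illustrates the ordered traversal and the resulting canonical form (genDSyms itself is written in Julia; the Python rendering and the encoding of a Delaney-Dress graph as a map from each node to the triple of its $0$-, $1$- and $2$-neighbors are choices made for this illustration):
\begin{verbatim}
# Illustrative sketch of ordered traversals; self-loops are
# encoded as nbr[v][i] == v.
from collections import deque

def ordered_traversal(nbr, start):
    """BFS that visits edges in label order 0, 1, 2; then list the
    renumbered 0-, 1-, 2-neighbors of each node in that order."""
    num, order, queue = {start: 1}, [start], deque([start])
    while queue:
        v = queue.popleft()
        for i in range(3):
            w = nbr[v][i]
            if w not in num:
                num[w] = len(num) + 1
                order.append(w)
                queue.append(w)
    return [num[nbr[v][i]] for v in order for i in range(3)]

def canonical_form(nbr):
    return min(ordered_traversal(nbr, s) for s in nbr)

# two nodes joined by a 2-edge, with 0- and 1-loops on each:
nbr = {0: (0, 0, 1), 1: (1, 1, 0)}
print(canonical_form(nbr))   # -> [1, 1, 2, 2, 2, 1]
\end{verbatim}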
In the second stage of the enumeration,
for each Delaney-Dress graph we generate all possible valid definitions of the maps $m_{01}$ and $m_{12}$, in particular making use of the restrictions imposed by geometric minimality.
The file containing the complete galaxy of tilings is $322$~GB in size,
and it is thus impractical to make the file available on a webserver.
To provide easy access to much of the classification, we have
produced three SQLITE databases of Delaney-Dress symbols.
The first, {\tt tilings-1-18.tdb}, contains all tilings
with Dress complexity 1--18. The other two, {\tt spherical-1-24.tdb}
and {\tt euclidean-1-24.tdb},
contain all spherical and euclidean tilings, respectively, of Dress complexity 1--24.
Each database contains a table called ``tilings'' that has the schema
shown in Table~\ref{tab:schema}.
\begin{table}[]
\centering
\begin{tabular}{lp{8cm}}
Column name and type & Explanation\\
\hline
id INTEGER PRIMARY KEY & number in file\\
symbol TEXT & Delaney-Dress symbol $(\mathcal{D},m)$\\
complexity INTEGER & Dress complexity $\delta(\mathcal{D},m)$\\
geometry TEXT & Name of two-dimensional geometry\\
curvature TEXT & Curvature ${\mathcal K}(\mathcal{D},m)$\\
euler REAL & Euler characteristic\\
orbifold TEXT & Orbifold name of symmetry group\\
symmetry\_class TEXT & Symmetry class of graph, as defined in \citep{Hyde:2014aa}\\
signature TEXT & An expression such as $(3\,4\,6\,5)$, indicating that the tiling consists of tiles of degree 4 whose vertices have degrees
3, 4, 6 and 5.\\
tile\_deg TEXT & List of tile degrees in ascending order\\
vertex\_deg TEXT & List of vertex degrees in ascending order\\
tiles INTEGER & Number of equivalence classes of tiles\\
edges INTEGER & Number of equivalence classes of edges\\
vertices INTEGER & Number of equivalence classes of vertices\\
normal BOOLEAN & Is tiling pseudo-convex?\\
maximal BOOLEAN & Is the symmetry group of the tiling maximal? \\
colorable BOOLEAN & Is the tiling colorable, i.e., do no two tiles of the same symmetry equivalence class share an edge? \\
orientable BOOLEAN & Does symmetry group only contain orientation-preserving symmetries?\\
fixed\_point\_free BOOLEAN & Is symmetry group fixed-point free?\\
self\_dual BOOLEAN & Is tiling self-dual?\\
\end{tabular}
\caption{Schema for table ``tilings'' used for storing Delaney-Dress symbols and some associated properties.}
\label{tab:schema}
\end{table}
We have implemented a new program called Tegula that can be used to explore
our galaxy of tilings. Tegula takes as input an SQLITE database of
Delaney-Dress symbols and provides drawings of the corresponding tilings
in a ``collection'' tab, on a page-by-page basis.
The program provides an interactive dialog for searching
for tilings of specific interest. The user can page through all
tilings that fulfill a given query.
For example, the query ``symmetry\_class = 'Stellate' and normal = 'true' and maximal = 'true' and colorable = 'true' '' returns 31 tilings
of Dress complexity $\leq 18$,
displayed in Figure~\ref{fig:db-example}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/db-example.png}
\caption{Applying the query {\tt symmetry\_class = 'Stellate' and normal = 'true' and maximal = 'true' and colorable = 'true'} to the database {\tt tilings-1-18.tdb} of
all Delaney-Dress symbols of Dress complexity $\leq 18$
returns 31 tilings.
}
\label{fig:db-example}
\end{figure}
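For scripted access, the databases can also be queried directly, e.g., with Python's built-in {\tt sqlite3} module. The following sketch reruns the query of Figure~\ref{fig:db-example}; the file, table and column names are as described above, and the string values {\tt 'true'} follow the example query:
\begin{verbatim}
# Illustrative sketch: run the query of Figure 6 from Python.
import sqlite3

con = sqlite3.connect("tilings-1-18.tdb")
rows = con.execute(
    "SELECT id, orbifold, symbol FROM tilings "
    "WHERE symmetry_class = 'Stellate' AND normal = 'true' "
    "AND maximal = 'true' AND colorable = 'true'").fetchall()
print(len(rows))   # -> 31, as in Figure 6
con.close()
\end{verbatim}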
Individual tilings can be edited in a number of different ways.
Double-clicking on a tiling in a collection tab will open the tiling
in a new ``editor'' tab, where five panels of tools are available for
modifying the displayed tiling, as illustrated in Figure~\ref{fig:editor}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figs/editor.png}
\caption{
Any tiling can be edited using five different panels.
The {\em symmetries} panel allows editing of the rotational orders
of the symmetry group; the {\em hyperbolic model} panel allows
selection among the Poincare, Klein and hyperboloid
models; the {\em appearance} panel allows the modification
of different aspects of the tiling; the {\em algorithms}
panel provides some transformations of the drawing and
the Delaney symbol; and the {\em fundamental domain} panel
allows interactive reshaping of the edges of the tiling.
}
\label{fig:editor}
\end{figure}
\section*{Availability}
The enumeration program genDSyms is written in Julia. The source is provided here:
\url{https://github.com/odf/julia-dsymbols}.
The visualization and exploration program Tegula is written in Java and uses the OpenJFX library.
The source is provided here:
\url{https://github.com/husonlab/tegula}.
Installers for Windows and MacOS are available here:
\url{https://software-ab.informatik.uni-tuebingen.de/download/tegula}.
\section*{Acknowledgements}
We thank Klaus Westphal for providing us with his
original source code for computing the fundamental domain of a tiling.
We thank Julius Vetter and Cornelius Wiehl for programming contributions.
\bibliographystyle{abbrvnat}
\section{Introduction}\label{sect:introduction}
\IEEEPARstart{S}{ARS-CoV-2}, also known as novel coronavirus, emerged in China and soon after
spread across the globe. The World Health Organization (WHO) named the resultant disease
COVID-$19$. COVID-19 was declared a pandemic on March 11, $2020$
\cite{world2020coronavirus}. In its early stages, the symptoms of COVID-$19$ include fever, cough,
fatigue, and myalgia. However, in more serious cases, it can lead to shortness of breath, pneumonia,
severe acute respiratory disorder, and heart problems, and may lead to death
\cite{mahase2020coronavirus}. It is of paramount importance to detect which individuals are
infected at as early a stage as possible in order to limit the spread of disease through
quarantine and contact tracing. In response to COVID-19, governments around the
world have issued social distancing and self-isolation orders. This has led to a significant increase
in unemployment across diverse economic sectors. As a result, COVID-$19$ has triggered an
economic recession in a large number of countries \cite{nicola2020socio}.
Reverse Transcription-Polymerase Chain Reaction (RT-PCR) is currently the gold standard
for SARS-CoV-2 detection \cite{butt2020deep}. This test is based on viral nucleic acid detection
in sputum or nasopharyngeal swab. Although it has high specificity, it has several drawbacks.
The RT-PCR test is invasive and uncomfortable, and non-reusable testing kits have led to
significant supply chain deficiencies.
SARS-CoV-2 infection can also be assessed with an antibody test \cite{dheda2020diagnosis}.
However, antibody titers are only detectable from the second week of illness onwards and persist for an
uncertain length of time. The antibody test is also invasive, requiring venipuncture which, in
combination with a several-day processing time, makes it less ideal for rapid mass screening.
In the current economic and social situation, there is a great need for an alternative
SARS-CoV-2/COVID-19 detection method that is easily accessible to the public for repeated testing
with high accuracy.
To address the above issues, researchers have begun to explore the use of artificial intelligence
(AI) algorithms to detect COVID-$19$ \cite{bullock2020mapping}.
Initial work concentrated on CT scans and X-ray images \cite{farooq2020covid, wang2020covid,
SIRM,giovagnonidiagnosi,butt2020deep,zhang2020covid,narin2020automatic,abbas2020classification,
hall2020finding,sethy2020detection,li2020artificial,gozes2020rapid,apostolopoulos2020covid,
wang2020fully,afshar2020covid, hassantabar2020diagnosis}. A survey of such datasets can be found in
\cite{kalkreuth2020covid,cohen2020covid}.
These methods often rely on transfer learning of a convolutional neural network (CNN) architecture,
pre-trained on large image datasets, on a smaller COVID-$19$ image dataset.
However, such an image-based AI approach faces several challenges that include lack of large
datasets and inapplicability outside the clinic or hospital. In addition, other work
\cite{Lin2020} shows that it is difficult to distinguish COVID-19 pneumonia from influenza
virus pneumonia in a clinical setting using CT scans. Thus, the work in this area is not
mature yet.
CORD-19 \cite{cord19} is an assembly of $59000$ scholarly articles on COVID-$19$.
It can be used with natural language processing methods to distill useful information on
COVID-$19$-related topics.
AI$4$COVID-$19$ \cite{imran2020ai4covid} performs a preliminary diagnosis of COVID-$19$ through
cough sample recordings with a smartphone application. However, since coughing is a common symptom of
two dozen non-COVID-$19$ medical conditions, this is an extremely difficult task. Nonetheless,
AI$4$COVID-$19$ shows promising results and opens the door for COVID-$19$ diagnosis through
a smartphone.
The emergence of wearable medical sensors (WMSs) offers a promising way to tackle these challenges.
WMSs can continuously sense physiological signals throughout the day \cite{yin2017health}.
Hence, they enable constant monitoring of the user's health status. Training AI algorithms
with data produced by WMSs can enable pervasive health condition tracking and disease onset
detection \cite{yin2019diabdeep}. This approach exploits the knowledge distillation
capability of machine learning algorithms to directly extract information from physiological signals.
Thus, it is not limited to disease detection in the clinical scenarios.
We propose a framework called CovidDeep for daily detection of SARS-CoV-2/COVID-19 based on
off-the-shelf WMSs and compact deep neural networks (DNNs). It bypasses manual feature engineering
and directly distills information from the raw signals captured by available WMSs.
It addresses the problem posed by small COVID-19 datasets by relying on intelligent synthetic data
generation from the same probability distribution as the training data \cite{hassantabar2020Tutor}.
These synthetic data are used to pre-train the DNN architecture in order to impose a prior on the
network weights. To cut down on the computation and storage costs of the model without any
loss in accuracy, CovidDeep leverages the grow-and-prune DNN synthesis paradigm
\cite{dai2017nest, hassantabar2019scann}. This not only improves accuracy, but also shrinks
model size and reduces the computation costs of the inference process.
The major contributions of this article are as follows:
\begin{itemize}
\item We propose CovidDeep, an easy-to-use, accurate, and pervasive SARS-CoV-2/COVID-19
detection framework. It combines features extracted from physiological signals using WMSs and
simple-to-answer questions in a smartphone application-based questionnaire with efficient DNNs.
\item It uses an intelligent synthetic data generation module to obtain a synthetic
dataset \cite{hassantabar2020Tutor}, labeled by decision rules. The synthetic dataset is used to
pre-train the weights of the DNN architecture.
\item It uses a grow-and-prune DNN synthesis paradigm that learns both an efficient
architecture and weights of the DNN at the same time \cite{dai2017nest, hassantabar2019scann}.
\item It provides a solution to the daily SARS-CoV-2/COVID-19 detection problem. It captures all
the required physiological signals non-invasively through comfortably-worn WMSs that are commercially
available.
\end{itemize}
The rest of the article is organized as follows. Section \ref{sect:related} reviews background
material. Section \ref{sect:methodology} describes the CovidDeep framework. Section
\ref{sect:implementation} provides implementation details. Section \ref{sect:results} presents
experimental results. Section \ref{discussion} provides a short discussion on CovidDeep and possible
directions for future research. Finally, Section \ref{conclusion} concludes the article.
\section{Background}
\label{sect:related}
In this section, we discuss background material related to the CovidDeep framework.
It involves recent methods for synthesizing and training efficient DNN architectures.
One approach is based on the use of efficient building blocks. Using such blocks results in compact
networks and significantly reduces the computational costs and storage needs.
For example, inverted residual blocks used in MobileNetV$2$
\cite{sandler2018mobilenetv2} reduce the number of parameters and the
floating-point operations (FLOPs) greatly.
In addition, spatial convolution is one
of the most computationally expensive operations in CNN architectures.
To address this issue, ShuffleNet-v$2$ \cite{ma2018shufflenet} uses depth-wise separable
convolutions and channel-shuffling operations. Furthermore, Shift \cite{wu2018shift} addresses this problem with shift-based modules that combine shifts and point-wise convolutions.
Neural architecture search (NAS) is also used in the literature to automatically generate compact architectures.
For example, FBNetV$2$ \cite{wan2020fbnetv2} uses a differentiable NAS approach to synthesize compact CNN architectures.
Efficient performance predictors, e.g., for accuracy, latency, and energy, are also used to accelerate the DNN search process \cite{dai2018chamnet, hassantabar2019steerage}.
FBNetV$3$ \cite{dai2020fbnetv3} takes the training recipe (i.e., training hyperparameters) into account in the NAS as well, leading to the discovery of higher-accuracy architecture-recipe combinations.
In addition, DNN compression methods can remove redundancy in the DNN models. Network
pruning~\cite{han2015deep} uses a pruning methodology to remove redundancy from both CNN and
multilayer-perceptron architectures. ESE \cite{han2017ese} shows that pruning methods are also
helpful in removing redundancy in recurrent neural networks. Dai et al.~\cite{dai2017nest,
dai2018grow} combine network growth with pruning to generate efficient CNNs and long short-term
memories. SCANN \cite{hassantabar2019scann} combines feature dimensionality reduction with
grow-and-prune synthesis to generate very compact models that can be easily deployed on edge devices
and Internet-of-Things sensors.
Orthogonal to the above works, low-bit quantization of DNN weights can also be used to reduce computations in a network with little to no accuracy drop \cite{zhu2016trained}.
\section{Methodology}
\label{sect:methodology}
In this section, we present the CovidDeep framework. First, we give an
overview of the entire framework. Then, we describe the DNN architecture that
is used in CovidDeep for inference. We also describe how synthetic data generation can be used
to impose a prior on the DNN weights and then use the DNN grow-and-prune synthesis paradigm to
boost the test accuracy further and ensure computational efficiency of the model.
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.5]{schematic.pdf}
\caption{Schematic diagram of the CovidDeep framework (GSR: Galvanic skin response,
IBI: inter-beat interval, Ox.: oxygen saturation, BP: blood pressure, DT/RF: decision tree/random
forest, NN: neural network, KB: knowledge-base, MND: multi-variate Normal distribution,
GMM: Gaussian mixture model, KDE: kernel density estimation).}
\label{fig:schematic}
\end{figure*}
\subsection{Framework overview}
The CovidDeep framework is shown in Fig.~\ref{fig:schematic}. CovidDeep obtains data from two
different sources: physiological signals and questionnaire. It has two flows: one that does
not use synthetic data and another one that does. When synthetic data are not used, the
framework just uses the real dataset divided into three categories: training, validation, and
test. It trains the DNNs with the training dataset and picks the best one for the given set of
features based on the validation dataset, and finally tests this DNN on the test dataset to
obtain the test accuracy. However, when the real training dataset size is small, it is often
advantageous to draw a synthetic dataset from the same probability distribution.
CovidDeep uses synthetic data generation methods to increase the dataset size and use
such data to pre-train the DNN architecture. Then, it uses grow-and-prune synthesis to generate
inference models that are both accurate and computationally-efficient. The models generated by
CovidDeep are efficient enough to be deployed on the edge, e.g., the smartphone or smartwatch,
for SARS-CoV-2/COVID-19 inference.
Next, we discuss the data input, model training, and model inference details.
\begin{itemize}
\item \textbf{Data input}: As mentioned above, physiological signals and a questionnaire are
the two sources of data input to the model. The physiological signals are derived from WMSs
embedded in a smartwatch as well as a discrete pulse oximeter and blood pressure monitor. These
signals can be easily obtained in a non-invasive, passive, and user-transparent manner. The list of
these signals includes Galvanic skin response (GSR), inter-beat interval (IBI) that indicates
the heart rate, skin temperature, oxygen saturation, and blood pressure (systolic and diastolic).
In the questionnaire, we asked the following yes/no questions: immune-compromised,
chronic lung disease, cough, shortness of breath, chills, fever, muscle pain, headache, sore
throat, smell-taste loss, and diarrhea. We collected data on age, gender, weight, height, and
smoking/drinking (yes/no), but did not find them to be useful either because of overfitting or
being unrepresentative. All the relevant data sources are aggregated into a comprehensive data input
for further processing.
\item \textbf{Model training}: CovidDeep uses different types of DNN models: (i) those
trained on the raw data only, (ii) those trained on raw data augmented with synthetic data to
boost accuracy, and (iii) those subjected to grow-and-prune synthesis for both boosting accuracy
further and reducing model size. The first type of DNN model uses a few hidden layers. The
second type of DNN model is trained based on a system called TUTOR \cite{hassantabar2020Tutor} and
is suitable for settings where data availability is limited. It provides the DNN with a suitable
inductive bias. The third type of DNN model is based on the grow-and-prune DNN synthesis paradigm
and employs three architecture-changing operations: neuron growth, connection growth, and connection
pruning. These operations have been shown to yield DNNs that are both accurate and efficient
\cite{hassantabar2019scann}.
\item \textbf{Model inference}: CovidDeep enables the users to have SARS-CoV-2/COVID-19 detection
decision on their edge device on demand.
\end{itemize}
Next, we discuss the CovidDeep DNN architecture.
\subsection{Model architecture}
Fig.~\ref{fig:arch} shows the processing pipeline of the CovidDeep framework. The architecture takes
the data inputs (shown at the bottom) and generates a prediction, i.e., the detection decision,
(shown at the top). The pipeline consists of four steps: data pre-processing, synthetic data
generation and architecture pre-training, grow-and-prune synthesis, and output generation
through softmax.
\begin{figure}[!ht]
\centering
\includegraphics[scale= 0.5]{architecture.pdf}
\caption{An illustration of the CovidDeep processing pipeline to generate predictions from data
inputs.}
\label{fig:arch}
\end{figure}
In the data pre-processing stage, data normalization and data alignment/aggregation are done.
\begin{itemize}
\item \emph{Data normalization}: This step is aimed at changing feature values to a common
scale. While data normalization is not always required, it is highly beneficial in the case of
datasets that have features with very different ranges. It leads to better noise tolerance and
improvement in model accuracy \cite{krizhevsky2012imagenet}. Data normalization can be done in
several ways, such as min-max scaling and standardization. In this work, we use min-max scaling to
map each data input to the $[0,1]$ interval. Scaling can be done as follows (a short sketch of this step follows the list):
\[
x_{\text{scaled}} = \frac{x - \min(x)}{\max(x) - \min(x)}
\]
\item \emph{Data alignment/aggregation}: The data from different WMSs may have different
start times and frequencies. In order to merge them into a dataset, we need to synchronize the data
streams based on their timestamps. The answers to the questions in the questionnaire are also added
to the final dataset.
\end{itemize}
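The following short sketch (an illustration only, not the exact pre-processing code) shows per-feature min-max scaling applied to the columns of a feature matrix:
\begin{verbatim}
# Illustrative sketch of per-column min-max scaling.
import numpy as np

def min_max_scale(X):
    # scale each column of X to [0, 1]; constant columns map to 0
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return (X - lo) / span

X = np.array([[36.5, 120.0], [37.2, 135.0], [36.9, 128.0]])
print(min_max_scale(X))
\end{verbatim}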
\noindent
\textbf{Synthetic data generation}: The training dataset generated in the above manner is next used to
generate a synthetic dataset that is used to pre-train the DNN. These synthetic data and pre-training
steps are based on the TUTOR framework \cite{hassantabar2020Tutor}. The schematic diagram of
the training scheme based on synthetic data is shown in Fig.~\ref{fig:syn-training}. The synthetic
dataset is generated in three different ways in TUTOR:
\begin{figure}[!ht]
\centering
\includegraphics[scale= 0.4]{syn-training.pdf}
\caption{The schematic diagram for pre-training of the DNN model with the synthetic dataset
(DT/RF: decision tree/random forest, NN: neural network, KB: knowledge-base).}
\label{fig:syn-training}
\end{figure}
\begin{itemize}
\item Using multi-variate Normal distribution (MND): In this approach, the real training dataset,
i.e., the portion of the WMS and questionnaire data set aside for training, is modeled
as a multi-variate normal distribution from which the synthetic data are drawn.
\item Using Gaussian mixture model (GMM): This approach uses a multi-dimensional GMM to model the
data distribution. The optimal number of GMM components is obtained with the help of a validation
dataset. Subsequently, the synthetic dataset is generated from this GMM.
\item Using kernel density estimation (KDE): This approach uses non-parametric density estimation
to estimate the probability distribution as a sum of many kernels. In our implementation, KDE is
based on the Gaussian kernel function. The synthetic data are then sampled
from this model (see the sketch following this list).
\end{itemize}
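The sketch below illustrates the three generators using NumPy and scikit-learn; this is an illustration only (TUTOR's actual implementation may differ), and the stand-in feature matrix as well as the component count and bandwidth values are hypothetical:
\begin{verbatim}
# Illustrative sketch: fit the three generators to a feature
# matrix X and draw synthetic rows from each.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))    # stand-in for the real features

# multi-variate Normal distribution (MND):
mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
syn_mnd = rng.multivariate_normal(mu, cov, size=1000)

# Gaussian mixture model (component count would be chosen on the
# validation dataset):
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
syn_gmm, _ = gmm.sample(1000)

# kernel density estimation with a Gaussian kernel:
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
syn_kde = kde.sample(1000, random_state=0)
\end{verbatim}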
\noindent
\textbf{Building a knowledge base (KB)}:
After generation of the synthetic data, we need to label the data points. To this end, we build a
KB from the real training dataset. Decision tree (DT) and random forest (RF) are two
classical machine learning methods that are inherently rule-based. In fact, each decision path in a
decision tree, from the root to a leaf, can be thought of as a rule. Therefore, we aim to identify
the set of rules that best describes the data. We use such a model as a KB to label the generated
synthetic dataset.
\noindent
\textbf{Training with synthetic data}:
We use the labeled synthetic data to impose a prior on the DNN weights. To accomplish this, we
pre-train the DNN model by using the generated synthetic dataset. This provides the network
with an appropriate inductive bias and helps the network to ``get underway.''
This helps improve accuracy when data availability is limited.
\subsection{Grow-and-prune synthesis of the DNN}
In this section, we discuss the grow-and-prune synthesis paradigm
\cite{dai2017nest,hassantabar2019scann}. The approach presented in \cite{hassantabar2019scann}
allows the depth of the DNN to grow during synthesis. Thus, a hidden neuron can receive inputs from
any neuron activated before it (including input neurons) and can feed its output to any neuron
activated after it (including output neurons). As a result, the depth of the model is determined
based on how the hidden neurons are connected, enabling the depth to be changed during
training. We use three basic architecture-changing operations in the grow-and-prune synthesis process
that are discussed next.
\noindent
\textbf{Connection growth}:
This activates the dormant connections in the network. The weights of the added connections are set
to $0$ and trained later. We use two different methods for connection growth:
\begin{itemize}
\item \textbf{Gradient-based growth}: This approach was first introduced by Dai et
al.~\cite{dai2017nest}. Algorithm \ref{alg:gradient-growth} shows the process of gradient-based
growth. Each weight matrix has a corresponding binary mask of the same size. This mask is used to
disregard the inactive connections. The algorithm adds connections to reduce the loss function
$\mathcal{L}$ significantly. To this end, the gradients of all the dormant connections are evaluated
and their effectiveness ranked based on this metric. During a training epoch, the gradients of all
the weight matrices for all the data mini-batches are captured in the back-propagation step. An
inactive connection is activated if its gradient magnitude is large relative to the gradients in
its associated layer.
\item \textbf{Full growth}: This connection growth restores all the dormant connections in
the network to make the DNN fully-connected.
\end{itemize}
\begin{algorithm}[h]
\caption{Connection growth algorithm}
\label{alg:gradient-growth}
\begin{algorithmic}[1]
\REQUIRE
$W \in R^{M \times N}$: weight matrix of dimension $M \times N$ (connecting layer with $M$ neurons to layer with $N$ neurons);
$Mask \in R^{M \times N}$: weight mask of the same dimension as the weight matrix;
Network $P$; $W.grad$: gradient of the weight matrix (of dimension $M \times N$); data $D$;
$\alpha$: growth ratio
\IF{full growth}
\STATE $Mask_{[1:M, 1:N]} = 1 $
\ELSIF{gradient-based growth}
\STATE {Forward propagation of data $D$ through network $P$ and then back propagation}
\STATE {Accumulation of $W.grad$ for one training epoch}
\STATE {$t = (\alpha \times MN)^{th}$ largest element in the $\left|W.grad\right|$ matrix}
\FORALL {$w.grad_{ij}$}
\IF{$\left| w.grad_{ij} \right| > t$}
\STATE {$Mask_{ij} = 1$}
\ENDIF
\ENDFOR
\ENDIF
\STATE $W$ = $W \otimes Mask$
\ENSURE Modified weight matrix $W$ and mask matrix $Mask$
\end{algorithmic}
\end{algorithm}
\noindent
\textbf{Connection pruning}: Connection pruning deactivates the connections that are smaller than a
specified threshold. Algorithm \ref{alg:pruning} shows this process.
\begin{algorithm}[h]
\caption{Connection pruning algorithm}
\label{alg:pruning}
\begin{algorithmic}[1]
\REQUIRE Weight matrix $W \in R^{M \times N}$; mask matrix $Mask$ of the same dimension as
the weight matrix; $\alpha$: pruning ratio
\STATE $t = (\alpha \times MN) ^{th}$ largest element in $\left|W\right|$
\FORALL {$w_{ij}$}
\IF{$\left| w_{ij} \right| < t$}
\STATE {$Mask_{ij} = 0$}
\ENDIF
\ENDFOR
\STATE $W$ = $W \otimes Mask$
\ENSURE Modified weight matrix $W$ and mask matrix $Mask$
\end{algorithmic}
\end{algorithm}
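The sketch below illustrates the shared mask mechanics of Algorithms~\ref{alg:gradient-growth} and \ref{alg:pruning} in PyTorch; it is an illustration rather than our released code. A threshold $t$ is taken as the $(\alpha\times MN)$-th largest magnitude, the binary mask is updated accordingly, and newly grown weights start at $0$, as described above:
\begin{verbatim}
# Illustrative PyTorch sketch of the mask updates.
import torch

def kth_largest(A, k):
    return A.flatten().kthvalue(A.numel() - k + 1).values

def prune(W, mask, alpha):      # keep the alpha*M*N largest |W|
    t = kth_largest(W.abs(), int(alpha * W.numel()))
    mask = mask * (W.abs() >= t).float()
    return W * mask, mask

def grow(W, mask, grad_acc, alpha):
    # activate dormant entries with large accumulated |gradient|
    t = kth_largest(grad_acc.abs(), int(alpha * W.numel()))
    new = (grad_acc.abs() > t).float() * (1.0 - mask)
    W = W * mask                # newly grown weights start at 0
    return W, torch.clamp(mask + new, max=1.0)

W, mask = torch.randn(8, 4), torch.ones(8, 4)
W, mask = prune(W, mask, alpha=0.5)
print(int(mask.sum()), "active connections")   # 16, absent ties
\end{verbatim}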
\noindent
\textbf{Neuron growth}: This step adds neurons to the network and thus increases network size.
This is done by duplicating existing neurons in the architecture. To break the symmetry,
random noise is added to the weights of all the connections related to the newly added neurons.
The neurons to be duplicated are either selected randomly or based on higher activation values.
The process is explained in Algorithm \ref{alg:neuron-growth}.
\begin{algorithm}[h]
\caption{Neuron growth algorithm}
\label{alg:neuron-growth}
\begin{algorithmic}[1]
\REQUIRE Network $P$; weight matrix $W \in R^{M \times N}$; mask matrix $Mask$ of the same
dimension as the weight matrix; data $D$; candidate neuron $n_j$ to be added; array $A$ of activation values for all hidden neurons
\IF{activation-based selection}
\STATE {forward propagation through $P$ using data $D$}
\STATE {$i = argmax~(A)$}
\ELSIF{random selection}
\STATE {randomly pick an active neuron $n_i$}
\ENDIF
\STATE {$Mask_{j\cdot} = Mask_{i\cdot}, Mask_{{\cdot}j} = Mask_{{\cdot}i}$}
\STATE {$w_{j\cdot} = w_{i\cdot} + noise, w_{{\cdot}j} = w_{{\cdot}i} + noise$}
\ENSURE Modified weight matrix $W$ and mask matrix $Mask$
\end{algorithmic}
\end{algorithm}
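The sketch below illustrates neuron duplication on a single hidden layer in PyTorch (an illustration only; the corresponding rows and columns of the masks are duplicated analogously, as in Algorithm~\ref{alg:neuron-growth}):
\begin{verbatim}
# Illustrative sketch: duplicate hidden neuron i and add noise
# to break symmetry.
import torch

def grow_neuron(W_in, W_out, i, noise=0.01):
    new_in = W_in[i:i+1] + noise * torch.randn(1, W_in.size(1))
    new_out = W_out[:, i:i+1] + noise * torch.randn(W_out.size(0), 1)
    return (torch.cat([W_in, new_in], dim=0),
            torch.cat([W_out, new_out], dim=1))

W_in, W_out = torch.randn(128, 64), torch.randn(10, 128)
W_in, W_out = grow_neuron(W_in, W_out, i=7)
print(W_in.shape, W_out.shape)   # (129, 64) and (10, 129)
\end{verbatim}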
We apply connection pruning after neuron growth and connection growth in each iteration.
Grow-and-prune synthesis starts from a fully connected architecture (mask values set to 1) and runs for a pre-defined number of iterations.
Finally, the architecture
that performs the best on the validation dataset is chosen.
\section{Implementation Details}
\label{sect:implementation}
In this section, we first explain how the data were obtained from 87 individuals and how various
datasets were prepared from the data. We also provide implementation details of
the CovidDeep DNN model.
\subsection{Data collection and preparation}
We collected physiological signals and questionnaire data
with Institutional Review Board (IRB) approval at San Matteo Hospital in Pavia, Italy.
Thirty individuals were healthy (referred to as Cohort $1$) and the remaining were SARS-CoV-2-positive
with varying levels of disease severity. The SARS-CoV-2-positive cases were categorized into two
other cohorts: asymptomatic (Cohort $2$ with 27 individuals) and symptomatic
(Cohort $3$ with 30 individuals). Distinguishing among
these cohorts is important to ascertain who may be spreading the virus unknowingly and to determine whether
medical support is needed for symptomatic individuals. Hence, we
train DNN models that can perform three-way classification.
To collect the physiological signals, we used commercially available devices: Empatica E$4$
smartwatch (sensors we found useful: GSR, IBI, skin temperature), a pulse oximeter, and a blood
pressure monitor.
Alongside the physiological signals, we employed a questionnaire to collect information about
possible COVID-$19$-related symptoms from all the individuals. We also collected data
about age, gender, weight, height, and smoking/drinking (yes/no), but did not rely on these features
as they were not necessarily representative of the larger population.
Table \ref{tab:data} shows all the data types that we found to be useful. The
smartwatch data capture the physiological state of the user. GSR measures continuous
variations in the electrical characteristics of the skin, such as conductance, which can be caused by
variations in body sweat. IBI correlates with cardiac health. Furthermore, skin acts as a medium for
insulation, sweat, and control of blood flow. Although it is not a clear indicator of internal body
temperature, skin temperature helps assess skin health.
The pulse oximeter indirectly measures blood oxygen saturation.
It is a comfortable and painless way of measuring how well oxygen is being sent to parts of the
body furthest from the heart, such as the arms and legs. Blood pressure exposes various
underlying health problems. Last, but not the least, the questionnaire elicits information that
may help improve COVID-19 detection accuracy. From all these sources of
data, we derive various subsets as datasets for use in the CovidDeep framework to see
which data features are the most beneficial to obtaining a high detection accuracy. In
addition, the various sensor subsets have different costs. Hence, our results
also let one take test accuracy vs.~cost into consideration.
Before data collection commences, we inform the participants about the procedure. We then collect
some relevant information and COVID-$19$-related symptoms in response to a questionnaire.
We place the pulse oximeter on the index finger of the user for blood oxygen measurement. We also
obtain the systolic/diastolic blood pressure measurements. We place the smartwatch on the
participant's wrist. Data collection lasts for at most one hour for each participant, during
which time we collect sensor data from the smartwatch. We stream the data from the smartwatch to the
smartphone over Bluetooth in real-time using a smartphone application. This application collects the
data and performs basic validation to ensure data integrity.
Next, we pre-process the raw data to generate a comprehensive dataset. To this end, we first
synchronize the WMS data streams. We then divide the data streams into $15$-second data
windows. Next, we split the participants into three different sets: training, validation, and
test. The training set contains data from $52$ individuals, approximately $60\%$ of all the
participants. Among the $52$ individuals represented in the training set, 18 are healthy, 16 are
asymptomatic (but virus-positive), and 18 are symptomatic (and virus-positive). The validation set
consists of data from 17 individuals, approximately $20\%$ of all the participants, with $6$, $5$, and
$6$ individuals from Cohorts $1$, $2$, and $3$, respectively. The test set contains data from 18
individuals, approximately $20\%$ of all the participants, with $6$ individuals from each of the
three cohorts. This data partitioning ensures that all the data collected from any individual
are limited to just one of the three sets. Furthermore, the data instances extracted from each
individual have no time overlap. In addition, in order to conduct ablation studies to gauge the
impact of different data streams, we create different datasets, with various subsets of all the
features.
\begin{table}[]
\caption{Data types collected in the CovidDeep framework}
\label{tab:data}
\centering
\begin{tabular}{ll}
\toprule
Data type & Data source \\
\toprule
Immune-compromised & Questionnaire \\
Chronic lung disease & Questionnaire \\
Shortness of breath & Questionnaire \\
Cough & Questionnaire \\
Fever & Questionnaire \\
Muscle pain & Questionnaire \\
Chills & Questionnaire \\
Headache & Questionnaire \\
Sore throat & Questionnaire \\
Smell/taste loss & Questionnaire \\
Diarrhea & Questionnaire \\
\midrule
Galvanic skin response ($\mu$S)& Smartwatch \\
Skin temperature ($^\circ C$) & Smartwatch \\
Inter-beat interval ($ms$) & Smartwatch \\
\midrule
Oxygen saturation (\%)& Pulse oximeter \\
Systolic blood pressure (mmHg) & Blood pressure monitor\\
Diastolic blood pressure (mmHg) & Blood pressure monitor\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Model implementation}
We have implemented the CovidDeep framework in PyTorch. We perform DNN training on the Nvidia Tesla
P$100$ data center accelerator, with $16$GB of memory. We use cuDNN library to accelerate GPU
processing. Next, we give the details of the implemented DNN architectures trained on the
different datasets.
We train various DNNs (with different numbers of layers and different numbers of
neurons per layer) and verify their performance on the validation dataset.
In general, a four-layer architecture with 256, 128, 128, and 3 neurons, respectively, performs the best.
The number of neurons in the input layer depends on which subset of features is selected for
training the DNN. In the case of the full dataset, the input layer has 194 neurons, which
indicates the dataset dimension. We obtain the features of the dataset from the 15$s$
data window as follows. Sensor data collected from the smartwatch in the data window consist of
180 signal readings, hence 180 features, from the three data streams running at $4$Hz.
We derive 11 features from the 11 questionnaire questions. Finally, we append the pulse oximeter
oxygen saturation measurement and systolic/diastolic blood pressure measurements to obtain a
feature vector of length 194.
We use leaky ReLU as the nonlinear activation function in all the DNN layers. As explained in
Section \ref{sect:methodology}, we generate three DNNs for each dataset: (i) DNN
trained on the real training dataset, (ii) DNN pre-trained on the synthetic dataset
and then trained on the real training dataset, and (iii) DNN synthesized and trained with the
grow-and-prune synthesis paradigm.
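A sketch of this base architecture for the full 194-feature input, reconstructed from the layer sizes stated above (an illustration, not the exact training code; the class name is ours):
\begin{verbatim}
# Illustrative PyTorch sketch of the base architecture.
import torch.nn as nn

class CovidDeepDNN(nn.Module):
    def __init__(self, in_features=194, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 256), nn.LeakyReLU(),
            nn.Linear(256, 128), nn.LeakyReLU(),
            nn.Linear(128, 128), nn.LeakyReLU(),
            nn.Linear(128, num_classes),  # softmax via the loss
        )

    def forward(self, x):
        return self.net(x)
\end{verbatim}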
\subsection{Network training}
\label{sect:training}
We use the stochastic gradient descent optimizer for DNN training, with a learning rate of $5\times 10^{-3}$ and batch size of $256$.
We use $100000$ synthetic data
instances to pre-train the network architecture. Moreover, in the grow-and-prune synthesis
phase, we train the network for $20$ epochs each time the architecture changes. We apply
network-changing operations over five iterations.
In this step, we use pruning to achieve a pre-defined number of connections in the network, chosen
based on performance on the validation set.
\section{Experimental Results}
\label{sect:results}
In this section, we analyze the performance of CovidDeep DNN models.
We target three-way classification among the three cohorts described earlier.
In addition, we perform an ablation study to analyze the impact of different subsets of features
as well as different steps of CovidDeep DNN synthesis.
The CovidDeep DNN models are evaluated with four different metrics: test accuracy, false positive
rate (FPR), false negative rate (FNR), and F$1$ score.
These terms are based on the following:
\begin{itemize}
\item True positive (negative): SARS-CoV-2/COVID-$19$ (healthy) data instances classified as
SARS-CoV-2/COVID-$19$ (healthy).
\item False positive (negative): healthy (SARS-CoV-2/COVID-$19$) data instances classified as
SARS-CoV-2/COVID-$19$ (healthy).
\end{itemize}
These metrics evaluate the model performance from different perspectives. Test accuracy
evaluates its overall prediction power. It is simply the ratio of all the correct predictions
on the test data instances and the total number of such instances.
The FPR is defined as the ratio of the number of negative,
i.e., healthy, instances wrongly categorized as positive (false positives) and the total number of
actual negative instances. The FNR is defined analogously as the ratio of the number of positive, i.e., virus-positive, instances wrongly categorized (false negatives) and the total number of actual positive instances of that type. Since the positive instances come from Cohorts 2 and 3, there is a separate FNR for each of these cohorts.
Because of the three-way classification, the F$1$ score we report is the Macro F1 score.
\subsection{Model performance evaluation}
We obtained the highest test accuracy with a DNN model trained with the
grow-and-prune synthesis paradigm on the dataset that contained features from four
categories: GSR, pulse oximeter (Ox), blood pressure (BP), and questionnaire (Q).
Table \ref{tab:confusion-GP} shows the confusion matrix for three-way classification
among the three cohorts: Cohort 1 (healthy), Cohort 2 (asymptomatic-positive), Cohort 3
(symptomatic-positive), denoted as C1, C2, and C3, respectively. CovidDeep DNN achieves a test
accuracy of 98.1\%. The model achieves an FPR of only 0.8\%. The low FPR means that the model
does not raise many false alarms. It results in a 4.5\% FNR for Cohort 2 and a 0.0\% FNR for Cohort
3, denoted as FNR(2) and FNR(3), respectively (each FNR refers to the ratio of the number of false
predictions for that cohort divided by the total number of data instances of that type).
The low FNRs demonstrate the ability of the DNN model to not miss virus-positive
cases. Moreover, the Macro F1 score of the DNN model is also high: 98.2\%.
\begin{table}[]
\caption{Confusion matrix for the most accurate three-way classification model}
\label{tab:confusion-GP}
\centering
\begin{tabular}{c|ccc|c}
\toprule
Label$\downarrow$\textbackslash Prediction$\rightarrow$ & C1 & C2 & C3 & Total\\
\toprule
C1 & $1066$ & $9$ & $0$ & $1075$ \\
C2 & $54$ & $1152$ & $0$ & $1206$ \\
C3 & $0$ & $0$ & $975$ & $975$ \\
\hline
Total& $1120$ & $1161$ & $975$ & $3256$ \\
\bottomrule
\end{tabular}
\end{table}
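The reported metrics follow directly from this confusion matrix; the short sketch below recomputes them from the entries of Table~\ref{tab:confusion-GP}:
\begin{verbatim}
# Illustrative sketch: metrics from the confusion matrix
# (rows: true C1..C3, columns: predicted C1..C3).
import numpy as np

C = np.array([[1066,    9,   0],
              [  54, 1152,   0],
              [   0,    0, 975]])

acc  = np.trace(C) / C.sum()
fpr  = C[0, 1:].sum() / C[0].sum()   # healthy flagged positive
fnr2 = C[1, 0] / C[1].sum()          # Cohort-2 flagged healthy
fnr3 = C[2, 0] / C[2].sum()          # Cohort-3 flagged healthy
print(f"acc={acc:.3f} fpr={fpr:.3f} "
      f"fnr2={fnr2:.3f} fnr3={fnr3:.3f}")
# -> acc=0.981 fpr=0.008 fnr2=0.045 fnr3=0.000
\end{verbatim}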
Next, we compare the three DNN models, trained on the real training dataset, with the aid of
synthetic data, and with the aid of grow-and-prune synthesis, for the most accurate case in
Table \ref{tab:confusion-3DNNs}. From this comparison, we see that the use of synthetic data and
then grow-and-prune synthesis is able to boost the test accuracy compared to the DNN model trained on
just the real dataset. In addition, we see improvements in the FPR and FNR values. The F1 score
also follows the same trend, increasing with the use of synthetic data, and even more with the use of
grow-and-prune synthesis.
\begin{table*}[]
\caption{Test accuracy, FPR, FNRs, and F1 score (all in \%) for the three DNN models obtained for the
most accurate case}
\label{tab:confusion-3DNNs}
\centering
\begin{tabular}{l|ccccc}
\toprule
DNN model trained on& Acc. & FPR & FNR(2) & FNR(3) & F1 Score\\
\toprule
Real training dataset & $79.9$ & $22.5$ & $34.2$ & $0.0$ & $80.9$\\
Real+synthetic training dataset & $84.8$ & $14.1$ & $28.4$ & $0.0$ & $85.5$ \\
Real+synthetic training dataset + grow-prune& $98.1$ & $0.8$ & $4.5$ & $0.0$ & $98.2$ \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Ablation studies}
In this section, we report results on various ablation studies.
We begin by considering DNN models trained on features obtained from subsets of the six data
categories (five sensors and the questionnaire). This helps us understand the impact of
each of these categories and their various combinations.
Then, we analyze the impact of different parts of the CovidDeep training process, pre-training
with synthetic data, and grow-and-prune synthesis.
Since there are six data categories from which the corresponding features are obtained, there
are 64 subsets. However, one of these subsets is the null subset. Thus, we evaluate the
remaining 63 subsets.
For these evaluations, we only consider the first two types of DNN models, referred to as DNN
Models 1 and 2. We consider grow-and-prune synthesis-based models later.
The results shown in Table~\ref{tab:feature-ablation} correspond to the case when features from
only one, two or three data categories are chosen, and in Table~\ref{tab:feature-ablation2} when
features from four, five or six data categories are chosen.
We first notice that DNN Model 2 generally performs better than DNN Model 1 across the various
performance metrics. This underscores the importance of using synthetic data when the available
dataset size is not large. Second, we observe that since this is a three-way classification,
only 33.3\% accuracy is possible by randomly predicting one of the three Cohorts. Thus, even
single data categories (GSR, Temp, IBI, Ox, BP, Q) enable much better prediction than by chance.
These single data categories are still only weak learners of the correct label
when used in isolation.
Third, DNN models, in general, tend to perform better on the
various performance metrics when more data categories are used. However, this is not always
true.
For example, we obtain the highest accuracy of 93.6\% with DNN Model 2 when only
features from four (GSR, Temp, Ox, BP) of the six categories are used. Adding features based on IBI or
Q or both to these four categories actually reduces the test accuracy.
This may be due to the curse of dimensionality.
When the number of features increases, in general, the dataset size needs to be increased to
obtain a good accuracy.
For a fixed dataset size, this curse indicates that the number of features
should be reduced.
However, throwing out informative features would also reduce accuracy.
In addition, some features are interactive, i.e., work synergistically to increase accuracy.
Hence, a balance has to be found between accuracy and the number of features.
Finally, when not all sensors are available (perhaps due to cost reasons), a suitable set that still provides reasonable
accuracy can be chosen based on the given cost budget.
This may help a broader cross-section of the population access the technology.
\begin{table*}[]
\caption{Test accuracy, FPR, FNRs, and F1 score (all in \%) for two DNN models obtained for feature
subsets from one, two or three data categories}
\label{tab:feature-ablation}
\centering
\begin{tabular}{l|ccccc|ccccc}
\toprule
& \multicolumn{5}{c|}{DNN Model 1} & \multicolumn{5}{c}{DNN Model 2}\\
Data category& Acc. & FPR & FNR(2) & FNR(3) & F1 Score & Acc. & FPR & FNR(2) & FNR(3) & F1 Score\\
\toprule
GSR & $54.2$ & $22.1$ & $23.3$ & $99.6$ & $44.6$ & $54.2$ & $22.1$ & $23.4$ & $99.5$ & $44.7$ \\
Temp & $57.2$ & $31.5$ & $60.3$ & $33.4$ & $57.5$ & $58.6$ & $32.2$ & $60.2$ & $28.2$ & $58.7$ \\
IBI & $66.6$ & $55.1$ & $24.0$ & $21.1$ & $65.6$ & $66.8$ & $53.1$ & $25.1$ & $21.1$ & $66.0$ \\
Ox & $45.4$ & $56.2$ & $59.6$ & $46.7$ & $45.5$ & $45.4$ & $56.2$ & $59.6$ & $46.7$ & $45.5$ \\
BP & $44.3$ & $96.3$ & $60.3$ & $5.2$ & $36.4$ & $44.3$ & $96.3$ & $60.3$ & $5.2$ & $36.4$ \\
Q & $61.4$ & $0.0$ & $100.0$ & $5.2$ & $53.5$ & $63.0$ & $0.0$ & $100.0$ & $0.0$ & $54.7$ \\
GSR+Temp & $57.2$ & $33.4$ & $60.3$ & $31.4$ & $57.3$ & $76.9$ & $6.4$ & $44.1$ & $15.4$ & $76.5$ \\
GSR+IBI & $74.9$ & $3.2$ & $34.6$ & $37.4$ & $74.3$ & $76.1$ & $3.6$ & $31.9$ & $36.3$ & $75.5$ \\
GSR+Ox & $52.7$ & $29.0$ & $44.2$ & $71.3$ & $51.3$ & $47.5$ & $44.3$ & $44.7$ & $71.3$ & $46.1$ \\
GSR+BP & $55.2$ & $70.7$ & $53.8$ & $5.2$ & $52.7$ & $64.1$ & $46.4$ & $51.2$ & $5.2$ & $63.7$ \\
GSR+Q & $89.1$ & $6.8$ & $23.3$ & $0.0$ & $89.6$ & $89.2$ & $6.7$ & $23.3$ & $0.0$ & $89.7$ \\
Temp+IBI & $68.1$ & $19.3$ & $53.9$ & $18.8$ & $68.4$ & $68.2$ & $19.9$ & $52.9$ & $18.9$ & $68.6$ \\
Temp+Ox & $48.3$ & $26.3$ & $78.4$ & $46.7$ & $46.5$ & $49.3$ & $24.2$ & $77.7$ & $46.7$ & $47.3$ \\
Temp+BP & $50.3$ & $84.5$ & $54.7$ & $5.2$ & $45.9$ & $53.7$ & $74.0$ & $54.7$ & $5.2$ & $50.9$ \\
Temp+Q & $68.9$ & $26.5$ & $60.4$ & $0.0$ & $69.8$ & $69.0$ & $26.3$ & $60.3$ & $0.0$ & $69.9$ \\
IBI+Ox & $48.1$ & $60.4$ & $68.0$ & $22.7$ & $49.8$ & $49.0$ & $58.3$ & $68.0$ & $22.1$ & $50.7$ \\
IBI+BP & $47.8$ & $92.8$ & $54.0$ & $5.2$ & $44.8$ & $48.5$ & $89.8$ & $54.9$ & $5.2$ & $46.3$ \\
IBI+Q & $80.9$ & $19.5$ & $34.2$ & $0.0$ & $81.8$ & $80.9$ & $17.8$ & $35.8$ & $0.0$ & $81.7$ \\
Ox+BP & $59.6$ & $56.2$ & $54.8$ & $5.2$ & $59.1$ & $66.9$ & $56.2$ & $35.0$ & $5.2$ & $66.8$ \\
Ox+Q & $50.2$ & $56.2$ & $80.2$ & $5.2$ & $52.5$ & $50.2$ & $56.2$ & $80.2$ & $5.2$ & $52.5$ \\
BP+Q & $51.8$ & $56.2$ & $80.1$ & $0.0$ & $49.9$ & $57.6$ & $56.2$ & $60.3$ & $5.2$ & $56.8$ \\
GSR+Temp+IBI & $70.5$ & $11.5$ & $54.7$ & $17.9$ & $70.8$ & $76.6$ & $3.5$ & $46.0$ & $17.2$ & $76.7$ \\
GSR+Temp+Ox & $69.1$ & $22.1$ & $33.5$ & $37.2$ & $70.0$ & $69.7$ & $23.1$ & $27.1$ & $42.4$ & $70.2$ \\
GSR+Temp+BP & $57.0$ & $64.0$ & $54.8$ & $5.2$ & $55.4$ & $67.0$ & $34.2$ & $54.4$ & $5.2$ & $66.4$ \\
GSR+Temp+Q & $83.6$ & $0.2$ & $44.2$ & $0.0$ & $83.9$ & $91.3$ & $0.2$ & $23.3$ & $0.0$ & $91.7$ \\
GSR+IBI+Ox & $64.8$ & $14.0$ & $45.4$ & $45.8$ & $64.8$ & $70.8$ & $19.1$ & $43.2$ & $23.0$ & $71.7$ \\
GSR+IBI+BP & $60.2$ & $34.4$ & $52.8$ & $29.5$ & $61.5$ & $64.3$ & $32.2$ & $43.7$ & $29.5$ & $64.8$ \\
GSR+IBI+Q & $87.7$ & $11.2$ & $23.3$ & $0.0$ & $88.3$ & $88.8$ & $7.7$ & $23.3$ & $0.0$ & $89.4$ \\
GSR+Ox+BP & $71.3$ & $40.7$ & $37.1$ & $5.2$ & $71.2$ & $81.9$ & $23.1$ & $4.1$ & $29.8$ & $82.1$ \\
GSR+Ox+Q & $69.9$ & $22.9$ & $56.7$ & $5.2$ & $71.0$ & $75.5$ & $22.7$ & $41.8$ & $5.2$ & $76.7$ \\
GSR+BP+Q & $63.9$ & $26.5$ & $73.8$ & $0.0$ & $62.3$ & $64.1$ & $25.9$ & $73.8$ & $0.0$ & $62.4$ \\
Temp+IBI+Ox & $57.4$ & $38.9$ & $62.4$ & $22.2$ & $57.5$ & $61.8$ & $30.7$ & $57.8$ & $22.2$ & $61.8$ \\
Temp+IBI+BP & $55.8$ & $71.6$ & $51.2$ & $5.2$ & $53.9$ & $55.3$ & $70.0$ & $54.0$ & $5.2$ & $53.0$ \\
Temp+IBI+Q & $73.6$ & $17.2$ & $51.8$ & $5.0$ & $74.5$ & $77.1$ & $9.0$ & $53.6$ & $0.0$ & $77.5$ \\
Temp+Ox+BP & $70.6$ & $34.5$ & $44.2$ & $5.4$ & $72.1$ & $72.3$ & $33.9$ & $40.4$ & $5.2$ & $73.7$ \\
Temp+Ox+Q & $53.3$ & $56.2$ & $71.8$ & $5.2$ & $55.8$ & $53.4$ & $56.2$ & $71.4$ & $5.2$ & $55.9$ \\
Temp+BP+Q & $47.9$ & $46.6$ & $94.9$ & $5.2$ & $43.5$ & $49.9$ & $40.8$ & $94.7$ & $5.2$ & $45.1$ \\
IBI+Ox+BP & $65.0$ & $59.1$ & $37.5$ & $5.2$ & $66.1$ & $64.1$ & $60.8$ & $38.4$ & $5.2$ & $65.0$ \\
IBI+Ox+Q & $54.8$ & $56.2$ & $67.8$ & $5.2$ & $57.2$ & $55.0$ & $56.2$ & $67.2$ & $5.2$ & $57.4$ \\
IBI+BP+Q & $55.9$ & $56.2$ & $65.2$ & $4.6$ & $55.0$ & $53.4$ & $56.2$ & $71.6$ & $5.2$ & $52.3$ \\
Ox+BP+Q & $66.9$ & $56.2$ & $35.0$ & $5.2$ & $68.2$ & $66.9$ & $56.2$ & $35.0$ & $5.2$ & $68.2$ \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[]
\caption{Test accuracy, FPR, FNRs, and F1 score (all in \%) for two DNN models obtained for feature
subsets from four, five or six data categories}
\label{tab:feature-ablation2}
\centering
\begin{tabular}{l|ccccc|ccccc}
\toprule
& \multicolumn{5}{c|}{DNN Model 1} & \multicolumn{5}{c}{DNN Model 2}\\
Data category& Acc. & FPR & FNR(2) & FNR(3) & F1 Score & Acc. & FPR & FNR(2) & FNR(3) & F1 Score\\
\toprule
GSR+Temp+IBI+Ox & $76.6$ & $23.3$ & $27.0$ & $19.2$ & $77.3$ & $74.5$ & $28.5$ & $28.3$ & $18.8$ & $75.2$ \\
GSR+Temp+IBI+BP & $62.5$ & $27.1$ & $53.4$ & $29.2$ & $62.4$ & $73.3$ & $13.6$ & $44.0$ & $19.8$ & $73.4$ \\
GSR+Temp+IBI+Q & $87.1$ & $0.2$ & $34.7$ & $0.0$ & $87.5$ & $89.1$ & $1.6$ & $27.9$ & $0.0$ & $89.6$ \\
GSR+Temp+Ox+BP & $77.6$ & $24.2$ & $34.7$ & $5.2$ & $77.8$ & $93.6$ & $1.7$ & $11.4$ & $5.2$ & $93.7$ \\
GSR+Temp+Ox+Q & $80.7$ & $22.5$ & $27.8$ & $5.2$ & $81.7$ & $81.2$ & $22.5$ & $26.4$ & $5.2$ & $82.2$ \\
GSR+Temp+BP+Q & $60.0$ & $11.5$ & $93.4$ & $5.2$ & $53.2$ & $61.8$ & $11.5$ & $93.0$ & $0.0$ & $54.5$ \\
GSR+IBI+Ox+BP & $75.0$ & $23.3$ & $42.6$ & $5.2$ & $76.1$ & $76.8$ & $24.2$ & $37.0$ & $5.2$ & $77.8$ \\
GSR+IBI+Ox+Q & $69.8$ & $32.2$ & $48.5$ & $5.2$ & $71.4$ & $76.1$ & $40.4$ & $24.5$ & $4.9$ & $77.1$ \\
GSR+IBI+BP+Q & $59.3$ & $32.6$ & $80.3$ & $0.8$ & $57.1$ & $66.2$ & $3.4$ & $84.5$ & $4.6$ & $60.7$ \\
GSR+Ox+BP+Q & $79.9$ & $22.5$ & $34.2$ & $0.0$ & $80.9$ & $84.8$ & $14.1$ & $28.4$ & $0.0$ & $85.5$ \\
Temp+IBI+Ox+BP & $59.2$ & $52.9$ & $58.9$ & $5.2$ & $61.1$ & $66.9$ & $53.8$ & $37.2$ & $5.2$ & $67.9$ \\
Temp+IBI+Ox+Q & $63.1$ & $48.5$ & $52.2$ & $5.2$ & $65.1$ & $62.1$ & $56.2$ & $48.0$ & $5.2$ & $64.0$ \\
Temp+IBI+BP+Q & $54.5$ & $31.9$ & $90.3$ & $5.2$ & $49.8$ & $54.7$ & $30.7$ & $90.7$ & $5.1$ & $49.8$ \\
Temp+Ox+BP+Q & $67.1$ & $56.2$ & $34.5$ & $5.2$ & $68.3$ & $66.8$ & $56.2$ & $35.3$ & $5.2$ & $68.1$ \\
IBI+Ox+BP+Q & $66.9$ & $56.2$ & $35.0$ & $5.2$ & $68.2$ & $66.9$ & $56.2$ & $35.0$ & $5.2$ & $68.2$ \\
GSR+Temp+IBI+Ox+BP & $77.1$ & $29.1$ & $31.8$ & $5.2$ & $78.2$ & $83.3$ & $34.2$ & $10.3$ & $5.2$ & $83.7$ \\
GSR+Temp+IBI+Ox+Q & $67.2$ & $5.8$ & $79.1$ & $5.2$ & $65.3$ & $83.1$ & $20.1$ & $23.5$ & $5.2$ & $83.9$ \\
GSR+Temp+IBI+BP+Q & $64.3$ & $4.7$ & $88.2$ & $5.1$ & $57.8$ & $69.0$ & $15.7$ & $65.8$ & $4.7$ & $67.0$ \\
GSR+Temp+Ox+BP+Q & $83.8$ & $0.4$ & $39.1$ & $5.2$ & $84.2$ & $83.8$ & $0.4$ & $39.1$ & $5.2$ & $84.2$ \\
GSR+IBI+Ox+BP+Q & $71.8$ & $37.5$ & $38.5$ & $5.2$ & $73.3$ & $75.3$ & $23.8$ & $41.1$ & $5.2$ & $76.6$ \\
Temp+IBI+Ox+BP+Q & $62.5$ & $44.8$ & $57.0$ & $5.2$ & $64.5$ & $66.6$ & $48.8$ & $42.4$ & $5.2$ & $68.3$ \\
GSR+Temp+IBI+Ox+BP+Q & $77.8$ & $18.3$ & $39.4$ & $5.2$ & $78.8$ & $83.7$ & $26.9$ & $15.9$ & $5.2$ & $84.1$ \\
\bottomrule
\end{tabular}
\end{table*}
To illustrate the effect of the different parts of the CovidDeep training process, we compare
11 CovidDeep DNN models, trained based on the different DNN synthesis and training steps.
We chose these models from different accuracy ranges.
Table~\ref{tab:NN-ablation3} shows comparison results for the three-way classification task.
We have already compared various performance metrics for DNN Models 1 and 2 earlier. Hence,
here, we just report their accuracy, FLOPs, and number of model parameters (\#Param). The best
DNN Model 3 was obtained with the help of the validation dataset. This enabled us to find the
best \#Param. value. Only this model was tested on the test dataset.
Acc.(1) and Acc.(2), respectively, refer to the accuracy of DNN Models 1 and 2. The FLOPs and
\#Param. for these two models are identical. We report
all the performance metrics for DNN Model 3, which is generated by grow-and-prune synthesis using
both real and synthetic data; the starting point for DNN Model 3 synthesis is DNN Model 2.
Next, we compare DNN Model 3 with the other two models based on various measures and show why
it is suitable for deployment on the edge devices.
\begin{itemize}
\item \textbf{Smaller model size}: It contains $3.4\times$ fewer parameters on average
(geometric mean) than DNN Models 1 and 2, thus significantly reducing the memory requirements
(see the sketch after this list).
\item \textbf{Less computation}: It reduces FLOPs per inference by $3.5\times$ on average
(geometric mean) relative to DNN Models 1 and 2, thus facilitating more efficient inference on the
edge devices.
\item \textbf{Better performance}: It improves accuracy on average by $7.8$\% ($1.9$\%)
relative to DNN Model 1 (2), while also lowering FPR and FNRs, in general.
\end{itemize}
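The averaged reductions quoted above are geometric means of per-feature-set ratios; a minimal sketch of the computation, with illustrative ratios taken from the first rows of Table~\ref{tab:NN-ablation3}, is:
\begin{verbatim}
import math

def geo_mean(ratios):
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# e.g., #Param of DNN Models 1/2 divided by #Param of DNN Model 3:
param_ratios = [68.5 / 10.0, 83.1 / 20.0, 67.7 / 5.0]
print(round(geo_mean(param_ratios), 2))
\end{verbatim}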
\small\addtolength{\tabcolsep}{-1.5pt}
\begin{table*}[]
\caption{Comparison of the three DNN models (all performance metrics in \%) for various feature sets}
\label{tab:NN-ablation3}
\centering
\begin{tabular}{l|cccc|cccccccc}
\toprule
& \multicolumn{4}{c|}{DNN Models 1 and 2} & \multicolumn{7}{c}{DNN Model 3}\\
Data category& Acc.(1) & Acc.(2) &FLOPs &\#Param. & Acc. &FLOPs &\#Param & FPR & FNR(2) & FNR(3) & F1 Score \\
\toprule
GSR+Ox+BP+Q & 79.9 & 84.8 & 136.4k & 68.5k & 98.1 & 19.5k & 10.0k & 0.8 & 4.5 & 0.0 & 98.2 \\
GSR+IBI+Q & 87.7 & 88.8 & 165.6k & 83.1k & 91.5 & 39.5k & 20.0k & 1.3 & 21.9 & 0.0 & 91.9 \\
GSR+Q & 89.1 & 89.2 & 134.9k & 67.7k & 91.3 & 9.5k & 5.0k & 0.2 & 23.2 & 0.0 & 91.7 \\
GSR+Temp+Q & 83.6 & 91.3 & 165.6k & 83.1k & 91.3 & 151.5k & 76.0k & 0.2 & 23.3 & 0.0 & 91.7 \\
GSR+Temp+IBI+Q & 87.1 & 89.1 & 196.3k & 98.4k & 90.7 & 19.5k & 10.0k & 0.2 & 20.7 & 5.2 & 91.0 \\
GSR+Temp+Ox+Q & 80.7 & 81.2 & 166.1k & 83.3k & 87.7 & 119.5k & 60.0k & 0.3 & 28.7 & 5.2 & 88.1 \\
GSR+Temp+IBI+Ox+Q & 67.2 & 83.1 & 196.8k & 98.7k & 86.4 & 59.5k & 30.0k & 11.3 & 22.6 & 5.2 & 87.0 \\
GSR+Temp+IBI+Ox+BP & 77.1 & 83.3 & 192.2k & 96.4k & 84.6 & 59.5k & 30.0k & 29.5 & 11.2 & 5.2 & 85.1 \\
GSR+Ox+BP & 71.3 & 81.9 & 130.8k & 65.7k & 82.4 & 89.5k & 45.0k & 23.8 & 2.1 & 29.8 & 82.5 \\
GSR+Temp+Ox+BP & 77.6 & 93.6 & 161.5k & 81.0k & 82.3 & 129.5k & 65.0k & 25.2 & 21.0 & 5.2 & 82.8 \\
IBI+Q & 80.9 & 80.9 & 134.9k & 67.7k & 81.7 & 19.5k & 10.0k & 29.3 & 23.3 & 0.0 & 82.5 \\
\bottomrule
\end{tabular}
\end{table*}
\section{Discussion and Future Work}
\label{discussion}
In this section, we discuss the inspirations we took from the human brain in the synthesis process of
CovidDeep DNNs. We also discuss future directions in medical research enabled by the CovidDeep
framework.
An interesting ability of the human brain is to efficiently solve novel problems in a new domain
despite limited prior experience. Inspired by this human capability, CovidDeep
uses the TUTOR \cite{hassantabar2020Tutor} approach for synthetic data generation and labeling to
help the neural network start from a better initialization point. Use of gradient descent from
a learned initialization point provides the DNN with an appropriate inductive bias. Hence, it
reduces the need for large datasets that are not readily available for SARS-CoV-2/COVID-$19$
AI research.
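A hedged sketch of distribution-based synthetic data generation in the spirit described above is given below; the actual TUTOR procedure may differ in the fitted distribution and the labeling step, for which we refer to \cite{hassantabar2020Tutor}.
\begin{verbatim}
import numpy as np

def synthesize(X_real, n_synth, seed=0):
    # Fit a multivariate Gaussian to the real training features and
    # sample synthetic feature vectors from it (an assumption; TUTOR's
    # exact generative model may differ).
    rng = np.random.default_rng(seed)
    mu = X_real.mean(axis=0)
    cov = np.cov(X_real, rowvar=False)
    return rng.multivariate_normal(mu, cov, size=n_synth)
\end{verbatim}
The synthetic samples can then be labeled, e.g., by a simple model trained on the small real dataset, and used to pre-train the DNN to a better initialization point.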
The CovidDeep DNN training process takes another inspiration from the human brain development process
in the grow-and-prune synthesis step. The human brain undergoes dynamic changes in its synaptic
connections every second of its lifetime. Acquisition of knowledge depends on these synaptic
rewirings \cite{grossberg1988nonlinear}. Inspired by this phenomenon, CovidDeep utilizes
the grow-and-prune synthesis paradigm to enable DNN architecture adaptation throughout training.
CovidDeep DNNs synthesized with grow-and-prune synthesis do not suffer from the situation faced by
most current DNNs: fixed connections during training. This enables CovidDeep to generate
very compact, yet accurate, models for SARS-CoV-2/COVID-$19$ detection.
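As a minimal illustration of one ingredient of this paradigm, magnitude-based pruning of a weight matrix can be sketched as follows; the gradient-based growth phase and the full synthesis algorithm are described in the cited references.
\begin{verbatim}
import numpy as np

def prune(weights, keep_fraction=0.5):
    # Zero out the smallest-magnitude weights; the returned mask keeps
    # the pruned connections at zero in later training steps.
    thresh = np.quantile(np.abs(weights), 1.0 - keep_fraction)
    mask = (np.abs(weights) >= thresh).astype(weights.dtype)
    return weights * mask, mask
\end{verbatim}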
CovidDeep uses physiological signals extracted using commercially available devices and achieves
high test accuracy. As a result, it provides a testing mechanism that is accurate, easily
accessible to the general public, and easy for individuals to use. Furthermore, this
mechanism only requires a few minutes of data collection from an individual to perform an
inference. Note that at most one hour of data collection from each individual was required, and
only for training of the DNN models. Testing does not require the presence of a nurse or physician.
In fact, besides the data collected by the smartwatch and discrete sensors (for obtaining
blood oxygen and blood pressure), the additional information required by the electronic questionnaire
is small, related to the general health of the subject, and can be easily filled out with a yes/no
answer. Thus, CovidDeep has the potential to significantly decrease the spread of SARS-CoV-2,
save hundreds of thousands of lives, and drastically reduce the need for hospitalization,
while also helping the world economy recover.
CovidDeep demonstrates that WMS-based SARS-CoV-2/COVID-19 detection is feasible. Previously, diabetes
diagnosis was shown to be possible with the help of such sensors \cite{yin2019diabdeep}.
We believe that WMS-based disease detection is feasible for a large number of diseases
\cite{yin2017health}.
Since data were collected from only 87 individuals, even though they were augmented with synthetic
training data drawn from the real training data probability distribution, more work is needed
to validate the various DNN models in the field, especially since the data were obtained
from a single location in Italy. This process has begun across various continents.
\section{Conclusion}
\label{conclusion}
In this article, we proposed a framework called CovidDeep to facilitate daily and pervasive
detection of SARS-CoV-2/COVID-19. The framework combines off-the-shelf WMSs with efficient DNNs to
achieve this goal. CovidDeep DNNs can be easily deployed on edge devices (e.g., smartphones and
smartwatches) as well as servers. CovidDeep uses synthetic data generation to alleviate the need for
large datasets. In addition, training of CovidDeep DNNs based on the grow-and-prune synthesis
paradigm enables them to learn both the weights and the architecture during training.
CovidDeep was evaluated based on data collected from 87 individuals. The highest accuracy it
achieves is 98.1\%. However, several subsets of features that correspond to easily accessible
sensors in the market also achieve high enough accuracy to be practically useful.
With more data collected from larger deployment scenarios, the accuracy of CovidDeep DNNs can
be improved further through incremental learning.
\noindent
{\bf Contributions:} The SARS-CoV-2/COVID-19 detection project was conceived by Niraj K. Jha. He
also supervised the dataset preparation and DNN model generation efforts. Shayan Hassantabar
performed DNN synthesis and evaluation. Vishweshwar Ghanakota developed the smartphone application
for data collection, authenticated the credentials of the application sending data, ensured data
integrity, and ran pre-processing scripts. Gregory N. Nicola MD and Ignazio R. Marino MD defined the
patient cohorts, and helped with the IRB approval process. Gregory N. Nicola MD, Ignazio R. Marino
MD, and Bruno Raffaele decided on the questions to be placed in the questionnaire. Novati Stefano,
Alessandra Ferrari, and Bruno Raffaele collected data from patients and healthy individuals and
labeled the data. Kenza Hamidouche helped with the synthesis and evaluation of the DNN models.
All co-authors helped with the revision and editing of the manuscript.
\noindent
{\bf Acknowledgments:} The project was facilitated by the tireless efforts of Bob Schena (CEO,
Rajant Corp.) and Adel Laoui (CEO, NeuTigers, Inc.). Giana Schena and Maria Schena helped with
buying and transporting the instruments as well as English-to-Italian translations of various
documents. Joe Zhang helped initially with feature extraction from the raw dataset.
Claudia Cirillo coordinated the administrative work and helped with translation of
documents to Italian for the IRB application. Ravi Jha helped with proofreading of the manuscript. The Chief of the Italian Police, Franco Gabrielli, helped ensure safe and fast entrance and transfer of US researchers on Italian soil during the
COVID-19 lockdown.
\noindent
{\bf Competing interests:}
Five of the co-authors of this article, Niraj K. Jha, Shayan Hassantabar, Vishweshwar
Ghanakota, Gregory N. Nicola MD, and Kenza Hamidouche have equity in NeuTigers, Inc. NeuTigers, along
with Rajant Corporation and Thomas Jefferson University and Jefferson Health, enabled data collection
from San Matteo Hospital, Pavia, Italy.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec1}
In the first parts of this study \citep{Rosenbush202Xa, Rosenbush202Xb}, we reviewed the photometric data of about 500 novae, including classical and recurrent novae, in the active phase. The review was carried out by comparing the light curves displayed in modified scales: the logarithmic scale of the radius of the shell ejected during a nova outburst and the magnitude scale. The calibration of each scale was defined by certain rules. The brightness of a nova was normalized to the level of the quiet state, i.e. we were dealing with the outburst amplitude. The abscissa scale was normalized to the moment of maximum brightness. For fast novae, this was the principal light maximum, which is often well defined. For slow novae, the moment of maximum was taken to be the transition from a rapid increase in brightness to a slower growth before the maximum values, which can also be identified with the known pre-maximum halt before the final rise. From this moment, the state of maximum brightness begins, the duration of which can be compared with the duration of the nova state with brightness above a certain level according to \cite{Arp1956}. Since at maximum brightness there is an ejection of matter forming an expanding shell, in which dust condensation occurs at a certain radius \citep{Clayton1976}, among other processes, and assuming that the speed of shell expansion is constant, we switched from the time scale to the shell radius scale in a logarithmic representation. Some controlled nuances could arise in the determination of these parameters; we therefore refer the reader to the first parts of our study for the details of the entire procedure for displaying the light curve in modified scales. An important result of this approach was the stable shape of the light curves at each stage of the outburst and in the regions of transition from one stage to another, which made it possible to extrapolate the missing parts of the light curves. Extrapolating the light curves of modern novae forward and confirming the preliminary results as new photometric data became available served for us as a verification of this result.
The result of our review was a confirmation of our preliminary results of 1999 \citep{Rosenbush1999a, Rosenbush1999b, Rosenbush1999c, Rosenbush2002} about the existence of groups of classical (CN) and recurrent (RN) novae with certain forms of light curves. The shape of the light curve is determined almost uniquely in the presence of sufficient photometric material. Between the groups of novae there are differences in certain properties: possible dust condensation and the geometric shape of the ejected shells. The unique V1280 Sco light curve, along with the recurrent novae, became the foundation of our research, which simplified the understanding of the light curve comparison process.
Naturally, there were classical novae with spectral confirmation of the outburst type, but with a unique light curve, represented by only 1-2 exemplars \citep{Rosenbush202Xb}. When preparing materials for publication, an object with unique behaviour appeared. Its light curve, with rare observations before the final brightness decline, allowed us to classify it as a nova of the CP Pup group. This object - Nova Cen 2005 - suddenly brightened again by almost 6$^{m}$ in 2019 as a slow nova. By the end of the visibility season of the object, it became clear to us that the nova, if the 2019 event is viewed separately from the 2005 outburst, may have a light curve typical of the HR Del subgroup of the RR Pic group. We postponed the final decision until the visibility season of 2020 and can now draw some conclusions regarding this very unique classical nova more confidently.
\section{Nova Cen 2005}\label{sec2}
Classical Nova Cen 2005, V1047 Cen, was discovered by \cite{Liller2005}. The brightness of this fast nova reached 8.5$^{m}$ in an orange filter. The end of the 2005 visibility season, the weakness of the object in the next season and the absence of any peculiar features resulted in a light curve with a small number of observations and a considerable scatter of points.
Photometric data before the re-brightening of 2019 supported the opinion that the 2005 outburst had ended: during the years 2013-2018 V1047 Cen had a mean magnitude of I=17.12${\pm}$0.10$^{m}$ \citep{Mroz2019, Geballe2019}.
\section{Re-brightening of 2019 - rarely observed activity of a classical nova during the final brightness decline stage}\label{sec3}
In the 2019 outburst, the light curve of V1047 Cen was radically different from the 2005 one if considered as an independent event: the amplitude of the outburst was like that of a dwarf nova (DN) \citep{Geballe2019}. DN outbursts of classical novae are not uncommon: the most famous examples are GK Per and V446 Her (two members of the CP Pup group \citep{Rosenbush202Xb}) and V1017 Sgr (whose outburst in 1917 is a candidate for a RN of the T Pyx group \citep{Rosenbush202Xa}). But as \cite{Geballe2019} noted, the 14 years between the CN and DN outbursts of V1047 Cen is the shortest interval of all known ones and is a problem for standard theories. Based on their discussion of the near-infrared spectroscopy, they are inclined to believe that it was nevertheless a DN outburst.
The presentation of the 2005 outburst light curve in our modification indicates that V1047 Cen in 2019 was still in the second half of the final brightness decline stage.
We briefly recall that the modified light curve is the behaviour of the nova brightness relative to its quiet state, i.e., on the scale of outburst amplitude \citep{Rosenbush1999a, Rosenbush202Xb}, as a function of the logarithm of the radius of the shell ejected during the outburst
\begin{equation}
\log(r)=\log(t-t_{0}) + C,
\end{equation}
where C=13.1 is the constant by which the abscissa scale is calibrated, and which is equal to the logarithm of the radius r$_{0}$ of the shell at the moment of its ejection from the nova. The time t and the shell radius r in expression (1) are in days and centimetres, respectively. The corresponding parameters for the V1047 Cen outbursts of 2005 and 2019 are given in Table 1. Such a representation of the abscissa scale simultaneously gives an idea of the geometric dimensions of the shell in the absence of corresponding spectral observations.
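For illustration, the transformation of expression (1) can be sketched in a few lines; the parameter values are those of Table 1, and the function name is ours.
\begin{verbatim}
import numpy as np

C = 13.1  # calibration constant, log10 of the shell radius r0 (cm)

def modified_curve(t_jd, mag, t0_jd, m_quiet):
    # abscissa: log of the shell radius; ordinate: outburst amplitude
    log_r = np.log10(t_jd - t0_jd) + C
    amplitude = m_quiet - mag
    return log_r, amplitude

# e.g., for the 2005 outburst of V1047 Cen: t0_jd=2453614, m_quiet=20.5
\end{verbatim}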
An important parameter for us is the amplitude of the outburst. The progenitor of V1047 Cen is missing from the SuperCOSMOS catalogue \citep{Geballe2019}, which is 90 per cent complete to R$\approx$19.5$^{m}$ and B$_{J}$$\approx$20.5$^{m}$ \citep{Hambly2001}. In our approach to determining this parameter, this provides the basis for setting the outburst amplitude of V1047 Cen equal to the average amplitude of the prototypes of the group with which the best match is found. From this we can estimate the brightness of V1047 Cen in the quiet state as near 20.5$^{m}$ (Table 1).
The modified V1047 Cen light curve during the 2005 outburst with the parameters of Table 1 is nearly identical to the light curve of V476 Cyg, one of the prototypes of the CP Pup group (Fig.1). That is, it was an outburst of a classical nova that had not ended as of 2020: by analogy with V476 Cyg, the brightness still needs to decline by 4-5$^{m}$ to reach the level of normal brightness. At the final brightness decline stage of V1047 Cen, represented before the 2019 re-brightening by the Gaia mission data, the main trend of the nova brightness decline, which gave way to the outburst of 2019, is clearly visible. The long-term variability with the low amplitude of 0.3-0.4$^{m}$ in the photometric I-band mentioned by \cite{Mroz2019} is also part of this general trend. By this time, the energy distribution in the spectrum of the white dwarf - the post-nova - had returned to its original state, and the observed light curve is the re-radiation by the expanding shell of the radiation of the central hot source.
\begin{center}
\begin{table}[t]%
\centering
\caption{Modified light curve parameters of V1047 Cen.\label{tab1}}%
\tabcolsep=0pt%
\begin{tabular*}{220pt}{@{\extracolsep\fill}rcc@{\extracolsep\fill}}
\toprule
\textbf{Year}& \textbf{Adopted magnitude} & \textbf{Maximal brightness} \\
\textbf{}& \textbf{of quiescence, m$_{q}$} & \textbf{moment, t$_{0}$, JD} \\
\midrule
2005 & V=20.5$^{m}$ & 2453614 \\
2019 & V=20.5$^{m}$ & 2458658 \\
\bottomrule
\end{tabular*}
\end{table}
\end{center}
\begin{figure}[t]
\centerline{\includegraphics[width=78mm]{Fig1}}
\caption{Complete light curve of V1047 Cen in the 2005 outburst (dashed line with pluses), V476 Cyg (dots) - one of the prototypes of the CP Pup group, and V1280 Sco (dashed line). The amplitude scale is normalized to the average one of the outbursts of the 4 prototypes of the CP Pup group \citep{Rosenbush202Xb} (the dashed horizontal line corresponds to the average "null" magnitude of the quiescent state of the four prototypes of the group). Data from AAVSO, Gaia and \cite{Liller2005} were used.\label{fig1}}
\end{figure}
Belonging to a group implies the presence of common details of the light curves as a result of the same physical and other causes of the outbursts. Therefore, when comparing with the main prototypes of our classification scheme, we can pay attention to interesting details of the light curve of V1047 Cen. For example, the beginning of the 2019 outburst coincides with the completion of a small plateau on the light curves of the prototypes (V476 Cyg, CP Pup) at log(r)$\approx$16.3 (Fig.1). That is, the central source has a certain activity at this time, and this may be a common characteristic of novae in this group. The matter ejected during the principal maximum of the outburst, which is significantly responsible for the visual brightness, is at this time at a distance of about 1000 AU from the central hot source.
The maximum brightness level of V1047 Cen in the re-brightening was 1$\div$1.5$^{m}$ higher than the typical brightness level of novae in this group at the transition stage of the outburst.
In our review, the unique nova V1280 Sco \citep{Rosenbush202Xb}, which 13 years after the outburst continues to be at an unusually high level of brightness, played a decisive role. The brightness of V1047 Cen in the re-brightening of 2019 briefly reached, on the logarithmic scale, the brightness of V1280 Sco. At the same time, the beginning of the re-brightening almost coincided with the moment of appearance of quasi-periodic temporal light dips with an amplitude of about 1$^{m}$.
\section{Discussion}\label{sec4}
Proceeding from one of our starting points \citep{Rosenbush202Xa, Rosenbush202Xb} - that the similarity/coincidence of light curves means equal/close physical and geometric parameters of the binary systems in which these novae arose - we can also allow equal outburst amplitudes. That is, with an average modern visual brightness in the quiet state of V476 Cyg of about 17.4$^{m}$ and outburst amplitudes of about 15.5$^{m}$ for both novae, we get a possible quiet-state brightness for V1047 Cen of about 24$^{m}$, which corresponds to the absence of a pre-nova in the SuperCOSMOS catalogue. The 2005 outburst of V1047 Cen should end no earlier than 2030 if the quiet-state brightness of the star is about 20.5$^{m}$; if this brightness is fainter, then the end of the outburst is delayed to a later time. Here one can ask the obvious question of what should be considered the end of an outburst, but there is no obvious answer.
Secondary outbursts of novae are not rare. The authors of the first detailed study of V1047 Cen \citep{Geballe2019} drew attention to this. The classical nova GK Per, also a member of the CP Pup group, after the outburst of 1901 showed several more outbursts as a dwarf nova of small amplitude. V1017 Sgr in 1919 showed an outburst, in our interpretation \citep{Rosenbush202Xa}, typical of a recurrent nova of the T Pyx group, and at least three more DN outbursts were recorded \citep{Salazar2017}. These outbursts occurred a long time after the principal outburst, unlike V1047 Cen with its 14 years between outbursts. An example of a closer relationship between the principal and secondary outbursts is the recurrent nova T CrB, which in two known cases repeated the secondary re-brightening at the same time after the principal one; the delay time was shorter compared to V1047 Cen. We add that the difference in amplitudes of the primary and secondary outbursts of these two novae is the same 5-5.5$^{m}$.
The secondary outburst of V1047 Cen, unlike all the other secondary DN outbursts in CNe mentioned above, has the highest amplitude, nearly 5$^{m}$ above the current brightness level. But such outburst amplitudes, after our first experiment \citep{Rosenbush1999c}, we discuss with caution: against the background of small outburst amplitudes, it is more difficult to distinguish characteristic details; nevertheless, outbursts of the so-called “tremendous outburst amplitude dwarf novae” (TOADs) were under our control \citep{Rosenbush202Xb}. The lowest outburst amplitudes among classical novae were found among the members of the HR Del subgroup of the RR Pic group \citep{Rosenbush202Xb}.
If the outburst of 2019 is regarded as a phenomenon independent of the outburst of 2005, then even at the end of the 2019 season of visibility of the sky region with V1047 Cen, the modified light curve with the parameters of Table 1 was very similar to the HR Del light curve (Fig.2), though it had not yet reached the final brightness decline stage. This became the basis for us not to include this nova in the second part of our study. With the resumption of observations in 2020, it became apparent that the outburst of 2019 had passed into the stage of the final brightness decline, and it is now possible to compare it with the prototypes more definitely.
In Fig.2, our attention is immediately attracted by the earlier beginning of the final brightness decline stage and the faster brightness decline of V1047 Cen in comparison with HR Del. We tend to consider such differences for low-amplitude outbursts to be sufficient to doubt their identity (a similar difference became the basis for us to form the HR Del subgroup in the RR Pic group).
In a search for analogues among little-studied novae with small recorded outburst amplitudes, for which it was not possible for us to draw certain conclusions, we did not find similar light curves. A large role here was probably played by the selection of observations due to the low brightness of novae at this stage.
\begin{figure}[t]
\centerline{\includegraphics[width=78mm]{Fig2}}
\caption{Light curve of V1047 Cen (line with pluses) during the 2019 “isolated” outburst compared to the HR Del, Nova Del 1967 (dots) light curve \citep{Rosenbush202Xb}. For comparison, the modified light curves of dwarf nova outbursts in the post-nova system OGLE-2004-BLG-081 (schematic version, line with dots, \citep{Mroz2015}) and GK Per in 2015 (dots, visual data of the AAVSO) are displayed in the left corner.\label{fig2}}
\end{figure}
Here we must understand that the observed light curve during the re-brightening against the background of the final stage of the principal outburst is formed by two sources of visual radiation: a source from the principal outburst, having a brightness of slightly more than 2$^{m}$, and a source associated with the re-brightening, adding to a total brightness of up to 7$^{m}$ above the accepted quiet-state brightness level. It is clear that the first source is the shell ejected during the principal outburst. One can only make assumptions about the second source, starting with a repeated ejection of matter and/or an increase in the activity of the post-nova. \cite{Geballe2019}, from the spectral line profiles, considered, but ultimately rejected, the possibility that the 2019 profile had been generated by two separate outflows. The combined emission of these two sources possibly caused the 1-1.5$^{m}$ excess of the typical brightness of V1047 Cen at the transition stage of the outburst, mentioned in Section 3.
In connection with the re-brightening, let us once again draw attention to the recurrent nova T CrB \citep{Rosenbush202Xa}, already mentioned above, with its secondary maximum and compare it with the classical novae of the CP Pup group \citep{Rosenbush202Xb}. Fig.3 shows that the secondary maximum of T CrB coincides with the end of the transition stage and the beginning of the final stage in the V476 Cyg outburst. [Recall that the abscissa scale in our interpretation is equivalent to the logarithmic time scale according to (1).] In this interval of times/radii, classical novae develop a nebular spectrum. Interestingly, the depression of RS Oph in the final stage falls within the same interval, i.e. the depression of RS Oph and the secondary maximum of T CrB occur "simultaneously".
\begin{figure}[t]
\centerline{\includegraphics[width=78mm]{Fig3}}
\caption{Light curves of T CrB in 1946 (dots) and V476 Cyg (broken dashed line) compared to the possible hypothetical light curve of HR Del (broken line with dots; the nova name is added next to the final part at the quiet brightness level) during the outburst of 1967 as the secondary to a principal outburst of the T CrB type.\label{fig3}}
\end{figure}
Hypothetically, HR Del, before the recorded outburst of 1967, could have had an outburst like that of a RN of the T CrB group \citep{Rosenbush202Xa}.
We proceed from the fact that the subgroup of very slow low-amplitude novae such as HR Del is very small and contains only 4 members that we confidently identified out of the total number of 235 classical novae in our resulting list, i.e. less than 2\%. Let us make the assumption that the observed HR Del outburst of 1967 was a secondary outburst relative to the principal one. Here we consider two options: an outburst of a classical nova and an outburst of a recurrent nova.
The case of a principal HR Del outburst as a classical nova can be assessed by reviewing the known photometry until 1967, knowing the brightness of the pre- and post-nova, equal to V$\approx$12.1$^{m}$ \citep{Strope2010}. In the classical nova interpretation, the principal outburst of HR Del would have occurred in the first half of the 1950s and the nova would have had a maximum brightness of -1$\div$0$^{m}$ and above. These more than 10 years are necessary for the classical nova to have time to return to a state close to the initial one. The available photometric data \citep{Collazzi2009} exclude such a bright state of HR Del. Also, the distance to HR Del from the shell expansion parallax, d=970$\pm$70 pc \citep{Harman2003}, is consistent with the corrected distance from the Gaia mission, d=932(+32/-29) pc \citep{Bailer-Jones2018}, i.e. the shell was ejected namely in 1967. Therefore, the variant with a principal outburst like that of T CrB and without a shell ejection could have taken place here. The duration of the principal outburst was less than 50$^{d}$ and it occurred during a gap in observations. In this variant we should assume that the secondary maximum was connected with the ejection of a large mass of matter concentrated in the region of the shell "equator" \citep{Harman2003}.
The parameters of such an outburst are: t$_{0}$=JD 2439545 and a quiet-state brightness of 12.1$^{m}$. A possible HR Del light curve with a principal outburst of the type of a RN of the T CrB group and a classical secondary one is shown schematically in Fig.3: only the maximum for the first and the actually observed curve for the secondary. It is noteworthy that in this hypothetical version, the secondary outburst lies precisely at the end of the transition stage and completely within the final stage of V476 Cyg, up to the small plateau at log(r)$\approx$16.3.
A comparison of the modified light curves of a dwarf nova outburst in a post-nova system similar to GK Per, etc., and the re-brightening of V1047 Cen shows their significant difference (Fig.2) [this is also evident when comparing the traditional light curves]. Earlier \citep{Rosenbush202Xa, Rosenbush202Xb} we came to the conclusion that already a shift of 0.2 needed to combine two light curves of similar shape should cause suspicion. Here we have light curves of similar shape, but the abscissa shift, about 1, is very large for similar outburst processes. The outburst amplitude is also smaller. [We do not give the parameters of the modified light curves, since they are not used anywhere further. For us, the fundamental difference between the light curves is what matters.]
\subsection{V3645 Sgr}
When searching for candidates for the same type of 2019 outburst as in V1047 Cen, we drew attention to the little-studied Nova Sagittarii 1970, V3645 Sgr, discovered from the nebular spectrum on an objective-prism plate taken by R. Bartaja and T. Vashakidse at the Abastumani Astrophysical Observatory of Georgia \citep{Arhipova1970}. The light curve restored from archival images by \cite{Arhipova1971} was supplemented by \cite{Sarajedini1984} (two estimates of the brightness from the spectral images, which form a local short flash in the final part of the light curve, are overestimated, according to the opinion of the data authors). The modified light curve (Fig.4) fits well with the prototype of the RR Pic group with parameters t$_{0}$=JD 2440497 and photographic magnitude m$_{q}$=22.3$^{m}$. The maximal magnitude was near 11.8$^{m}$. The quiet-state brightness that we adopted is 4.3$^{m}$ fainter than the estimate of \cite{Arhipova1970}. Due to the low accuracy of the coordinates of V3645 Sgr, 1 arc minute, and the comparison chart given by \cite{Arhipova1970}, \cite{Downes2000} recognized the lack of identification of the post-nova, and hence of the coordinates of V3645 Sgr. Therefore, the data from the Gaia catalogue and its applications \citep{Bailer-Jones2018} relate to a field star. The V3645 Sgr spectrum, which \cite{Surina2013} described as the spectrum of a cold star, also belongs to a field star.
\begin{figure}[t]
\centerline{\includegraphics[width=78mm]{Fig4}}
\caption{Modified light curve of V3645 Sgr vs that of RR Pic, the prototype of the group.\label{fig4}}
\end{figure}
If we focus on the object that is present only in the image in the H$\alpha$ filter of \cite{Downes2000}, then the Gaia DR2 4093261823550037376 object with a magnitude of g=20.11$^{m}$ can be the post-nova. The same object is present on the SuperCOSMOS scans \citep{Hambly2001}, which makes it possible to evaluate it as very blue compared to the field stars: it is not visible on red images but is bright on UKST blue images, and its presence on the red ESO survey image can indicate possible variability.
\section{Conclusion}
The example of the V1047 Cen behaviour in the final stage of the outburst in 2019-2020 draws our attention to the poor knowledge of the processes occurring in the binary system at this stage of the return to quiescence. At this time, the radiation of the expanding envelope remains a good indicator of the state of the central source. The only thing that can slow down the development of investigations in this area of close binary system events is the need for long-term monitoring, since these processes develop at a slow rate.
Another, stronger criterion is also needed to estimate the state of a post-nova, since brightness stability does not seem to be such a good criterion. The coincidence of local activity in objects of different types can help here: the secondary maximum of the RN T CrB and the brightness depression of the RN RS Oph on the one hand, and the beginning of the final brightness decline stage in some CNe, in particular of the CP Pup group, on the other hand. The main difference between these objects is the lack of ejection of a large mass of matter from the former.
The inclusion of V3645 Sgr in the RR Pic group confirmed the conclusion of \cite{Downes2000} about the lack of identification for this old nova and made it possible to suggest such an identification.
\section*{Acknowledgements}
We thank the AAVSO observers who made the observations on which this project is based, the AAVSO staff who archived them and made publicly available. This research has made use of the NASA's Astrophysics Data System and the SIMBAD database, operated at CDS, Strasbourg, France. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium) and the Photometric Science Alerts Team (http://gsaweb.ast.cam.ac.uk/alerts). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. Author is thankful to the Valga vallavalitsus, Estonia, for the support, which allowed us to carry out this interesting investigation.
\nocite{*}
\section{Introduction}
A reliable model of a tokamak is critical to magnetic confinement fusion
experimental research. It is used to check the feasibility of a pulse,
interpret experimental data, validate theoretical models, and develop
control technology.
The conventional physics-driven modeling tools come from empirical
models or derivations based on first principles, the so-called \textquotedbl Integrated
Modeling\textquotedbl{} \citep{Falchetto2014}. \textquotedbl Integrated
Modeling\textquotedbl{} is a suite of module codes that address the
different physical processes in the tokamak, i.e. core transport,
equilibrium, stability, boundary physics, heating, fueling, and current
drive. Typical workflows are ETS \citep{Falchetto2014}, PTRANSP \citep{Budny2008},
TSC \citep{Kessel2006}, CRONOS \citep{Artaud2010}, JINTRAC \citep{Romanelli2014},
METIS \citep{Artaud2018}, ASTRA \citep{Pereverzev1991}, TOPICS \citep{Hayashi2010},
etc. The reliability of the first-principles model depends on the
completeness of the physical processes involved. In the past few decades,
sophisticated physical modules have been developed and integrated
into these codes for more realistic modeling results. Typical workflow
for a full discharge modeling on tokamak is using sophisticated modules
that integrate many physical processes \citep{Meneghini2015,Falchetto2014}.
Due to the nonlinear, multi-scale, multi-physics characteristics of
tokamak, high-fidelity simulation of the whole tokamak discharge process
is still a great scientific challenge \citep{Bonoli2015}.
Increasingly, researchers are turning to data-driven approaches. The
history can be traced back to the use of machine learning for disruption
prediction since the 1990s, i.e. ADITYA \citep{sengupta2000forecasting,Sengupta2001},
Alcator C-Mod \citep{Rea2018,Tinguely2019,Montes2019}, EAST \citep{Montes2019,Zhu2020,Guo2020},
DIII-D \citep{Rea2018,Rea2018a,Montes2019,Wroblewski1997,Rea_2019,DeVries2011,Vega2013,Cannas2014,Ratta2014,Murari2018,Pau2018,Churchill2019,kates-harbeck2019be,Zhu2020}
JET \citep{Cannas2003,Vega2013,Windsor2005,Cannas2004,cannas2007support,Murari2008,Murari2009,Ratt2010,Ferreira2020,kates-harbeck2019be,Zhu2020},
ASDEX-Upgrade\citep{Cannas2010,pautasso2002line,Windsor2005,Aledda2015},
JT-60U \citep{Yoshino2003,Yoshino2005}, HL-2A \citep{yang2020,yang2020modeling},
NSTX \citep{Gerhardt2013} and J-TEXT \citep{Zheng2018,Wang2016}.
Neural-network-based models are also used to accelerate theory-based
modeling \citep{Honda2019,Meneghini2017,Meneghini2020}. In these
works \citep{Honda2019,Meneghini2017,Meneghini2020}, one neural network
was trained with a database of modeling and successfully reproduced
approximate results with several orders of magnitude speedup. There
are also many deep learning architectures have been created and successfully
applied in sequence learning problems \citep{Churchill2019,Graves2013,Lipton2015,IsmailFawaz2019,Ferreira2020}
in areas of time-series analysis or natural language processing. At
present, most machine learning work in the fusion community estimates
the plasma state at each moment in time, usually identified as either
non-disruptive or disruptive. However, compared with traditional physical
modeling methods, it is far from enough to just predict whether a
disruption will occur. We need to understand the evolution of the
plasma state of the tokamak and its response to external control during
the discharge process.
Physics-driven approaches reconstruct physical high-dimensional reality
from the bottom-up and then reduce them to the low-dimensional model.
Alternatively, data-driven approaches discover the relationships between
low-dimensional quantities from a large amount of data and then construct
approximate models of the nonlinear dynamical system. When focusing
only on the evolution of low-dimensional macroscopic features of complex
dynamic systems, data-driven approaches can build models more efficiently.
In practical applications, the control signals and diagnostic signals
of a tokamak usually appear as temporal sequences of low-dimensional
data, most of which are zero-dimensional quantities or one-dimensional
profiles and rarely two-dimensional distributions. If we consider the
tokamak as a black box, these signals can be considered as the inputs
and outputs of a dynamic system. Discharge modeling is to model the
connection between the input and output signals. This can be understood
as the conversion from one kind of time series data to another.
In the present work, a neural network model is trained with the temporal
sequence of control signals and diagnostic signals for a large dataset
of EAST discharges \citep{Wan2015,Wan2019,ISI:000294731600008}. It
can reproduce the response of the main diagnostic signals to control
signals during the whole discharge process and predict their time
evolution curves, such as the electron density $n_{e}$, stored energy
$W_{mhd}$ and loop voltage $V_{loop}$.
The rest of this paper consists of five parts. Section \ref{sec:Dataset}
provides descriptions of data preprocessing and data selection criteria.
Section \ref{sec:Machine} shows the model details of this work. The
detailed model training process can be found in section \ref{sec:Training}.
Then an in-depth analysis of the model results is put forward in section
\ref{sec:Validation}. Finally, a brief conclusion is made in section
\ref{sec:Conclusion}.
\section{Dataset \label{sec:Dataset}}
EAST’s data system stores more than 3000 channels of raw acquisition
signals and thousands of processed physical analysis data \citep{wang2018studyof},
which record the entire process of tokamak discharge. These data can
be divided into three categories: configuration parameters, control
signals, and diagnostic signals. The configuration parameters describe
constants related to device construction, such as the shape of the
vacuum chamber, the position of the poloidal magnetic field (PF) coils,
etc. The control signals are the external constraints actively applied
to the magnetic field coil and auxiliary heating systems, such as
the current of PF coils, or the power of Lower Hybrid Wave (LHW),
etc. The diagnostic signals are the physics information passively
measured from the plasma, such as electron density $n_{e}$, or loop
voltage $V_{loop}$, etc. The configuration parameters will not change
during the experiment campaign, so there is no need to consider them
unless a cross-device model is built. The discharge modeling is essentially
a process of mapping control (input) signals to diagnostic (output)
signals.
In the present work, three signals that can represent the key characteristics
of discharge are selected as outputs, which are plasma stored energy
$W_{mhd}$, electron density $n_{e}$ and loop voltage $V_{loop}$.
The input signal should include all signals that may affect the output.
In this paper, ten types signals are selected as inputs, such as plasma
current $I_{p}$, central toroidal magnetic field $B_{t0}$, current
of PF coils, and power of LHW, etc. In principle, these signals can
be designed in the experimental proposal stage. Detailed information
about input and output signals are listed in table \ref{tab:Model-input-and}.
Some of these signals are processed signals with a clear physical
meaning, and others are unprocessed raw acquisition signals. As long
as the input signal contains information to determine the output,
whether it is a processed physical signal will not affect the modeling
result.
\begin{table}
{\small{}\caption{The list of signals. \label{tab:Model-input-and}}
}{\small\par}
{\small{}}%
\begin{tabular}{lll>{\raggedright}p{0.1\paperwidth}}
\hline
{\small{}Signals} & {\small{}Physics meanings} & {\small{}Unit} & {\small{}Number of Channels}\tabularnewline
\hline
\multicolumn{3}{l}{{\small{}Output Signals}} & {\small{}3}\tabularnewline
\hline
{\small{}$n_{e}$} & {\small{}Electron density} & {\small{}$10^{19}m^{-3}$} & {\small{}1}\tabularnewline
{\small{}$V_{loop}$} & {\small{}Loop voltage} & {\small{}$V$} & {\small{}1}\tabularnewline
{\small{}$W_{mhd}$} & {\small{}Plasma stored energy} & {\small{}$J$} & {\small{}1}\tabularnewline
\hline
\multicolumn{3}{l}{{\small{}Input Signals}} & {\small{}65}\tabularnewline
\hline
{\small{}$I_{p}$} & {\small{}Plasma current} & {\small{}$A$} & {\small{}2}\tabularnewline
{\small{}PF} & {\small{}Current of Poloidal field (PF) coils} & {\small{}$A$} & {\small{}14}\tabularnewline
{\small{}$B_{t0}$} & {\small{}Toroidal magnetic field} & {\small{}$T$} & {\small{}1}\tabularnewline
{\small{}LHW} & {\small{}Power of Lower Hybrid Wave Current Drive and Heating System} & {\small{}$kW$} & {\small{}4}\tabularnewline
{\small{}NBI} & {\small{}Neutral Beam Injection System} & {\small{}Raw signal} & {\small{}8}\tabularnewline
{\small{}ICRH} & {\small{}Ion Cyclotron Resonance Heating System} & {\small{}Raw signal} & {\small{}16}\tabularnewline
{\small{}ECRH/ECCD} & {\small{}Electron Cyclotron Resonance Heating/Current Drive System} & {\small{}Raw signal} & {\small{}4}\tabularnewline
{\small{}GPS} & {\small{}Gas Puffing System} & {\small{}Raw signal} & {\small{}12}\tabularnewline
{\small{}SMBI} & {\small{}Supersonic Molecular Beam Injection} & {\small{}Raw signal} & {\small{}3}\tabularnewline
{\small{}PIS} & {\small{}Pellet Injection System} & {\small{}Raw signal} & {\small{}1}\tabularnewline
\hline
\end{tabular}{\small\par}
\end{table}
Tokamak discharge is a complex nonlinear process, and there is no
simple way to determine the connection between the control signals
and the diagnostic signals. Therefore, the input data set covers most
of the control signals that can be stably obtained, and the redundant
signals are not identified and excluded. Determining the clear dependence
between control signals and diagnostic signals is one of the main
tasks of data-driven modeling, and it is also a direction worth exploring
in the future. This work focuses on verifying the feasibility of data-driven
modeling, and will not discuss this issue in depth.
In practical applications, there are significant differences in the
sampling rates of the raw signals $R_{raw}^{i}$, where $i$ is the index
of the signal. The input and output signal data sets need to be resampled
at a common sampling rate $R_{c}$ to ensure that the data points
of different signals are aligned at the same time. If $R_{raw}^{i}<R_{c}$,
the raw signal data needs to be interpolated to complement the time
series. If $R_{raw}^{i}>R_{c}$, the raw signal data needs to be smoothed
to eliminate high-frequency fluctuations.
The resampling rate $R_{c}$ depends on the time resolution of the
output signal, which refers to the time resolution of the physical
process of interest rather than the sampling rate of the raw experiment
signal. The accuracy of the reproduction of the physical process determines
the quality of the modeling. The size of the data set determines the
length of the model training time and the efficiency of modeling.
A lower resampling rate means lower time resolution and poor modeling
quality. However, a higher resampling rate means greater computing
resource requirements and lower modeling efficiency.
The normal discharge waveform of the tokamak can be divided into three
phases, ramp-up, flat-top, and ramp-down (see figure \ref{fig:Schematic}).
Most signals climb slowly during the ramp-up phase, remain stable
during the flat-top phase, and slowly decrease during the ramp-down
phase. The time scale of the ramp-up and ramp-down phases are similar,
and the flat-top phase is much longer than the former two. The signals
show different time characteristics in these three phases. The signal
waveforms of $n_{e}$ and $W_{mhd}$ remain smooth in all three phases,
which can be accurately reproduced with a uniform resampling rate
$R_{c}^{n_{e}}=1kHz$. However, the waveform of $V_{loop}$ varies
greatly and frequently in the ramp-up and ramp-down phases, see Figure
\ref{fig:Schematic}. In order to ensure the quality of modeling,
the resampling rate of $V_{loop}$ is increased in these two phases
\[
R_{c}^{V_{loop}}=\left\{ \begin{array}{ll}
1kHz, & \text{flat-top},\\
10kHz, & \text{ramp-up or ramp-down }
\end{array}\right.,
\]
which is an adaptive piece-wise function. The purpose of using a non-uniform
adaptive resampling function is to balance the quality and efficiency
of modeling.
\begin{figure}
\includegraphics[width=0.5\paperwidth]{Adaptive}
\caption{Schematic diagram of adaptive resampling. A higher resampling rate
is used for segments that are of particular interest to physicists or that
vary greatly and frequently. \label{fig:Schematic}}
\end{figure}
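A minimal sketch of this adaptive resampling is given below; the phase boundaries are assumed to be supplied externally (in practice they can be taken from the $I_{p}$ waveform), and times are in seconds.
\begin{verbatim}
import numpy as np

def adaptive_resample(t_raw, x_raw, t_flat_start, t_flat_end, t_end):
    # 10 kHz during ramp-up/ramp-down, 1 kHz during flat-top
    t_new = np.concatenate([
        np.arange(0.0,          t_flat_start, 1e-4),
        np.arange(t_flat_start, t_flat_end,   1e-3),
        np.arange(t_flat_end,   t_end,        1e-4)])
    # interpolation complements slowly sampled signals; for signals
    # sampled faster than R_c, x_raw is smoothed before this call
    return t_new, np.interp(t_new, t_raw, x_raw)
\end{verbatim}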
Three signal channels are selected as outputs, and 65 channels are
selected as inputs, and then resample according to the characteristics
of the output signal to align the time points. In the next step, machine
learning will be performed on these data.
\section{Machine learning architecture \label{sec:Machine}}
The data of the tokamak diagnostic system are all temporal sequences,
and different signals have different characteristics. According
to the temporal characteristics of the tokamak diagnostic data,
the sequence to sequence (seq2seq) model \citep{Sutskever2014}
was chosen as the machine learning model for tokamak discharge modeling.
Two resampling methods, uniform and adaptive, are adopted
for the different data characteristics.
Natural language processing (NLP) is more similar to the discharge
modeling process than classification. NLP converts a natural language
sequence into another natural language sequence. It is an important
branch of machine learning, and the main algorithms are the recurrent
neural network (RNN), the gated recurrent unit (GRU) and long short-term
memory (LSTM), etc.
The encoder-decoder architecture \citep{Sutskever2014} is a useful
architecture in NLP. The architecture is partitioned into two parts,
the encoder and the decoder. The encoder’s role is to encode the inputs
into a state, which often contains several tensors. Then the state
is passed into the decoder to generate the outputs. In this paper,
the LSTM was chosen as the fundamental component of the model, and
stacked LSTMs form the encoder-decoder machine learning model.
\begin{figure}
\includegraphics[width=0.5\columnwidth]{Architecture}
\caption{Architecture of our model. Where \textquotedblleft None\textquotedblright{}
represents different lengths of sequence because of different discharge
shot duration time. \textquotedblleft output\_channels\textquotedblright{}
is the number of output sequences. \label{fig:Architecture}}
\end{figure}
As shown in figure \ref{fig:Architecture}, the machine learning model
architecture used in this work is based on the sequence to sequence
model (seq2seq) \citep{Sutskever2014}. The first two LSTM layers
(LSTM\_0 and LSTM\_1 in figure \ref{fig:Architecture}) and Dropout\_0
can be considered as the encoder, and the last two LSTM layers (LSTM\_2
and LSTM\_3 in figure \ref{fig:Architecture}) and Dropout\_1 can
be regarded as the decoder. In this work, the encoder learns
a high-level representation (which \emph{cannot be displayed directly})
of the input signals (table \ref{tab:Model-input-and}, input signals). The
last hidden state of the encoder is used to \emph{initialize} the
hidden state of the decoder. The decoder plays the role of decoding
the information of the encoder. The encoder-decoder is built as an end-to-end
model; it can learn sequence information directly without manually
extracted features.
In terms of components, the main component of our architecture is
long short-term memory (LSTM) \citep{Hochreiter1997}, because the
LSTM can use trainable parameters to balance long-term and short-term
dependencies. This feature is suitable for tokamak sequence data: the
tokamak discharge response is always strongly related to short-term
input changes, but it is also affected by long-term input changes (e.g.
$W_{mhd}$ hardly changes fast, which can be regarded as a
short-term dependence, while the impact of other factors on the stored
energy is cumulative, which can be seen as a long-term dependence).
The dropout layer is a common trick to prevent over-fitting. The final
component is the fully connected layer, which matches the high-dimensional
decoder output to the real target dimension.
When considering the specific mathematical principles of the model,
the encoder hidden states $h_{t}$ are computed using this formula:
\begin{equation}
h_{t}=f(W^{(h_{lstm0}\delta_{dropout0}h_{lstm1})}h_{t-1}+W^{(hx)}x_{t}),\label{eq:1}
\end{equation}
where $h_{lstm0}$, $h_{lstm1}$ are the hidden state of LSTM \_0
and LSTM\_1 in figure \ref{fig:Architecture}, $\delta_{dropout0}$
is the dropout rate in Dropout\_0 in fig \ref{fig:Architecture},
$W$ and $W^{hx}$ are the appropriate weights to the previously hidden
state $h_{t-1}$ and the input vector $x_{t}$. $\delta_{dropout0}\sim Bernoulli(p)$.
This means $\delta_{dropout0}$ is equal to 1 with probability $p$
and 0 otherwise, we let $p=0.9$ for all experiment. Dropout\_0 means
that not all hidden states of LSTM\_0 can be transferred to LSTM\_1.
The encoder vector is the final hidden state produced from the encoder
part of the model. It is calculated using the formula above. This
vector aims to encapsulate the information for all input elements
to help the decoder make accurate predictions. It \emph{acts as the
initial hidden state} of the decoder part of the model.
In the decoder, a stack of two LSTM units is used, where each unit predicts
an output $y_{t}$ at a time step $t$. Each LSTM unit accepts a hidden state
from the previous unit and produces an output as well as its own hidden
state. In the modeling of the tokamak discharge, the output sequence is
the collection of $y$ over all time steps. Any hidden state $h_{t}$
is computed using the formula:
\begin{equation}
h_{t}=f(W^{(h_{lstm2}\delta_{dropout1}h_{lstm3})}h_{t-1}),\label{eq:2}
\end{equation}
where $h_{lstm2}$, $h_{lstm3}$ are the hidden states of LSTM\_2 and LSTM\_3
in figure \ref{fig:Architecture}, $\delta_{dropout1}$ is the dropout
rate of Dropout\_1 in figure \ref{fig:Architecture}, with $\delta_{dropout1}\sim Bernoulli(0.9)$,
and $W$ is the appropriate weight applied to the previous hidden state $h_{t-1}$.
The flow of data in the decoder is similar to that in the encoder,
but the initial state of $h_{lstm2}$ is equal to the last state
of $h_{lstm1}$. As in the formula above, we just use the previous
hidden state to compute the next one. The output $y_{t}$ at time
step $t$ is computed using the formula:
\begin{equation}
y_{t}=activation(W^{D}h_{t}).\label{eq:3}
\end{equation}
The model calculates the outputs using the hidden state at the current
time step together with the respective weight $W^{D}$ of the fully
connected layer, as shown in figure \ref{fig:Architecture}. The fully
connected layer determines the final outputs. The activation
function is a linear function.
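To make the architecture concrete, the following is a minimal sketch of the described encoder-decoder in Keras. The channel counts (65 inputs, 1 output), the use of the encoder output sequence as the decoder input, and the variable-length input shape are assumptions for illustration; layer sizes and optimizer settings follow table \ref{tab:Hype}.
\begin{lstlisting}[language=Python]
# Minimal sketch of the stacked-LSTM encoder-decoder (assumed details noted above).
from tensorflow import keras
from tensorflow.keras import layers

def build_model(n_in=65, n_out=1, hidden=256):
    inputs = keras.Input(shape=(None, n_in))   # variable-length sequences
    # Encoder: LSTM_0 -> Dropout_0 -> LSTM_1 (final state kept).
    x = layers.LSTM(hidden, return_sequences=True)(inputs)
    x = layers.Dropout(0.1)(x)
    x, h, c = layers.LSTM(hidden, return_sequences=True, return_state=True)(x)
    # Decoder: LSTM_2 initialized with the encoder's last state, then Dropout_1, LSTM_3.
    y = layers.LSTM(hidden, return_sequences=True)(x, initial_state=[h, c])
    y = layers.Dropout(0.1)(y)
    y = layers.LSTM(hidden, return_sequences=True)(y)
    outputs = layers.Dense(n_out, activation='linear')(y)  # fully connected layer
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adadelta(learning_rate=5e-3, rho=0.95),
                  loss='mse')
    return model
\end{lstlisting}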
\section{Training \label{sec:Training}}
\begin{figure}
\includegraphics[width=0.5\textwidth]{FlowChart_R7}
\caption{Workflow of training. The resampling method is determined according
to the time characteristics of the output signal.\label{fig:workflow}}
\end{figure}
The form of the resampling function is determined according to the
time characteristics of the output signal waveform. The output signals
with the same sampling function are grouped together. The input signals
are resampled using the same sampling function as the output signal
set to ensure that all data points follow the same time axis. In this
section, model training and data processing will be introduced in
detail. The training of the model (see figure \ref{fig:workflow})
can be divided into five steps as follows:
\begin{enumerate}
\item Obtaining the data of 68 channels (including input and output signals
as shown in table \ref{tab:Model-input-and}) of the selected signals
from the EAST source database.
\item Using different resampling methods based on the time characteristics
of the output signal to be modeled (a minimal resampling sketch follows this list).
\item Standardizing the data with z-scores.
\item Data fed into the deep learning model for training.
\item Using the loss between the model output and the real experimental output
as the backpropagation metric and then updating the parameters of the model.
\end{enumerate}
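As referenced in step 2 above, the following is a minimal sketch of resampling a channel onto a common time axis by linear interpolation; the choice of linear interpolation and the construction of the target axis are assumptions for illustration, with the time step of 1 ms taken from table \ref{tab:Hype}.
\begin{lstlisting}[language=Python]
# Resample one signal channel onto a shared time axis (assumed: linear interpolation).
import numpy as np

def resample(t_src, y_src, t_target):
    return np.interp(t_target, t_src, y_src)

t_target = np.arange(0.0, 8.0, 0.001)  # 0 s to 8 s at dt = 1 ms
\end{lstlisting}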
The dataset is selected from the EAST 2016--2018 campaigns, with discharge shot
numbers in the range \#70000--80000 \citep{Wan2015,Wan2013,ISI:000294731600008}.
A total of 3476 normal shots are selected, and the data are divided into a
training set, a validation set, and a test set (training set : validation
set : test set = 6 : 2 : 2). A normal shot means that no disruption
occurred during the discharge, the flat top lasts more than two seconds,
and the key signals (i.e., the model output signals, the magnetic field, and
the plasma current $I_{p}$) are complete. Without a certain magnetic
field configuration the plasma cannot be confined, and without
complete $I_{p}$ or model output signal data the shot is meaningless
for the tokamak discharge experiment or for this model. The sampling of the
signal starts from $t=0\,s$ and continues to the end of the discharge
(a typical EAST normal discharge lasts five to eight seconds).
Shuffling is a common technique to improve generalization
\citep{KawaguchiLesliePackKaelbling}, usually applied to an entire data set.
However, in this work the method is not applied to the entire data set.
In order to prevent data leakage caused by multiple adjacent discharge
shots with similar parameters, the entire data set is
divided into training, validation and test sets according to
the experimental shot order. The phenomenon of adjacent discharge
shots having similar parameters is very common in tokamak
discharge experiments. For generalization reasons, it is still necessary
to shuffle the shot order within the training set. For example, suppose there are
ten normal discharge shots in the original data set, with shot
numbers 1--10. The training set is then
\emph{shuffle}(1--6) (one possible order is 1,4,6,5,2,3), the validation set
is \emph{shuffle}(7--8), and the test set is \emph{shuffle}(9--10). Within
a single normal shot, the discharge sequence keeps strict time order.
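A minimal sketch of this chronological split is given below; the 6:2:2 proportions are from the text, while the function name and the seed are assumptions for illustration.
\begin{lstlisting}[language=Python]
# Split shots chronologically (6:2:2), then shuffle only inside each split
# so adjacent shots with similar parameters cannot leak across splits.
import random

def split_shots(shot_numbers, seed=0):
    shots = sorted(shot_numbers)
    n = len(shots)
    train = shots[: int(0.6 * n)]
    val = shots[int(0.6 * n): int(0.8 * n)]
    test = shots[int(0.8 * n):]
    rng = random.Random(seed)
    for subset in (train, val, test):
        rng.shuffle(subset)
    return train, val, test
\end{lstlisting}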
When all source data have been obtained, z-scores \citep{Zill2011} are
applied for standardization, and then all the preprocessed data
are input to the deep learning model for training. In statistics,
the z-score is the number of standard deviations by which the value
of a raw score (i.e., an observed value or data point) is above or
below the mean value of what is being observed or measured. Raw scores
above the mean have positive standard scores, while those below the
mean have negative standard scores. The z-score is calculated by $z=(x-\mu)/\sigma$,
where $\mu$ is the mean of the population and $\sigma$ is the standard
deviation of the population.
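A minimal sketch of this standardization step follows; computing $\mu$ and $\sigma$ on the training set only and reusing them for validation and test data is an assumption consistent with the leakage-avoidance discussion above.
\begin{lstlisting}[language=Python]
# z-score standardization with training-set statistics.
import numpy as np

def fit_zscore(train):               # train: (n_samples, n_channels)
    mu = train.mean(axis=0)
    sigma = train.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard against constant channels
    return mu, sigma

def apply_zscore(data, mu, sigma):
    return (data - mu) / sigma
\end{lstlisting}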
The deep learning model uses end-to-end training, executed on
8 Nvidia P100 GPUs with Keras \citep{chollet2015keras} and TensorFlow
\citep{abadi2016tensorflow} under CentOS 7 on local and remote computing
clusters. The training of the deep learning
model starts with the Glorot uniform kernel initializer
\cite{Glorot2010}, an orthogonal recurrent initializer \cite{Saxe2013},
a zeros bias initializer, and the Adadelta optimizer \cite{Zeiler2012}
to mitigate gradient explosion. The model trains for about twelve days and
40 epochs. Callbacks and checkpoints are then used to choose the best-performing
model: for $W_{mhd}$ and $n_{e}$ modeling the best epoch is fifteen, while
for $V_{loop}$ modeling the best epoch is seven. In each epoch, all the data
in the training set are put into the model once.
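The checkpoint-based epoch selection can be sketched as follows; the random arrays stand in for the real standardized EAST data and, together with the file name pattern, are assumptions for illustration.
\begin{lstlisting}[language=Python]
# Checkpointing sketch: keep the best-performing epoch by validation loss.
import numpy as np
from tensorflow import keras

model = build_model()                    # from the architecture sketch above
train_x = np.random.rand(4, 100, 65)     # (shots, time steps, channels)
train_y = np.random.rand(4, 100, 1)
val_x, val_y = np.random.rand(2, 100, 65), np.random.rand(2, 100, 1)

ckpt = keras.callbacks.ModelCheckpoint('epoch_{epoch:02d}.h5',
                                       monitor='val_loss',
                                       save_best_only=True, mode='min')
model.fit(train_x, train_y, validation_data=(val_x, val_y),
          epochs=40, batch_size=1, callbacks=[ckpt])
\end{lstlisting}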
The training of our model was executed several times. Many of these
trials are considered failures (e.g., divergence in training, poor
performance on the test set, etc.) because of unsuitable hyper-parameters.
In the process of training our model, multiple sets of hyper-parameters
were tried, and the best hyper-parameter set was determined by performance
on the validation set. The best hyper-parameters found in this way are
shown in table \ref{tab:Hype}.
\begin{table}
\caption{Hyperparameters in this model \label{tab:Hype}}
\begin{tabular}{lll}
\hline
Hyperparameter & Explanation & Best value\tabularnewline
\hline
$\eta$ & Learning rate & $5\times10^{-3}$\tabularnewline
$\gamma$ & Adadelta decay factor & 0.95\tabularnewline
Loss function & Loss function type & Mean squared error (MSE)\tabularnewline
Optimizer & Optimization scheme & Adadelta\tabularnewline
Dropout & Dropout probability & 0.1\tabularnewline
Epoch & Epoch & 15 and 7\tabularnewline
dt & Time step & 0.001s\tabularnewline
Batch\_size & Batch size & 1\tabularnewline
LSTM type & Type of LSTM & CuDNNLSTM\tabularnewline
LSTM size & Size of the hidden state of an LSTM unit. & 256\tabularnewline
$LSTM_{kernel}$ & Initializer for the kernel of LSTM weights matrix & Glorot uniform\tabularnewline
$LSTM_{recurrent}$ & Initializer for the recurrent kernel of LSTM weights matrix & Orthogonal\tabularnewline
Dense size & Size of the Dense layer. & 256\tabularnewline
$Dense_{kernel}$ & Initializer for the kernel of dense matrix & Glorot uniform\tabularnewline
$Dense_{bias}$ & Initializer for the bias vector & Zeros\tabularnewline
$n_{encoder}$ & Number of LSTMs stacked in encoder & 2\tabularnewline
$n_{decoder}$ & Number of LSTMs stacked in decoder & 2\tabularnewline
\hline
\end{tabular}
\end{table}
\section{Results\label{sec:Validation}}
After the deep learning model has been trained, it can be applied to
unseen data. As shown in figure \ref{fig:Usage}, at the beginning
of using the trained model, all data for the 65 channel signals should
be obtained. In this step, it is necessary to keep and select the
same type of signal data in the tokamak as during training; the types of signal
data include processed data and raw data. Then all the
data are aligned on the time axis. The aligned data should be standardized with
the same parameters as the training set. All the standardized data
are fed into the trained model to obtain the modeled sequences of
diagnostic signals. In the final step, the trained model should be
selected according to the diagnostic signal that one wants to model.
\begin{figure}
\includegraphics[width=0.5\textwidth]{Usage_Test}
\caption{Using the trained model.\label{fig:Usage}}
\end{figure}
In this section, the results of modeling will be analyzed in detail,
including representative modeling results and similarity distributions.
In this work, the similarity is a quantitative measurement of the
accuracy of the modeling results and is defined as follows:
\begin{equation}
S\left(\boldsymbol{x},\boldsymbol{y}\right)=\max\left(\frac{\Sigma(\boldsymbol{x}-\bar{\boldsymbol{x}})(\boldsymbol{y}-\bar{\boldsymbol{y}})}{\sqrt{\Sigma(\boldsymbol{x}-\bar{\boldsymbol{x}})^{2}\Sigma(\boldsymbol{y}-\bar{\boldsymbol{y}})^{2}}},0\right),
\end{equation}
where $\boldsymbol{x}$ is experimental data, $\boldsymbol{y}$ is
modeling result, $\bar{\boldsymbol{x}}$, $\bar{\boldsymbol{y}}$
are the means of the vector $\boldsymbol{x}$ and vector $\boldsymbol{y}$.
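This is the Pearson correlation coefficient clipped at zero from below; a minimal sketch of its evaluation is given below (the zero-variance guard is an added assumption).
\begin{lstlisting}[language=Python]
# Similarity: Pearson correlation between experiment x and model output y,
# clipped at zero from below.
import numpy as np

def similarity(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    denom = np.sqrt((xc ** 2).sum() * (yc ** 2).sum())
    if denom == 0:
        return 0.0
    return max(float((xc * yc).sum() / denom), 0.0)
\end{lstlisting}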
Two typical EAST normal discharge shots, \#77873 and \#78461,
are selected to check the accuracy of the model trained in this article.
Figure \ref{fig:Output}(a) shows the modeling result for shot \#77873,
which has two LHW injections during the discharge. Figure \ref{fig:Output}(b)
shows the result for shot \#78461, which has NBI, LHW, and ICRF injections.
\begin{figure}
\includegraphics[width=0.3\paperheight]{77873}
\includegraphics[width=0.3\paperheight]{78461}
\caption{Comparison of the modeling results and EAST experimental data for shots \#77873
(a) and \#78461 (b). NBI and ICRH are raw data, so the physical
units are meaningless. \label{fig:Output}}
\end{figure}
Experimental data and modeling results are displayed together in figure
\ref{fig:Output}. The comparison shows that they are in good agreement
in most regions of discharge, from ramp-up to ramp-down. The slope
of the ramp-up and the amplitude of the flat-top are accurately reproduced
by the model. The vertical dash-dot lines indicate the rising and
falling edges of the external auxiliary system signal and the plasma
response, which show the time accuracy of the model.
Compared with the experimental signals, the modeling results are more
sensitive to changes in external drives. For example, after the external
drive is turned off, the experimental signal $n_{e}$ continues to
decrease with a fixed slope, whereas the modeling result shows a step-down.
This sensitivity also causes deviations between the modeling results and
the experimental data when the external drive changes rapidly. How
to adjust the sensitivity of the model is still an open question.
A test data set with 695 shots was used to quantitatively evaluate
the reliability of the model. The statistical results of the similarity
between model results and the experimental data are shown in figure
\ref{fig:Similarity-distribution}.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{conclu_v1}
\caption{The similarity distribution and average similarity in the test set.
Panels (a), (b), and (c) show the similarity distributions of $n_{e}$, $V_{loop}$,
and $W_{mhd}$, respectively. Panel (d) is a joint scatter
plot of the three parameters. If the similarity is less than 0, it is
regarded as 0. \label{fig:Similarity-distribution}}
\end{figure}
$W_{mhd}$ is the best-performing parameter, with the similarity
concentrated above 95\%. In other words, $W_{mhd}$ can be
considered to have been almost completely modeled under the normal
discharge condition. Almost all similarities of $n_{e}$ are greater
than 85\%. $V_{loop}$ is the worst-performing parameter, but many
of the errors are due to the plasma start-up pulse in the ramp-up
segment and the plasma shutdown pulse in the ramp-down segment. However,
$V_{loop}$ in the ramp-up and ramp-down sections is not a key factor
for the operation of the experiment.
The joint distribution of the three parameters is shown in figure
\ref{fig:Similarity-distribution}(d). Most shots are concentrated
in a limited range, which reflects the consistency of the model on the
three target signals. It also shows that these shots belong to the
same tokamak operating mode. In other words, the points far away
from the central area indicate that the experiment was running in an abnormal
mode. We checked all deviating shots in the test set, and all deviations
occur under abnormal conditions. For example, shot \#77833 (as shown in
figure \ref{fig:shot77833}) is a classical deviation caused by abnormal
equipment conditions; this shot was used for cleaning the device.
\begin{figure}
\includegraphics[width=0.7\textwidth]{77833}
\caption{Shot \#77833 is a classical deviation caused by abnormal equipment
conditions; this shot was used for cleaning the device. \label{fig:shot77833}}
\end{figure}
Judging from the similarity distributions of the demonstrated parameters
and the representative discharge modeling results, the application of this
machine learning model to tokamak discharge modeling is promising. $W_{mhd}$
can be regarded as almost completely reproduced for normal discharge
shots. $n_{e}$ can be successfully modeled in most regions under the
normal discharge condition. For normal discharge shots, the modeling
results of $V_{loop}$ at the ramp-down and flat-top phases agree well
with the experimental results.
\section{Conclusion \label{sec:Conclusion}}
In the present work, we showed the possibility of modeling the tokamak
discharge process using experimental data-driven methods. A machine
learning model based on the encoder-decoder architecture was established and trained
with the EAST experimental data set. This model can use the control
signals (i.e., NBI, ICRH, etc.) to reproduce the normal discharge process
(i.e., the electron density $n_{e}$, the stored energy $W_{mhd}$ and the loop
voltage $V_{loop}$) without introducing physical models. Up to 95\%
similarity was achieved for $W_{mhd}$. Recent work on discharge modeling
has focused on physics-driven ``Integrated Modeling''. This work, however,
shows promising results for the modeling of tokamak discharge
using a data-driven methodology. The model can easily be extended
to more physical quantities, and then a more comprehensive tokamak
discharge model can be established.
Checking the physical goal of an experimental proposal is an important
and complicated problem. This work provides a reference for the realization
of physical goals under the normal discharge condition. Specifically,
the model mainly checks whether an experimental proposal can be achieved
under the normal discharge condition. Furthermore, if the experimental
result deviates greatly from the result of our model, there may be
two situations: one is that the experiment has some problems (as shown
in figure \ref{fig:shot77833}); the other is that a new discharge mode
has appeared. The reason why a new discharge mode can be detected is
that this model only models discharge modes that have appeared in EAST
normal discharges. Of note, if other discharge modes appear in the
data, this model will produce chaotic output, as if they were abnormal
shots \citep{ISI:000355286600030}. In general, chaotic model output
means wrong input. For an experimental proposal, if the model gives
a chaotic output, the input should be carefully checked by an experienced
experimenter. If the input can be confirmed to be correct, then it
is likely to be a new discharge mode. Our model is not capable of
recognizing unsuccessful sets of inputs with full accuracy.
Compared with the physics-driven method, the data-driven method can
build models more efficiently. We also realize that there are many
challenges to address before the practical application of this method. For example,
the impact of model sensitivity on modeling results has been recognized,
and how to adjust the sensitivity of the model is still an open question.
Cross-device modeling is more important for devices under design and
construction such as ITER and CFETR. Introducing device configuration
parameters and performing transfer learning is a feasible solution
to this problem. Our next step is to model the time evolution of the
one-dimensional profile and the two-dimensional magnetic surface.
\ack{}{}
The author would like to thank all the members of EAST Team, especially
Feng Wang, for providing such a large quantity of past experimental
data. The author Chenguang Wan sincerely thanks Yong Guo, Dalong Chen
for explanation of the experimental data, Cristina Rea, and Professor
Robert Granetz for technical discussion.
This work was supported by the National MCF Energy R\&D Program under
Contract No.2018YFE0304100 and the Comprehensive Research Facility
for Fusion Technology Program of China under Contract No. 2018-000052-73-01-001228.
\bibliographystyle{unsrt}
\section{Background}
\label{background}
\subsection{Build and Execution Environments (BEE)}
\texttt{BEE} \cite{bee,beeflow,chen2018build} is a
containerization environment that enables HPC applications to run on both HPC and cloud computing platforms. \texttt{BEE} provides a unified user interface for automatic job launching and monitoring. \texttt{BEE} users only need to wrap their applications in a standard Docker image and provide a simple \texttt{BeeFile} (job execution environment description) to run on \texttt{BEE}. Since the same Docker image is used
across platforms, no source code modification is necessary.
In this work, we build \texttt{BeeSwarm} based on \texttt{BEE}, so it naturally inherits all benefits of \texttt{BEE}. This allows us to build a unified scalability test system across multiple platforms.
\subsection{Continuous Integration (CI)}
CI was first named and proposed by Grady Booch in 1991. Its aim was to greatly reduce integration problems. CI was initially combined with automated unit testing to run on the developer's local machine before committing to the central code repository. However, as the software being developed becomes more complicated and more people are involved in development, localized testing becomes inefficient and the code base on each developer's machine can easily become outdated, so integration can still be problematic. The longer a branch of code remains checked out, the greater the risk of multiple integration conflicts and failures when the developer branch is reintegrated into the main line. So, centralized build servers are used for CI. The build servers can perform more frequent (e.g., every commit) test runs and provide reports back to the developers. Driven by these benefits, many HPC application development projects are now using CI. For example, almost all projects in the Next-Generation Code Project at Los Alamos National Laboratory are using CI \cite{daniel2016lanl}. Currently, many CI tools are available to developers, such as Travis CI, GitLab CI, Circle CI, Codeship, etc. Many computing platforms also provide CI as a feature in their services, such as AWS, Azure, etc. However, current designs of CI services only focus on detecting software bugs in HPC software. To the best of our knowledge, none of the current work can easily enable automatic scalability tests in CI. So, in this work we propose to enable easy scalability tests for HPC developers.
\section{Conclusion}
\label{conclusion}
In this work, we first discuss the benefit of CI in the software development process. Then, we propose to bring scalability tests to CI so that developers can also get feedback about their applications in terms of scalability in addition to functionality. We design \texttt{BeeSwarm}, as a scalability test system for most CI environments. It is easy to use and can be integrated into any software development workflow. A variety of computing platforms can be used as a computing back-end for scalability tests. Experiments were conducted on Travis CI and GitLab CI with Chameleon Cloud as computing backend. Experimental results show that \texttt{BeeSwarm} offers good performance and scalability on large scale tests.
\section{Design}
\label{design}
In order to fulfill the goals of \texttt{BeeSwarm}, the software architecture must both leverage industry standards and implement new functionality in \texttt{BEE}\cite{bee}. \texttt{BeeSwarm} is a general solution that can be deployed on any git repository, any CI service and any BEE-supported computing platform. For the purposes of example, \texttt{Travis CI} and \texttt{GitLab CI} are used as two CI platforms, and \texttt{Chameleon Cloud}\cite{mambretti2015next}
is used as the scalability test platform. \textbf{Fig. \ref{arch}} shows the architecture of \texttt{BeeSwarm}. \texttt{BEE} is at the core of the architecture and serves a number of vital roles. As part of the continuous integration process, \texttt{BEE} is deployed on the CI test environment; from there it is responsible for managing the workflow associated with creating a scalable test environment, copying required test scripts, initiating the target application, and finally parsing the output.
\textbf{Fig. \ref{overall}} shows the workflow of \texttt{Travis CI}/\texttt{GitLab CI} with \texttt{BeeSwarm}. Once developers make commits to the central code repository, the original CI correctness test will be triggered. If the test finishes without failure, \texttt{BeeSwarm} will start to deploy the scalability test on a BEE-supported computing platform, gather the results and push them back to the code repository. It is crucial to use \texttt{BeeSwarm} to conduct the scalability test, since the CI test environment is usually deployed on a single machine incapable of large-scale scalability testing. There are five major design tasks in \texttt{BeeSwarm} and we discuss them as follows.
\subsection{Integrate BEE in CI Test Environment}
Each time a developer commits to the central code repository, a new CI test job is triggered on the CI test environment. That means in order to launch BEE inside that test environment, we need to install BEE every time before the scalability test. To minimize overhead caused by the installation, we designed a more efficient customized BEE installer for the CI environment. Since BEE does not run any test locally in the CI environment, we remove the image building process that was originally in the BEE installer, which is required only when BEE runs jobs in a virtual machine on a system.
Also, we design a simplified BEE launcher (discussed in the next subsection), which requires fewer dependent packages/libraries, simplifying the BEE installer. Finally, to enable remote control of compute platforms through SSH, we add SSH key generation in the new BEE installer. This was not present in the original BEE installer, since it could utilize the current user's key. With these optimizations, we are able to keep the BEE installation time under two minutes, causing only a slight overhead compared with the minutes to hours of CI tests and scalability tests.
\subsection{Customize BEE Launcher for CI Test Environment}
BEE was designed to handle multiple tasks simultaneously, so it adopted a server-client structure, in which the server is a centralized controller (i.e., the \texttt{BEE Orchestration Controller}) that stores the global information of all running BEE jobs, and the clients are a series of BEE launchers (each targeting a computing platform). This structure facilitates normal use; however, it can be cumbersome to launch BEE jobs on a CI test environment using the server-client structure (first start the \texttt{BEE Orchestration Controller} in the background and then launch the job using the BEE launcher). Since we only run one BEE job for each CI job, there is no need to use the centralized controller to keep the information of multiple jobs. So, in this work we design a simplified BEE launcher. It allows Travis to launch the BEE job with just one simple command. Basically, we integrate the input parser and the job launching process together in our simplified BEE launcher.
\subsection{Customized \texttt{beefile} }
\texttt{beefile} is a simple JSON-format task description file used by BEE as user input. It contains the necessary information needed to launch a task using BEE, including the Docker image tag, platform-specific settings, and run scripts for both sequential runs and parallel runs. Here, we extend the run script configuration part for parallel runs. In the original design, users need to specify each parallel run command one by one, including the script to invoke, the number of nodes to use, and the number of processes to be used per node. Since users usually only need to run a few parallel run commands, this design is clear and simple to use. However, for scalability tests, users expect to run their application with a series of configurations (e.g., an increasing number of nodes/processes). Filling in each configuration one by one can be cumbersome. So, we extend the \texttt{beefile} to allow easier configuration. Specifically, instead of letting users specify each configuration one by one, we now allow users to specify a range of configurations, for example, a range of nodes and a range of processes per node. In addition, we also allow users to specify whether they want to increase the number of nodes or processes linearly with a fixed step size or logarithmically with base two. An example \texttt{beefile} is shown in \textbf{Listing 1}.
\lstset{numbers=left,
xleftmargin=1.5em,
frame=single,
framexleftmargin=3em}
\lstset{language=Java}
\begin{lstlisting}[ float, escapechar=!,
caption= An example \texttt{beefile}]
"task_conf": {
"task_name": <task name>,
"exec_target": bee_cc|bee_vm|bee_aws|bee_os,
!\hl{"scalability\_test": \{}!
!\hl{"script" : \<path to script\>,}!
!\hl{"num\_of\_nodes": [1, 32],}!
!\hl{"proc\_per\_node": [1, 16],}!
!\hl{"mode": linear or log}!
!\hl{\},}!
"docker_conf": {
"docker_img_tag": <docker image>,
"docker_username": <username>,
"docker_shared_dir": <dir>
},
"exec_env_conf": {
"bee_cc": {...} or
"bee_vm": {...} or
"bee_aws": {...} or
"bee_os": {...}
}
\end{lstlisting}
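A minimal sketch of how such a range specification could be expanded into concrete test configurations is given below; this is an illustration of the linear and logarithmic modes described above, not the actual BEE implementation.
\begin{lstlisting}[language=Python]
# Expand a [lo, hi] range into values, linearly (fixed step) or
# logarithmically (powers of two), then form (nodes, procs) configurations.
def expand(lo, hi, mode='log', step=1):
    vals, v = [], lo
    while v <= hi:
        vals.append(v)
        v = v * 2 if mode == 'log' else v + step
    return vals

configs = [(n, p) for n in expand(1, 32) for p in expand(1, 16)]
\end{lstlisting}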
\subsection{Test Scalability on BEE-supported Platform}
Since CI services usually only allocate one computing node (e.g., a virtual machine) for each job, it is impractical to conduct a scalability test beyond one node. So, in this work we choose to use BEE as the computing back-end for the scalability test. BEE supports launching any kind of computing task on a variety of computing platforms, ranging from HPC systems to cloud computing systems (e.g., Amazon EC2, OpenStack). It can launch each job on as many nodes as each computing platform allows. BEE takes a job description file, the \texttt{Beefile}, as input, which specifies all job-related information, including the target platform, the name tag of the Docker container for the application, and the run scripts that the user specifies to be run when the application is deployed on the target platform. To launch a BEE job for the scalability test, we keep using the same \texttt{Beefile} as the job description. To specify test configurations for the scalability test, users only need to add multiple entries to the ``mpirun'' section inside the \texttt{Beefile}. Deploying the execution environment on the target system can take several minutes; to avoid setting up the environment repeatedly for each test, the BEE-CI launcher first scans through the \texttt{Beefile} and then sets up the environment with the maximum number of nodes needed to conduct all tests.
\subsection{Collect and Store Scalability Test Results}
Unlike common CI tests that only provide results in the form of ``\textit{pass}'' or ``\textit{no pass}'' to developers, a scalability test reports a variety of information generated at different execution scales to developers. Since the information that developers care about differs from application to application, it is hard to develop a universal monitoring strategy that suits everyone's needs. So instead, we leave this part to the developers. We let developers program their applications so that after each run the application outputs the relevant information. BEE gathers all the outputs from different runs as separate files, which are transferred and saved in the CI test environment. Next, we require developers to provide an output parser that can parse all relevant information from the output files and generate one final result file. Finally, BEE pushes the final result file back to the central code repository and renames the file using the git build number to distinguish final result files generated from different commits.
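To make the developer-supplied parser concrete, the following is a minimal sketch that scans per-run output files and collects one CSV row per run; the file name pattern and the ``elapsed time'' log line are assumptions for illustration, since output formats are application-specific.
\begin{lstlisting}[language=Python]
# Hypothetical output parser: collect (nodes, procs, seconds) rows into a CSV.
import csv, glob, re

def parse_outputs(pattern='out_n*_p*.txt', result_file='scalability.csv'):
    rows = []
    for path in sorted(glob.glob(pattern)):
        m = re.match(r'out_n(\d+)_p(\d+)\.txt', path)
        with open(path) as f:
            t = re.search(r'elapsed time:\s*([\d.]+)', f.read())
        if m and t:
            rows.append((int(m.group(1)), int(m.group(2)), float(t.group(1))))
    with open(result_file, 'w', newline='') as f:
        csv.writer(f).writerows([('nodes', 'procs', 'seconds')] + rows)
\end{lstlisting}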
\section{Experiments}
\label{experiments}
In this section, we conduct experiments to show the performance and scalability of \texttt{BeeSwarm}. We use a Department of Energy (DOE) code, FleCSALE \cite{charest2017flexible}, as an example software development project. FleCSALE is a computer software package developed for studying problems that can be characterized using continuum dynamics, such as fluid flow. It is specifically developed for existing and emerging large distributed memory system architectures. We deploy \texttt{BeeSwarm} on both Travis CI and GitLab CI. For Travis CI, we use the default virtual machine based execution environment to run the original correctness test and \texttt{BeeSwarm}. For GitLab CI, we use the Docker-in-Docker (i.e., dind) runner to run the original correctness test and \texttt{BeeSwarm}. We found that the Docker-in-Docker runner provides a more easily configured environment for \texttt{BeeSwarm} compared to other runner types. We use Chameleon Cloud \cite{mambretti2015next} as the computation back-end for the scalability tests. Chameleon Cloud is an OpenStack-based cloud computing platform that offers bare-metal access to all computing nodes. It is currently deployed at the University of Chicago and the Texas Advanced Computing Center with a total of 650 multi-core nodes. We conduct our test on the nodes located at the University of Chicago.
\subsection{Modified CI script}
In this section, we show a sample modified Travis CI script (similar for GitLab CI) for FleCSALE that has the \texttt{BeeSwarm} scalability test enabled (\textbf{Listing 2}). Lines 1--13 are the original FleCSALE test code on Travis CI. To enable the \texttt{BeeSwarm} scalability test, we only need to add ten lines of simple code (lines 14--23). The original CI script includes building a Docker image (line 9), running correctness test scripts in the Docker image (line 10), and pushing the image to DockerHub if the test was successful (line 13). We add the \texttt{BeeSwarm} configuration and launching scripts after the image is successfully pushed onto DockerHub. We obtain and install \texttt{BeeSwarm} in lines 14--16. We add the necessary environment variables (for OpenStack and \texttt{BeeSwarm}) in line 17. The scalability test is launched using a simple command in line 18. We add a 120-minute timeout here, since Travis CI by default kills a job if a command runs for more than 10 minutes, and a scalability test usually needs more time than that. The actual timeout length can be set based on the needs of a specific application. Finally, we run the output parser in line 19, followed by pushing the scalability test result back to the original code repository in lines 20--23. It can be seen that with minimal modification current CI scripts can easily enable scalability tests through \texttt{BeeSwarm}, and the scalability test code is highly portable across CI service platforms.
\lstset{numbers=left,
xleftmargin=1.5em,
frame=single,
framexleftmargin=2em}
\lstset{language=Java}
\begin{lstlisting}[float, escapechar=!,
caption= Example Travis CI script (\texttt{.travis.yml}) for FleCSALE with \texttt{BeeSwarn} scalability test. Highlighted part shows that only simple modifications are required to enable autonomic scalability test.]
language: cpp
sudo: required
services:
- docker
before_install:
- git fetch --unshallow
- git fetch --tags
script:
- docker build -t <img> <dockerfile>
- docker run <img> <correctness_test>
after_success:
- docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
- docker push <img>
- !\hl{git clone https://github.com/lanl/BEE.git}!
- !\hl{cd ./BEE}!
- !\hl{./install\_on\_travis.sh}!
- !\hl{source openrc.sh}!
- !\hl{travis\_wait 120 bee\_ci\_launcher.py -l FleCSALE}!
- !\hl{output\_parser.py}!
- !\hl{git add scalability\_test\_result\_\$BUILD\_NUM.csv}!
- !\hl{git commit --message "BeeSwarm commit \$BUILD\_NUM [skip ci]"}!
- !\hl{git remote add remote\_repo https://\$REPO\_TOKEN@\$REPO\_URL }!
- !\hl{git push --quiet --set-upstream remote\_repo \$BRANCH}!
\end{lstlisting}
\subsection{Required environment variables}
\begin{table}[h]
\caption{List of variables needed by \texttt{BeeSwarm} in CI environment}
\label{var}
\begin{tabular}{|p{3cm}|p{5cm}|}
\hline
Variable & Description \\ \hline
DOCKER\_USERNAME & Username for Docker image registry.\\ \hline
DOCKER\_PASSWORD & Password for Docker image registry.\\ \hline
REPO\_TOKEN & Access token used for pushing scalability test results back to the code repository. \\ \hline
REPO\_URL & The URL to the code repository. \\ \hline
REPO\_BRANCH & The current branch of the code repository. \\ \hline
BUILD\_NUM & Current build number. \\ \hline
OS\_USERNAME & Username for accessing OpenStack platform. \\ \hline
OS\_PASSWORD & Password for accessing OpenStack platform. \\ \hline
OS\_RESERVATION\_ID & Reservation ID used for current scalability test on OpenStack platform. \\ \hline
\end{tabular}
\end{table}
\textbf{Table \ref{var}} lists the variables that are necessary for \texttt{BeeSwarm} in the CI test environment. \texttt{DOCKER\_USERNAME} and \texttt{DOCKER\_PASSWORD} are used to access (e.g., pull and push) Docker images from the images registry. \texttt{REPO\_TOKEN} is used to let \texttt{BeeSwarm} push the scalability test results back to the original code repository. \texttt{REPO\_BRANCH} and \texttt{BUILD\_NUM} are used to make sure that \texttt{BeeSwarm} will push the scalability test results back to the corresponding branch with build number marked in the commit message. \texttt{OS\_USERNAME} and \texttt{OS\_PASSWORD} are used to access the OpenStack platforms (e.g., Chameleon cloud) and \texttt{OS\_RESERVATION\_ID} is used to specify a list of nodes used for scalability test.
\subsection{Performance of \texttt{BeeSwarm}}
In order to evaluate the performance of \texttt{BeeSwarm}, we discuss the overhead of launching \texttt{BeeSwarm} and the scalability of \texttt{BeeSwarm} for large-scale tests.
\subsubsection{Overhead of \texttt{BeeSwarm}}
\textbf{Fig. \ref{breakdown}} and \textbf{Fig. \ref{breakdown-gl}} show the time breakdown of CI for FleCSALE with the \texttt{BeeSwarm} scalability test, including the original correctness test and one set of multi-node scalability tests using \texttt{BeeSwarm}. The scalability test involves different execution configurations that range from 1 process to 128 processes. We can see that the major overhead of \texttt{BeeSwarm} comes from deploying the scalability test environment. This is mainly caused by the long instance launching time on Chameleon Cloud. However, since CI tests are usually not on the critical path of an application's development process, the extra time cost has negligible impact on developers.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/perf.pdf}
\caption{Time breakdown of an example CI test with \texttt{BeeSwarm} scalability test on Travis CI.}
\label{breakdown}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/perf-gl.pdf}
\caption{Time breakdown of an example CI test with \texttt{BeeSwarm} scalability test on GitLab CI.}
\label{breakdown-gl}
\end{figure}
\subsubsection{Scalability of \texttt{BeeSwarm} }
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{figures/scalability.pdf}
\caption{Scalability of deploying scalability test environment for \texttt{BeeSwarm} on Chameleon cloud.}
\label{scalability}
\end{figure}
Since \texttt{BeeSwarm} is designed for launching large-scale parallel applications, the scalability of \texttt{BeeSwarm} itself is also very important.
As mentioned before, the main overhead of \texttt{BeeSwarm} comes from deploying the scalability test environment. \textbf{Fig. \ref{scalability}} shows the performance of deploying the scalability test environment for \texttt{BeeSwarm}. We test it with an increasing number of processes ranging from 1 to 1024.
We run the scalability test on 16 instances on Chameleon Cloud, each with 64 cores. From \textbf{Fig. \ref{scalability}}, we can see that the time cost is nearly constant (less than 900 seconds) as we increase the number of processes. This indicates that the scalability of \texttt{BeeSwarm} itself is sufficient for large-scale tests.
\subsection{Scalability Test Showcase}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/flecsale-result.pdf}
\caption{The scalability test result of FleCSALE.}
\label{flecsale-result}
\end{figure}
We use FleCSALE to showcase a sample scalability test using \texttt{BeeSwarm}. We configure it to run using 2 to 32 processes on one or two nodes. When using two nodes, we evenly divide the total number of processes between them (each has 1 to 16 processes). The file generated by \texttt{BeeSwarm} is in the comma-separated values (CSV) format, and we plot the result data in \textbf{Fig. \ref{flecsale-result}}.
Even with this simple test using \texttt{BeeSwarm}, we can observe some interesting behavior of FleCSALE. We can see that FleCSALE gains better speedup (1.73x--4.01x) in a single-node environment compared to the speedup on two nodes (1.05x--1.40x) given the same total number of processes. This may suggest that inter-node communication could be a performance bottleneck for FleCSALE running on systems similar to Chameleon.
This result can effectively give developers the scalability data of the application they are developing, so that they can make adjustment to their application in a more timely manner. Not only can the developer observe behavior of different processing schemes, but using \texttt{BeeSwarm} can help aid them to see performance improvement or degradation of their application as they push changes to the application.
\section{Introduction}
High software quality is one of the most important goals of software development. Software testing serves as the most widely used approach to ensure that the quality of software meets expectations.
A good way to test software is to include automated tests in the build process. With the rise of Extreme Programming (XP) and Test Driven Development (TDD), self-testing processes for code development have become popular and are widely adopted by many software development projects.
As software becomes increasingly structurally complicated, the number of developers involved in the development process increases. As each developer makes progress, they commit their work periodically (every several hours or days) to the central code repository (e.g., git, SVN). Not only does each developer's work require testing, the integration of work between developers also requires testing. So, Continuous Integration (CI) \cite{fowler2006continuous} is widely adopted in many software development projects. A CI server is used dedicatedly for testing. Each time a developer commits her work to the central code repository, the CI server automatically makes a clone of the project and conducts pre-designed tests, so that it can constantly monitor the quality of the software in terms of correctness and report potential problems in a timely fashion, helping developers make bug fixes more efficiently.
\begin{figure*}[h]
\centering
\includegraphics[width=1\textwidth]{figures/CI-motivation-2.png}
\caption{Example: the performance of Legion \cite{bauer2012legion} changes as developers make progress. The performance is obtained by running a benchmark PENNANT\cite{ferenbaugh2015pennant} on the Legion system. The test suit sedovbig3x30 running on 10 processes (CPU cores) is used.
}
\label{exp}
\end{figure*}
When it comes to HPC applications, \textit{performance} and \textit{scalability}
are two other important factors of software quality besides correctness, since such applications are usually designed to deliver high performance on given platforms. Also, applications that aim to solve complex time-consuming problems are expected to obtain good speedup when deployed on multi-node clusters, many-core architectures, or large-scale supercomputers. The scalability of an HPC application is usually interpreted as how much speedup can be obtained given more computing resources. Better scalability means that the HPC application can use the underlying computing resources more efficiently and constantly deliver good performance on varying amounts of computing resources.
During HPC application development, as developers make progress
and commit their work to the central code repository, the scalability of the application can change. For instance, this can be caused by changes in algorithm design, tunable parameters, and different hardware architectures of target production systems. For example, \textbf{Fig. \ref{exp}} shows how the performance of Legion \cite{bauer2012legion}, a data-centric parallel programming system, changes
with different source code commits. The performance is obtained by running a benchmark software, PENNANT \cite{ferenbaugh2015pennant}, on the Legion system. As we can see, the execution time can change significantly as developers make progress. Receiving performance or scalability results like this in a timely manner can greatly help developers make better decisions about their code design and deliver HPC software with the expected quality. However, current designs of CI services are commonly focused on monitoring software quality in terms of correctness (e.g., detecting software bugs). To the best of our knowledge, none of the current work can easily enable automatic performance or scalability tests in CI, since the test environment of CI is usually deployed on a single machine incapable of conducting large-scale scalability tests.
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\textwidth]{figures/beeSwarm_Arch.pdf}
\caption{Architecture of \texttt{BeeSwarm}.
}
\label{arch}
\end{figure*}
In this work, we propose a performance and scalability test system for CI -- \texttt{BeeSwarm}. \texttt{BeeSwarm} can be used as a plug-in for any current CI service. It takes the widely used Docker container as input, and the performance and scalability tests can run on both HPC cluster environments and cloud computing environments. Just like the original correctness test in CI, the performance and scalability tests are autonomic. \texttt{BeeSwarm} only requires users to make simple specifications about the test environment they want to use and the test specification they need. Every time developers commit a change to the central code repository, they can choose to schedule a scalability test after the success of the original correctness test. The performance and scalability test results will be automatically pushed back to the central code repository.
Although we deploy \texttt{BeeSwarm} on Travis CI and GitLab CI in this work, it can also be deployed on any other CI test environment. To deploy on another CI platform, only minimal modifications to the \texttt{BeeSwarm} configuration scripts are necessary, which makes \texttt{BeeSwarm} highly portable across CI platforms. In addition, although we only show the use of Chameleon Cloud, the scalability test can also be executed on any other BEE-supported platform (HPC clusters, AWS, OpenStack, etc.). This gives developers the flexibility to choose the platform they want their applications to run on.
The rest of this paper is organized as follows. We motivate our work in section \ref{motivation}. In section \ref{background}, we give the necessary background that can help readers understand this work. We provide design details of \texttt{BeeSwarm} in section \ref{design}, followed by experimental evaluation in section \ref{experiments}. Section \ref{related_work} discusses recent work related to ours. Finally, section \ref{conclusion} concludes our work.
\section{Motivation}
\label{motivation}
\begin{table}[h]
\centering
\caption{Several commits in the Legion commit tree that may have caused the performance improvement after commit 4400 shown in Figure \ref{exp}.}
\label{commits}
\begin{tabular}{|p{0.3cm}|p{5.8cm}|}
\hline
\multicolumn{1}{|c|}{Commit HASH} & \multicolumn{1}{c|}{Commit message} \\ \hline
725e549dc & legion: fixing a potential hang with old-style bounds checking \\ \hline
3edff3290 & regent: small bug fix to openmp code generation for regent \\ \hline
d0b157755 & tools: small bug fix for legion prof ascii deserializer \\ \hline
1162649ea & legion: small bug fix for dependence analysis of close operations involving different children in different modes for the same field \\ \hline
2818b5fe9 & legion: small bug fix for remote advances of version numbers \\ \hline
824d6c77d & legion: fixing a bug where we were not properly paging in version states for remote virtual mappings \\ \hline
\end{tabular}
\end{table}
In this section, we use an example to motivate our work by showing the necessity of having automatic scalability tests in CI. In \textbf{Fig. \ref{exp}} we show how the performance of Legion changes as developers make progress. However, it is hard to find out exactly which commit(s) cause a performance change. For example, the performance of Legion improved significantly from commit \texttt{1e96} to \texttt{4400}. Commit \texttt{4400} is a merge operation between two branches, which in total contains about 61300 lines of code changes composed of hundreds of commits. It is hard to tell which commit(s) caused the performance improvement. By searching the commit tree of Legion, we found several commits focusing on bug fixing that may potentially affect performance. We list several of them in \textbf{Table \ref{commits}}. So, if a scalability test were available in the CI for Legion, we would be able to easily find the root cause of the performance change by searching the scalability test results for each commit and keeping track of the changes that benefit or hurt scalability.
\section{Related Work}
\label{related_work}
Scalability is one of the most important metrics when evaluating the quality of HPC applications. Much work has been done to build scalability test tools to facilitate HPC application development. For example, \cite{vetter2005mpip} proposed a lightweight profiling library for MPI applications, which is only based on statistical information about MPI functions and brings little performance overhead. \cite{chen2006stas} proposed an effective scalability testing and analysis system -- STAS. \cite{chung2006mpi} proposed a configurable MPI scalability analysis tool for the Blue Gene/L supercomputer. \cite{brunst2013custom} proposed a performance tool, Vampir, that can be used to detect hot spots in HPC applications; this can efficiently help HPC developers make their applications more scalable. \cite{merchant2012tool} proposed JACE (Job Auto-creator and Executor), a tool that enables the automation of the creation and execution of complex performance and scalability regression tests. It can help developers tune an application on a given platform to maximize performance given different optimization flags and tunable variables. \cite{muraleedharan2012hawk} presented an HPC performance and scalability test tool, Hawk-i, that uses cloud computing platforms to test HPC applications in order to reduce the effort of accessing relatively scarce and on-demand high performance resources. \cite{bell2003paraprof} proposed ParaProf, a portable, extensible, and scalable tool for parallel performance profile analysis. It gathers a rich set of hardware counters and traceable information in order to offer much more detailed profiling results, similar to state-of-the-art single-process profiling tools. \cite{yoo2015patha} proposed a scalability test tool, PATHA, that uses system logs to extract key performance measures and applies statistical tools and data mining methods to the performance data to identify bottlenecks or to debug performance issues in HPC applications. Although this recent work is useful for scalability testing of HPC applications, these tools or systems cannot be easily adopted by current HPC application development projects, since they either require modification of the HPC application or a complicated installation and configuration process in order to make the tools work properly on a given HPC platform.
\section{Introduction}
An Artinian graded algebra $A$ over a field $\K$ is said to satisfy the weak Lefschetz property (WLP for short) if there exists a linear form $\ell$ such that the multiplication map $\times \ell:A_i\rightarrow A_{i+1}$ has maximal rank for every $i\geq 0$. An algebra $A$ is said to satisfy the strong Lefschetz property (SLP) if there is a linear form $\ell$ such that $\times \ell^j:A_i\rightarrow A_{i+j}$ has maximal rank for each $i,j\geq 0$. Determining which graded Artinian algebras satisfy the Lefschetz properties has been of great interest (see for example \cite{HMNW, MMN2, Gondim, BK, Lefbook, BMMNZ, tour} and their references).
It is known that every Artinian algebra of codimension two in characteristic zero has the SLP; this has been proven many times using different techniques, see for example \cite{HMNW} and \cite{Briancon}. This is no longer true in codimension three and higher, and in general it is not easy to determine which Artinian algebras satisfy or fail the WLP or SLP. Studying the Lefschetz properties of Artinian Gorenstein algebras is a very interesting problem. The $h$-vector of an Artinian algebra with the WLP is unimodal. In general there are examples of Artinian Gorenstein algebras with non-unimodal $h$-vectors, hence failing the WLP. R. Stanley \cite{Stanley} gave the first example, with $h$-vector $h=(1,13,12,13,1)$. Later D. Bernstein and A. Iarrobino \cite{BI} and M. Boij and D. Laksov \cite{BL} provided examples of non-unimodal Gorenstein $h$-vectors with $h_1=5$.
A sequence $h=\left(h_0,h_1,\dots \right)$ is a Stanley-Iarrobino sequence, or briefly an SI-sequence, if it is symmetric, unimodal and its first half, $(h_0,h_1,\dots , h_{\lfloor\frac{d}{2}\rfloor})$, is differentiable.
R. Stanley \cite{Stanley} showed that Gorenstein sequences are SI-sequences for $h_1\leq 3$. By the examples of non-unimodal Gorenstein Hilbert functions it is known that this is not necessarily true for $h_1\geq 5$. Whether Hilbert functions of Artinian Gorenstein algebras with $h_1=4$ are SI-sequences is still open. It is known that any SI-sequence is a Gorenstein $h$-vector \cite{ChoIarrobino, MiglioreNagel}. T. Harima \cite{Harima1995} gave a characterization of the $h$-vectors of Artinian Gorenstein algebras satisfying the WLP. In this article we generalize this result and characterize the $h$-vectors of Artinian Gorenstein algebras satisfying the SLP, see Theorem \ref{SI-SLP-Theorem}.\par
In section \ref{section4}, we consider classes of Artinian Gorenstein algebras which are quotients of coordinate rings of a set of $\K$-rational points in $\mathbb{P}_{\K}^n$. We prove that for a set $X$ of points in $\mathbb{P}_{\K}^n$ which lie on a rational normal curve any Artinian Gorenstein quotient of $A(X)$ satisfies the SLP, Theorem \ref{smoothconic}.
Higher Hessians of dual generators of Artinian Gorenstein algebras were introduced by T. Maeno and J. Watanabe \cite{MW}. We study the higher Hessians of dual generators of Artinian Gorenstein quotients of $A(X)$. We show Artinian Gorenstein quotients of $A(X)$ where $X\subset \mathbb{P}_{\K}^2$ lie on a conic satisfy the SLP, Theorem \ref{singularConic}.
We also prove non-vanishing of the determinants of certain higher Hessians in Theorems \ref{points-on-conic} and \ref{points-on-line} for Artinian Gorenstein quotients of coordinate ring of points $X\subset \mathbb{P}_{\K}^2$ where $X$ contains points on a conic and a line respectively. We then in Corollary \ref{corSLP} provide classes of such Artinian algebras satisfying SLP.
\section{Preliminaries}
Let $S={\sf k}[x_0,\dots ,x_n]$ be a polynomial ring equipped with the standard grading over a field $\sf k$ of characteristic zero and $\mathbb{P}^{n}=\mathbb{P}^{n}_{\sf k}=\mathrm{Proj}\, S$. Let $A = S/I$ be a graded Artinian (i.e., of Krull dimension zero) algebra, where $I$ is a homogeneous ideal. The \emph{Hilbert function} of $A$ in degree $i$ is $h_{A}(i) = h_i =\dim_{\sf k}(A_i)$. Since $A$ is Artinian, its Hilbert function is determined by its $h$-vector, $h=\left(h_0,h_1,h_2,\dots ,h_d\right)$ with $h_d\neq 0$. The integer $d$ is called the \emph{socle degree}. The graded ${\sf k}$-algebra is \emph{Gorenstein} if it has a one-dimensional socle. Without loss of generality we may assume that $I$ does not contain a linear form (a form of degree $1$), so $h_1=n+1$; this number is called the codimension of $A$. If $A$ is Gorenstein then the $h$-vector is symmetric, so $h_d=1$. A sequence $h=\left(h_0,\dots ,h_d\right)$ is called a \emph{Gorenstein sequence} if $h$ is the Hilbert function of some Artinian Gorenstein algebra. \par
\noindent Let $h$ and $i$ be positive integers. Then $h$ can be written uniquely in the following form
\begin{equation}
h=\binom{m_i}{i}+\binom{m_{i-1}}{i-1}+\cdots +\binom{m_j}{j},
\end{equation}
where $m_i > m_{i-1}>\cdots > m_j\geq j\geq 1$. This expression for $h$ is called the $i$-binomial expansion of $h$. Also define
\begin{equation}
h^{\langle i\rangle}=\binom{m_i+1}{i+1}+\binom{m_{i-1}+1}{i}+\cdots +\binom{m_j+1}{j+1}
\end{equation}
where we set $0^{\langle i\rangle}:=0$.
A sequence of non-negative integers $h=\left(h_0,h_1,\dots \right)$ is called an \emph{O-sequence} if $h_0=1$ and $h_{i+1}\leq h_i^{\langle i\rangle}$ for all $i\geq 1$. Such sequences are the ones which exactly occur as Hilbert functions of standard graded algebras.
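To illustrate the definitions, the $3$-binomial expansion of $7$ is
\[
7=\binom{4}{3}+\binom{3}{2}, \qquad\text{so}\qquad 7^{\langle 3\rangle}=\binom{5}{4}+\binom{4}{3}=9;
\]
hence any O-sequence with $h_3=7$ must satisfy $h_4\leq 9$.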
\begin{theorem}[Macaulay\cite{Macaulay}]
The sequence $h = \left(h_0,h_1,\dots , h_d\right)$ is an O-sequence if and only if it is the $h$-vector of some standard graded Artinian algebra.
\end{theorem}
We say $h=\left(h_0,h_1,\dots \right)$ is \emph{differentiable} if its first difference $\Delta h = \left(h_0,h_1-h_0,\dots \right)$ is an O-sequence. Moreover, an $h$-vector is called \emph{unimodal} if $h_0\leq h_1\leq\cdots\leq h_i \geq h_{i+1}\geq \cdots \geq h_d$. A sequence $h=\left(h_0,h_1,\dots \right)$ is \emph{Stanley-Iarrobino sequence}, or briefly \emph{SI-sequence}, if it is symmetric, unimodal and its first half, $(h_0,h_1,\dots , h_{\lfloor\frac{d}{2}\rfloor})$ is differentiable.\par
Now we recall the theory of \emph{Macaulay inverse systems}. Define the Macaulay dual ring $R= {\sf k}[X_0,\dots ,X_n]$ to $S$, where the action of $x_i$ on $R$, denoted by $\circ$, is partial differentiation with respect to $X_i$.
For a homogeneous ideal $I\subseteq S$ define its \emph{inverse system} to be the graded $S$-module $M\subseteq R$ such that $I=\ann_S(M)$.
There is a one-to-one correspondence between graded Artinian algebras $S/I$ and finitely generated graded $S$-submodules $M$ of $R$, where $I=\ann_S(M)$ is the annihilator of $M$ in $S$; conversely, $M=I^{-1}$ is the $S$-submodule of $R$ which is annihilated by $I$. Moreover, the Hilbert functions of $S/I$ and $M$ are the same, in fact $\dim_{\sf k}(S/I)_i=\dim_{\sf k}M_i$ for all $i\geq 0$. See \cite{Geramita} and \cite{IK} for more details.
By a result by F.H.S. Macaulay \cite{F.H.S} it is known that an Artinian standard graded $\mathsf{k}$-algebra $A=S/I$ is Gorenstein if and only if there exists $F\in R_d$, such that $I=\ann_S(F)$. The homogeneous polynomial $F\in R_d$ is called the \emph{Macaulay dual generator} of $A$.
\begin{definition}\cite[Definition 3.1]{MW}
Let $F$ be a polynomial in $R$ and $A= S/\ann_S(F)$ be its associated Artinian Gorenstein algebra. Let $\mathcal{B}_{j} = \lbrace \alpha^{(j)}_i+\ann_S(F) \rbrace_i$ be a $\mathsf{k}$-basis of $A_j$. The entries of the $j$-th Hessian matrix of $F$ with respect to $\mathcal{B}_j$ are given by
$$
(\Hess^j(F))_{u,v}=(\alpha^{(j)}_u\alpha^{(j)}_v \circ F).
$$
We note that when $j=1$ the form $\Hess^1(F)$ coincides with the usual Hessian. Up to a non-zero constant multiple $\det \Hess^j(F)$ is independent of the basis $\mathcal{B}_j$. By abuse of notation we will write $\mathcal{B}_{j} = \lbrace \alpha^{(j)}_i \rbrace_i$ for a basis of $A_j$. For a linear form $\ell=a_0x_0+\cdots +a_nx_n$ we denote by $\Hess^j_\ell(F)$ the Hessian evaluated at the point $P$ dual to $\ell$ that is $P=(a_0,\dots ,a_n)$.
\end{definition}
The following result by T. Maeno and J. Watanabe provides a criterion for an Artinian Gorenstein algebra to satisfy the SLP.
\begin{theorem}\cite[Theorem 3.1]{MW}
Let $A = S/\ann_S(F)$ be an Artinian Gorenstein quotient of $S$ with socle degree $d$. Let $\ell$ be a linear form and consider the multiplication maps $\times \ell^{d-2j} :A_j\longrightarrow A_{d-j}$. Pick any basis $\mathcal{B}_j$ of $A_j$ for $j=0,\dots , \lfloor\frac{d}{2}\rfloor$. Then the linear form $\ell$ is a strong Lefschetz element for $A$ if and only if
$$
\det\Hess^j_\ell(F)\neq 0,
$$
for every $j=0,\dots , \lfloor\frac{d}{2}\rfloor$.
\end{theorem}
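For small socle degrees the criterion is easy to check directly. Below is a minimal computational sketch (illustrative only; the dual generator $F$ and the point $P$ are ad hoc choices, not taken from this paper): for $d=3$ the only relevant determinants are $\det\Hess^0_\ell(F)=F(P)$ and the classical Hessian determinant $\det\Hess^1_\ell(F)$.
\begin{verbatim}
import sympy as sp

X = sp.symbols('X0 X1 X2')
# dual generator of socle degree d = 3: three general cubes of linear forms
F = X[0]**3 + X[1]**3 + (X[0] + X[1] + X[2])**3

P = (1, 2, 3)                      # point dual to l = x0 + 2*x1 + 3*x2
subs = dict(zip(X, P))

hess1 = sp.Matrix([[sp.diff(F, a, b) for b in X] for a in X])
print(F.subs(subs) != 0)           # det Hess^0_l(F) = F(P) != 0
print(hess1.det().subs(subs) != 0) # det Hess^1_l(F) != 0
# both determinants are non-zero, so by the criterion l is a strong
# Lefschetz element for A = S/Ann_S(F), which has h-vector (1, 3, 3, 1)
\end{verbatim}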
\begin{definition}
Let $A= S/\ann(F)$ where $F\in R_d$. Let $\B_j = \lbrace \alpha^{(j)}_u\rbrace_u$ and $\B_{d-j} = \lbrace \beta^{(d-j)}_u\rbrace_u$ be $\sf k$-bases of $A_j$ and $A_{d-j}$, respectively. The entries of the \emph{catalecticant matrix of $F$} with respect to $\B_j$ and $\B_{d-j}$ are given by
$$
(\Cat^j_F)_{uv}=(\alpha^{(j)}_u\beta^{(d-j)}_v\circ F).
$$
\end{definition}
Up to a non-zero constant multiple, $\det \Cat^j_F$ is independent of the bases $\B_j$ and $\B_{d-j}$. The rank of the $j$-th catalecticant matrix of $F$ is equal to the Hilbert function of $A$ in degree $j$, see \cite[Definition 1.11]{IK}.
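Since the rank of $\Cat^j_F$ computes $h_{S/\ann_S(F)}(j)$, the Hilbert function of an Artinian Gorenstein algebra can be read off directly from its dual generator. The following Python/SymPy sketch is for illustration; the example is ad hoc, and serious computations of this kind are normally done in a computer algebra system such as Macaulay2.
\begin{verbatim}
import sympy as sp
from itertools import combinations_with_replacement

X = sp.symbols('X0 X1 X2')

def monomials(deg):
    # all monomials of the given degree in the dual variables
    return [sp.Mul(*c) for c in combinations_with_replacement(X, deg)]

def apolar(mono, F):
    # action of the monomial x^a on F: differentiate by each X_i accordingly
    exps = sp.Poly(mono, *X).monoms()[0]
    for var, e in zip(X, exps):
        F = sp.diff(F, var, e)
    return sp.expand(F)

def catalecticant(F, j, d):
    # matrix of the pairing S_j x S_{d-j} -> k, (a, b) -> (a*b) o F
    return sp.Matrix([[apolar(r * c, F) for c in monomials(d - j)]
                      for r in monomials(j)])

# Example: F = L1^3 + L2^3 + L3^3 for three non-collinear points of P^2;
# the ranks recover the Hilbert function (1, 3, 3, 1) of A = S/Ann_S(F).
F = X[0]**3 + X[1]**3 + (X[0] + X[1] + X[2])**3
print([catalecticant(F, j, 3).rank() for j in range(4)])
\end{verbatim}
Here the full monomial bases of $S_j$ and $S_{d-j}$ are used, so the matrix is larger than the catalecticant with respect to bases of $A_j$ and $A_{d-j}$, but its rank is the same.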
Throughout this paper we denote by $X=\{P_1,\dots ,P_s\}$ a set of $s$ distinct points in $\mathbb{P}^n$. Denote the coordinate ring of $X$ by $A(X)=S/I(X)$, where $I(X)$ is the homogeneous ideal of forms vanishing on $X$. For each point $P=(a_0 : \dots : a_n)\in X$ we fix affine coordinates in $\mathbb{A}^{n+1}$ and denote by $L=a_0X_0+\cdots +a_nX_n$ the linear form in $R$ dual to $P$. We set
\begin{equation}
\tau(X):=\min \{i\mid h_{A(X)}(i)=s\}.
\end{equation}
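The value $\tau(X)$, and more generally $h_{A(X)}$, can be computed from the coordinates of the points: $h_{A(X)}(i)$ is the rank of the matrix whose entries are the degree-$i$ monomials evaluated at the points of $X$. A small illustrative sketch (the example points are an arbitrary choice):
\begin{verbatim}
from math import prod
from itertools import combinations_with_replacement
import sympy as sp

def h_points(points, i):
    # h_{A(X)}(i) = rank of [m(P)] over degree-i monomials m and P in X
    n = len(points[0]) - 1
    cols = list(combinations_with_replacement(range(n + 1), i))
    M = sp.Matrix([[prod(P[k] for k in e) for e in cols] for P in points])
    return M.rank()

def tau(points):
    # smallest i with h_{A(X)}(i) = |X|
    i = 0
    while h_points(points, i) < len(points):
        i += 1
    return i

# four general points in P^2: h_{A(X)} = (1, 3, 4, 4, ...) and tau(X) = 2
pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print([h_points(pts, i) for i in range(4)], tau(pts))
\end{verbatim}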
A. Iarrobino and V. Kanev \cite{IK} proved that any Artinian Gorenstein quotient of $A(X)$ with a general enough dual generator of degree $d\geq \tau(X)$ has Hilbert function equal to $h_{A(X)}$ in degrees $0\leq j\leq \lfloor\frac{d}{2}\rfloor$.
M. Boij \cite{Boij} describes the special form of the dual generator of an Artinian Gorenstein quotient of $A(X)$.
\begin{proposition}\cite[Proposition 2.3]{Boij}\label{dualForm}
Let $A$ be any Artinian Gorenstein quotient of $A(X)$ with socle degree $d$ such that $d\geq \tau(X)$ and dual generator $F$. Then $F$ can be written as
$$
F= \sum^s_{i=1}\alpha_iL_i^d
$$
where $\alpha_1,\dots ,\alpha_s\in \K$, are not all zero.
\end{proposition}
\begin{proposition}\cite[Proposition 2.4]{Boij}\label{h-vector}
Assume that $d\geq 2\tau(X)-1$ and $F =\sum^s_{i=1}\alpha_iL_i^d$ where $\alpha_i\neq 0$ for all $i$. Let $A$ be the Artinian Gorenstein quotient of $A(X)$ with dual generator $F$. Then the Hilbert function of $A$ is given by
$$
h_A(i)=\begin{cases}
&h_{A(X)}(i) \quad\quad\quad\hspace*{0.25cm} 0\leq i\leq \lfloor \frac{d}{2}\rfloor,\\
&h_{A(X)}(d-i)\quad \quad \lceil\frac{d}{2}\rceil\leq i\leq d.
\end{cases}
$$
\end{proposition}
The following well known result guarantees the existence of a set of points $X\subseteq\mathbb{P}^n$ with a given Hilbert function, under an assumption on that Hilbert function.
\begin{theorem}\cite[Theorem 4.1]{GMR} \label{diffOseqthm}
Let $h=(h_0,h_1,\dots )$ be a sequence of non-negative integers. Then
there is a reduced $\sf k$-algebra with Hilbert function $h$ if and only if $h$ is a differentiable
O-sequence.
\end{theorem}
The following theorem due to E. D. Davis \cite{Davis} provides information about the geometric properties of a set of points $X\subseteq \mathbb{P}^2$ given the Hilbert function $h_{A(X)}$.
\begin{theorem}\cite{Davis}\label{davis}
Let $X\subseteq \mathbb{P}^2$ be a set of distinct points such that
$\Delta h_{A(X)} = (\mathrm{h}_0,\mathrm{h}_1,\dots ,\mathrm{h}_{\tau(X)}).$ Assume that $\mathrm{h}_j=\mathrm{h}_{j+1}=r$ for some $j\geq t$ where $t$ is the smallest degree of the generators of the defining ideal of $X$. Then $X$ is a disjoint union of $X_1\subseteq \mathbb{P}^2$ and $X_2\subseteq \mathbb{P}^2$ such that $X_1$ lies on a curve of degree $r$ and $\Delta h_{A(X_2)}= (\mathrm{h}_r-r,\mathrm{h}_{r+1}-r ,\dots ,\mathrm{h}_{j-1}-r)$.
\end{theorem}
\section{Hilbert functions of Artinian Gorenstein algebras and SI-sequences}
In this section we give a characterization of the Hilbert functions of Artinian Gorenstein algebras satisfying the SLP, which generalizes Theorem 1.2 in \cite{Harima1995}.
We do so by using the higher Hessians of the Macaulay dual generators of Artinian Gorenstein algebras. We first provide an explicit expression for the higher Hessians of polynomials of the form $F=\sum_{i=1}^s\alpha_iL_i^d$.
\begin{lemma}\label{HessLemma}
Let $A$ be an Artinian Gorenstein quotient of $A(X)$ with dual generator $F = \sum^s_{i=1}\alpha_iL_i^d$ where $\alpha_i\neq 0$ for all $i$ and $d\geq 2\tau(X)-1$. Then for each $0\leq j\leq \tau(X)-1$ we have that
\begin{equation}\label{hess}
\det\Hess^j(F) = \sum_{\mathcal{I}\subseteq \{1,\dots ,s\}, \vert\mathcal{I}\vert=h_{A(X)}(j)} c_\mathcal{I}\prod_{i\in \mathcal{I}}\alpha_iL_i^{d-2j},
\end{equation}
where $c_{\mathcal{I}}\in \K$. \par
\noindent Moreover, $c_{\mathcal{I}}\neq 0$ if and only if for $X_{\mathcal{I}} = \{P_i\}_{i\in \mathcal{I}}$ we have that $h_{A(X)}(j)=h_{A(X_{\mathcal{I}})}(j)$.
\end{lemma}
\begin{proof}
We have that
$$
\Hess^j(F) = \sum^s_{i=1}\alpha_i\Hess^j(L_i^d).
$$
Notice that $\Hess^j(L_i^d)$ is a rank one matrix, equal to $L_i^{d-2j}$ times a constant matrix of rank one. Let $T=\K[\alpha_1, \dots ,\alpha_s, L_1, \dots , L_s]$ be a polynomial ring over $\K$. For each $P_i = (a_{i,0}: \dots : a_{i,n})\in X$ denote $L_i=a_{i,0}X_0+\cdots+a_{i,n}X_n$ and define the action of $S$ on $T$ by $x_j\circ L_i=a_{i,j}$ and $x_j\circ \alpha_i=0$ for every $1\leq i\leq s$ and $0\leq j\leq n$. \par \noindent We consider $\det \Hess^j(F)$ as a bihomogeneous polynomial in $T$ of bidegree $\left(h_{A(X)}(j), (d-2j)h_{A(X)}(j)\right)$.
We claim that $\det \Hess^j(F)$ is square-free in the $\alpha_i$'s.
We prove the claim by showing that the coefficient of any monomial in $T$ with exponent larger than one in some $\alpha_i$ is zero. Without loss of generality, we let $\alpha_1$ be the only variable with exponent two. So we show that $\alpha^2_1L^{2(d-2j)}_1\prod^{h_{A(X)}(j)-1}_{i=2}\alpha_iL_i^{d-2j}$, which has bidegree $\left(h_{A(X)}(j), (d-2j)h_{A(X)}(j)\right)$, has zero coefficient in $\det \Hess^j(F)$. Assume not, and set
\begin{equation}\label{lemeq}
\det\left(\Hess^j(F)\biggm\vert_{\alpha_{h_{A(X)}(j)}=\cdots =\alpha_s=0}\right)= \det\left(\sum^{h_{A(X)}(j)-1}_{i=1}\alpha_i\Hess^j(L_i^d) \right) = \lambda \alpha^2_1L^{2(d-2j)}_1\prod^{h_{A(X)}(j)-1}_{i=2}\alpha_iL_i^{d-2j}\neq 0
\end{equation}
for some $\lambda\in \mathsf{k}^*$. Notice that $\Hess^j(F)\biggm\vert_{\alpha_{h_{A(X)}(j)}=\cdots =\alpha_s=0}$ is a square matrix of size $h_{A(X)}(j)$ and by the above equation has maximal rank. On the other hand, $\sum^{h_{A(X)}(j)-1}_{i=1}\alpha_i\Hess^j(L_i^d)$ is a sum of ${h_{A(X)}(j)-1}$ rank one matrices, hence has rank at most $h_{A(X)}(j)-1$, which is a contradiction.\par
\noindent Now let $\mathcal{I}\subseteq \{1,\dots ,s\}$ be such that $\vert\mathcal{I}\vert=h_{A(X)}(j)$. If $c_{\mathcal{I}}\neq 0$ in Equation (\ref{hess}), setting $\alpha_i=0$ for every $i\in \{1,\dots , s\}\setminus \mathcal{I}$ implies that $\det\Hess^j(F)\neq 0$ and therefore $h_{A(X)}(j)=h_{A(X_{\mathcal{I}})}(j)$. Conversely, assume that $h_{A(X)}(j)=h_{A(X_{\mathcal{I}})}(j)$. Then, since $\vert\mathcal{I}\vert=h_{A(X)}(j)$, we may pick $\mathcal{B}_j=\{L^j_i\}_{i\in\mathcal{I}}$ as a basis for $A(X)_j$. Setting $\alpha_i=0$ for every $i\in \{1,\dots , s\}\setminus \mathcal{I}$ then makes $\Hess^j(F)$ with respect to $\mathcal{B}_j$ a diagonal matrix with diagonal entries $\frac{d!}{(d-2j)!}\alpha_iL^{d-2j}_i$ for $i\in \mathcal{I}$, which implies that $c_{\mathcal{I}}\neq 0$.
\end{proof}
Now we are able to state and prove the main result of this section.
\begin{theorem}\label{SI-SLP-Theorem}
Let $h=\left( h_0,h_1,\dots ,h_d\right)$ be a sequence of positive integers. Then $h$ is the Hilbert function of some Artinian Gorenstein algebra with the SLP if and only if $h$ is an SI-sequence.
\end{theorem}
\begin{proof}
Suppose $A$ is an Artinian Gorenstein algebra with Hilbert function $h$ and strong Lefschetz element $\ell\in A_1$. Then $\ell$ is in particular a weak Lefschetz element, so by \cite[Theorem 1.2]{Harima1995} we conclude that $h$ is an SI-sequence.
Conversely, assume that $h$ is an SI-sequence. We set $h_1=n+1$ and $h_t=s$ where $t=\min \{i\mid h_i\geq h_{i+1}\}$. So we have
\begin{equation}\label{hilbertfunction}
h=\left(1,n+1,\dots ,s,\dots , s,\dots ,n+1,1\right).
\end{equation}
Define a sequence of integers $\overline{h}=(\overline{h}_0,\overline{h}_1,\dots )$ such that $\overline{h}_i=h_i$ for $i=0,\dots , t$ and $\overline{h}_i=s$ for $i\geq t$. Since $h$ is an SI-sequence, $\overline{h}$ is a differentiable O-sequence, and by Theorem \ref{diffOseqthm} there exists $X = \{P_1,\dots ,P_s\}\subseteq \mathbb{P}^{n}$ such that the Hilbert function of its coordinate ring $A(X)$ is equal to $\overline{h}$, that is, $h_{A(X)}=\overline{h}$. Denote by $\{L_1,\dots, L_{s}\}$ the linear forms dual to $\{P_1,\dots ,P_s\}$.
As in Proposition \ref{dualForm}, let $A$ be the Artinian Gorenstein quotient of $A(X)$ with dual generator $F = \sum^{s}_{i=1}\alpha_i L^{d}_i$ for $d\geq 2\tau(X)$; notice that $\tau(X)=t$. By Proposition \ref{h-vector}, in order to have $h_A=h$ we must have $\alpha_i\neq 0$ for all $i$. Let $\ell$ be the linear form dual to a point $P\in \mathbb{P}^n$ chosen so that $\beta_i:= \ell\circ L_i\neq 0$ for every $1\leq i\leq s$; a general point $P$ has this property. We claim that there exist $\alpha_1, \dots , \alpha_{s}$ such that $\ell$ is a strong Lefschetz element for $A$. First note that for every $j = t,\dots ,\lfloor\frac{d}{2}\rfloor$ the multiplication map $\times \ell^{d-2j}:A_j\rightarrow A_{d-j}$ can be considered as the multiplication map $\times \ell^{d-2j}:A(X)_j\rightarrow A(X)_{d-j}$ on $A(X)$, which trivially has maximal rank.\par
Now we prove that there is a Zariski open set for $\alpha_i$'s such that for every $j=0,\dots , t-1$ the $j$-th Hessian matrix of $F$ evaluated at $\ell$ has maximal rank, that is
\begin{align*}
\rk \Hess^j_\ell(F)= h_A(j)=h_j.
\end{align*}
Using Lemma \ref{HessLemma} we get that
\begin{equation}
\det \Hess^j_\ell(F)=\sum_{\mathcal{I}\subseteq \{1,\dots ,s\},\vert\mathcal{I}\vert=h_j}c_\mathcal{I}\prod_{i\in \mathcal{I}}\alpha_i\beta^{d-2j}_i
\end{equation}
where $c_\mathcal{I}\neq 0$ if and only if $h_{A(X_{\mathcal{I}})}(j)=h_j$ for $X_{\mathcal{I}}=\{P_i\}_{i\in\mathcal{I}}$. Notice that since there is at least one subset $\mathcal{I}$ with $c_{\mathcal{I}}\neq 0$, the determinant of the $j$-th Hessian is not identically zero. Therefore, for each $j=0,\dots ,t-1$ the condition $\det \Hess^j_\ell(F)\neq 0$, or equivalently $\rk\Hess^j_\ell(F)=h_j$, defines a non-empty Zariski open subset of $\mathbb{P}^{s-1}$ in the $\alpha_i$'s, and the intersection of all these open subsets is non-empty. Equivalently, there is an
Artinian Gorenstein algebra $A$ such that $h_A=h$ and satisfies the SLP with $\ell\in A_1$.
\end{proof}
\section{Higher Hessians of Artinian Gorenstein quotients of $A(X)$ }\label{section4}
In this section we prove the non-vanishing of some of the higher Hessians for any Artinian Gorenstein quotient of $A(X)$ for $X\subset \mathbb{P}^n$ under some conditions on the configuration of the points in $X$. In some cases we conclude that they satisfy the SLP.
\begin{proposition}\label{tophess-s-1}
Let $X=\{P_1,\dots , P_s\}$ be a set of points in $\mathbb{P}^n$ and $A$ be any Artinian Gorenstein quotient of $A(X)$ with dual generator $F=\sum_{i=1}^s\alpha_iL_i^d$ for $d\geq 2\tau(X)-1$. Assume that $h_A(j)=s-1$ for some $j\geq 0$. Then there is a linear form $\ell$ such that
$$
\det\Hess^j_\ell(F)\neq 0.
$$
\end{proposition}
\begin{proof}
Using Lemma \ref{HessLemma} we have that
$$
\det \Hess^j(F) = \sum_{\mathcal{I}\subseteq \{1,\dots ,s\}, \vert\mathcal{I}\vert=s-1}c_\mathcal{I}\prod_{i\in \mathcal{I}}\alpha_iL^{d-2j}_i.
$$
We prove that $\det \Hess^j(F)\neq 0 $ as a polynomial in the $X_i$'s.
Without loss of generality assume that $c_{\{2,\dots ,s\}}\neq 0$ for $\mathcal{I}=\{2,\dots , s \}$. If $\det\Hess^j(F)$ were identically zero we would get
$$
c_{\{2,\dots , s\}}\prod^s_{i=2}\alpha_iL^{d-2j}_i = -\alpha_1L^{d-2j}_1\left(\sum_{\mathcal{I}\subseteq\{2,\dots ,s\}, \vert\mathcal{I}\vert=s-2}c_\mathcal{I}\prod_{i\in \mathcal{I}}\alpha_iL^{d-2j}_i\right).
$$
This contradicts the fact that $R={\sf k}[X_0,\dots,X_n]$ is a unique factorization domain, since $L_1$ does not divide the left hand side. We conclude that there exists $\ell$ such that $\det \Hess_\ell^j(F)\neq 0$.
\end{proof}
\begin{proposition}\label{tophess-s-2}
Let $s\geq 3$ and $X=\{P_1,\dots , P_s\}$ be a set of points in $\mathbb{P}^2$ in general linear position.
Let $A$ be any Artinian Gorenstein quotient of $A(X)$ with dual generator $F=\sum_{i=1}^s\alpha_iL_i^d$ for $d\geq 2\tau(X)-1$ and assume that $h_A(j)=s-2$ for some $0 \leq j\leq \frac{d+1}{2}$. Then there is a linear form $\ell$ such that
$$
\det\Hess^j_\ell(F)\neq 0.
$$
\end{proposition}
\begin{proof}
If $h_A(j+1)<s-2$ then the maximum value of $h_A$ is equal to $s-2$ and we have $j=\frac{d}{2}$. Therefore, $\Hess^j(F)=\Cat^j_F$ is the matrix of the multiplication $A_j\times A_j\rightarrow A_d\cong {\sf k}$, which clearly has maximal rank.\par
\noindent If $h_A(j+1)>s-2$ then we claim that the assumption on $X$ forces $h_A(j+1)=s$. Indeed, if $h_A(j+1)=s-1$, then the last three non-zero entries of $\Delta h_{A(X)}$ are equal to one, so E. D. Davis's Theorem \ref{davis} implies that $X$ contains at least three collinear points, contradicting general linear position.
Therefore,
$$
h_A=(1,3,\dots , s-2,\underbrace{s,\dots ,s}_k, s-2, \dots , 3,1),
$$
for some $k\geq 1$. Note that for a linear form $\ell$ such that $\ell\circ L_i\neq 0$ for every $i$, the multiplication map $\times\ell^{d-2i}:A_i\rightarrow A_{d-i}$ for every $j+1\leq i\leq \lfloor\frac{d}{2}\rfloor$ is a map on $A(X)$ in the same degrees and therefore has maximal rank. So $\det\Hess^j_\ell(F)\neq 0$ if and only if $\det\Hess^j_\ell(\ell^k\circ F)\neq 0$.
Denote by $\beta_i=\ell\circ L_i\neq 0$ for each $i$. So we have that
$$
G:= \ell^k\circ F = \frac{d!}{(d-k)!}\sum^s_{i=1}\alpha_i\beta^k_iL^{d-k}_i.
$$
The Artinian Gorenstein quotient of $A(X)$ with dual generator $G$ has the following Hilbert function
$$
(1,3,\dots ,s-2,s-2,\dots ,3,1).
$$
Therefore, it is enough to show that $\det\Hess_\ell^j(F)\neq 0$ for some $\ell$ in the case $h_A(j)=h_A(j+1)=s-2$. Note that in this case $d=2j+1$. By Lemma \ref{HessLemma} we have that
\begin{equation}
\det \Hess^j(F) = \sum_{\mathcal{I}\subseteq \{1,\dots ,s\}, \vert\mathcal{I}\vert=s-2}c_\mathcal{I}\prod_{i\in \mathcal{I}}\alpha_iL_i,
\end{equation}
such that $c_{\mathcal{I}}\neq 0$ if and only if $h_{A(X)}(j)=h_{A(X_{\mathcal{I}})}(j)$. Without loss of generality assume that $c_{\{3,\dots ,s\}}\neq 0$. Suppose that $\det \Hess^{j}(F)=0$; then
$$
c_{\{3,\dots , s\}}\prod^s_{i=3}\alpha_iL_i = -\alpha_1L_1\left(\sum_{\mathcal{I}\subseteq\{3,\dots ,s\}, \vert\mathcal{I}\vert=s-3}c_\mathcal{I}\prod_{i\in \mathcal{I}}\alpha_iL_i\right) -\alpha_2L_2\left(\sum_{\mathcal{I}\subseteq\{1,3,\dots ,s\}, \vert\mathcal{I}\vert=s-3}c_\mathcal{I}\prod_{i\in \mathcal{I}}\alpha_iL_i\right).
$$
Common zeros of $L_1$ and $L_2$ correspond to the line passing through $P_{1}$ and $P_{2}$. By assumption this line does not pass through any other point of $\{P_{3},\dots ,P_{s}\}$, which means that the left hand side of the above equality is nonzero at the common zero of $L_1$ and $L_2$, a contradiction. This implies that $\det \Hess^j(F)\neq 0$ and therefore $\det\Hess_\ell^{j}(F) \neq 0$ for some linear form $\ell$.
\end{proof}
\begin{theorem}[Points on a rational normal curve]\label{smoothconic}
Let $X=\{P_1,\dots , P_s\}$ be a set of points in $\mathbb{P}^n$ lying on a rational normal curve. Assume that $A$ is an Artinian Gorenstein quotient of $A(X)$ with dual generator $F=\sum_{i=1}^s\alpha_iL_i^d$ for $d\geq 2\tau(X)$ and $\alpha_i\neq 0$ for every $i$. Then $A$ satisfies the SLP.
\end{theorem}
\begin{proof}
Denote by $Y =\{Q_1,\dots ,Q_s\}\subset \mathbb{P}^1$ the preimage of $X$ under the Veronese embedding $\varphi :\mathbb{P}^1\longrightarrow \mathbb{P}^n.$
For each $i=1,\dots ,s$ denote by $K_i$ the linear form in ${\sf k}[S,T]$ dual to $Q_i$. Let $B$ be any Artinian Gorenstein quotient of the coordinate ring of $Y$, ${\sf k}[s,t]/I(Y)$, with dual generator $G = \sum_{i=1}^s\beta_iK_i^{nd}$. \par
\noindent The Artinian Gorenstein algebra $B$ has the following Hilbert function
$$
h_B=(1,2,3,\dots, \underbrace{s,\dots , s}_k,\dots ,3,2,1),
$$
for some $k\geq 1$ since we have assumed that $d\geq 2\tau(X)$ and $\alpha_i\neq 0$ for every $i$. It is known that $B$ has the SLP for some linear form $\ell\in B_1$ \cite[Proposition 2.2]{HMNW}. The Veronese embedding $\varphi$ gives a map of rings $\psi : S=\mathsf{k}[x_0,\dots ,x_n]\rightarrow \mathsf{k}[s,t]$ defined by taking each $x_i$ to a different monomial of degree $n$ in $s,t$. The map $\psi$ induces isomorphisms $A_j\cong B_{nj}$ as $\sf k$-vector spaces for every $j$. Let $\ell^\prime:=\psi^{-1}(\ell^n)\in A_1$, then we have
$$\rk\left(\times (\ell^\prime)^{d-2j}:A_j\longrightarrow A_{d-j} \right) = \rk\left(\times \ell^{n(d-2j)}:B_{nj}\longrightarrow B_{nd-nj} \right) = \dim_{\sf k}B_{nj}=\dim_{\sf k}A_j.$$
Thus $A$ satisfies the SLP with linear form $\ell^\prime$.
\end{proof}
The above theorem shows that every Artinian Gorenstein quotient of $A(X)$, where $X\subset \mathbb{P}^2$ consists of points on a smooth conic, satisfies the SLP.
We will show that the SLP also holds when $X\subset \mathbb{P}^2$ consists of points on a singular conic. \par
\noindent First we need to prove a lemma.
\begin{lemma}\label{detLemma}
Let $A=B+C$ be a square matrix of size $2m-1$ for $m\geq 1$, where
\begin{small}
\begin{equation}
B = \begin{pmatrix}
f_1&f_2&\dots & f_m &0&\cdots &0\\
f_2&f_3&\dots & f_{m+1}&0&\cdots &0\\
\vdots &\vdots && \vdots &\vdots & &\vdots \\
f_{m} &f_{m+1} &\dots & f_{2m-1} &0&\cdots &0\\
0&0&\dots & 0 &0&\cdots &0\\
\vdots &\vdots && \vdots &&\vdots &\vdots \\
0&0&\dots & 0 &0&\cdots &0\\
\end{pmatrix}, \quad C = \begin{pmatrix}
0&\dots & 0 &0&\cdots &0&0\\
0&\dots & 0 &0&\cdots &0&0\\
\vdots & &\vdots & \vdots &&\vdots &\vdots \\
0 &\dots &0& g_{2m-1} &\dots & g_{m+1} &g_m\\
\vdots &&\vdots & \vdots &&\vdots &\vdots \\
0 &\dots &0& g_{m+1} &\dots & g_{3} &g_2\\
0 &\dots &0& g_{m} &\dots & g_{2} &g_1\\
\end{pmatrix}.
\end{equation}
\end{small}
Then $$
\det A = (\det B_{\{1,\dots , m-1\}})(\det C_{\{m,\dots ,2m-1\}})+(\det B_{\{1,\dots , m\}})(\det C_{\{m+1,\dots ,2m-1\}}),
$$
where for a subset $\mathcal{J}\subset \{1,\dots ,2m-1\}$ we denote by $B_{\mathcal{J}}$ and $C_{\mathcal{J}}$ the square submatrices of $B$ and $C$ respectively with rows and columns in the index set $\mathcal{J}$.
\end{lemma}
\begin{proof}
We have that
$$
\det A = \sum_{\sigma\in S_{2m-1}}\mathrm{sign}\sigma A_{1\sigma_1}\dots A_{(2m-1)\sigma_{2m-1}},
$$
where the entry $A_{i\sigma_i}$ is the entry in row $i$ and column $\sigma_i$. Then we split $\det A$ in the following way
\begin{align*}
\det A =&(\sum_{\sigma\in S_{m-1}}\mathrm{sign}\sigma A_{1\sigma_1}\dots A_{(m-1)\sigma_{m-1}}) A_{m,m}(\sum_{\tau\in S_{m-1}}\mathrm{sign}\tau A_{(m+1)(m+\tau_{1})}\dots A_{(2m-1)(m+\tau_{m-1})})\\
&+ (\sum_{\sigma\in S_{m}, \sigma_m\neq m}\mathrm{sign}\sigma A_{1\sigma_1}\dots A_{m\sigma_{m}}) (\sum_{\tau\in S_{m-1}}\mathrm{sign}\tau A_{(m+1)(m+\tau_{1})}\dots A_{(2m-1)(m+\tau_{m-1})})\\
&+ (\sum_{\sigma\in S_{m-1}}\mathrm{sign}\sigma A_{1\sigma_1}\dots A_{(m-1)\sigma_{m-1}}) (\sum_{\tau\in S_{m}, \tau_1\neq 1}\mathrm{sign}\tau A_{m(m-1+\tau_{1})}\dots A_{(2m-1)(m-1+\tau_{m})})\\
=&(\det B_{\{1,\dots , m-1\}})(\det C_{\{m,\dots ,2m-1\}})+(\det B_{\{1,\dots , m\}})(\det C_{\{m+1,\dots ,2m-1\}}).
\end{align*}
\end{proof}
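The identity can also be checked numerically on random data, which is a useful sanity test of the indexing (a quick illustrative script, not part of the proof):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def det(M):
    # determinant, with the empty (0 x 0) matrix having determinant 1
    return 1.0 if M.size == 0 else np.linalg.det(M)

def check_det_lemma(m, trials=50):
    # random Hankel data f_1..f_{2m-1} and g_1..g_{2m-1} (0-indexed below)
    N = 2 * m - 1
    for _ in range(trials):
        f = rng.standard_normal(N)
        g = rng.standard_normal(N)
        B = np.array([[f[u + v] for v in range(m)] for u in range(m)])
        C = np.array([[g[(2 * m - 2 - u) + (2 * m - 2 - v)]
                       for v in range(m - 1, N)] for u in range(m - 1, N)])
        A = np.zeros((N, N))
        A[:m, :m] += B            # block B in rows/columns 1..m
        A[m - 1:, m - 1:] += C    # block C in rows/columns m..2m-1
        rhs = det(B[:m - 1, :m - 1]) * det(C) + det(B) * det(C[1:, 1:])
        assert np.isclose(det(A), rhs)
    return True

print(all(check_det_lemma(m) for m in range(1, 6)))   # True
\end{verbatim}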
\begin{theorem}\label{singularConic}
Assume that $X=\{P_1,\dots ,P_s\}$ is a set of points in $\mathbb{P}^2$ which lie on a conic. Let $A$ be an Artinian Gorenstein quotient of $A(X)$ with dual generator $F=\sum_{i=1}^s\alpha_iL_i^d$, for $d\geq 2\tau(X)$ and $\alpha_i\neq0$ for every $i$. Then $A$ satisfies the SLP.
\end{theorem}
\begin{proof}
If $X$ lies on a smooth conic, applying Theorem \ref{smoothconic} for $n=2$ gives the desired result.
Now suppose that $X$ consists of points on a singular conic, that is, a union of two lines in $\mathbb{P}^2$. Suppose that $X_1:=\{P_1,\dots , P_{s_1}\}$ is the subset of $X$ lying on one line and $X_2:=\{Q_{1},\dots , Q_{s_2}\}$ is the subset of $X$ consisting of the points on the other line, so $X=X_1\cup X_2$. If $X_1\cap X_2=\emptyset$ then $s_1+s_2=s$, otherwise $s_1+s_2-1=s$. Denote by $L_i$ the linear form dual to $P_i$ for $1\leq i\leq s_1$ and by $K_i$ the linear form dual to $Q_i$ for each $1\leq i\leq s_2$. Let $F_1 =\sum^{s_1}_{i=1} a_iL_i^d $ and $F_2= \sum^{s_2}_{i=1} b_iK_i^d$, so that $F=F_1+F_2$. By a linear change of coordinates we may assume that $L_i=u_{0,i}X_0+u_{2,i}X_2$ and $K_i=v_{1,i}X_1+ v_{2,i}X_2$ with $u_{0,i},u_{2,i},v_{1,i},v_{2,i}\in \sf k$ for every $i$. The Hilbert function of $A$ is equal to
$$
h_A = \left(1,3,5,\dots, 2k+1,s,\dots ,s, 2k+1, \dots , 5, 3,1\right),
$$
where $k$ is the largest integer such that $2k+1\leq s$. If $s=2k+1$ then $\tau(X)=k$ and otherwise $\tau(X)=k+1$. Let $j$ be an integer such that $1\leq j\leq \tau(X)-1$.
Consider the following ordered monomial basis for $A$ in degree $j$
$$\B_j=\{x_0^j,x_0^{j-1}{x_2},\dots , x_0x_2^{j-1},x_2^j,x_2^{j-1}x_1,\dots , x_2x_1^{j-1},x_1^{j} \}.$$ The $j$-th Hessian of $F$ with respect to $\B_j$ is the following matrix
\begin{align*}
\Hess^j(F) &= \Hess^j(F_1)+\Hess^j(F_2) = \sum^{s_1}_{i=1} a_i\Hess^j(L_i)+\sum^{s_2}_{i=1} b_i\Hess^j(K_i)\\
& = \begin{pmatrix}
C^j_0&C^j_1&\dots & C^j_j &\cdots &0&0\\
C^j_1&C^j_2&\dots & C^j_{j+1}&\cdots &0&0\\
\vdots &\vdots && \vdots &&\vdots &\vdots \\
C^j_j & C^j_{j+1} &\dots & C^j_{2j}+D^j_{2j} &\dots & D^j_{j+1} &D^j_j\\
\vdots &\vdots && \vdots &&\vdots &\vdots \\
0&0&\dots & D^j_{j+1} &\dots & D^j_{2} &D^j_1\\
0&0&\dots & D^j_{j} &\dots & D^j_{1} &D^j_0\\
\end{pmatrix}
\end{align*}
where we set $C^j_i = ({x_0^{2j-i}x_2^{i}})\circ F_1$ and $D^j_i =({x_1^{2j-i}x_2^{i}})\circ F_2$ for each $i=0,\dots ,2j$.\par
\noindent Then using Lemma \ref{detLemma} we get that
\begin{equation}\label{HessDecomposition}
\det \Hess^j(F) = (\det C^j_{\{0,\dots ,j-1\}}) (\det D^j) +(\det D^j_{\{1,\dots ,j\}}) (\det C^j),
\end{equation}
where we set $C^j = \begin{pmatrix}
C^j_0&C^j_1&\dots & C^j_j\\
C^j_1&C^j_2&\dots & C^j_{j+1}\\
\vdots &\vdots && \vdots \\
C^j_j & C^j_{j+1} &\dots & C^j_{2j}
\end{pmatrix}
$ and $D^j = \begin{pmatrix}
D^j_{2j}&D^j_{2j-1}&\dots & D^j_j\\
D^j_{2j-1}&D^j_{2j-2}&\dots & D^j_{j-1}\\
\vdots &\vdots && \vdots \\
D^j_j & D^j_{j-1} &\dots & D^j_{0}
\end{pmatrix}$ and we denote by $C^j_{\{i_1,\dots ,i_r\}}$ the square submatrix of $C^j$ of size $r$ with rows and columns $i_1,\dots ,i_r$, similarly for $D^j$. \par
\noindent Let $A_1$ and $A_2$ be Artinian Gorenstein quotients of $A(X_1)={\sf k}[x_0,x_2]/I(X_1)$ and $A(X_2) = {\sf k}[x_1,x_2]/I(X_2)$ with dual generators $F_1$ and $F_2$ respectively. We observe that $C^j = \Hess^j(F_1)$ and $D^j = \Hess^j(F_2)$. Since every Artinian algebra of codimension two has the SLP we have that $\det C^j\neq 0$ and $\det D^j\neq 0$.\par
\noindent We set
\begin{align*}
F^\prime_1 := x_0^2\circ F_1, \quad F^\prime_2 := x_1^2\circ F_2.
\end{align*}
Then $C^j_{\{0,\dots ,j-1\}}$ is equal to the $(j-1)$-th Hessian of $F^\prime_1$ with respect to the ordered basis $\{x_0^{j-1}, x_0^{j-2}x_2,\dots ,x_2^{j-1}\}$. Similarly, $D^j_{\{1,\dots , j\}} = \Hess^{j-1}(F^\prime_2)$ with respect to $\{x_2^{j-1}, x_2^{j-2}x_1,\dots ,x_1^{j-1}\}$. So, using the result that Artinian algebras in codimension two have the SLP, we get that
$$\det\Hess^{j-1}(F^\prime_1) = \det C^j_{\{0,\dots ,j-1\}}\neq 0,\quad \text{and}\quad \det \Hess^{j-1}(F^\prime_2) = \det D^j_{\{1,\dots ,j\}}\neq 0.\quad $$
Therefore, Equation (\ref{HessDecomposition}) is equivalent to
\begin{equation}
\det \Hess^j(F) = (\det \Hess^{j-1}(F^\prime_1)) (\det \Hess^j(F_2)) +(\det \Hess^{j-1}(F^\prime_2)) (\det \Hess^j(F_1)).
\end{equation}
Note that assuming $d\geq 2\tau(X)$ and $1\leq j\leq \tau(X)-1$ implies that
$$\deg(\det \Hess^{j-1}(F^\prime_1))<\deg (\det \Hess^j(F_1)),\quad \deg(\det \Hess^{j-1}(F^\prime_2))<\deg (\det \Hess^j(F_2)).$$
Therefore, $\det\Hess^j(F)\neq0$ unless $X_2^{d-2j}$ is a factor of both $\det\Hess^j(F_1)$ and $\det\Hess^j(F_2)$, in which case we must have $X_1\cap X_2\neq \emptyset$ and $s=s_1+s_2-1$. On the other hand, using Lemma \ref{HessLemma} we get that $\det\Hess^j(F_1)$ is in fact a non-zero monomial in the $L_i$'s and $j=s_1$. Similarly, we get $j=s_2$. So $s=2s_1-1=2s_2-1=2k+1$ and therefore $\tau(X)=k$ and $s_1=s_2=k+1=\tau(X)+1$. This contradicts the assumption that $j\leq \tau(X)-1$.\par
For each $\tau(X)\leq j\leq \lfloor\frac{d}{2}\rfloor$ the $j$-th Hessian of $F$ corresponds to the multiplication map on $A(X)$ and therefore trivially has maximal rank for general enough linear forms. \par Note that $\det\Hess^0(F)=F\neq 0$. Therefore, we have proved that there is a linear form $\ell$ such that $\det\Hess_\ell^j(F)\neq 0$ for every $0\leq j\leq \lfloor\frac{d}{2}\rfloor$, and equivalently $A$ has the SLP.
\end{proof}
We now prove that if $X\subseteq \mathbb{P}^2$ contains points on a conic, then the higher Hessians of $F$ of high enough order are non-zero. First we fix notation: a subscript on an entry of $\Delta h_A$ records its degree, so that for every $i\geq 0$ the entry $\Delta h_A (i) = h_i-h_{i-1}$ carries the subscript $i$.
\begin{theorem}\label{points-on-conic}
Let $X=\{P_1,\dots ,P_s\}$ be a set of points in $\mathbb{P}^2$ and $A$ be an Artinian Gorenstein quotient of $A(X)$ with dual generator $F=\sum_{i=1}^s\alpha_iL_i^d$, for $d\geq 2\tau(X)$ and $\alpha_i\neq 0$ for every $i$. Suppose that the first difference of $h_A$ is equal to
$$\Delta h_A = (1,2,h_2-3,\dots , 2_k,\dots ,2_{\tau(X)}),$$
for some $1\leq k< \tau(X)$. Then there is a linear form $\ell$ such that for every $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$
$$\det\Hess^j_\ell(F)\neq 0.$$
\end{theorem}
\begin{proof}
Since the tail of $\Delta h_{A(X)}$ is constant, Theorem \ref{davis} due to E. D. Davis \cite{Davis} implies that $X$ is a disjoint union of $2\tau(X)+1$ points on a conic and $s-2\tau(X)-1$ other points. We may assume that $P_1,\dots , P_{s-2\tau(X)-1}$ lie outside the conic. Note that for each $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$ we have $h_{A(X)}(j) = 2j+1+s-2\tau(X)-1=s-2\tau(X)+2j$. Using Lemma \ref{HessLemma} we get that for each $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$
\begin{equation}\label{iff}
\det \Hess^j(F) =\sum_{\mathcal{I}\subseteq{\{1,\dots ,s\}}, \vert\mathcal{I}\vert = h_{A(X)}(j)}c_{\mathcal{I}}\prod_{i\in\mathcal{I}}\alpha_iL_i^{d-2j},
\end{equation}
where $c_{\mathcal{I}}\neq0$ if and only if $h_{A(X_\mathcal{I})}(j)=h_{A(X)}(j) = s-2\tau(X)+2j$. Notice that the Hilbert function of the coordinate ring of the points on a conic in degree $j$ is at most $2j+1$. Therefore, $c_{\mathcal{I}}\neq 0$ if and only if $\mathcal{I}$ contains the $s-2\tau(X)+2j-(2j+1) = s-2\tau(X)-1$ points off the conic, which means that $\{1,\dots ,s-2\tau(X)-1\}\subset \mathcal{I}$. \par
\noindent This implies that $\prod_{i=1}^{s-2\tau(X)-1}\alpha_iL^{d-2j}_i$ is a common factor of the right hand side of Equation (\ref{iff}), so
\begin{equation}\label{factoriff}
\det \Hess^j(F) = \prod_{i=1}^{s-2\tau(X)-1}\alpha_iL^{d-2j}_i\left(\sum_{\mathcal{I}\subseteq{\{s-2\tau(X),\dots ,s\}}, \vert\mathcal{I}\vert = 2j+1}c_{\mathcal{I}}\prod_{i\in\mathcal{I}}\alpha_iL_i^{d-2j}\right).
\end{equation}
Let $Y := \{P_{s-2\tau(X)},\dots ,P_s\}$ be the subset of $X$ which lies on a conic. Consider the Artinian Gorenstein quotient $B$ of $A(Y)$ with dual generator $G=\sum_{i=s-2\tau(X)}^{s}\alpha_iL_i^d$.
Theorem \ref{singularConic} implies that $B$ satisfies the SLP. Equivalently, for every $0\leq j\leq \lfloor\frac{d}{2}\rfloor$
$$\det\Hess^j(G) = \sum_{\mathcal{I}\subseteq{\{s-2\tau(X),\dots ,s\}}, \vert\mathcal{I}\vert = 2j+1}c_{\mathcal{I}}\prod_{i\in\mathcal{I}}\alpha_iL_i^{d-2j}\neq 0.$$
This implies that the polynomial in Equation (\ref{factoriff}) is non-zero and this completes the proof.
\end{proof}
Similarly, using the fact that all Artinian algebras in codimension two have the SLP, we have the following result, which proves the non-vanishing of some of the higher Hessians in the case where $X\subseteq \mathbb{P}^2$ contains points on a line.
\begin{theorem}\label{points-on-line}
Let $X=\{P_1,\dots ,P_s\}$ be a set of points in $\mathbb{P}^2$ and $A$ be an Artinian Gorenstein quotient of $A(X)$ with dual generator $F=\sum_{i=1}^s\alpha_iL_i^d$, for $d\geq 2\tau(X)$ and $\alpha_i\neq 0$ for every $i$. Suppose that the first difference of $h_A$ is equal to
$$\Delta h_A = (1,2,h_2-3,\dots , 1_k,\dots ,1_{\tau(X)}),$$
for some $1\leq k<\tau(X)$. Then there is a linear form $\ell$ such that for every $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$
$$\det\Hess^j_\ell(F)\neq 0.$$
\end{theorem}
\begin{proof}
Using Theorem \ref{davis} we get that there are exactly $\tau(X)+1$ points on a line and $s-\tau(X)-1$ off the line. We may assume that $P_1,\dots , P_{s-\tau(X)-1}$ lie off the line. For each $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$ we have $h_{A(X)}(j)=j+1+s-\tau(X)-1=s-\tau(X)+j $.\par
\noindent So for each $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$ by Lemma \ref{HessLemma} we get
\begin{equation}\label{iff1}
\det \Hess^j(F) =\sum_{\mathcal{I}\subseteq{\{1,\dots ,s\}}, \vert\mathcal{I}\vert = h_{A(X)}(j)}c_{\mathcal{I}}\prod_{i\in\mathcal{I}}\alpha_iL_i^{d-2j},
\end{equation}
where $c_{\mathcal{I}}$ is non-zero if and only if $h_{A(X_\mathcal{I})}(j) =h_{A(X)}(j)=s-\tau(X)+j$. Since the Hilbert function of the coordinate ring of the points on a line in degree $j$ is at most $j+1$, in order for the coordinate ring of $\{P_i\}_{i\in \mathcal{I}}$ to have the Hilbert function equal to $s-\tau(X)+j$ in degree $j$, $\mathcal{I}$ must contain all the indices from $1$ to $s-\tau(X)+j-(j+1) = s-\tau(X)-1$. \par
\noindent This implies that
\begin{equation}\label{factoriff1}
\det \Hess^j(F) =\prod_{i=1}^{s-\tau(X)-1}\alpha_iL^{d-2j}_i\left(\sum_{\mathcal{I}\subseteq{\{s-\tau(X),\dots ,s\}}, \vert\mathcal{I}\vert = j+1}c_{\mathcal{I}}\prod_{i\in\mathcal{I}}\alpha_iL_i^{d-2j}\right).
\end{equation}
Denote by $Y:=\{P_{s-\tau(X)},\dots ,P_s\}$ the points in $X$ which lie on a line. Consider the Artinian Gorenstein quotient of $A(Y)$ with dual generator $G=\sum_{i=s-\tau(X)}^{s}\alpha_iL_i^d$ and denote it by $B$. Since $B$ is an Artinian algebra of codimension two it satisfies the SLP.
Equivalently, for every $0\leq j\leq \lfloor\frac{d}{2}\rfloor$
$$\det\Hess^j(G) = \sum_{\mathcal{I}\subseteq{\{s-\tau(X),\dots ,s\}}, \vert\mathcal{I}\vert = j+1}c_{\mathcal{I}}\prod_{i\in\mathcal{I}}\alpha_iL_i^{d-2j}\neq 0$$
This implies that $\det\Hess^j(F)\neq 0$ for every $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$.
\end{proof}
As a consequence of Theorems \ref{points-on-conic} and \ref{points-on-line} we provide a family of Artinian Gorenstein quotients of $X\subseteq \mathbb{P}^2$ satisfying the SLP.
\begin{corollary}\label{corSLP}
Let $X=\{P_1,\dots ,P_s\}$ be a set of points in $\mathbb{P}^2$ and $A$ be any Artinian Gorenstein quotient of $A(X)$ with dual generator $F=\sum_{i=1}^s\alpha_iL_i^d$, for $d\geq 2\tau(X)$. Then $A$ satisfies the SLP if $\Delta h_A$ is equal to one of the following vectors
\begin{align}\label{11}
(1,2,\underbrace{1,\dots ,1}_m), \quad (1,2,2,\underbrace{1,\dots ,1}_m),\quad (1,2,3,\underbrace{1,\dots ,1}_m),
\end{align}
\begin{align}\label{22}
(1,\underbrace{2,\dots ,2}_m),\quad (1,2,3,\underbrace{2,\dots ,2}_m),
\end{align}
for some $m\geq 2$.
\end{corollary}
\begin{proof}
First we note that $\det \Hess^0(F) = F$, and since $F$ is non-zero, for a generic $\ell$ we have $\det \Hess_\ell^0(F) \neq 0 $. A well known result of P. Gordan and M. Noether \cite{GN} implies that a form in three variables has identically vanishing Hessian only if it is a cone; since $h_A(1)=3$, the points of $X$ span $\mathbb{P}^2$ and $F$ is not a cone. Therefore, for a generic linear form $\ell$ we get that $\det \Hess_\ell^1(F)\neq 0$. \par
\noindent Using Theorems \ref{points-on-line} and \ref{points-on-conic} for the first difference vectors given in (\ref{11}) and (\ref{22}) respectively, we conclude that $\det \Hess_\ell^j(F)\neq 0$ for every $2\leq j\leq \lfloor\frac{d}{2}\rfloor $ and a generic linear form $\ell$. This completes the proof.
\end{proof}
\subsection*{Summary} We end the section by summarizing what we have shown. Let $X=\{P_1, \dots , P_s\}\subseteq \mathbb{P}^n$ and $F=\sum_{i=1}^s\alpha_iL_i^d$, for $d\geq 2\tau(X)$ and $\alpha_i\neq 0$ for every $i$. For $n\geq 2$, if $X\subseteq \mathbb{P}^n$ lies on a rational normal curve then any Artinian Gorenstein quotient of $A(X)$ with dual generator $F$ satisfies the SLP, Theorem \ref{smoothconic}. For $n=2$ this result holds more generally. In fact, if $X\subseteq \mathbb{P}^2$ lies on a conic (smooth or singular) then in Theorem \ref{singularConic} we prove that any Artinian Gorenstein quotient of $A(X)$ with dual generator $F$ satisfies the SLP. When $X\subseteq \mathbb{P}^2$, we show in Theorems \ref{points-on-conic} and \ref{points-on-line} that if the first difference of the Hilbert function of an Artinian Gorenstein quotient of $A(X)$ with dual generator $F$ is equal to $$\Delta h_A = (1,2,h_2-3,\dots , 1_k,\dots ,1_{\tau(X)}),\quad \text{or}\hspace*{2mm}\Delta h_A = (1,2,h_2-3,\dots , 2_k,\dots ,2_{\tau(X)})$$
for some $1\leq k<\tau(X)$, then there is a linear form $\ell$ such that $\det\Hess^j_\ell(F)\neq 0$ for every $k-1\leq j\leq \lfloor\frac{d}{2}\rfloor$. As a consequence of these results we show that any Artinian Gorenstein quotient $A$ of $A(X)$ with $\Delta h_A$ given in (\ref{11}) and (\ref{22}) satisfies the SLP, Corollary \ref{corSLP}. \par
We also show in Proposition \ref{tophess-s-1} that for every $n\geq 2$, if the Hilbert function in degree $j$ of an Artinian Gorenstein quotient of $A(X)$ is equal to $s-1$, that is $h_A(j)=s-1$, then $\det\Hess^j_\ell(F)\neq 0$ for some $\ell$. Also, for $X\subseteq \mathbb{P}^2$ in general linear position we have that $\det\Hess^j_\ell(F)\neq 0$ for some $\ell$ if $h_A(j)=s-2$, Proposition \ref{tophess-s-2}.
\section{Acknowledgment}
The author would like to thank Mats Boij for useful and insightful comments and discussions that greatly assisted this research. Computations using the computer algebra software Macaulay2 \cite{13} were essential in developing the ideas behind some of the proofs. This work was supported by the grant VR2013-4545.
\section{Introduction}
Over the years, an intriguing class of magnetic systems having competing magnetic interactions and resultant frustration has been a subject of much attention. The dynamical properties of such magnetic systems have considerable similarities with the dynamical properties of structural glasses. The most well studied member of this category of materials is the canonical spin glass (SG). The canonical SGs are the dilute magnetic alloys where a minute amount (within a few percent) of transition metal (TM) atoms like Mn, Fe etc. is randomly distributed in the matrix of a noble metal like Cu, Au, Ag etc., and the localized magnetic moments on the TM atoms interact through the spatially oscillating Ruderman–Kittel–Kasuya–Yosida (RKKY) interaction. The canonical SGs have been intensely investigated over the last five decades, but a complete understanding of the SG phenomena is still far from being achieved {\color{blue}\cite{RMP1986, Mydosh2015}}. As the concentration of TM atoms is increased further, more complicated kinds of glass-like magnetic states with short range magnetic order arise, such as the concentrated SG, cluster glass, mictomagnet etc. Such glass-like magnetic states can also be achieved in a variety of other systems with competing magnetic interactions, other than metallic alloys. On the other hand, a distinctly different kind of glassy state is observed when a first order magnetic transition remains incomplete even beyond its supercooling limit in magnetic field (H)-temperature (T) phase space. The materials showing such glass-like magnetic behaviour arising out of the kinetic arrest of a first order phase transition are termed magnetic glasses {\color{blue}\cite{MKC2003, RS2006, KS2006,AB2006, VKS2007, AB2008,PC2008,AB2009,RS2009,RS2013,EPL2013}}. It is important to note at this point that all such systems, namely spin glass, cluster glass and magnetic glass, are actually quite distinct in their microscopic ground state, while showing apparently similar non-equilibrium magnetic response. There exist subtle but distinct differences in their dynamical magnetic response too, and these can only be differentiated through a systematic and careful investigation of their metastable magnetic properties {\color{blue}\cite{RS2009, Sudip2019}}. However, such experiments are few and far between, and often the observed magnetic properties are rationalized within a generalized mean-field theoretical framework of SG {\color{blue}\cite{Nayak2013,Chau2006,Narayana2011}}, without realizing that such a framework is really suitable only for canonical SGs.
Experimentally, a spin glass system is characterized by the temperature dependence of the low-field magnetic susceptibility, which shows a sharp peak or cusp at the freezing temperature (T$_f$) {\color{blue}\cite{canmyd1972,Mulder1981}}. In dc magnetization measurements, there are two main protocols to investigate spin glass systems {\color{blue}\cite{Guy1975}}. The first is the zero field cooling (ZFC) protocol, where the sample is initially cooled below T$_f$ in the absence of a magnetic field and the magnetization (M) is measured during warming after applying a small dc magnetic field (H). The other is the field cooling (FC) protocol, where the sample is cooled in the presence of an applied H and M is measured while warming the sample without changing the field. In the paramagnetic state, the magnetization shows identical response under both experimental protocols, namely the general Curie-Weiss behavior. However, while the ZFC M vs T curve shows a peak around T$_f$, the FC M vs T curve becomes relatively flat below T$_f$ and bifurcates from the ZFC curve. The magnetization in the ZFC state is strongly time dependent and requires infinite time to reach the equilibrium state below T$_f$. In the literature, however, the magnetic response of the FC state of canonical SGs is mostly reported to be independent of temperature and time. Therefore, the FC state has been assumed to be the equilibrium state of the system, equivalent to the ZFC measurement performed over infinite time {\color{blue}\cite{Mydosh2015}}. However, there are some earlier reports showing that this equilibrium description of the FC state may not be totally correct {\color{blue}\cite{Wang, Chamberlin1984, Bouchiat1985,Wenger1984, Nordblad1986}}. Recently, we have reported a distinct memory effect in the FC state of canonical spin glass systems, reinforcing these earlier claims that the FC state is possibly a non-equilibrium state {\color{blue}\cite{Sudip2020}}. This metastability of the FC state of canonical spin glasses can be considerably different from the metastable behavior observed in the FC state of other kinds of glasses, for example, the cluster glass, the concentrated spin glass, or the glassy state below a kinetically arrested first order transition, i.e. the magnetic glass. In this context, here we present further evidence of the metastability of the FC state of canonical SGs. We also show that the nature of this metastable behavior substantially differs from the well established magnetic response of the FC state of magnetic glasses {\color{blue}\cite{MKC2003, RS2006, KS2006,AB2006, VKS2007, AB2008,PC2008,AB2009,RS2009,RS2013,EPL2013}}. In the case of a magnetic glass, a system undergoing a first order magnetic transition may show metastable magnetization at temperatures well below the supercooling limit due to the kinetic arrest of the first order phase transition when cooled in a certain magnetic field window {\color{blue}\cite{MKC2003, RS2006, KS2006,AB2006, VKS2007, AB2008, RS2013}}. The non-equilibrium nature of a magnetic glass is fairly well known by now; however, such metastable behavior is still occasionally described in the literature as spin glass like phenomena {\color{blue}\cite{Nayak2013, Chau2006,Narayana2011}}.
Here we present experimental studies on the non-equilibrium magnetic properties of canonical SGs and magnetic glasses to highlight: (i) the non-equilibrium nature of the FC state of canonical SG; (ii) the distinct differences between the dynamical magnetic properties of the canonical SG and magnetic glass systems. In the sections below we present careful magnetic measurements performed on both the ZFC and FC states of two canonical SG systems, AuMn (1.8\%) and AgMn (1.1\%). First we show the presence of finite thermal hysteresis between the field cooled cooling (FCC) and field cooled warming (FCW) cycles in some temperature range below T$_f$, which to the best of our knowledge has not been reported for any canonical SG system. It underscores the metastable nature of the FC state of canonical SGs. In addition, we have also investigated the effect of thermal cycling on the ZFC and FC states and show that it has quite distinct effects on these states. The metastability of the FC state of canonical SGs is further established with a frequency dependence study of the ac susceptibility. While the frequency dependence of the ac susceptibility in the ZFC state is a hallmark of canonical SGs, to the best of our knowledge this is the first time the results of such a study on the FC state of canonical SGs are being presented. Finally, we use a specially designed protocol, namely `cooling and heating in unequal field (CHUF)', to probe the non-equilibrium response of the canonical SGs AuMn (1.8\%) and AgMn (1.1\%) and the magnetic glasses Pr$_{0.5}$Ca$_{0.5}$Mn$_{0.975}$Al$_{0.025}$O$_3$ (PCMAO) and La$_{0.5}$Ca$_{0.5}$MnO$_3$ (LCMO) with contrasting ground states.
\section{Experimental details}
The AuMn (1.8\%) and AgMn (1.1\%) samples are prepared by the standard induction melting process. PCMAO has been prepared by the standard solid state method and LCMO has been prepared using the chemical combustion method. The details of the sample preparation and characterization can be found elsewhere {\color{blue}\cite{PC2008,Nigam1983,SNair}}. The magnetization measurements are performed in an MPMS3 SQUID magnetometer (M/S Quantum Design). DC magnetization is measured following three different protocols. In the zero field cooled (ZFC) protocol, the sample is initially cooled down to T = 2 K from above the freezing temperature (around 5 times T$_f$) in the absence of any applied external magnetic field, and then measurements are made while warming the sample in the presence of an applied magnetic field. In the field cooled cooling (FCC) protocol, M is measured in the presence of a fixed applied field while cooling the sample from a temperature greater than T$_f$ down to 2 K. In the subsequent heating cycle, the measurement is made without changing the applied field; this is termed the field cooled warming (FCW) protocol. We also use a specially designed protocol, namely `cooling and heating in unequal field (CHUF)', in which magnetization is measured at a fixed measuring magnetic field (H$_M$) during heating after the sample has been cooled every time in a different cooling field (H$_C$) {\color{blue}\cite{AB2006,RS2009}}. The temperature dependence of magnetization is measured in both temperature stable and sweep modes. In the stable mode, magnetization is recorded after the temperature has stabilized. In the sweep mode, the cooling and heating rate is 0.15 K/min. The frequency dependence of the ac susceptibility has been measured in the temperature stable mode.
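For reference, the field-temperature schedules of the four protocols can be summarized algorithmically. The sketch below is illustrative Python pseudocode only; the numerical values are placeholders and the function is not an instrument command sequence.
\begin{verbatim}
import numpy as np

def protocol(name, Tmin=2.0, Tmax=300.0, H_meas=50.0, H_cool=0.0, step=0.5):
    """(T, H) schedule of a dc-magnetization protocol; T in K, H in Oe.
    Magnetization is recorded on the leg(s) performed at H_meas."""
    down = np.arange(Tmax, Tmin - step / 2, -step)
    up = np.arange(Tmin, Tmax + step / 2, step)
    if name == "ZFC":   # cool in zero field, measure while warming in H_meas
        return [(T, 0.0) for T in down] + [(T, H_meas) for T in up]
    if name == "FCC":   # measure while cooling in H_meas
        return [(T, H_meas) for T in down]
    if name == "FCW":   # after FCC, measure while warming in the same field
        return [(T, H_meas) for T in down] + [(T, H_meas) for T in up]
    if name == "CHUF":  # cool in H_cool, then measure while warming in H_meas
        return [(T, H_cool) for T in down] + [(T, H_meas) for T in up]
    raise ValueError(name)

# e.g. a CHUF run with H_C = 10 kOe and H_M = 20 kOe:
seq = protocol("CHUF", H_meas=20e3, H_cool=10e3)
\end{verbatim}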
\begin{figure}[h]
\centering
\includegraphics[scale=0.32]{Fig_1}
\caption{ FCC and FCW curves at H = 50 Oe showing thermal hysteresis for the AuMn (1.8\%) and AgMn (1.1\%) alloys. The magnetization values at different temperatures have been measured after stabilizing the temperature to reduce the uncertainty in temperature. The arrows in the insets indicate the cooling and heating cycles. }
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.32]{Fig_2}
\caption{ Variation of the temperature minima of AuMn (1.8\%) and AgMn (1.1\%) with the applied magnetic field, obtained from the FCW cycle (solid lines are guides to the eye).}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.40]{Fig_3}
\caption{ ZFC, FCC and FCW curves of (a) PCMAO measured at H = 50 kOe and (b) LCMO measured at H = 30 kOe. The arrows indicate the direction of the temperature sweep during measurements.}
\end{figure}
\section{Results and discussions}
\subsection{\label{subsec:level}Thermal hysteresis in the FC state of canonical spin glass and magnetic glass}
The main panels of figures {\color{blue}1(a)} and {\color{blue}(b)} show the M vs T plots for the AuMn (1.8\%) and AgMn (1.1\%) alloys measured in the FCC and FCW protocols at an applied field of H = 50 Oe in the stable mode of measurement. Both curves show a distinct peak at the freezing temperature, T$_f$ = 7 and 4.5 K, respectively, which matches the earlier reports {\color{blue}\cite{Nigam1983}}. There are a few interesting features to be noted carefully in this figure. First, the FC M vs T curves of both systems show distinct thermal hysteresis, i.e. M$_{FCC} \neq M_{FCW}$, below a temperature T$_{hys}$, as shown in the insets of figures {\color{blue}1(a)} and {\color{blue}(b)}. This characteristic temperature T$_{hys}$ in both samples is less than the respective freezing temperature, T$_f$, and the irreversibility temperature, T$_{ir}$, where the FC and ZFC M-T curves bifurcate {\color{blue}\cite{Sudip2020}}, i.e. T$_{hys}$ $<$ T$_{ir}$ $<$ T$_f$. Secondly, the FCC and FCW M-T curves show considerable temperature dependence below the freezing temperature, T$_f$. In the FCC protocol, M initially decreases with decreasing temperature, shows a minimum, and then increases again at further lower temperatures. The following FCW curve also shows the minimum. However, there is an interesting difference between AuMn (1.8\%) and AgMn (1.1\%), as shown in the insets of figure {\color{blue}1}. In the AuMn (1.8\%) system, the minimum in the FCC M-T curve occurs at T = 0.43 T$_f$, whereas in AgMn (1.1\%) it occurs around T = 0.78 T$_f$ at the applied field H = 50 Oe. Moreover, in AuMn (1.8\%) the FCW M-T curve shows the minimum at a higher temperature compared to the FCC M-T curve, and this is opposite to the behavior observed in AgMn (1.1\%). The observed behavior is in contradiction with the common perception that the FC magnetization remains flat with the variation of temperature below T$_f$. The FCW magnetization curve always remains below the FCC curve in the hysteresis region in both systems. Thirdly, the minimum gradually shifts toward higher temperature and is suppressed with increasing applied magnetic field in both systems. The variation of the temperature minima obtained from the FCW curves is shown in figure {\color{blue}2}. It may be noted here that we have performed these measurements in both sweep mode and stable mode, and they show similar behavior in both protocols. The observed features in the FC state contradict the common understanding of the equilibrium nature of the FC state in canonical SGs. From the viewpoint of an equilibrium state, the FC susceptibility of a canonical spin glass is supposed to remain independent of temperature in the mean field scenario {\color{blue}\cite{Parisi2006}} and reversible below T$_f$. The thermal hysteresis between the FCC and FCW curves is not in consonance with the equilibrium picture of the FC state. There are some earlier reports on the dependence of the FC dc susceptibility of spin glasses on cooling rate and on the presence of thermal hysteresis {\color{blue}\cite{Bouchiat1985,Wenger1984}}, but these aspects have been largely ignored in the literature and the FC state in canonical SGs has continued to be considered an equilibrium state. We have also measured the FCC and FCW curves at different fields. The thermal hysteresis is gradually suppressed with increasing applied magnetic field.
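The FCC/FCW protocol is straightforward to emulate in a toy model, which makes the distinction between the two legs concrete. The sketch below runs a small two-dimensional Edwards-Anderson Ising model (Gaussian bonds, Metropolis dynamics) through a cooling and a warming leg in the same field. It is a qualitative illustration of the measurement protocol only; a lattice this small, with these ad hoc parameters, is not expected to reproduce the detailed features of figure 1.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def fcc_fcw(L=16, H=0.05, Ts=np.linspace(2.5, 0.1, 25), sweeps=100):
    # toy 2D Edwards-Anderson Ising model, E = -sum J_ij s_i s_j - H sum s_i
    Jx = rng.standard_normal((L, L))   # bond between (i, j) and (i, j+1)
    Jy = rng.standard_normal((L, L))   # bond between (i, j) and (i+1, j)
    s = rng.choice([-1, 1], size=(L, L))

    def sweep(T):
        # one Metropolis sweep of L*L single-spin-flip attempts
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = (Jx[i, j] * s[i, (j + 1) % L]
                  + Jx[i, (j - 1) % L] * s[i, (j - 1) % L]
                  + Jy[i, j] * s[(i + 1) % L, j]
                  + Jy[(i - 1) % L, j] * s[(i - 1) % L, j])
            dE = 2.0 * s[i, j] * (nb + H)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1

    M_fcc, M_fcw = [], []
    for T in Ts:                       # FCC leg: cool stepwise in the field
        for _ in range(sweeps):
            sweep(T)
        M_fcc.append(s.mean())
    for T in Ts[::-1]:                 # FCW leg: warm back in the same field
        for _ in range(sweeps):
            sweep(T)
        M_fcw.append(s.mean())
    return np.array(M_fcc), np.array(M_fcw)[::-1]  # both ordered like Ts
\end{verbatim}
Any difference between the two returned curves at a given temperature is, by construction, a purely dynamical (rate-dependent) effect, which is the sense in which thermal hysteresis signals a non-equilibrium state.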
In this context, it may be noted here that thermal hysteresis is commonly observed either across a first order phase transition, due to supercooling and superheating of the high temperature and low temperature phases respectively, or in a non-equilibrium state. It has been proposed earlier that the FC state in canonical SGs gradually attains the equilibrium state, having a lower susceptibility value, over a long waiting time {\color{blue}\cite{Lundgren1985}}. The observation that the FCW magnetization always remains below the FCC magnetization in the temperature region T $< T_{hys}$ (see figure {\color{blue}1}) may be due to the fact that thermal cycling assists the FC state in approaching equilibrium.
In figures {\color{blue}3(a)} and {\color{blue}(b)}, we have shown the ZFC, FCC and FCW curves of PCMAO and LCMO measured at H = 50 and 30 kOe, respectively. These systems are well studied magnetic glasses, obtained in a certain H-T window. PCMAO is paramagnetic at room temperature. As we reduce the temperature, it subsequently undergoes antiferromagnetic (AFM) and ferromagnetic (FM) transitions. The AFM to FM transition is a first order phase transition (FOPT), as evident from the thermal hysteresis between the FCC and FCW curves. However, when the cooling field is less than a minimum cutoff field (which is greater than 80 kOe {\color{blue}\cite{AB2009}}) the FOPT is kinetically arrested and the low temperature state is a mixture of the untransformed high temperature AFM phase and the transformed low temperature FM phase. The FM phase is the equilibrium phase and its fraction increases with increasing H {\color{blue}\cite{AB2006, AB2009}}. So, the ZFC state has a smaller FM phase fraction compared to the FC state. Therefore, the ZFC magnetization is less than the FCC (or FCW) magnetization, as seen in figure {\color{blue}3(a)}. On the other hand, LCMO has the opposite phase diagram. It undergoes a transition from the paramagnetic state at room temperature to a FM state, which is followed by a first order phase transition to an AFM state at further lower temperature. Contrary to PCMAO, the ZFC state of LCMO is the equilibrium AFM phase, and as the sample is cooled at higher H, the FM to AFM transition gets arrested and the non-equilibrium FM phase fraction increases {\color{blue}\cite{AB2008,PC2008}}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.40]{Fig_4}
\caption{ (a) Effect of thermal cycling on the ZFC state of AuMn (1.8\%) measured in the presence of H = 200 Oe in the sweep mode of measurement. The sample is first cooled down to T = 2 K at H = 0. Then, H = 200 Oe is applied and the temperature is raised to progressively higher temperatures and subsequently lowered. Close views of the M-T behavior in the regions circled and labelled C1, C2 and C3 in (a) are shown in (b), (c) and (d), respectively. The thermal cycles are performed in the ranges (b) 3 to 2 K and back to 3.5 K, (c) 5 to 2 K to 5.5 K, and (d) 7 to 2 K to 8 K and subsequently 8 to 2 K and back to 9 K.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{Fig_5}
\caption{ Effect of thermal cycling on the field cooled state of (a) AuMn (1.8\%) at H = 200 Oe, (c) PCMAO at H = 50 kOe and (d) LCMO at H = 30 kOe. Panel (b) presents a magnified view of the temperature dependence of the magnetization of AuMn (1.8\%) during the initial cooling from high temperature and the thermal cycles up to 4 and 5 K. In the case of the spin glass, the thermal cycles resemble minor hysteresis loops, whereas in PCMAO, M monotonically increases after each thermal cycle and in LCMO, M monotonically decreases. The arrows indicate the direction of the temperature sweep during the thermal cycles. The legends indicate the maximum temperature of the thermal cycles after the initial cooling in the presence of the field from room temperature. }
\end{figure}
\subsection{\label{subsec:level}Effect of thermal cycling on the ZFC and FC states of spin glass and magnetic glass}
To further probe the non-equilibrium nature of the FC state of canonical SGs, we have investigated the effect of thermal cycling on the FC state and compared it with the effect of thermal cycling on the ZFC state, which is the paradigm of a non-equilibrium state. The results for the ZFC and FC states of AuMn (1.8\%) are shown in figures {\color{blue}4} and {\color{blue}5(a)}, respectively. In the ZFC state, we have initially cooled the sample down to T = 2 K and then applied H = 200 Oe. After that, we have recorded the M-T curve during warming up to an intermediate temperature T$_m$ (T$_m$ $<$ T$_f$) and again reduced the temperature to T = 2 K while simultaneously recording the M-T behavior. We have continued similar measurements with progressively higher values of T$_m$ until we reached close to T$_f$. For the FC state, we have performed a similar protocol except for the initial cooling of the sample, which is done in the presence of the field, and thermal cycles are performed with different values of T$_m$. Such thermal cycling is expected to bring a metastable state nearer to the stable state by providing additional thermal energy to the system. In the case of the ZFC state, each thermal cycle from a progressively higher temperature results in a large increase in magnetization, gradually approaching the FC magnetization value. In addition, the thermal cycling affects the magnetization in a very interesting way as we progressively increase T$_m$ toward T$_f$. We have shown in figures {\color{blue}4(b)}, {\color{blue}4(c)} and {\color{blue}4(d)} the close views of the M-T data during a couple of thermal cycles that have been marked in figure {\color{blue}4(a)} as C1, C2 and C3, respectively. The cooling and subsequent heating cycles from all the intermediate temperatures show irreversibility, pointing to the non-equilibrium nature of the ZFC state. In addition, the nature of this irreversibility also changes considerably at different temperatures, which reveals distinct dynamics in different temperature regimes. Finally, when the thermal cycle is performed from 2 to 8 K and back to 2 K (see figure {\color{blue}4(d)}), it looks similar to the thermal hysteresis observed in the FCC and FCW curves shown in figure {\color{blue}1(a)}. The thermal cycling, on the other hand, affects the field cooled state quite differently. It looks more like a minor hysteresis loop when returning from progressively higher temperatures; however, the value of the magnetization at T = 2 K is found to remain nearly the same after each thermal cycle and equal to the FC magnetization value. This value, however, is slightly larger than the magnetization value obtained after the thermal cycle from T = 8 K in the ZFC state (see figure {\color{blue}4(d)}).\\
In figures {\color{blue}5(c)} and {\color{blue}(d)}, we have presented the effect of thermal cycling on the FC state of PCMAO and LCMO, where the samples have been cooled in the presence of H = 50 and 30 kOe, respectively, from T = 320 K (both samples being paramagnetic at that temperature) down to the magnetic glass state. It may be noted here that part of this data has been published before {\color{blue}\cite{AB2009}}, but it is reproduced here to make the present work self-contained. In the case of PCMAO at H = 50 kOe, the low temperature state is a mixture of the equilibrium ferromagnetic state and the non-equilibrium antiferromagnetic state. Thermal cycles from progressively higher temperatures at a fixed field deliver thermal energy to the system, and the non-equilibrium antiferromagnetic state transforms into the ferromagnetic state. This is observed in figure {\color{blue}5(c)} as a monotonic increase in the low temperature magnetization after every thermal cycle {\color{blue}\cite{AB2009}}. On the contrary, in the case of LCMO, the thermal cycling at H = 30 kOe transforms the ferromagnetic state into the antiferromagnetic state, and the magnetization decreases after each thermal cycle, as shown in figure {\color{blue}5(d)}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.40]{Fig_6}
\caption{ Frequency dependence of ac susceptibility of AuMn (1.8\%) measured with an ac field of H$_{ac}$ = 1 Oe in: (a) ZFC state and (b) FC state obtained with a cooling field of H$_{dc}$ = 200 Oe.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{Fig_7}
\caption{ Results of CHUF measurement for (a) AuMn(1.8\%), (b) AgMn(1.1\%), (c) PCMAO and (d) LCMO.}
\end{figure}
\subsection{\label{subsec:level}Frequency dependence of ac susceptibility in the ZFC and FC state of spin glass:}
In figure {\color{blue}6(a)} and {\color{blue}(b)}, we show the ac susceptibility measured at three different frequencies in the ZFC and FC states of AuMn (1.8\%) respectively. The frequency dependence of the ac susceptibility in the ZFC state has been measured during warming with an ac field of H$_{ac}$ = 1 Oe after cooling the sample in zero applied dc field. In the FC state, the sample is cooled down to T = 2 K in the presence of a dc field H = 200 Oe, and the ac susceptibility has been measured during warming in the presence of the dc field with H$_{ac}$ = 1 Oe. The ZFC ac susceptibility shows a sharp peak at T$_f$ = 7 K. Within the probing frequency range, the peak temperature increases by a very small amount, in contrast to the behavior of the freezing temperature in concentrated spin glasses and cluster glasses or the blocking temperature of superparamagnetic particles, where the dependence is larger {\color{blue}\cite{Mydosh2015}}. The frequency dispersion below T$_f$ indicates a broad distribution of relaxation times in the ZFC state of the spin glass. It is interesting to note that in the FC state the peak is relatively broad in the presence of the dc field H$_{dc}$ = 200 Oe, and it also shows distinct frequency dispersion below T$_f$. However, the peak temperature (T$_f$) is not observed to be affected significantly. The observed frequency dispersion in the FC state further underlines the non-equilibrium nature of this state, as indicated by the low field dc measurements.
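As a rough guide to how such a small frequency shift is commonly quantified, the following Python sketch computes the relative shift of the freezing temperature per decade of frequency, $K=\Delta T_f/(T_f\,\Delta\log_{10}f)$; the frequencies and peak temperatures below are hypothetical placeholders, not measured data.
\begin{verbatim}
import numpy as np

def mydosh_parameter(freqs_hz, T_peaks):
    """Relative shift of T_f per decade of frequency:
    K = (dT_f / d log10 f) / <T_f>."""
    logf = np.log10(np.asarray(freqs_hz, dtype=float))
    T = np.asarray(T_peaks, dtype=float)
    slope = np.polyfit(logf, T, 1)[0]
    return slope / T.mean()

# Hypothetical numbers: a tiny shift of a T_f ~ 7 K peak over two
# decades, of the kind expected for a canonical spin glass.
K = mydosh_parameter([1.0, 10.0, 100.0], [7.00, 7.01, 7.02])
\end{verbatim}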
\subsection{\label{subsec:level}Cooling and Heating in Unequal Field (CHUF) in spin-glass and magnetic glass}
In figure {\color{blue}7}, we show the temperature dependence of magnetization measured using a specially designed protocol to probe the non-equilibrium behavior of the field cooled state of a system. In this protocol, magnetization is measured at a fixed measuring field (H$_M$) during heating, after the sample has been cooled each time in a different cooling field (H$_C$) {\color{blue}\cite{AB2006,RS2009}}. This measurement highlights the equilibrium or non-equilibrium nature of the field cooled magnetic state and has earlier been found to be extremely useful in probing the ground state of magnetic glass, as will be discussed below. In figure {\color{blue}7(c)}, we show the results for PCMAO where H$_M$ = 20 kOe, while H$_C$ is 10, 20 or 40 kOe. As PCMAO is cooled in the presence of H$_C$ = 20 kOe, the first order transition from the antiferromagnetic to the ferromagnetic state is kinetically arrested, producing a low temperature magnetic state which is a mixture of ferromagnetic and antiferromagnetic phases. The most interesting feature of this measurement protocol is that, if a magnetic state is in non-equilibrium, the temperature dependence of the magnetization at a fixed field H$_M$ behaves differently depending on whether H$_C$ $>$ H$_M$ or H$_C$ $<$ H$_M$, and it also depends on the actual nature of the ground state. For example, in the case of PCMAO (see figure {\color{blue}7(c)}), although H$_M$ = 20 kOe for every measurement, the magnetization at low temperature is different, indicating its path dependence. In addition, when H$_C$ = 40 kOe, the magnetization is larger than for the H$_C$ = H$_M$ = 20 kOe curve (the reference curve, where H$_C$ = H$_M$), which indicates a larger fraction of the equilibrium FM phase than in the reference curve. With increasing temperature, it initially decreases slowly, then below a certain temperature (around T = 60 K in this case) falls sharply and finally merges with the reference curve. On the other hand, for H$_C$ = 10 kOe, the magnetization is smaller than the reference curve, which indicates a smaller fraction of the equilibrium FM state. The magnetization initially increases sharply with temperature, reaches close to the reference curve and becomes flat. Finally it decreases sharply again and all three curves merge together, highlighting the non-equilibrium state of the low temperature region. Note that in this case, where the ferromagnetic state is the ground state, the M-T curve for H$_C$ $<$ H$_M$ (H$_C$ = 10 kOe) changes slope sharply at two temperatures, whereas for H$_C$ $>$ H$_M$ the M-T curve shows a sharp change in slope only once.
On the other hand, LCMO (see figure {\color{blue}7(d)}), which undergoes a first order phase transition from a high temperature ferromagnetic state to a low temperature antiferromagnetic state for low cooling fields, shows the opposite trend. As we cool the system at higher fields, the transition remains more and more incomplete, increasing the non-equilibrium ferromagnetic phase fraction. This is contrary to our previous example, where the low temperature equilibrium state is ferromagnetic and increasing the cooling field increases the equilibrium ferromagnetic phase fraction. In the case of LCMO, the curve with H$_C$ = 45 kOe ($>$ H$_M$ = 30 kOe) changes slope twice and finally merges with the reference curve (H$_C$ = H$_M$), whereas the curve with H$_C$ = 15 kOe ($<$ H$_M$) changes slope only once before merging with the reference curve {\color{blue}\cite{AB2008}}. It may be noted here that part of these data has been published before {\color{blue}\cite{AB2008}}, but is reproduced here to make the present work self-contained.
We have applied a similar protocol to the canonical spin glasses. The data are shown in figure {\color{blue}7(a)} and {\color{blue}7(b)} for the AuMn(1.8\%) and AgMn(1.1\%) samples respectively. In the case of AuMn(1.8\%), the measuring field is H$_M$ = 1000 Oe and different cooling fields produce different magnetization curves. For H$_C$ $<$ H$_M$, the magnetization at 2 K starts at a lower value than the magnetization for H$_C$ = H$_M$. For H$_C$ $>$ H$_M$, the trend is opposite. As the temperature is increased, all the curves merge with the reference curve (H$_C$ = H$_M$ = 1000 Oe) at a temperature T$_{merge}$ which is smaller than T$_f$. In contrast to the magnetic glasses, the M-T curves do not show any sharp change in slope in the canonical SGs.
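A minimal sketch of how the CHUF diagnostic discussed above could be automated is given below; the kink-counting threshold and the smoothing-free gradient estimate are arbitrary assumptions of this illustration.
\begin{verbatim}
import numpy as np

def count_kinks(T, M, factor=5.0):
    """Count points where the local slope dM/dT jumps abruptly;
    'factor' is an arbitrary sensitivity parameter."""
    slope = np.gradient(np.asarray(M, float), np.asarray(T, float))
    jumps = np.abs(np.diff(slope))
    scale = np.median(jumps) + 1e-12   # robust reference scale
    return int(np.sum(jumps > factor * scale))

# Reading of the result, following the discussion above: for a
# ferromagnetic ground state two kinks appear when H_C < H_M and one
# when H_C > H_M; the trend is reversed for an antiferromagnetic
# ground state, and the canonical SG curves show no sharp kink.
\end{verbatim}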
\section{Summary and conclusion}
Summarizing, we have investigated in detail the magnetic response of the FC state of two representative canonical SG systems, AuMn(1.8\%) and AgMn(1.1\%). In combination with our recent study of the memory effects in the same canonical SG systems $\color{blue}\cite{Sudip2020}$, the results of the present study unequivocally establish the distinct non-equilibrium nature of the FC state of the canonical SGs. The characteristic features of this non-equilibrium response of the FC state are quite different from those of the ZFC state, and this difference has been highlighted. There have been some earlier suggestions $\color{blue}\cite{Wang,Chamberlin1984,Bouchiat1985, Wenger1984,Nordblad1986}$ that the FC state of canonical SGs may not be an equilibrium state. However, over the years the FC state of the canonical SGs has not been subjected to as intense scrutiny as the ZFC state, presumably due to the popularity and implicit acceptance of the thermodynamic phase transition picture of the canonical SG state. Our present studies, along with these earlier results, clearly indicate the presence of a rugged energy landscape in the FC state of canonical SGs. In this direction there exists a suggestion of an exponentially increasing `sparsity' of thermally accessible independent free energy levels with decreasing temperature in the SG state $\color{blue}\cite{hooger1985}$.
Furthermore, we have compared the non-equilibrium response of the FC state of canonical SGs with that of two representative magnetic glass systems, namely PCMAO and LCMO. The distinct differences in the non-equilibrium properties of these two classes of magnetic materials are clearly distinguished. In the literature there is a tendency to attribute any non-equilibrium response observed in magnetic materials to spin-glass behavior. The present study indicates that each class of magnetic systems has its own fingerprint magnetic response, which can be identified with simple but careful experiments.
\section{Introduction}
The 2-dimensional (2D) dielectric microcavities have been intensively
studied over the last two decades due to the potential as ultra-small
high quality resonators needed in integrated optical circuits and
sensors \citep{Microcavity_R.K.Chang,MC_Vahala,MC_J.Wiersig_RMP}.
One of the core subjects in the studies is the whispering gallery
modes (WGMs), very long-lived resonances trapped by the total internal
reflection (TIR). The ideal WGMs in a uniform refractive index circular
cavity can have a very high quality factor ($Q$-factor), but their
isotropic emission due to the rotational symmetry is a serious drawback.
Much research has been done to overcome this problem by deforming
the shape of the cavity or introducing scatterers, but this also
causes $Q$-factor degradation and undesired complex phenomena such
as wave chaos \citep{wave_chaos_stone,wave_chaos_An,wave_chaos_Shinohara,wave_chaos_Sunada}
and mode interaction effects \citep{EP_W.D.Heiss,ARC_J.Wiersig,ARC_annular_J.Wiersig,EP_Petermann_S.-Y.Lee,EP_Chiral_J.Wiersig,EP_Coupled_J.W.Ryu,EP_diff_m_C.W.Lee}.
As a novel approach to remedy the above drawback, recently proposed are
transformation cavities (TCs), dielectric cavities with a gradient
refractive index (GRIN) designed based on conformal transformation
optics \citep{cWGM_Y.Kim_TC_Nature_Phot}. The anisotropic WGMs supported
in the TCs, so-called conformal whispering gallery modes (cWGMs),
provide directional tunneling emission while retaining the $Q$-factors
of the corresponding WGMs in a uniform disk cavity. The TCs can be designed
through various analytic functions of complex variables or the composite
functions of those. Also, even if an analytic form of conformal mapping
has singularities inside the cavity region or is difficult to find
for a given cavity shape, the numerical conformal mapping method
can be used for those cases \citep{S.J.Park_QCM_OE}.
The cWGMs in generic boundary deformed TCs have the property that
both effects of shape deformation and GRIN generated by conformal
mapping are intermingled. Circle-shaped TCs belong to a special class
of TCs governed only by the effect of spatial refractive index variation,
completely free from the deformation effect. The circle-shaped TC
can be formed by the center-shift conformal mapping which can be used
as an element of composite functions to achieve a realizable refractive
index profile or to break mirror symmetry for a given mirror symmetric
shaped TC. The investigation of the cWGM characteristics of the circle-shaped
TCs is also groundwork for research on various TCs using the
center-shift conformal mapping.
This paper is organized as follows. We briefly review the center-shift
conformal transformation and the original method for designing TCs
using a shrinking parameter to support cWGMs in Sec. II. An alternative
scheme to construct the TCs without the shrinking parameter and a
circle-shaped TC constructed through this scheme are presented in Sec.
III. In Sec. IV, the characteristics of cWGMs in the circle-shaped
TC are numerically investigated and the purity factor for measuring
the intactness of a cWGM is defined and discussed. Finally we give
a summary in Sec. V.
\section{Center-Shift Conformal Transformation and Circle-Shaped Transformation
Cavity with Shrinking Parameter}
\subsection{Center-Shift Conformal Transformation}
\begin{figure}[b]
\includegraphics[scale=0.15]{CSCircle}
\caption{Conceptual diagrams for center-shift conformal transformation from
(a) $\eta$-plane to (b) $\zeta$-plane. \label{fig:mobius_tr}}
\end{figure}
A mapping that transforms a unit circle in the $\eta$-plane to a center-shifted
unit circle in $\zeta$-plane is given by the following form,
\begin{equation}
\zeta=\frac{\eta+\delta}{1+\delta^{*}\eta},\,\,\,\,\,\,\left|\delta\right|<1,\label{eq:CS_Map_complex}
\end{equation}
where $\eta=u+iv$ and $\zeta=x+iy$ are complex variables in the
complex planes, respectively, and $\delta$ is a complex valued parameter
representing the center-shift. This mapping forms a subgroup of M\"{o}bius
transformations and we will call it the center-shift conformal transformation
in this paper. Since an arbitrary center-shift can be described by
taking the line of shift as the $u$-axis ($\delta\in\mathbb{R}$),
the above mapping can be simplified as follows:
\begin{equation}
\zeta=f(\eta)=\frac{\eta+\delta}{1+\delta\eta},\,\,\,\,\,\,\left|\delta\right|<1.\label{eq:CS_Map}
\end{equation}
By the application of the above conformal mapping, a circle with
a radius of 1 in the $\eta$-plane is transformed into a center-shifted
circle of the same size with the rotational symmetry broken in the $\zeta$-plane
as shown in Fig. \ref{fig:mobius_tr}. For reference, the inverse
conformal mapping from $\zeta$-plane to $\eta$-plane is expressed
as follows:
\begin{equation}
\eta=g(\zeta)=\frac{\zeta-\delta}{1-\delta\zeta},\,\,\,\,\,\,\left|\delta\right|<1.\label{eq:inverse_CS_Map}
\end{equation}
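As a quick numerical illustration of Eqs. \eqref{eq:CS_Map} and \eqref{eq:inverse_CS_Map}, the following Python sketch (a toy check with the arbitrarily chosen value $\delta=0.2$) verifies that the cavity boundary is mapped onto a unit circle, that the center is shifted to $\delta$, and that $g$ inverts $f$:
\begin{verbatim}
import numpy as np

delta = 0.2                      # illustrative center-shift value

def f(eta):                      # Eq. (CS_Map): eta -> zeta
    return (eta + delta) / (1.0 + delta * eta)

def g(zeta):                     # Eq. (inverse_CS_Map): zeta -> eta
    return (zeta - delta) / (1.0 - delta * zeta)

theta = np.linspace(0.0, 2.0 * np.pi, 400)
eta = np.exp(1j * theta)         # unit circle in the eta-plane

assert np.allclose(np.abs(f(eta)), 1.0)  # boundary stays a unit circle
assert np.isclose(f(0.0), delta)         # the center moves to delta
assert np.allclose(g(f(eta)), eta)       # g inverts f
\end{verbatim}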
\begin{figure*}
\includegraphics[width=18cm]{spaces}
\caption{Refractive index profiles in (a) original virtual, (b) wholly transformed,
(c) physical, and (d) reciprocal virtual spaces. These pictures were
drawn for the circle-shaped transformation cavity without a shrinking parameter
at $n_{0}=1.8$, $\left|\delta\right|=0.2$, and $\gamma=\gamma_{\text{min}}$.
\label{fig:spaces}}
\end{figure*}
\subsection{Circle-Shaped Transformation Cavity with Shrinking Parameter}
Next, we look at the TCs previously proposed in \citep{cWGM_Y.Kim_TC_Nature_Phot}.
Considering the translational symmetry along the $z$-axis in an infinite
cylindrical dielectric cavity, the Maxwell equations can be reduced
to an effective 2-dimensional scalar wave equation. Resonances in a TC
satisfying the outgoing-wave boundary condition, $\psi(\mathbf{r})\sim h(\phi,k)e^{ikr}/\sqrt{r}$
for $r\to\infty$, where position vector $\mathbf{r}=(x,y)=(r\cos\phi,r\sin\phi)$,
$k$ is the vacuum wavenumber, and $h(\phi,k)$ is the far-field angular
distribution of the emission, are obtained as the solutions of the following
wave equation,
\begin{equation}
\left[\nabla^{2}+n^{2}(\mathbf{r})k^{2}\right]\psi(\mathbf{r})=0,\label{eq:HH}
\end{equation}
with the refractive index $n(\mathbf{r})$ given by
\begin{equation}
n(\mathbf{r})=\begin{cases}
n_{0}\left|\frac{d\zeta}{d\eta}\right|^{-1}, & \text{(interior)}\\
1, & \text{(exterior)}
\end{cases}.\label{eq:RI_TC}
\end{equation}
For the transverse magnetic (TM) polarization, the wave function $\psi(\mathbf{r})$
represents $E_{z}$, the $z$ component of electric field, and both
the wave function $\psi(\mathbf{r})$ and its normal derivative $\partial_{v}\psi(\mathbf{r})$
are continuous across the cavity boundary. For the transverse electric
(TE) polarization, the wave function $\psi(\mathbf{r})$ represents
$H_{z}$, the $z$ component of magnetic field, and both $\psi(\mathbf{r})$
and $n(\mathbf{r})^{-2}\partial_{v}\psi$ are continuous across the
cavity boundary. By the outgoing-wave boundary condition, the resonances,
which have discrete complex wavenumbers $k_{r}$ with negative imaginary
parts, exponentially decay in time. The frequency and the lifetime
of a resonance are given by $\omega=c\,\text{Re}[k_{r}]$ and $\tau=-1/(2c\,\text{Im}[k_{r}])$,
where $c$ is the speed of light in vacuum, respectively. The quality factor
of a resonance is defined as $Q=2\pi\tau/T=-\text{Re}[k_{r}]/(2\,\text{Im}[k_{r}])$,
where the oscillation period of the light wave is $T=2\pi/\omega$.
To describe the conventional method for constructing TCs supporting
cWGMs, four spaces are usually considered: original virtual (OV),
wholly transformed (WT), physical (Ph), and reciprocal virtual (RV)
spaces \citep{BEM_TC}. First, a disk cavity with a homogeneous refractive
index $n_{0}$ and a unit radius $R_{0}$ is considered in complex
$\eta$-plane called OV space. The uniform index disk cavity in the
OV space is conformally transformed to a cavity in the complex $\zeta$-plane
called WT space through an entire-space conformal mapping multiplied
by a size-scaling (shrinking) parameter $\beta$, such as the resizable center-shift transformation,
\begin{equation}
\zeta=\beta f(\eta)=\beta\frac{\eta+\delta}{1+\delta\eta},\,\,\,\,\,\,\left|\delta\right|<1.\label{eq:CS_Map_beta}
\end{equation}
In the WT space, interior and exterior refractive index profiles of
the cavity are inhomogeneous but the relative refractive index at
the boundary interface remains homogeneous, because the two cavities
in OV and WT spaces are mathematically equivalent. Next, we set the
exterior GRIN profile in WT space to 1, considering the realistic physical
situation; then, finally, we can obtain a circle-shaped TC in physical
space. The refractive index of the circle-shaped TC in physical space
can be derived from Eq. \eqref{eq:CS_Map_beta} as following form,
\begin{equation}
n(\mathbf{r})=\begin{cases}
n_{0}\left|\frac{(\beta-\delta\zeta)^{2}}{\beta(\delta^{2}-1)}\right|^{-1},\,\,\,\,\,\,\left|\delta\right|<1, & \text{(interior)}\\
1, & \text{(exterior)}
\end{cases}.\label{eq:RI_CSCircle_beta_TC}
\end{equation}
The relative refractive index at the boundary interface is not homogeneous
in the physical space. The heterogeneity of the relative refractive
index acts as an important factor in forming the resonance characteristics
which differ from those in a uniform index disk cavity, and the reason can
be easily understood through the RV space, which is mathematically equivalent
to the physical space. The RV space can be obtained from inverse conformal
transformation over the entire domain of physical space.
In order to obtain cWGMs in the TCs, we finally apply a specific value
$\beta_{\text{max}}$ or lower to $\beta$ such that the minimum value
of the internal refractive index profile is at least $n_{0}$. This
is a TIR minimum condition that reduces the size of the cavity and
at the same time allows the ray trajectory at the boundary to satisfy
the TIR angle. The TIR minimum condition in this case is obtained
as follows:
\begin{equation}
\beta_{\text{max}}\equiv\frac{1-\left|\delta\right|}{1+\left|\delta\right|},\,\,\,\,\,\,\left|\delta\right|<1.\label{eq:beta_max}
\end{equation}
Here, we note that $\left|d\zeta/d\eta\right|^{-1}$, which forms
the refractive index of TCs, is a function of $\beta$, so not only
the cavity size but also the GRIN profile changes according to the change
of $\beta$.
\section{Circle-Shaped Transformation Cavity without Shrinking Parameter}
In the aforementioned constructing method, the cavity maintains the
unit size in the RV space, but not in the physical space, because of
the shrinking parameter. In particular, for the case of a circle-shaped
TC with no boundary shape change, it is easier to analyze the pure spatial
index variation if the parametrically changed variables
related to the boundary are completely excluded. Here, we propose a new TC design scheme that
maintains the dimensionless coordinates \citep{Microcavity_R.K.Chang}
in physical space by eliminating the size-scaling of the conformal mapping.
We first consider a uniform index disk cavity in OV space ($\eta$-plane)
with the unit radius $R_{0}$ and the invariant reference refractive
index $n_{0}$, and then conformally transform the entire space, multiplied
by an index-proportion parameter $\gamma$ into a WT space ($\zeta$-plane)
through the mapping equation without $\beta$, Eq. \eqref{eq:CS_Map}.
Finally, forcing the exterior refractive index to 1 yields the TC
in physical space. As a result, the interior and exterior refractive
indices in the physical space are as follows:
\begin{equation}
n(\mathbf{r})=\begin{cases}
n_{\text{in}}^{{\scriptscriptstyle \text{Ph}}}=n_{\text{0}}\gamma\left|\frac{d\zeta}{d\eta}\right|^{-1}\,, & \text{(interior)}\\
n_{\text{ex}}^{{\scriptscriptstyle \text{Ph}}}=1\,, & \text{(exterior)}
\end{cases}.\label{eq:RI_DTC}
\end{equation}
Additionally, through the inverse transformation from $\zeta$-plane
to $\eta$-plane, the interior and exterior refractive indices in RV
space can be obtained as follows:
\begin{equation}
\tilde{n}(\tilde{\mathbf{r}})=\begin{cases}
n_{\text{in}}^{{\scriptscriptstyle \text{RV}}}=n_{\text{0}}\gamma\,, & \text{(interior)}\\
n_{\text{ex}}^{{\scriptscriptstyle \text{RV}}}=\left|\frac{d\eta}{d\zeta}\right|^{-1}\,, & \text{(exterior)}
\end{cases}\label{eq:RI_DTC_RV}
\end{equation}
where the position vector $\tilde{\mathbf{r}}=(u,v)$. We call
this newly constructed TC the $\gamma$-type TC and, for clarity, we
refer to the conventional TC as the $\beta$-type TC. The $\gamma$-type
TC can satisfy the TIR condition by increasing the interior refractive
index in physical space independently of the profile-generating factor,
$\left|d\zeta/d\eta\right|^{-1}$ of Eq. \eqref{eq:RI_DTC}, without
reducing the cavity size. This is the most noticeable difference from
the $\beta$-type TC, in which decreasing $\beta$ reduces the cavity
size and simultaneously increases the profile-generating factor of
Eq. \eqref{eq:RI_TC} as a whole. To aid understanding, we have drawn
conceptual diagrams of the four spaces used in the $\gamma$-type circle-shaped
TC in Fig. \ref{fig:spaces}.
$\beta$-type and $\gamma$-type are perfectly identical systems from
the viewpoint of physics. In the case of the $\gamma$-type, $k_{r}$
denotes the dimensionless resonant wavenumber, and multiplying $k_{r}$
by $n_{0}\gamma$, which is changed by the TIR condition, gives the internal dimensionless
resonant wavenumber, $\kappa\equiv n_{0}\gamma k_{r}R_{0}$. The free-space
wavenumber in the $\beta$-type is obtained by dividing $k_{r}$ of the $\gamma$-type
by $\beta$. One can select and use whichever of
the two types is convenient. The $\gamma$-type can be more advantageous for
theoretical and numerical studies, such as analyzing the changes of the resonance
distribution in a given wavelength region, adjusting the target frequency
of the resonance to be observed, tracing a specific resonance to find
the optimal conditions, or reobtaining the refractive index profile
for design in real fabrication.
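The bookkeeping between the two types can be summarized by the following minimal Python sketch, which only implements the relations $\kappa=n_{0}\gamma k_{r}R_{0}$ (with $R_{0}=1$) and $k_{r}^{\beta}=k_{r}^{\gamma}/\beta$ stated above; the numerical values in the usage example are hypothetical:
\begin{verbatim}
def gamma_to_beta(k_r_gamma, n0, gamma, beta):
    """Bookkeeping between the two equivalent descriptions (R0 = 1)."""
    kappa = n0 * gamma * k_r_gamma   # internal dimensionless wavenumber
    k_r_beta = k_r_gamma / beta      # free-space wavenumber, beta-type
    return kappa, k_r_beta

# Hypothetical usage, anticipating gamma = gamma_min = 1/beta_max of
# the TIR condition derived in the next section (delta = 0.2):
gamma_min = (1 + 0.2) / (1 - 0.2)
kappa, k_r_beta = gamma_to_beta(5.0, 1.8, gamma_min, 1.0 / gamma_min)
\end{verbatim}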
\begin{table}[H]
\begin{centering}
\begin{tabular}{|l|c|c|}
\hline
& $\beta$-type TC & $\gamma$-type TC\tabularnewline
\hline
\hline
conformal transformation & $\zeta=\beta f(\eta)$ & $\zeta=f(\eta)$\tabularnewline
\hline
refractive index in RV space & $n_{\text{in}}^{{\scriptscriptstyle \text{RV}}}=n_{0}$ & $n_{\text{in}}^{{\scriptscriptstyle \text{RV}}}=n_{0}\gamma$\tabularnewline
\hline
radius of cavity in physical space & $\beta R_{0}=\beta$ & $R_{0}=1$\tabularnewline
\hline
refractive index in physical space & $n_{\text{in}}^{{\scriptscriptstyle \text{Ph}}}=n(\zeta,\beta)$ & $n_{\text{in}}^{{\scriptscriptstyle \text{Ph}}}=n(\zeta)\gamma$\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{Comparison between $\beta$-type and $\gamma$-type TCs \label{tab:diff_TC_DTC}}
\end{table}
\begin{figure}
\includegraphics[scale=0.3]{ResTrace}
\caption{(a) The range of the refractive index profile $n_{\text{in}}^{{\scriptscriptstyle \text{Ph}}}$
(black and blue solid lines) and the value of $n_{0}\gamma_{\text{min}}$
(red dashed line) according to the center-shift parameter $\left|\delta\right|$
in the $\gamma$-type circle-shaped TC with $n_{0}=1.8$ and $\gamma=\gamma_{\text{min}}$.
(b) and (c) The changes of the internal dimensionless wavenumber $\kappa$
and the $Q$-factor for M(13,1) with $\left|\delta\right|$,
respectively. Black and red lines in (b) are the real and the imaginary
parts of $\kappa=n_{0}\gamma k_{r}R_{0}$, respectively. The blue dashed
line in (c) is the $Q$-factor in the uniform index disk cavity with
$n_{0}$. \label{fig:delta_vs_n_Q}}
\end{figure}
By the center-shift conformal transformation Eq. \eqref{eq:CS_Map},
the disk cavity with homogeneous interior refractive index $n_{0}$
in OV space is transformed to the $\gamma$-type circle-shaped TC
with the following inhomogeneous interior refractive index profile
in the physical space.
\begin{equation}
n_{\text{in}}^{{\scriptscriptstyle \text{Ph}}}(\zeta)=n_{\text{0}}\gamma\left|\frac{(1-\delta\zeta)^{2}}{(\delta^{2}-1)}\right|^{-1},\,\,\,\,\,\,\left|\delta\right|<1.\label{eq:RI_CSCircle_in}
\end{equation}
Also, we can derive the profile of exterior refractive index for
$\gamma$-type TC in the RV space through the inverse conformal mapping
Eq. \eqref{eq:inverse_CS_Map} as follows:
\begin{equation}
n_{\text{ex}}^{{\scriptscriptstyle \text{RV}}}(\eta)=\left|\frac{(1-\delta^{2})}{(1+\delta\eta)^{2}}\right|,\,\,\,\,\,\,\left|\delta\right|<1.\label{eq:RI_CSCircle_RV_ex}
\end{equation}
To obtain cWGMs formed by TIR, it is required that the
minimum value of the interior refractive index profile be no less
than $n_{0}$, and we define the smallest $\gamma$ satisfying this condition
as $\gamma_{\text{min}}$. In the case of the $\gamma$-type circle-shaped
TC, $n_{\text{in}}^{{\scriptscriptstyle \text{Ph}}}$ has its minimum
value at $\zeta=-\delta/\left|\delta\right|$, thus $\gamma_{\text{min}}$
is given as follows:
\begin{equation}
\gamma_{\text{min}}\equiv\frac{1+\left|\delta\right|}{1-\left|\delta\right|},\,\,\,\,\,\,\left|\delta\right|<1.\label{eq:gamma_min}
\end{equation}
Comparing the above equation with Eq. \eqref{eq:beta_max} used in
the $\beta$-type TC, we can see that $\gamma_{\text{min}}=1/\beta_{\text{max}}$.
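A small numerical sanity check of Eqs. \eqref{eq:RI_CSCircle_in}, \eqref{eq:beta_max} and \eqref{eq:gamma_min} can be written as the following Python sketch (illustrative only; $n_{0}=1.8$ and $\left|\delta\right|=0.2$ are the example values used elsewhere in this paper):
\begin{verbatim}
import numpy as np

n0, delta = 1.8, 0.2
gamma_min = (1 + abs(delta)) / (1 - abs(delta))   # Eq. (gamma_min)
beta_max = (1 - abs(delta)) / (1 + abs(delta))    # Eq. (beta_max)
assert np.isclose(gamma_min, 1.0 / beta_max)

# Interior profile of Eq. (RI_CSCircle_in) on the cavity boundary:
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
zeta = np.exp(1j * theta)
n_in = n0 * gamma_min * np.abs(delta**2 - 1) / np.abs(1 - delta * zeta)**2

# With gamma = gamma_min the minimum of the profile is exactly n0,
# attained at zeta = -delta/|delta| (theta = pi on this grid):
assert np.isclose(n_in.min(), n0)
\end{verbatim}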
\begin{figure*}
\includegraphics[width=17.9cm]{patterns_small}\caption{Refractive index profiles and near-field and far-field intensity patterns
for the mode pair of M(13,1) at the shift parameter (a) $\left|\delta\right|=0$,
(b) $0.05$, (c) $0.10$, (d) $0.15$, and (e) $0.30$ in the $\gamma$-type
circle-shaped TC with $n_{0}=1.8$ and $\gamma=\gamma_{\text{min}}$.
The refractive index outside the cavity is set to that of air. In each step, the upper
(lower) half of the near-field pattern and the blue solid (red dashed) line
in the far-field pattern correspond to the even-parity (odd-parity) mode. \label{fig:RI =000026 patterns}}
\end{figure*}
\section{Numerical Results}
\subsection{cWGMs in Circle-Shaped Transformation Cavity}
Using the $\gamma$-type circle-shaped TC presented above, we investigate
the characteristic changes of a specific cWGM for TM polarization
by changing $\left|\delta\right|$ in the range $0\leq\left|\delta\right|\leq0.4$ under
the fixed conditions of reference refractive index $n_{0}=1.8$ and
index-proportion parameter $\gamma=\gamma_{\text{min}}$. These results
can be obtained from either the finite element method (FEM) or the boundary
element method (BEM) \citep{BEM_TC}; the FEM-based COMSOL Multiphysics package
was used in this study. Also, in this paper, we assign cWGMs to M($m$,
$l$) as a combination of mode indices corresponding to the angular
momentum index $m$ and the radial nodal number $l$ in the uniform
index disk cavity. According to the change of $\left|\delta\right|$,
the interior refractive index in OV space and the maximum and minimum
values of the interior refractive index profile in physical space
change as shown in Fig. \ref{fig:delta_vs_n_Q} (a). As
$\left|\delta\right|$ increases, the overall variation of the index
profile $n_{\text{in}}^{{\scriptscriptstyle \text{Ph}}}$ gradually
widens, with $n_{0}$ as the baseline. At the same time, $\gamma_{\text{min}}$ also increases
to satisfy the TIR condition, as shown in Fig. \ref{fig:delta_vs_n_Q}
(a).
The internal dimensionless wavenumber $\kappa$ and the $Q$-factor for
the mode pair of M(13,1) change as shown in Fig. \ref{fig:delta_vs_n_Q}
(b) and (c), respectively. In the uniform index disk cavity case with
$\left|\delta\right|=0$, all resonances except for the case with
$m=0$ are in doubly degenerate states due to the rotational symmetry.
Each degenerate pair becomes nearly degenerate under the condition of $\left|\delta\right|>0$
and each nearly degenerate pair can be divided into even- and odd-parity
modes due to the mirror symmetry for the $x$-axis. Nevertheless,
in the range we show, the wavelength values for the cWGM pair almost
overlap with no significant deviation. In terms of the
RV space shown in Fig. \ref{fig:spaces} (d), it means that the cWGM
pair is very little affected by the pure change in relative refractive
index at the boundary interface caused by $\left|\delta\right|$ in
our range.
As shown in Fig. \ref{fig:delta_vs_n_Q} (b), $\text{Re}[\kappa]$
associated with the internal wavelength of the mode pair of M(13,1)
does not change significantly with varying $\left|\delta\right|$;
on the other hand, $\text{Im}[\kappa]$ grows larger than that in
the homogeneous case and then decreases from a certain threshold (in
our case, about $\left|\delta\right|=0.05$) as $\left|\delta\right|$
increases. This is directly reflected in the $Q$-factor in Fig. \ref{fig:delta_vs_n_Q}
(c). The temporary rise of the $Q$-factor, which is a typical aspect of
cWGMs in TCs satisfying the TIR condition \citep{cWGM_Y.Kim_TC_Nature_Phot,optimi_limacon_J.-W.Ryu},
is caused by the overall increase in the refractive index profile
of the TC due to the TIR condition, and the increase of $\gamma_{\text{min}}$
in the $\gamma$-type TC well explains why such behavior occurs.
For the case of $\ensuremath{\left|\delta\right|\ne0}$,
the relative refractive index at the boundary interface and the interior
refractive index profile exhibit a dipole distribution similar to
the case of the lima\c{c}on TC \citep{cWGM_Y.Kim_TC_Nature_Phot}.
Considering the emitting mechanism described through the Husimi function
\citep{Husimi_I.Kim_OE}, cWGMs in the circle-shaped TC can be predicted
to have similar mode properties. We show the stepwise change of the
refractive index profile and near- and far-field intensity patterns
for an even-odd mode pair of M(13,1) at $\left|\delta\right|=0$, $0.05$,
$0.10$, $0.15$, and $0.30$ in Fig. \ref{fig:RI =000026 patterns}.
As $\ensuremath{\left|\delta\right|}$ increases, the dipole
distribution of the refractive index becomes more pronounced due to
the increase in the variation width of the index profile, while the
near field intensity pattern at all steps maintains a cWGM morphology
confined well along the boundary.
In Refs. \citep{Husimi_I.Kim_OE,optimi_limacon_J.-W.Ryu}, it has already
been discussed that the emission of cWGMs satisfying the TIR condition
is tunneled out as an evanescent wave in the region where the refractive
index is relatively low and, as the deformation parameter increases,
the nearly flat intensity band structure on the Husimi function for
cWGMs is almost unchanged, while the shape of the critical line is
further bent by the variation of the relative refractive index at
the boundary interface. Such a non-constant critical angle creates a
unique light emission mechanism in which the light tunnels out at regions
where the band structure is relatively close to the critical angle,
i.e., where the relative refractive index at the boundary interface
is relatively low.
In the TCs with a dipole distributed refractive index profile, the
critical line approaches the band structure only in one place, which
creates a single-point emitting mechanism, and their external waves
form bi-directional emission if they have an axis symmetry. We can
confirm it through the far field intensity patterns in Fig. \ref{fig:RI =000026 patterns}.
As $\left|\delta\right|$ increases, the isotropic emission
of the mode pair at $\left|\delta\right|=0$ turns into bi-directional
emission, and the mode pairs in each step show the same tendency in
the far-field intensity distribution regardless of parity. The bi-directionality
is best at $\left|\delta\right|=0.15$. Incidentally, for
$\left|\delta\right|\geq0.2$, the maximum value of the refractive index
profile rises above 4, which is difficult to implement,
but the far field distribution still has bi-directionality.
\begin{figure*}
\includegraphics[scale=0.45]{P-factors}
\caption{The change of $P$-factors for even-parity modes of (a) M(13,1) and
(b) M(13,2) according to $\left|\delta\right|$ in $\gamma$-type
circle-shaped TC with $n_{0}=1.8$ and $\gamma=\gamma_{\text{min}}$.
The data pickup domain for $P$-factors is set to $r_{{\scriptscriptstyle \text{RV}}}=0.8$
and the intensity of the near field patterns is normalized on a linear
scale. \label{fig:purity_1D}}
\end{figure*}
\subsection{Purity Factor for cWGMs}
Here we present another tool for characterizing cWGMs. The RV space
makes it easier to analyze the mode characteristics, since it is the equivalent
space of the physical space and is formed by a unit disk cavity with uniform
refractive index. In general, when the uniform index disk cavity is
progressively distorted by the shape deformation, WGMs with perfect
rotational symmetry begin to lose their inherent properties, accompanied
by $Q$-spoiling and pattern distortion due to the synthesis of
several angular momentum components. The angular momentum decomposition
is a very useful method for analyzing such an angular momentum distribution
\citep{AMD}. The spread of $m$ derived from the analysis of angular
momentum components for a resonant mode can be used to gauge how much
the resonance in a slightly deformed cavity is distorted from a specific
resonance in the uniform index disk cavity. Such distortion also occurs
at the resonances in TCs. In the case of homogeneous cavities, the
distortion is due to the effect of shape deformation, whereas in the
case of circle-shaped TC without shape deformation, it is caused by
the non-uniformity of the relative refractive index at the boundary
interface. The variation of the distortion rate for each mode according
to the change of the system parameters has different criteria according
to the lifetime and wavelength of the resonance, but it is sufficient
to check to what extent the WGM characteristics of the uniform index disk
cavity are maintained in the cWGM.
To analyze the inherent mode properties inside TCs, we introduce the
angular momentum distribution in the RV space, which is equivalent to the
physical space. In the RV space, the wave function inside the uniform index disk
cavity can be expanded to cylindrical harmonics in polar coordinates
as follows:
\begin{equation}
\psi(r_{{\scriptscriptstyle \text{RV}}},\phi_{{\scriptscriptstyle \text{RV}}})=\sum_{m=-\infty}^{\infty}\alpha_{m}J_{m}\left(\kappa\frac{r_{{\scriptscriptstyle \text{RV}}}}{R_{0}}\right)e^{im\phi_{{\scriptscriptstyle \text{RV}}}},\label{eq:AMD_RV}
\end{equation}
where $r_{{\scriptscriptstyle \text{RV}}}(<R_{0})$ and $\phi_{{\scriptscriptstyle \text{RV}}}$
are the radius and angle of the position on the data pickup domain,
respectively, the + (-) signs of the angular momentum index denote CCW (CW)
traveling-wave components, $\alpha_{m}$ is the angular momentum distribution
for $m$, and $J_{m}$ is the $m$th-order Bessel function of the first
kind. $\alpha_{m}$ is obtained through the Fourier expansion of the
above equation. Note that the data pickup domain must be chosen as
a concentric circle with a center of mass in the RV space, taking
into account the spatial transformation by conformal mapping. From
the above angular momentum distribution, our newly proposed measurand
that estimates the mode distortion rate, namely the purity factor, can
be simply defined as follows:
\begin{equation}
P=\frac{\left|\alpha_{d}\right|^{2}}{\sum_{m=0}^{\infty}\left|\alpha_{m}\right|^{2}}\,,\label{eq:P-factor}
\end{equation}
where $\alpha_{d}$ is the coefficient for the dominant angular momentum index
$d$. In our situation of standing wave conditions imposed by axis
symmetry, we only need to take one component, CCW or CW. This $P$-factor
quantifies the contribution of the resonance with $m=d$ of the uniform index
disk cavity in forming a specific resonance that we are going to observe
in TCs.
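A possible numerical recipe for Eqs. \eqref{eq:AMD_RV} and \eqref{eq:P-factor} is sketched below in Python; the wave-function samples, the truncation $m_{\max}$ and the grid size are assumptions of this illustration and not the actual implementation used in this work:
\begin{verbatim}
import numpy as np
from scipy.special import jv

def purity_factor(psi_circle, kappa, r_rv=0.8, R0=1.0, m_max=40):
    """psi_circle: wave function sampled on a uniform phi grid of the
    pickup circle r_RV in RV space; kappa: internal dimensionless
    wavenumber.  Returns (P, d) with d the dominant index."""
    N = len(psi_circle)
    c = np.fft.fft(psi_circle) / N   # c[m] = alpha_m * J_m(kappa r/R0)
    weights = {}
    for m in range(m_max + 1):       # one traveling component suffices
        weights[m] = abs(c[m] / jv(m, kappa * r_rv / R0)) ** 2
    d = max(weights, key=weights.get)
    return weights[d] / sum(weights.values()), d
\end{verbatim}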
To investigate the change of the distortion rate of the cWGM, we obtained
the above $P$-factor for M(13,1) in the range of $0\leq\left|\delta\right|\leq0.4$,
with the data pickup domain $r_{\text{RV}}=0.8$. We plotted it in
Fig. \ref{fig:purity_1D} attaching the $P$-factor for M(13,2) for
comparison. The $\left|\delta\right|$-dependent increase in the heterogeneity
of the relative refractive index at the boundary interface, which
acts as an effective deformation effect, reduces the $P$-factor overall
for both modes. Here, it should be noted that the variation of the
$P$-factor for M(13,1) is very small. It means that the resonance
properties of the uniform index disk cavity are maintained fairly
well in cWGMs with $l=1$. In contrast, M(13,2), which satisfies
the TIR condition less well, reacts more sensitively to the center-shift
parametric changes of the boundary relative refractive index variation,
resulting in a greater collapse of the $P$-factor.
\section{Summary}
In this paper, we have newly proposed a construction scheme for TCs
without the shrinking parameter related to the TIR condition and, in
the circle-shaped TC constructed with it, investigated the characteristic changes
of a cWGM whose isotropic emission breaks into bi-directional emission
as the center-shift parameter increases. The enhancement of the distortion
effect on the modes due to the pure spatial refractive index variation
with the center-shift parameter can be verified through the
newly defined purity factor $P$, which indicates how well the nature of
a WGM in the uniform index disk cavity is maintained, via the angular
momentum distribution in the RV space. In conclusion, it has been
shown that the circle-shaped TC can produce bi-directionally
emitting fundamental cWGMs whose $P$-factor is nearly one.
\section*{Acknowledgments}
We would like to thank Y. Kim and S.-J. Park for helpful discussions.
This work was supported by the National Research Foundation of Korea
(NRF) grant funded by the Korean government (MSIP) (2017R1A2B4012045
and 2017R1A4A1015565) and the Institute for Basic Science of Korea
(IBS-R024-D1).
\section{Introduction}
Suppose two separated parties, Alice and Bob, aim to output random variables $X$ and $Y$, such that $(X,Y)$ is distributed exactly according to a target joint probability distribution $P$. That is to say, Alice and Bob want to sample a shared randomness $P$, and sometimes we call it a \emph{classical correlation}. {Then an important problem is, what is the minimum cost of generating an arbitrary classical correlation?}
{Actually this problem has been systematically studied \cite{zhang2012quantum,jain2013efficient,jain2017multipartite}.} Generally, $P$ is not a product distribution, thus \alice and \bob can share a seed correlation $(X',Y')$ and each applies a local operation on the corresponding subsystem without communication. The minimum {\emph{size}} of this seed distribution, {i.e., the half of the total number of bits}, is defined to be the \emph{randomized correlation complexity} of $P$, denoted $\R(P)$. Alternatively, the two parties can also share a \emph{quantum} state $\sigma$ as a seed state, on which the two parties apply local quantum operations without communication to generate $(X,Y)$. In this case, the minimum size of the quantum seed state $\sigma$, {i.e., the half of the total number of qubits}, is called the \emph{quantum correlation complexity}, denoted $\Q(P)$.
Instead of sharing seed states, \alice and \bob can also generate a correlation from scratch by communication only. When communicating quantum information, the minimum number of qubits exchanged between \alice and \bob, initially sharing nothing, to produce $P$ at the end of the protocol is defined as the {\em quantum communication complexity} of $P$, denoted $\qcomm(P)$. Similarly, one can also define the {\em randomized communication complexity} of $P$, denoted $\rcomm(P)$, as the minimum number of bits exchanged to produce $P$. It turns out that for any $P$, the correlation complexity and the communication complexity are always the same, namely $\qcomm(P) = \Q(P)$ and $\rcomm(P) = \R(P)$ \cite{zhang2012quantum}. Therefore, we can simply use the notations \Q and \R to denote the quantities in quantum and classical settings respectively. In this paper, {when generating classical correlations} by quantum procedures, we will mainly focus on the setting with seed states.
In fact, the full characterizations for \Q and \R have been achieved \cite{zhang2012quantum,jain2013efficient}. That is, for any classical correlation $P$,
\begin{equation}
\R(P) = \lceil \log_2 \rank_+(P) \rceil,
\label{eq:qcorr+rank}
\end{equation}
and
\begin{equation}
\Q(P) = \lceil \log_2 \prank(P) \rceil.
\label{eq:qcorrprank}
\end{equation}
Here for any nonnegative matrix $P\in \mbR_+^{n\times m}$, $\rank_+(P)$ is the nonnegative rank, which is defined as the minimum number $r$ such that $P$ can be decomposed as the summation of $r$ nonnegative matrices of rank $1$. And $\prank(P)$ is the positive semi-definite rank ({PSD-rank}), which is the minimum $r$ such that there are $r \times r$ positive semi-definite matrices $C_x$, $D_y\in\mbC^{r\times r}$, satisfying that $P(x,y) = \tr (C_x D_y)$, for all $x$ and $y$ \cite{fiorini2012linear,fawzi2015positive}.
It can be shown that the gap between nonnegative ranks and {PSD-ranks} can be huge, and this therefore reveals the remarkable advantages of quantum schemes in generating classical correlations. For example, consider the following $2^n \times 2^n$ matrix $M \in \mbR^{2^n\times 2^n}_+$ with rows and columns indexed by $n$-bit strings $a$ and $b$, and real nonnegative entries $M_{ab} := (1 - a^{\intercal} b)^2$, {where $a^{\intercal} b$ is the inner product between $a$ and $b$ taken over the integers}. Then we have the following conclusions.
\begin{fact}[\cite{fiorini2012linear}]
It holds that $\rank_+(M) = 2^{\Omega(n)}$ and $\prank(M) = O(n)$.
\end{fact}
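For concreteness, the following Python sketch (our own illustration, not code from the cited works) checks for small $n$ the standard size-$(n+1)$ positive semi-definite factorization $C_a=u_au_a^{\intercal}$, $D_b=v_bv_b^{\intercal}$ with $u_a=(1,-a)$ and $v_b=(1,b)$, which underlies the bound $\prank(M)=O(n)$:
\begin{verbatim}
import itertools
import numpy as np

n = 3
strings = [np.array(s) for s in itertools.product([0, 1], repeat=n)]

for a in strings:
    u = np.concatenate(([1.0], -a))       # C_a = outer(u, u), PSD
    for b in strings:
        v = np.concatenate(([1.0], b))    # D_b = outer(v, v), PSD
        lhs = np.trace(np.outer(u, u) @ np.outer(v, v))
        assert np.isclose(lhs, (1 - a @ b) ** 2)  # tr(C_a D_b) = M_ab
\end{verbatim}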
Though quantum advantages can be huge, and {extraordinary progress has been achieved} on the physical implementation of quantum computation, it is widely believed that the availability of large scale quantum computers is still far away \cite{arute2019quantum,preskill2018quantum}. As a consequence, in the near future the scale of quantum information processing, especially the scale of entanglement, is quite limited, say dozens or hundreds of qubits. Therefore, for some realistic classical correlations $P$, it is possible that $\lceil \log_2 \prank(P) \rceil$, {the necessary size of a shared seed quantum state that produces $P$ according to \cite{jain2013efficient}}, exceeds the size that we can physically realize. In this situation, a natural question is, can we design a proper {quantum and classical hybrid protocol} to generate $P$ in such a way that {it not only fulfills the task completely, but also fully exploits the potential of our quantum capability}? In this manuscript, by looking into the rich mathematical structures of {quantum and classical hybrid protocols}, we will give a positive answer to the above question.
{Particularly, we first consider the case where the only restriction on our capability to manipulate quantum states is the scale, which means we can request any quantum states whenever we want as long as their size is within our means, {which may depend on the classical messages exchanged}. Then we prove that if a hybrid protocol has to be utilized to generate a large classical correlation $P$, the protocol can be fully characterized by a concept called \emph{$k$-block positive semi-definite ranks}, which is essentially a generalization of the concept of PSD-ranks, and reveals the relation between the amount of classical resource needed and the quantum scale available. By looking into the rich mathematical structures of this new concept, we prove that the shortage of one single qubit may require a huge amount of classical resource to compensate, thus providing new evidence of quantum advantages in generating classical correlations. Furthermore, we also consider another setting with more rigorous restrictions on our freedom of exploiting quantum {resources}, i.e., in addition to the restricted quantum scale, only one quantum state is provided for the players and it is independent of the classical messages. Based on the idea of entanglement transformation, we show that the second model actually has similar power to the first one.}
{In the meanwhile, our results are also related to a famous open problem in quantum communication complexity theory. Quantum communication complexity was introduced by Yao in~\cite{Yao93}, and it investigates the advantages and limits of communication complexity models when the players are allowed to exchange quantum messages. Dozens of examples have been discovered that exhibit the advantages of quantumness (see~\cite{10.1145/3357713.3384243} and references therein), and numerous methods proving lower bounds on quantum communication complexity have been established~\cite{10.5555/1803907}. In the model introduced by Yao, the players may share classical random strings independent of the input before exchanging messages. This is named the Yao's model. Thanks to Newman's theorem~\cite{NEWMAN199167}, we know that the shared randomness can save at most $O(\log n)$ bits of communication, where $n$ is the length of the inputs. Cleve and Buhrman in~\cite{CB97} introduced another model where the players are allowed to preshare arbitrary bipartite quantum states, which is named the Cleve-Buhrman model. Using quantum teleportation \cite{bennett1993teleporting}, we may assume that the players in the Cleve-Buhrman model only exchange classical messages while the communication cost increases by at most a factor of 2. }
{A fundamental problem in communication complexity is how much communication can be saved if the players share entanglement. In other words, what is the largest separation between the Yao's model and the Cleve-Buhrman model? The role of entanglement in quantum computing has always been a core topic in the theory of quantum computation, and it is studied in various models of computation. In particular, it has been shown in a very recent breakthrough result~\cite{JNVWY'20} that multi-prover interactive proof systems with shared entanglement are able to decide the Halting problem, while the ones without shared entanglement are in $\mathrm{NEXP}$~\cite{babai1991non}. However, little is known about the power of entanglement in communication complexity. Indeed, till now we do not have any nontrivial upper bound on the separation between the Yao's model and the Cleve-Buhrman model. Meanwhile, we are not aware of any example exhibiting a super-constant separation between these two models either. In this paper, our results provide more facts on the power of entanglement in the context of generating classical correlations, showing that sharing entanglement can save classical communication significantly, and thus hopefully shed new light on this wide open problem.}
\section{The hybrid protocols}
Recall that for convenience we define the {size} of a bipartite distribution as the half of the total number of bits. Similarly, the size of a bipartite quantum state is the half of the total number of qubits. We suppose the largest bipartite quantum system we can manipulate has a size of $s$ qubits, and for convenience we call $s$ the \emph{quantum capability}. We now consider a target classical correlation $P\in \mbR_+^{n\times m}$ with $s<\lceil \log_2 \prank(P) \rceil$. Clearly, we cannot generate $P$ using a purely quantum scheme.
Therefore, we turn to analyze the possibility of combining quantum power and classical power. To make the hybrid protocol valuable, we hope the extra classical cost needed will be dramatically smaller than that of a purely classical protocol. In the meantime, {since in principle we have different ways to combine quantum subprotocols and classical ones into hybrid protocols, we now analyze two main possibilities below}.
\subsection{The classical-quantum hybrid}
{Suppose the target classical correlation can be expressed as a linear combination of two other ones, i.e., $P=\frac{1}{2}P_1+\frac{1}{2}P_2$, where $P_1$ and $P_2$ are nonnegative matrices}. Then one can easily construct examples with $\prank(P_1)<\prank(P)$ and $\prank(P_2)<\prank(P)$, which inspires us to design the following natural hybrid protocol. Assume $P=\sum_{i\in I}p_iP_i$, where $\{p_i\}$ is a probability distribution on $i\in I$, and for any $i\in I$, $P_i\in \mbR_+^{n\times m}$ is a classical correlation with $\lceil \log_2 \prank(P_i) \rceil\leq s$, then \alice and \bob can produce a sampling of $P$ as below. They first sample a shared output $i\in I$ classically according to the probability distribution $\{p_i\}$, then one of them prepares a bipartite quantum state $\rho_i$ that can serve as a seed state to produce $P_i$ and sends half of the qubits to the other party by quantum communication, which is within the quantum capability. After that, they generate a classical correlation $P_i$ by performing local measurements on $\rho_i$ like in a purely quantum protocol. Since $\sum_{i\in I}p_iP_i=P$, overall the hybrid protocol generates exactly the target classical correlation $P$.
Since in the first stage of the protocol \alice and \bob sample $i\in I$, we call this a \emph{classical-quantum hybrid protocol}. Here the classical cost is $c=\lceil \log_2 |I| \rceil$ bits, and the quantum cost is $q=\max_i\size(\rho_i)$ qubits. Since it holds that $q\leq s$, the current hybrid protocol can generate the target correlation $P$ within the quantum capability. Below is a simple example that demonstrates this idea.
Let
\begin{equation}\label{eq:diagonal}
P = \frac{1}{2^k}\begin{bmatrix}
P_1 & & & & \\
& P_2 & & &\\
& & \ddots & &\\
& & & & P_{2^k}
\end{bmatrix},
\end{equation}
where $2^k\cdot P\in \mbR_+^{2^kn\times 2^km}$ is a block diagonal matrix, and for the convenience of later discussion, we denote it by $\text{diag}(P_1,P_2,...,P_{2^k})$. For each $i\in[2^k]$, suppose $P_i\in \mbR_+^{n\times m}$ is a classical correlation satisfying $\prank(P_i) =2^s$. Then it can be seen that $P$, as a classical correlation, cannot be produced using a purely quantum protocol, as the current quantum capability $s$ is smaller than $\lceil \log_2 \prank(P) \rceil=k+s$. However, we can generate it using a hybrid protocol, where in the first stage it takes them classical communication of $k$ bits to sample $i\in[2^k]$, then they consume a shared quantum state of size $s$ qubits to generate the corresponding $P_i$. As long as they adjust the output labels properly, the overall output will be exactly a sample of $P$.
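The hybrid procedure for this block-diagonal example can be sketched in Python as follows, where, purely for illustration, the quantum stage is replaced by direct sampling from $P_i$, since only the distribution of the outcomes matters here:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def sample_hybrid(blocks, probs):
    """blocks[i]: the correlation P_i (entries summing to 1);
    probs: the classical mixing distribution {p_i}."""
    i = rng.choice(len(blocks), p=probs)          # classical stage
    P_i = np.asarray(blocks[i], dtype=float)
    flat = rng.choice(P_i.size, p=P_i.ravel())    # quantum-stage stand-in
    x, y = np.unravel_index(flat, P_i.shape)
    return i, int(x), int(y)   # (i, x, y): relabel outputs by block i
\end{verbatim}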
As pointed out before, examples of $P_i$ exist such that $\rank_+(P_i)\gg2^s$, {i.e., when sampling $P_i$ quantum schemes enjoy remarkable advantages over classical ones}. If this is the case, though we cannot produce $P$ using a purely quantum scheme directly, such a hybrid protocol may decrease the amount of classical resource dramatically.
Due to the above example, we are tempted to consider the following realistic problem. Still assume our quantum capability is known to be $s$, and the target classical correlation $P$ satisfies $s<\lceil \log_2 \prank(P) \rceil$. Then if we choose to generate $P$ using a classical-quantum hybrid protocol, what is the least amount of {extra classical resource} needed? Or, to put it another way, given an arbitrary classical correlation $P$, what is the minimum number $m$ such that $P$ can be expressed as the summation of $m$ nonnegative matrices with PSD-rank not larger than $2^s$? To answer this question, we first introduce the following definition, which is a generalization of the concept of PSD-rank.
\begin{definition} A \emph{$k$-block positive semi-definite factorization} of a nonnegative matrix $P\in \mbR_+^{n\times m}$ is a collection of positive semi-definite matrices $C_i=\text{diag}(C_i^1,...,C_i^r),D_j=\text{diag}(D_j^1,...,D_j^r)\in \mbC^{kr\times kr}$ that satisfy
\[P_{ij}=\tr (C_iD_j)=\sum_{l=1}^r\tr (C_i^lD_j^l),\ i=1,...,n,\ j=1...,m,\]
where $C_i^l,D_j^l\in \mbC^{k\times k}$ for each $i$, $j$, and $l$. And the {$k$-block positive semi-definite rank}, denoted $\kprank(P)$, is defined as the smallest integer $r$ for which such a $k$-block positive semi-definite factorization exists.
\end{definition}
We now prove that the question asked above is perfectly answered by the concept of $2^s$-block positive semi-definite ranks, where the corresponding classical-quantum hybrid protocol is exactly characterized by an optimal $2^s$-block positive semi-definite factorization.
\begin{theorem} Suppose the quantum capability is $s$ qubits. Then the minimum amount of classical communication needed in a classical-quantum hybrid protocol producing $P$ is exactly $\lceil\log_2{\tt rank}_{\tt psd}^{(2^s)}(P)\rceil$ bits.
\end{theorem}
\begin{proof} Suppose the minimal classical cost is $c$ bits. Then we have a factorization $P(x,y)=\sum_{i=1}^{2^c}p_iP_i(x,y)$, where $\{p_i\}$ is a probability distribution on $i\in [2^c]$, and each correlation $P_i$ can be generated by quantum communication of $\lceil\log_2\prank(P_i)\rceil\leq s$ qubits with a purely quantum protocol. Suppose a positive semi-definite factorization of $P_i$ is $P_i(x,y)=\tr(C_x^iD_y^i)$, where without loss of generality $C_x^i,D_y^i$ can be chosen as positive semi-definite matrices of size $2^s\times 2^s$ for any $x\in[n],y\in[m]$. Let $C_x=\text{diag}(p_1C_x^1,...,p_{2^c}C_x^{2^c})$ and $D_y=\text{diag}(D_y^1,...,D_y^{2^c})$. Then it can be seen that $C_x$ and $D_y$ are block diagonal positive semi-definite matrices with each block of size $2^{s}\times2^{s}$, and furthermore, $P(x,y)=\tr(C_xD_y)$ for any $x\in[n],y\in[m]$. Therefore, it holds that ${\tt rank}_{\tt psd}^{(2^s)}(P)\leq2^c$, i.e., $\lceil\log_2{\tt rank}_{\tt psd}^{(2^s)}(P)\rceil\leq c$.
On the other hand, suppose $r={\tt rank}_{\tt psd}^{(2^s)}(P)$. Then one can find block diagonal positive semi-definite matrices $C_x$ and $D_y$ of block size $2^{s}\times2^{s}$ such that $P(x,y)=\tr(C_xD_y)$ for any $x\in[n],y\in[m]$. That is to say, we can suppose $C_x=\text{diag}(C_x^1,...,C_x^{r})$ and $D_y=\text{diag}(D_y^1,...,D_y^{r})$, where $C_x^i$ and $D_y^i$ are positive semi-definite matrices of size $2^s\times 2^s$ for any $i\in[r]$. Define $P_i$ to be the classical correlation $Q_i/\Vert Q_i\Vert_1$, where $Q_i\in\mbR_+^{n\times m}$ and $Q_i(x,y)=\tr(C_x^iD_y^i)$ for any $x\in[n],y\in[m]$. Note that this is well-defined: If we let $p_i=\Vert Q_i\Vert_1$, then $p_i>0$ according to the definition of $2^s$-block diagonal positive semi-definite rank. Then it is not hard to see that $P=\sum_{i=1}^{r}p_iP_i$, and for each $i\in[r]$, it holds that $\prank(P_i)\leq2^s$. Therefore, one can design a classical-quantum hybrid protocol to generate $P$ corresponding to this factorization, where the cost of classical communication is $\lceil\log_2r\rceil$, implying that $c\leq\lceil\log_2{\tt rank}_{\tt psd}^{(2^s)}(P)\rceil$.
\end{proof}
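The construction in the first half of the proof can be illustrated by the following Python sketch (with hypothetical inputs), which stacks per-block factorizations into the block-diagonal matrices $C_x$ and $D_y$ and evaluates $\tr(C_xD_y)$:
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def stacked_trace(p, C_blocks, D_blocks, x, y):
    """p[i]: weights; C_blocks[i][x] and D_blocks[i][y]: the PSD
    factors of block P_i, each of size 2^s x 2^s."""
    C_x = block_diag(*[p[i] * C_blocks[i][x] for i in range(len(p))])
    D_y = block_diag(*[D_blocks[i][y] for i in range(len(p))])
    # tr(C_x D_y) = sum_i p_i tr(C_x^i D_y^i) = sum_i p_i P_i(x,y)
    return np.trace(C_x @ D_y)
\end{verbatim}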
In real-life implementations of sampling $P\in \mbR_+^{n\times m}$, we often allow a small deviation of $\epsilon$, which suggests defining an approximate version of the $k$-block positive semi-definite rank, that is,
\begin{equation}
{\tt rank}_{\tt psd,\epsilon}^{(k)}(P)\equiv\min\{\kprank(Q):\text{ }Q\in \mbR_+^{n\times m}\text{ is a probability distribution and }\|P-Q\|_{1}\leq\epsilon\},
\end{equation}
{where $\|P-Q\|_{1}$ is the $1$-norm of $P-Q$, i.e., the summation of the absolute values of all entries of $P-Q$}. Then it can be seen that when tolerating a small additive error $\epsilon$, the cost of the optimal classical-quantum protocol that samples $P$ approximately is characterized by the corresponding approximate $k$-block positive semi-definite rank.
Therefore, we now know that given the quantum capability $s$ qubits, suppose $s<\lceil \log_2 \prank(P) \rceil$, then in order to design a proper classical-quantum hybrid protocol generating $P$, estimating ${\tt rank}_{\tt psd}^{(2^s)}(P)$ is crucial. In the rest of the current section, we will focus on the characterization of ${\tt rank}_{\tt psd}^{(2^s)}(P)$.
Firstly, according to the properties of ranks and PSD-ranks, we immediately have the following lower bounds for ${\tt rank}_{\tt psd}^{(2^s)}(P)$.
\begin{fact}For any nonnegative matrix $P\in \mbR_+^{n\times m}$ and any integer $k\geq1$, it holds that
\begin{equation}\label{eq:withPSDRank}
\kprank(P)\geq \frac{\prank(P)}{k}, \ \ {\tt rank}_{\tt psd,\epsilon}^{(k)}(P)\geq \frac{{\tt rank}_{\tt psd,\epsilon}(P)}{k},
\end{equation}
and
\begin{equation}\label{eq:withRank}
\kprank(P)\geq \frac{\rank(P)}{k^2}, \ \ {\tt rank}_{\tt psd,\epsilon}^{(k)}(P)\geq \frac{{\tt rank}_{\epsilon}(P)}{k^2},
\end{equation}
where ${\tt rank}_{\tt psd,\epsilon}(P)$ and ${\tt rank}_{\epsilon}(P)$ are the approximate PSD-rank and the approximate rank of $P$, respectively, i.e., ${\tt rank}_{\tt psd,\epsilon}(P)\equiv\min\{\prank(Q):\text{ }Q\in \mbR_+^{n\times m}\text{ is a probability distribution and }\|P-Q\|_{1}\leq\epsilon\}$ and ${\tt rank}_{\epsilon}(P)\equiv\min\{\rank(Q):\text{ }Q\in \mbR_+^{n\times m}\text{ is a probability distribution and }\|P-Q\|_{1}\leq\epsilon\}$.
\end{fact}
The above two lower bounds are tight in the exact case. For example, let $P$ be the classical correlation in Eq.\eqref{eq:diagonal}; then it holds that $\prank(P)=2^{s+k}$ and ${\tt rank}_{\tt psd}^{(2^s)}(P)\leq 2^{k}$, where the second fact follows from decomposing $P$ into a sum of $2^k$ classical correlations, each corresponding to one $P_i$. Hence ${\tt rank}_{\tt psd}^{(2^s)}(P)\leq \prank(P)/2^s$, and combined with Eq.\eqref{eq:withPSDRank} this means that actually ${\tt rank}_{\tt psd}^{(2^s)}(P)= \prank(P)/2^s=2^k$. Furthermore, if one chooses $P_i$ such that $\rank(P_i)=\prank(P_i)^2=2^{2s}$ for any $i\in[2^k]$, then we have $\rank(P)=2^{2s+k}$ and ${\tt rank}_{\tt psd}^{(2^s)}(P)=\rank(P)/2^{2s}$, implying that Eq.\eqref{eq:withRank} can also be tight. However, later we will see that in some cases these relations can be very loose.
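The mechanism behind Eq.\eqref{eq:withRank} can also be illustrated numerically: since $\tr(CD)=\langle\mathrm{vec}(C),\mathrm{vec}(D)\rangle$, each PSD term with blocks of size $k$ contributes rank at most $k^2$ to $P$. The short Python sketch below (ours, assuming \texttt{numpy}) builds a correlation from $r$ such terms and generically saturates the resulting bound $\rank(P)\leq rk^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = m = 12
k, r = 2, 3                       # factor size k, number of PSD terms r

def psd(size):
    A = rng.standard_normal((size, size))
    return A @ A.T                # a random positive semi-definite matrix

P = np.zeros((n, m))
for _ in range(r):                # each term Q_i(x,y) = tr(C_x^i D_y^i)
    C = [psd(k) for _ in range(n)]
    D = [psd(k) for _ in range(m)]
    P += np.array([[np.trace(C[x] @ D[y]) for y in range(m)]
                   for x in range(n)])

# each term has rank <= k^2, so rank(P) <= r k^2 = 12, saturated generically
print(np.linalg.matrix_rank(P))
\end{verbatim}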
We next turn to upper bounds for $\kprank(P)$. It turns out that $\kprank(P)$ can be upper bounded by generalizing the idea of the example in Eq.\eqref{eq:diagonal}, using the notion of a \emph{combinatorial rectangle} proposed by Yao~\cite{yao1979some}, which plays a key role in communication complexity theory. Suppose $X\subseteq[n]$ and $Y\subseteq[m]$; then $X\times Y$ pins down a submatrix of $P$, called a combinatorial rectangle. We define a \emph{partition} of $P$ to be a collection of nonzero combinatorial rectangles such that no two of them overlap and their union contains all nonzero entries of $P$. If each combinatorial rectangle, regarded as a classical correlation after normalization, can be produced quantumly within the quantum capability, then $P$ can be generated by a classical-quantum protocol as a probability mixture of these combinatorial rectangles. Naturally, in this situation we are interested in the size of an optimal partition of $P$, i.e., one with the minimum number of combinatorial rectangles, each within the quantum capability. For this, we make the following definition.
\begin{definition} Let $P\in \mbR_+^{n\times m}$ be a classical correlation. Define the \textbf{$k$-partition number} of $P$, denoted $C^k(P)$, as the minimum size of a partition of $P$ with the property that each combinatorial rectangle has PSD-rank at most $k$. For convenience, we call these combinatorial rectangles a \textbf{$k$-partition} of $P$.
\end{definition}
Then we have the following proposition.
\begin{prop} For any nonnegative matrix $P\in \mbR_+^{n\times m}$ and any integer $k\geq1$, it holds that
\begin{equation}
\kprank(P)\leq C^k(P).
\end{equation}
\end{prop}
\begin{proof}Suppose $t=C^k(P)$, and $\{P_1,P_2,...,P_t\}$ is an optimal $k$-partition of $P$.
Define the weight $w_i$ of the $i$-th combinatorial rectangle to be the sum of all its entries. Then $\sum_{i=1}^tw_i=1$, and $\{w_i,i\in[t]\}$ is a valid probability distribution.
We expand the size of each $P_i$ to $n\times m$ by padding with zero entries, keeping the positions of all nonzero entries the same as in $P$; this does not change its PSD-rank. For any $i\in[t]$, suppose an optimal positive semi-definite factorization of $P_i$ is $P_i(x,y)=\tr(C_x^iD_y^i)$, where $C_x^i,D_y^i$ are $k\times k$ positive semi-definite matrices for any $x\in[n],y\in[m]$. Let $C_x=\text{diag}(w_1C_x^1,...,w_{t}C_x^{t})$ and $D_y=\text{diag}(D_y^1,...,D_y^{t})$. Then it can be seen that $P(x,y)=\tr(C_xD_y)$ for any $x\in[n],y\in[m]$. Therefore, it holds that $\kprank(P)\leq C^k(P)$.
\end{proof}
We now consider a specific example of this upper bound. Again we go back to the one in Eq.\eqref{eq:diagonal}, for which we already know that ${\tt rank}_{\tt psd}^{(2^s)}(P)\leq 2^k$. The above upper bound naturally gives the same result, which means that the amount of classical resource needed in a classical-quantum hybrid protocol generating $P$ is at most $k$ bits. In the meantime, note that $\prank(P)=2^{s+k}$; that is to say, a purely quantum scheme producing $P$ needs a shared quantum state of $s+k$ qubits. Therefore, it can be said that the $k$ bits of classical resource involved in the classical-quantum protocol work quite efficiently, in the sense that they completely fulfill the task of the extra $k$-qubit quantum resource in a purely quantum scheme.
However, this is not always the case: it is possible that the effect of a single qubit of quantum resource requires a large amount of classical resource to compensate!
Before exhibiting such an example, we would like to remark that this can be regarded as another angle from which to reveal the remarkable advantages of quantum resources over classical resources in generating correlations. Our example will be based on \emph{Euclidean distance matrices}, which have been extensively studied~\cite{lin2010nonnegative,hrubevs2012nonnegative,shitov2019euclidean}.
\begin{definition}(Euclidean Distance Matrix) Given $n$ distinct real numbers $c_1,...,c_n$, the corresponding Euclidean distance matrix (EDM) is the $n\times n$ symmetric and nonnegative matrix $Q(c_1,...,c_n)$ whose $(i,j)$-th entry $q_{i,j}$ is defined by
\[q_{i,j}=(c_i-c_j)^2,\quad i,j=1,...,n.\]
\end{definition}
\begin{fact}\cite{shitov2019euclidean}
There exist $n$ distinct real numbers $c_1,...,c_n$ such that $\rank(Q_1)=3,\prank(Q_1)=2$ and $\rank_+(Q_1)\geq 2\sqrt{n}-2$, where $Q_1=Q(c_1,...,c_n)$.
\end{fact}
We choose such a $Q_1$ with $q_{i,j}>0$ for any $i\neq j$, and let $\tilde{Q}_1=Q_1/\Vert Q_1\Vert_1$; then $\tilde{Q}_1$ is a classical correlation with $\rank_+(\tilde{Q}_1)\geq2\sqrt{n}-2$. The above fact indicates that when generating $\tilde{Q}_1$, a quantum scheme enjoys remarkable advantages over any classical one: the cost of the former can be a single qubit, while the latter needs at least $\log_2(2\sqrt{n}-2)\approx\frac{1}{2}\log_2 n$ bits of classical resource.
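The linear-algebraic facts underlying this example are easy to verify numerically: an EDM built from points on a line has rank at most $3$, because $(c_i-c_j)^2=c_i^2-2c_ic_j+c_j^2$ is a sum of three rank-one terms, and rank is multiplicative under Kronecker products. Below is a minimal Python sketch (ours, assuming \texttt{numpy}; the random $c_i$ are only illustrative, not the special numbers of the Fact above), which also covers the tensor square $\tilde{Q}_1\otimes\tilde{Q}_1$ considered next:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
c = np.sort(rng.standard_normal(16))     # 16 distinct reals
Q1 = (c[:, None] - c[None, :]) ** 2      # EDM; rank 3 for n >= 3
Q1 = Q1 / Q1.sum()                       # normalize to a correlation
print(np.linalg.matrix_rank(Q1))         # 3
Q2 = np.kron(Q1, Q1)                     # the tensor square of Q1
print(np.linalg.matrix_rank(Q2))         # 9 = 3 * 3
\end{verbatim}
Combined with $\prank(A)\geq\sqrt{\rank(A)}$, this already shows $\prank(\tilde{Q}_1\otimes\tilde{Q}_1)\geq 3$.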
We now consider $\tilde{Q}_2=\tilde{Q}_1\otimes\tilde{Q}_1$, which is a classical correlation of size $n^2\times n^2$; similarly, for any positive integer $k$ we define $\tilde{Q}_k=\tilde{Q}_1^{\otimes k}$. Since $\prank(A\otimes B)\leq \prank(A)\cdot\prank(B)$ for any nonnegative matrices $A$ and $B$, we have $\prank(\tilde{Q}_2)\leq 4$ (actually, it is not hard to see that $\prank(\tilde{Q}_2)= 4$); thus a purely quantum scheme needs a quantum seed of only 2 qubits to generate $\tilde{Q}_2$. To study classical-quantum hybrid protocols generating $\tilde{Q}_2$, we now assume that $s=1$, i.e., our quantum capability is only one qubit, so we cannot generate $\tilde{Q}_2$ using a purely quantum scheme directly. We then turn to classical-quantum hybrid protocols to produce $\tilde{Q}_2$, and we are interested in the minimum classical resource needed. According to Theorem 1, we have to estimate $\lceil\log_2{{\tt rank}_{\tt psd}^{(2)}}(\tilde{Q}_2)\rceil$. We now prove the following conclusion.
\begin{prop}\label{ex1}
${{\tt rank}_{\tt psd}^{(2)}}(\tilde{Q}_2)\geq \log n$.
\end{prop}
\begin{proof}
Denote the $(i,j)$-entry of $\tilde{Q}_1$ by $\tilde{q}_{i,j}$, i.e., $\tilde{q}_{i,j}=\tilde{Q}_1(i,j)$. Then
\begin{equation}\label{eq:block}
\tilde{Q}_2=\tilde{Q}_1\otimes\tilde{Q}_1=\begin{bmatrix}
0 & \tilde{q}_{1,2}\tilde{Q}_1 & \dots & \tilde{q}_{1,n}\tilde{Q}_1\\
\tilde{q}_{2,1}\tilde{Q}_1 & 0 & \dots & \tilde{q}_{2,n}\tilde{Q}_1\\
\vdots & \vdots & \ddots & \vdots \\
\tilde{q}_{n,1}\tilde{Q}_1 & \tilde{q}_{n,2}\tilde{Q}_1 & \dots & 0
\end{bmatrix}.
\end{equation}
For the convenience of later discussion, we call $\tilde{q}_{i,j}\tilde{Q}_1$ the $(i,j)$-th block of $\tilde{Q}_2$ when $i\neq j$; apparently, for any $i\in[n]$, the $(i,i)$-th block is a zero matrix. For any other matrix $M$ of the same size $n^2\times n^2$, we also use this term to refer to the corresponding submatrix of $M$ in exactly the same position. Suppose $\tilde{Q}_2=\sum_{k=1}^rP_k$, where each $P_k$ is a nonnegative matrix with $\prank(P_k)\le 2$ for $k\in[r]$. Then we need to prove that $r\geq \log n$.
Suppose $r<\log n$. Then we claim that for any $i\neq j$, there must be an integer $k_0\in[r]$ such that the $(i,j)$-th block of $P_{k_0}$ has rank $3$ or $4$. This can be proved as follows. Suppose this is not the case, i.e., for any $k\in[r]$ the rank of the $(i,j)$-th block of $P_{k}$ is $1$ or $2$. Then, according to the fact that $\rank_+(A)=2$ for any rank-$2$ nonnegative matrix $A$~\cite{cohen1993nonnegative}, the sum of the $(i,j)$-th blocks of all the $P_{k}$ has nonnegative rank smaller than $2\log n$. However, this sum is actually $\tilde{q}_{i,j}\tilde{Q}_1$, whose nonnegative rank is at least $2\sqrt{n}-2$, much larger than $2\log n$, which is a contradiction. Therefore, for any block position, there exists $k\in[r]$ such that this block of $P_k$ has rank $3$ or $4$.
We now fix an arbitrary $k\in[r]$ and focus on the blocks of $P_k$ that have rank $3$ or $4$. We claim that all these blocks can be covered by a \emph{position rectangle}, to be explained below. Suppose the $(i,j)$-th and the $(i',j')$-th blocks, denoted $P_k^{(i,j)}$ and $P_k^{(i',j')}$, have rank $3$ or $4$; then they have PSD-rank 2, where we use the facts that $\prank(P_k)\le2$ and that $\prank(A)\geq\sqrt{\rank(A)}$ for any nonnegative matrix $A$~\cite{gouveia2013lifts}. Note that it holds that
\begin{equation}\label{eq:2+2}
\prank\begin{pmatrix}\begin{bmatrix}
P_k^{(i,j)} & *\\
0 & P_k^{(i',j')}
\end{bmatrix}\end{pmatrix}=
\prank\begin{pmatrix}\begin{bmatrix}
P_k^{(i,j)} & 0\\
* & P_k^{(i',j')}\\
\end{bmatrix}\end{pmatrix}=4,
\end{equation}
where the star can be any $n\times n$ nonnegative matrix. Since $\prank(P_k)=2$, the locations of the blocks of $P_k$ with rank $3$ or $4$ have to be well-organized, and the patterns in Eq.\eqref{eq:2+2} cannot occur. Let $A = \{i\in[n]:\exists j\in[n] \text{ such that the }(i,j)\text{-th block has rank 3 or 4}\}$ and $B = \{j\in[n]:\exists i\in[n] \text{ such that the }(i,j)\text{-th block has rank 3 or 4}\}$. The observation given by Eq.\eqref{eq:2+2} then implies that $A\cap B=\emptyset$. Therefore, calling the set $A\times B$ a \emph{position rectangle}, it covers all the positions of the blocks of $P_k$ with rank $3$ or $4$; note also that the position rectangle does not contain any diagonal blocks.
We now consider the corresponding position rectangles of all $P_k$. These rectangles may overlap, but they need to cover all the off-diagonal blocks of $\tilde{Q}_2$, because for each off-diagonal block there exists a $k_0\in[r]$ such that the corresponding block of $P_{k_0}$ has rank $3$ or $4$. Therefore, $r$ is at least the minimum number of monochromatic-$1$ rectangles needed to cover all the $1$s in the communication matrix of the inequality function, which means $r\geq \log n$~\cite{Nisan97}. This contradicts the assumption $r<\log n$ and completes the proof.
\end{proof}
Therefore, to compensate for the single-qubit shortage of quantum resource in generating $\tilde{Q}_2$, one has to consume roughly $\log\log n$ bits of classical resource, even with a quantum capability of one qubit. Note that here $n$ can be any positive integer, in sharp contrast with the example in Eq.\eqref{eq:diagonal}.
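To see why the inequality function enters the argument, note that position rectangles $A\times B$ with $A\cap B=\emptyset$ are exactly what the proof extracts from each $P_k$, and $\Theta(\log n)$ of them are necessary and sufficient to cover all off-diagonal positions of an $n\times n$ grid. The following Python sketch (our illustration, not part of the original argument) constructs such a cover with $2\lceil\log_2 n\rceil$ rectangles, one pair per bit:
\begin{verbatim}
import math

n = 16
nbits = math.ceil(math.log2(n))
rects = []
for b in range(nbits):
    A0 = [i for i in range(n) if not (i >> b) & 1]   # bit b of i is 0
    B1 = [j for j in range(n) if (j >> b) & 1]       # bit b of j is 1
    rects += [(A0, B1), (B1, A0)]                    # A and B are disjoint

covered = {(i, j) for A, B in rects for i in A for j in B}
assert covered == {(i, j) for i in range(n) for j in range(n) if i != j}
print(len(rects))    # 2*ceil(log2 n) = 8 rectangles for n = 16
\end{verbatim}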
In fact, using similar techniques, we can strengthen this conclusion in two different ways. These facts about $\tilde{Q}_2$ clearly reveal the rich mathematical structure of classical-quantum hybrid protocols and the $k$-block positive semi-definite rank. Indeed, the first corollary below shows that when $n$ is large, even if the quantum capability is a qutrit, i.e., only one dimension smaller than two qubits, any classical-quantum hybrid protocol that produces $\tilde{Q}_2$ still needs a large amount of classical resource.
\begin{corollary}\label{corollary1}
${{\tt rank}_{\tt psd}^{(3)}}(\tilde{Q}_2)\geq \log n$.
\end{corollary}
\begin{proof} The proof is almost the same as that of the previous proposition, except that now the blocks $P_k^{(i,j)}$ and $P_k^{(i',j')}$ introduced above can have PSD-rank 2 or 3; the patterns in Eq.\eqref{eq:2+2} still cannot exist. Therefore, the proof still works.
\end{proof}
Meanwhile, the following corollary implies that for any positive integer $k$, there always exist classical correlations $P$ such that the cost of a purely quantum scheme sampling $P$ is $k$ qubits, but if the quantum capability is $k-1$ qubits, i.e., a single qubit short of a purely quantum scheme, then any classical-quantum hybrid protocol sampling $P$ needs a large amount of classical resource.
\begin{corollary}\label{corollary2}
For any positive integer $k\geq2$, ${{\tt rank}_{\tt psd}^{(2^{k-1})}}(\tilde{Q}_k)\geq \log n$.
\end{corollary}
\begin{proof} We prove this by induction. First, according to Proposition \ref{ex1}, we know that the claim is true when $k=2$. Suppose it holds when $k=i_0$, i.e., ${{\tt rank}_{\tt psd}^{(2^{i_0-1})}}(\tilde{Q}_{i_0})\geq \log n$; we now focus on ${{\tt rank}_{\tt psd}^{(2^{i_0})}}(\tilde{Q}_{i_0+1})$. Since $\tilde{Q}_{i_0+1}$ can be expressed in a form similar to Eq.\eqref{eq:block}, for convenience we again use the term the $(i,j)$-th block for the corresponding submatrix, except that now it is not $\tilde{q}_{i,j}\tilde{Q}_1$ but $\tilde{q}_{i,j}\tilde{Q}_{i_0}$. Again we suppose $\tilde{Q}_{i_0+1}=\sum_{k=1}^rP_k$, where each $P_k$ is a nonnegative matrix with $\prank(P_k)\leq 2^{i_0}$ for $k\in[r]$, and we need to prove that $r\geq \log n$.
Suppose $r<\log n$. Then for any $i\neq j$, there must be an integer $k_0\in[r]$ such that the $(i,j)$-th block of $P_{k_0}$, denoted $P_{k_0}^{(i,j)}$, has PSD-rank larger than $2^{i_0-1}$. If this were not true, then $\sum_{k=1}^rP_{k}^{(i,j)}$, which is actually $\tilde{q}_{i,j}\tilde{Q}_{i_0}$, could be written as a sum of $r<\log n$ nonnegative matrices, each with PSD-rank not larger than $2^{i_0-1}$, contradicting the assumption that ${{\tt rank}_{\tt psd}^{(2^{i_0-1})}}(\tilde{Q}_{i_0})\geq \log n$.
Then again we fix a $k\in[r]$ and look at the blocks of $P_k$ with PSD-rank larger than $2^{i_0-1}$. By an observation similar to Eq.\eqref{eq:2+2}, these special blocks of $P_k$ appear in the same pattern as the blocks with rank $3$ or $4$ in the case of $\tilde{Q}_2$, and their positions can also be covered by a position rectangle. Therefore, a similar argument proves that we must have $r\geq\log n$.
\end{proof}
\subsection{The quantum-classical hybrid}
In classical-quantum hybrid protocols, the main restriction on exploiting quantum power is the size of the available quantum states. Within the quantum capability, we have the freedom to control and manipulate any quantum state. Particularly, when producing a classical correlation, depending on the classical sampling result $i$ of the first stage, we are able to ask for any corresponding quantum state $\rho_i$. However, sometimes this kind of freedom is still expensive. We therefore now consider a new hybrid protocol with a more rigorous restriction: only one quantum state, independent of the classical messages, is available to the players, so the classical-quantum hybrid protocols introduced above no longer work. Since the quantum state is fixed, we can choose its preparation as the first action, and hence call the new protocol a \emph{quantum-classical hybrid} one.
Given a single copy of a shared quantum state, say $\rho$, one may think of utilizing it in the following natural way: based on the shared state, \alice and \bob produce a classical correlation $P'$. After sampling $x'$ and $y'$ according to $P'$, both of them perform proper local classical samplings accordingly and then give their outputs $x$ and $y$, hoping that the final output is distributed exactly according to the target $P$. However, it can be argued that this is not possible in general. Indeed, since the second stage is a classical local sampling for each party, each operation can be regarded as a special form of quantum operation. If the above protocol were possible, each party could merge this special quantum operation into the local quantum operation he/she performs when producing $P'$, resulting in a valid composite quantum operation. Therefore, based on the original seed quantum state of size $s$, \alice and \bob would be able to generate $P$ directly, which is a contradiction.
Due to this observation, one may wonder whether, under such a rigorous restriction on the available quantum resource, quantum mechanics can still make an essential contribution to this task. It turns out that the answer is again affirmative. To explain why this is the case, we first recall two useful facts.
First, if we choose all the bipartite quantum states $\rho_i$ involved in a classical-quantum hybrid protocol to be pure, we still have the same power in generating classical correlations, with the quantum capability unchanged~\cite{sikora2016minimum}. Second, we also need the following well-known result by Nielsen.
\begin{fact}\cite{nielsen1999conditions}\label{fact:nielsen}
Let $\ket\Psi$ and $\ket\Phi$ be two $d\times d$ bipartite pure quantum states, and let $\lambda_{\Psi}$ and $\lambda_{\Phi}$ be the vectors of their Schmidt coefficients, respectively. Then $\ket\Psi$ can be transformed to $\ket\Phi$ using local operations and classical communication (LOCC) if and only if $\lambda_{\Psi}$ is majorized by $\lambda_{\Phi}$.
\end{fact}
Suppose $\lambda_{\Psi}=(\lambda_{\Psi,1},...,\lambda_{\Psi,d})$ and $\lambda_{\Phi}=(\lambda_{\Phi,1},...,\lambda_{\Phi,d})$ are real $d$-dimensional vectors. We say $\lambda_{\Psi}$ is majorized by $\lambda_{\Phi}$ if for any $k\in[d]$,
\[\sum_{i=1}^{k}\lambda_{\Psi,i}^{\downarrow}\le \sum_{i=1}^{k}\lambda_{\Phi,i}^{\downarrow},\]
with equality holding when $k=d$, where $\downarrow$ indicates that the entries are sorted in descending order.
For example, if \alice and \bob share $s$ Einstein-Podolsky-Rosen (EPR) pairs, i.e., pairs of qubits in a maximally entangled state, then as a whole bipartite pure state the corresponding vector of Schmidt coefficients is $\lambda_{s\text{-EPR}}=(2^{-s},2^{-s},...,2^{-s})$. Then, for any $2^s\times 2^s$ pure quantum state $\ket\Phi$, it is easy to check that $\lambda_{s\text{-EPR}}$ is majorized by $\lambda_{\Phi}$.
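The majorization criterion is straightforward to test in code. Below is a minimal Python sketch (ours; \texttt{majorized} is a hypothetical helper name) confirming that the flat $s$-EPR Schmidt vector is majorized by an arbitrary probability vector of length $2^s$:
\begin{verbatim}
import numpy as np

def majorized(x, y):
    """True iff x is majorized by y (nonnegative entries, equal sums)."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + 1e-12))

s = 3
lam_epr = np.full(2**s, 2.0**-s)         # Schmidt vector of s EPR pairs
rng = np.random.default_rng(3)
lam_phi = rng.dirichlet(np.ones(2**s))   # an arbitrary Schmidt vector
print(majorized(lam_epr, lam_phi))       # always True
\end{verbatim}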
With the above two facts, we can design a quantum-classical hybrid protocol generating a target classical correlation $P$ as follows. Suppose an optimal classical-quantum hybrid protocol generating $P$ corresponds to a decomposition $P=\sum_{i\in I}p_iP_i$, where for any $i\in I$, $P_i$ can be produced quantumly using a bipartite quantum state $\rho_i$ within the quantum capability $s$. According to the above discussion, we can assume that all the $\rho_i$ are pure. Then, in a quantum-classical hybrid protocol, \alice and \bob first share $s$ EPR pairs, which is within the quantum capability. Next, they sample an integer $i\in I$ classically according to the distribution $\{p_i\}$. After obtaining the shared $i$, they transform the $s$ EPR pairs into $\rho_i$ using LOCC. According to Fact \ref{fact:nielsen}, this can be accomplished with certainty, though it requires some classical communication. They are then able to sample $P_i$ by performing local quantum operations on $\rho_i$. It is not hard to see that the overall output is exactly a sample of $P$, as in a classical-quantum hybrid protocol.
It can be seen that the resource consumption of a quantum-classical hybrid protocol is quite similar to that of the corresponding classical-quantum hybrid protocol, except that some extra classical communication is needed in the part that transforms the $s$ EPR pairs into $\rho_i$, which turns out to be at most $2^s-1$ bits~\cite{nielsen1999conditions}. Therefore, we have the following conclusion.
\begin{prop}\label{quantum-and-classical}
Suppose $P$ is a classical correlation with $\prank(P)>2^s$, where $s$ is the quantum capability. Then the classical communication needed in a quantum-classical hybrid protocol to sample $P$ is at most $\lceil\log_2{\tt rank}_{\tt psd}^{(2^s)}(P)\rceil+2^s-1$ bits.
\end{prop}
Considering that for state-of-the-art technology $s$ is still quite small, and that classical communication is relatively cheap, the performance of a quantum-classical hybrid protocol is comparable to that of the corresponding classical-quantum protocol, even though it suffers from a more rigorous restriction on access to quantum resources.
\section{The advantages of shared entanglement over shared randomness in communication complexity}
As mentioned before, a fundamental open problem in communication complexity theory is to exhibit and prove the advantages of shared entanglement over shared randomness in computing Boolean functions. Though hybrid protocols for generating classical correlations deal with a different and simpler task, they provide us with an angle from which to examine the advantages of shared entanglement over shared randomness in communication protocols.
For this, we now consider and compare the following two specific settings. The mission is still to sample a classical correlation $P$. In the two settings, \alice and \bob first share two different resources of the same size: one is an entangled quantum state, and the other is public randomness. We set the amount of shared resources in such a way that, to fulfill the task, they may need extra computational resource, which we take to be quantum communication. Therefore, one of the two settings is actually a purely quantum protocol, while the other is a classical-quantum hybrid protocol. We compare the amount of quantum communication needed in the second stage. Clearly, this is a reasonable way to compare the computational power of the shared entanglement and the public randomness involved in the first stage.
More specifically, suppose $P\in \mbR_+^{n\times m}$ is the target classical correlation, and let the common size of the initial shared resources be $\lceil \log_2 \prank(P) \rceil$ bits or qubits. Then in the purely quantum protocol, the quantum communication needed in the second stage is zero, as the quantum state shared in the first stage is already sufficient to sample $P$. As a result, to compare the two settings, the remaining problem is to estimate how much quantum communication is needed in classical-quantum hybrid protocols. For convenience, we denote this quantity by $t$ qubits.
We immediately have trivial lower and upper bounds for $t$. First, if $\prank(P)<\rank_+(P)$, which is usually the case, then $t>0$. Second, \alice and \bob can choose to throw away the shared randomness and generate $P$ from scratch in the second stage, at a cost of $\lceil \log_2 \prank(P)\rceil$ qubits. Therefore, it holds that
\begin{equation}\label{eq:trivialbounds}
t\leq \lceil \log_2 \prank(P)\rceil.
\end{equation}
Actually, we can prove the following result, which provides a nontrivial lower bound for $t$.
\begin{lemma} In a classical-quantum hybrid protocol that generates $P\in \mbR_+^{n\times m}$, suppose the costs of the first and the second stages are $c$ bits and $s$ qubits respectively. Then it holds that
\begin{equation}
2s+c\geq\lceil \log_2 \rank(P)\rceil.
\end{equation}
\end{lemma}
\begin{proof}According to the structure of classical-quantum hybrid protocols, we have $P=\sum_{i=1}^{2^c}P_i$ with $\prank(P_i)\leq2^s$ for any $i\in[2^c]$. Then, using the relation $\prank(A)\geq\sqrt{\rank(A)}$ for any nonnegative matrix $A$, it holds that $\rank(P_i)\leq2^{2s}$. Meanwhile, we also have
\begin{equation}
\rank(P)\leq\sum_{i=1}^{2^c}\rank(P_i)\leq 2^{2s+c},
\end{equation}
which concludes the proof.
\end{proof}
Recall that in our setting we set $c$ to be $\lceil \log_2 \prank(P) \rceil$, hence the above lemma implies the following fact.
\begin{corollary}\label{coro:lowerboundforT}
\begin{equation}
t\geq \frac{1}{2}\left(\lceil \log_2 \rank(P)\rceil-\lceil \log_2 \prank(P) \rceil\right).
\end{equation}
\end{corollary}
Note that there exist nontrivial nonnegative matrices $P$ such that $\prank(P)=\sqrt{\rank(P)}$~\cite{lee2017some}. If we choose such a $P$ as our target classical correlation, the result given by Corollary \ref{coro:lowerboundforT} becomes
\begin{equation}
t\geq \frac{1}{2}\lceil \log_2 \prank(P)\rceil.
\end{equation}
This indicates that the trivial upper bound in Eq.\eqref{eq:trivialbounds} is tight up to a factor of $1/2$.
\section{Conclusion}
Motivated by the fact that the scale of near-term quantum computing is quite limited, in this paper we propose two kinds of hybrid protocols that combine classical and quantum power to generate large-scale classical correlations. By looking into the connections between these two models, we show that their performances are close; thus we can focus on the more flexible of the two, i.e., the model of classical-quantum hybrid protocols. Particularly, we show that this kind of protocol can be fully characterized by the new concepts of $k$-block positive semi-definite rank and $k$-block positive semi-definite factorization that we propose. Through specific examples, we show that hybrid protocols have rich mathematical structures, which, from two different viewpoints, indicate the remarkable quantum advantages in generating classical correlations. Indeed, we witness cases where, in order to compensate for the shortage of a single qubit of quantum resource, a large amount of classical resource has to be consumed. Meanwhile, by comparing two specific settings with the same amount but different kinds of beforehand shared resources, we may gain a better understanding of the different powers of shared entanglement and public randomness in communication complexity theory.
\begin{acknowledgments}
We thank Xun Gao and Zhengfeng Ji for helpful comments. X.L. and Z.W. are supported by the National Key R\&D Program of China, Grant No. 2018YFA0306703 and the start-up funds of Tsinghua University, Grant No. 53330100118. This work has been supported in part by the Zhongguancun Haihua Institute for Frontier Information Technology. P.Y. is supported by the National Key R\&D Program of China 2018YFB1003202, National Natural Science Foundation of China (Grant No. 61972191), the Fundamental Research Funds for the Central Universities 0202/14380068, a China Youth 1000-Talent grant and Anhui Initiative in Quantum Information Technologies Grant No. AHY150100.
\end{acknowledgments}
\bibliographystyle{alpha}
\end{document}
\section{Introduction}\label{sec-introd}
In its final fate, a star collapses under its own gravity when the internal pressure from nuclear fuel is depleted, and the quantum pressure of fermions comes to the rescue. If the mass of the star is below $0.7$ solar masses, the degeneracy pressure of neutrons alone would be able to stop the collapse~\cite{tov1,tov2}. Effects of the repulsive nuclear force help support the neutron star up to higher masses $>1.4 M_{\odot}$. When a star is more massive than the upper mass limit of the neutron star, it would eventually collapse into a black hole. However, there is a possibility that under extreme pressure and density, the quarks within hadrons become effectively deconfined from the localized hadrons but are still confined by gravity within the star. The deconfined phase of quarks could generate a larger pressure and sustain even more massive neutron stars, or even quark stars.
Even in the deconfined phase, quarks can still form bound states, the multiquark states, via the remaining Coulomb-like strong interaction mediated by unconfined gluons. Observations of multiquark candidates such as pentaquarks and tetraquarks have accumulated for decades; see e.g. Ref.~\cite{Aaij:2020fnh} for the latest report. It is only natural to imagine an abundance of multiquarks in the core of dense stars, where the deconfined quarks are compressed tightly together. Due to the nonperturbative nature of the strong interaction, the difficulty of the lattice QCD approach in dealing with finite baryon density, and the reliability issues of the MIT bag model as a tool to study the behaviour of deconfined quarks and gluons in a dense star, we use the equation of state of the deconfined nuclear matter from a holographic model as a complementary tool to investigate the properties of the dense star. There are some studies on holographic models of deconfined quark matter in dense stars, e.g. in the D3/D7 system~\cite{Ecker:2019xrw} and in the D4/D8/$\overline{\text{D8}}$ system~\cite{bch,bhp}.
Recent work~\cite{Annala:2019puf} reveals potentially two effective power-law equations of state~(EoS) interpolating between the low- and high-density EoS calculated from Chiral Effective Field Theory~(CET)~\cite{Tews:2012fj} and perturbative QCD~\cite{Andersen:2011sf,Mogliacci:2013mca}. The empirical EoS gives an adiabatic index and sound speed characteristic of the quark-matter phase, showing evidence of a quark core within the NS. In this work, we revisit the holographic model investigated in Ref.~\cite{bhp}, match the EoS of multiquark nuclear matter with the low- and high-density EoS, and demonstrate that it interpolates well between the two regions. The masses of NS with multiquark cores are consistent with current observations, allowing NS with $M\gtrsim 2 M_{\odot}$~\cite{Demorest,Antoniadis}. Depending on the colour states of the multiquark, the mass could be as high as $2.2-2.3 M_{\odot}$, still too light to be a candidate for the object recently found by LIGO/Virgo~\cite{Abbott:2020khf}, which requires a mass around $2.50-2.67 M_{\odot}$.
This work is organized as follows. Section~\ref{sec-holomq} reviews the holographic model studied in Ref.~\cite{bhp} and presents the EoS of multiquark nuclear matter. Section~\ref{sec-eosns} summarizes the EoS from CET and the piecewise polytropes used in the interpolation, as well as the EoS of the multiquark core in the high-density region. A thermodynamic analysis of the phase transition between the baryonic matter and multiquark phases is given in Section~\ref{sectPT}. The mass-radius diagram, mass-central density relation and thermodynamic properties of NS with multiquark cores are explored in Section~\ref{sec-mr}. Section~\ref{sec-con} concludes our work.
\section{Holographic multiquark and the EoS}\label{sec-holomq}
Within the framework of the gauge-gravity duality from superstring theories, bound states of quarks in the boundary gauge theory can be described holographically by strings and branes. Mesons can be expressed as a string hanging in the bulk with both ends located at the boundary of the AdS space~\cite{maldacena2}, while baryons can be represented by a D$p$-brane wrapped on the $S^{p}$ with $N_c$ strings attached and extending to the boundary of the bulk space~\cite{witb,gross&ooguri}. The gauge theory from the original AdS/CFT duality is still different from the actual gauge theory described by QCD. The gravity dual that captures most features of QCD is the Sakai-Sugimoto (SS) model~\cite{ss lowE, ss more}. In this model, hadrons naturally exist in the confined phase; however, another kind of bound state of quarks, the multiquark state, can also occur in the deconfined phase at intermediate temperatures above deconfinement~\cite{bch,bhp}. See e.g. Ref.~\cite{Burikham:2011zz} for a concise review of holographic multiquarks.
\subsection{Holographic multiquark configuration}
The configuration of the SS model consists of a D4-brane background and D8/$\overline{\text{D8}}$ flavor branes. The $N_c$ D4-branes provide a holographic description of the 4D SU($N_c$) Yang-Mills gauge theory. On the other hand, the $N_f$ D8/$N_f$ $\overline{\text{D8}}$ flavor branes provide a description of the confinement/deconfinement phase transition depending on the configuration of the branes. In terms of symmetry, the $N_f$ D8/$N_f$ $\overline{\text{D8}}$ flavor branes possess the global symmetries U$(N_f)_L$ and U$(N_f)_R$, which fully describe the U$(N_f)_L$ $\times$ U$(N_f)_R$ chiral symmetry breaking when the D8 and $\overline{\text{D8}}$ are connected. At low energy, the classical solution of the field configuration on the gravity side suggests a cigar-like shape for the compactified spatial direction of the confined background. At high temperature, the cylindrically compactified background spacetime with flavor branes in parallel embedding is preferred; therefore the broken chiral symmetry is restored and the corresponding nuclear matter phase becomes deconfined~\cite{Aharony_chiral}.
In the deconfined phase~\cite{bch}, there are three possible configurations, as shown in Fig.~\ref{phase}: (i) the parallel configuration of the D8-branes and $\overline{\text{D8}}$-branes, representing the $\chi_S$-QGP~(chiral-symmetric quark-gluon plasma); (ii) the connected D8-$\overline{\text{D8}}$ without sources in the bulk, representing the vacuum with broken chiral symmetry; and (iii) another stable configuration, the multiquark phase, consisting of the connected D8-$\overline{\text{D8}}$ branes with a D4-brane as the baryon vertex, submerged and localized in the middle of the D8 and $\overline{\text{D8}}$. The baryon vertex can be attached to radial hanging strings that represent the colour charge of the multiquark configuration.
\begin{figure}
\centering
\includegraphics[width=0.45 \textwidth]{phase_config} \caption{Different
configurations of D8 and $\overline{\text{D8}}$-branes in the Sakai-Sugimoto model that are dual to the phases of
(a) ${\chi}_S$-QGP, (b) vacuum and (c) multiquark phase.~\cite{bch} } \label{phase}
\end{figure}
\subsection{Equation of state} \label{sec-eos}
Holographically, the grand canonical potential and the chemical potential of the multiquark matter are given by~\cite{bhp}
\begin{eqnarray}
\Omega &=& \int^{\infty}_{u_{c}}du {\displaystyle{\left[
1-\frac{F^{2}}{f(u)(u^{8}+u^{3}n^{2})}\right]^{-\frac{1}{2}}}}
\frac{u^{5}}{\sqrt{u^{5}+n^{2}}},\; \label{eq:Grand}\\
\mu &=& \int^{\infty}_{u_{c}}du {\displaystyle{\left[
1-\frac{F^{2}}{f(u)(u^{8}+u^{3}n^{2})}\right]^{-\frac{1}{2}}}}\frac{n}{\sqrt{u^{5}+n^{2}}}\nonumber\\
& &+ \frac{1}{3}u_{c}\sqrt{f(u_{c})}+n_{s}(u_{c}-u_{T}) \label{eq:mu}
\end{eqnarray}
\noindent respectively, where $u$ is a radial coordinate of the background metric of
the bulk spacetime in the SS model in a deconfined phase at finite temperature $T$, $f(u)\equiv 1-u_{T}^{3}/u^3,$ $ u_T=16{\pi}^2 R^3 T^2/9,$ $ R^3\equiv
\pi g_s N_c l_{s}^3,$ $l_s$ is the string length and $g_s$ is the string coupling. $u_{c}$ is the position of the baryon vertex source as shown in Fig.~\ref{phase}. $n_{s}$ is the number fraction of radial strings $k_{r}$ in units of $N_{c}$, representing the colour charges of a multiquark configuration. $n(u)$ is the baryon number density, which is a constant of the configuration given by
\begin{eqnarray}\label{d_const}
n(u)&=&\frac{u\hat{a}'_{0} }{\sqrt{f(u)(x'_4
)^2+u^{-3}(1-(\hat{a}'_{0})^2)}}=\text{const.}
\end{eqnarray}
where $x_4$ is the compactified coordinate transverse to the probe D8/$\overline{\text{D8}}$ branes with periodicity $2\pi R$, and $\hat{a} = 2\pi \alpha^{\prime}\hat{A}/(R\sqrt{2 N_{f}})$ is a rescaled version of $\hat{A}$, the diagonal $U(1)$ gauge field, with $\alpha^{\prime}$ the universal Regge slope. The position $u_{c}$ of the vertex is determined from the
equilibrium condition of the D8-D4-strings configuration~(see
Appendix A of Ref.~\cite{bch}). Another constant of the configuration is
\begin{eqnarray}\label{x4}
(x^{\prime}_{4})^{2}& = & \frac{1}{u^{3}f(u)}\Big[
\frac{f(u)(u^{8}+u^{3}n^{2})}{F^{2}}-1 \Big]^{-1}=\text{const.},
\label{eq:x4}
\end{eqnarray}
where $F$ is a function of $u_c$, $n$, $T$ and $n_s$, given by
\begin{equation}
F^{2} = u_{c}^{3} f(u_{c}) \left( u_{c}^{5}+n^2 -\frac{n^2
\eta_{c}^{2}}{9 f(u_{c})}\right),
\end{equation}
where $\eta_{c}\equiv 1+\frac{1}{2}\left(\frac{u_T}{u_c}\right)^3
+3 n_s \sqrt{f(u_c)}$.
Thermodynamic relations of multiquark states can be found in Ref.~\cite{bhp}. The grand potential $G_{\Omega}$ can be written as
\begin{equation}
dG_{\Omega} = -P dV -S dT-N d\mu
\end{equation}
where the state parameters $P$, $V$, $S$,
$T$, and $N$ are the pressure, volume, entropy, temperature, and the total
number of particles of the system respectively. Since the change
of volume is not our main concern, we define the volume density of
$G_{\Omega}$, $S$ and $N$ to be $\Omega$, $s$ and $n$,
respectively. Therefore, we have, at a particular $T$ and $\mu$,
\begin{equation}
P=-G_{\Omega}/V \equiv -\Omega(T,\mu).
\end{equation}
Assuming that the multiquark states are spatially uniform, we
obtain
\begin{equation}
n=\frac{\partial P}{\partial \mu}(T,\mu).
\end{equation}
Using the chain rule,
\begin{equation}
\frac{\partial P}{\partial n}\Bigg\vert_{T}=\frac{\partial
\mu}{\partial n}\Bigg\vert_{T}\; n,
\end{equation}
so that
\begin{equation}
P(n,T,n_s)=\mu(n,T,n_s)\,n -\int_{0}^{n} \mu(n',T,n_s)\,\text{d}n',\label{pmud}
\end{equation}
where the regulated pressure is assumed to be zero when
there is no nuclear matter, i.e. $n=0$.
\subsubsection{Equation of state for multiquark}
In the limit of small $n$, the baryon chemical potential in Eqn.\eqref{eq:mu} can be approximated as
\begin{eqnarray}
\mu & \simeq & \mu_{source} + \alpha_{0} n - \beta_{0}(n_s) n^3, \label{muofd}
\end{eqnarray}
where
\begin{eqnarray}
\mu_{source}& \equiv &\frac{1}{3}u_{c}\sqrt{f(u_{c})}+n_{s}(u_{c}-u_{T})\notag \\
\alpha_{0}& \equiv &\int_{u_0}^{\infty} du
\frac{u^{-5/2}}{1-\frac{f_{0}u_{0}^8}{fu^8}} ~, \notag \\
\beta_{0}(n_s)& \equiv &\int_{u_0}^{\infty} du
\frac{u^{-5/2}}{2\sqrt{1-\frac{f_{0}u_{0}^8}{f u^{8}}}}\nonumber\\
& &\times\left[\frac{f_0
u_{0}^{3}}{fu^8-f_{0}u_{0}^{8}}
\left(1-\frac{\eta_{0}^{2}}{9f_0}-\frac{u_{0}^{5}}{u^{5}}\right)+\frac{1}{u^5}\right],\notag
\end{eqnarray}
and $u_{0}$ is the position when $x'_{4}(u_{0})=\infty$ as shown in Fig.~\ref{phase}.
By substituting Eqn.\eqref{muofd} into Eqn.\eqref{pmud}, the pressure in the limit of small $n$ can be expressed as
\begin{equation}
P\simeq \frac{\alpha_{0}}{2} n^2 -\frac{3 \beta_{0}(n_s)}{4} n^4.
\label{eq:Plow}
\end{equation}
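This closed form can be cross-checked against the defining relation Eqn.\eqref{pmud} by direct quadrature. A short Python sketch (ours, assuming \texttt{scipy}; the numerical values of $\mu_{source}$, $\alpha_0$ and $\beta_0$ below are illustrative placeholders rather than the model's actual values):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

mu_src, alpha0, beta0 = 0.33, 0.75, 1.2        # illustrative values only
mu = lambda n: mu_src + alpha0 * n - beta0 * n**3

def pressure(n):                               # Eqn. (pmud)
    integral, _ = quad(mu, 0.0, n)
    return mu(n) * n - integral

n = 0.2
print(pressure(n))                             # quadrature
print(alpha0 / 2 * n**2 - 0.75 * beta0 * n**4) # closed form; they agree
\end{verbatim}
Note that the constant $\mu_{source}$ cancels between the two terms of Eqn.\eqref{pmud}, so the regulated pressure indeed vanishes at $n=0$.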
In the limit of large $n$ and relatively small $T$,
\begin{eqnarray}
\mu & \approx & \mu_{source} + \frac{n^{2/5}}{5} \frac{\Gamma
\left(\frac{1}{5}\right)
\Gamma\left(\frac{3}{10}\right)}{\Gamma\left(\frac{1}{2}\right)}\nonumber\\
&&+\frac{u_{c}^{3} f_{c}}{10}
\left(1-\frac{\eta_{c}^{2}}{9f_c}\right)n^{-4/5} \frac{\Gamma
\left(-\frac{2}{5}\right)\Gamma\left(\frac{19}{10}\right)}{\Gamma\left(\frac{3}{2}\right)}
\label{eq:muhd}
\end{eqnarray}
\noindent where the term $u_{c}^5/n^2$ from the lower limit of integration in Eqn.\eqref{eq:mu} approaches zero as $n$ becomes very large. Again, by using Eqn.\eqref{pmud}, we obtain
\begin{equation}
P \simeq \frac{2}{35}\left( \frac{\Gamma\left(\frac{1}{5}\right)
\Gamma\left(\frac{3}{10}\right)}{\Gamma\left(
\frac{1}{2}\right)}\right) n^{7/5}.\label{eq:Phigh}
\end{equation}
Also the energy density can be found via the relation
$d\rho=\mu dn$ and the chemical potential is given by
\begin{equation}
\mu = \int_{0}^{n}\frac{1}{\eta}\left( \frac{\partial P}{\partial \eta}\right)~d\eta +\mu_{0},
\end{equation}
where $\mu_{0}\equiv \mu(n=0)$. The main results from Ref.~\cite{bhp} are summarized as
\begin{eqnarray}
P &=& a n^{2}+b n^{4}, \notag \\
\rho &=& \mu_{0}n+a n^{2}+\frac{b}{3}n^{4}, \label{eosmq1}
\end{eqnarray}
for small $n$ and
\begin{eqnarray}
P &=& k n^{7/5}, \notag \\
\rho &=& \rho_{c}+\frac{5}{2}P+\mu_{c}\left[ \left( \frac{P}{k}\right)^{5/7}-n_{c} \right]\notag \\
&&+kn_{c}^{7/5}-\frac{7k}{2}n_{c}^{2/5}\left( \frac{P}{k}\right)^{5/7}, \label{eosmq2}
\end{eqnarray}
for large $n$, respectively. For $n_{s}=0$, it is numerically determined in Ref.~\cite{bhp} that $n_{c}=0.215443, \mu_{c}=0.564374$ for large $n$, and $a=1, b=0, \mu_{0}=0.17495$ for small $n$. For $n_{s}=0.3$, we have $n_{c}=0.086666, \mu_{c}=0.490069$ for large $n$, and $a=0.375, b=180.0, \mu_{0}=0.32767$ for small $n$. The value $k=10^{-0.4}$ is valid for both cases, reflecting universal behaviour at high density. Here $n_{c}$ and $\mu_{c}$ are the number density and chemical potential at which the EoS changes between the large-$n$ and small-$n$ forms. Notably, these coefficients and exponents of the power laws in the EoS are completely determined by the holographic SS model with only two free parameters: the number fraction of hanging strings representing the colour charges, $n_{s}$, and the energy density scale~\cite{bhp}.
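For readers who wish to reproduce the results below, the dimensionless EoS (\ref{eosmq1})-(\ref{eosmq2}) with the quoted $n_{s}=0.3$ parameters can be coded directly; physical units follow after multiplying by $\epsilon_{s}$. The Python sketch below is ours; in particular, fixing $\rho_{c}$ by continuity of $\rho$ at $n=n_{c}$ is our reading of the matching (the large-$n$ branch indeed reduces to $\rho_{c}$ at $n=n_{c}$) and should be checked against Ref.~\cite{bhp}:
\begin{verbatim}
a, b, mu0 = 0.375, 180.0, 0.32767   # small-n coefficients, n_s = 0.3
k = 10.0 ** -0.4                    # large-n coefficient
nc, muc = 0.086666, 0.490069        # transition density and potential
rho_c = mu0 * nc + a * nc**2 + (b / 3.0) * nc**4   # continuity at n_c

def eos(n):
    """(P, rho) in dimensionless SS-model units at baryon density n."""
    if n <= nc:
        return (a * n**2 + b * n**4,
                mu0 * n + a * n**2 + (b / 3.0) * n**4)
    P = k * n ** 1.4                # P = k n^{7/5}
    x = (P / k) ** (5.0 / 7.0)      # (P/k)^{5/7}, i.e. n itself
    rho = (rho_c + 2.5 * P + muc * (x - nc)
           + k * nc**1.4 - 3.5 * k * nc**0.4 * x)
    return P, rho

for n in (0.05, 0.086666, 0.5, 2.0):
    print(n, *eos(n))
\end{verbatim}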
\section{EoS of the NS} \label{sec-eosns}
The structure of a neutron star can be investigated through observations and modeling of the strongly interacting hadronic matter, for both the low-density crust and the high-density core inside the star. However, in the absence of a direct first-principles calculation at densities above the nuclear matter saturation (baryon number) density $n_0 \approx 0.16 ~{\rm fm}^{-3}$, an accurate determination of the state of matter inside NS cores is still not possible. Recent observations are starting to offer empirical constraints in the opposing low-density and high-density limits of the nuclear matter inside the NS. Therefore, not only has the model-independent approach~\cite{Annala:2019puf} to the problem become feasible, but it could also provide a hint of the viable physical equations of state of the nuclear matter inside the NS.
\subsection{EoS of nuclear matter in low and intermediate density regime}
At low density, the EoS is constrained by the well-studied NS crust region~\cite{Fortin} up to the density $n_{\rm CET} \equiv 1.1n_0$, where matter is in the hadronic phase; there, chiral effective field theory (CET) provides the EoS to good precision, currently better than $\pm 24\%$~\cite{Gandolfi,Tews:2012fj}.
For the very low density crust, the EoS can be obtained from Table 7 of Ref.~\cite{Hebeler_2013}; it can be fitted with the series of polytropes below,
\begin{eqnarray}
P(\rho) &=& \kappa_a \rho ^{\Gamma_a} + \alpha_a, \; \text{for} \; 0 \leq \rho \leq \rho_a,\notag \\
P(\rho) &=& \kappa_b \rho ^{\Gamma_b}, \; \text{for} \; \rho_a \leq \rho \leq \rho_b,\notag \\
P(\rho) &=& \kappa_c \rho ^{\Gamma_c}, \; \text{for} \; \rho_b \leq \rho \leq \rho_c,\notag \\
P(\rho) &=& \kappa_d \rho ^{\Gamma_d}, \; \text{for} \; \rho_c\leq \rho \leq \rho_d. \notag \\ \label{EoS7}
\end{eqnarray}
where $(\kappa_a,\Gamma_a,\alpha_a)$ = (280.00, 2.0000, $-6.0000\times 10^{-21}$) and $(\kappa_b,\Gamma_b)$ = ($2.15247\times 10^{-3}$, 1.22213), $(\kappa_c,\Gamma_c)$ = ($9.08176\times 10^{-4}$, 0.622687), $(\kappa_d,\Gamma_d)$ = ($3.70286\times 10^{-4}$, 1.61786), while $(\rho_a,\rho_b,\rho_c,\rho_d)$ = ($2.66284\times 10^{-7}$, 0.237033, 2.46333, 75.1364) for the pressure and density expressed in units of $\text{MeV fm}^{-3}$ and $\text{MeV fm}^{-3}c^{-2}$, respectively.
For slightly higher density in the range $75.1364~\text{MeV/fm}^{3}<\rho c^{2}<165.3~\text{MeV/fm}^{3}$ of Table 5 of Ref.~\cite{Hebeler_2013}, the energy density and pressure of the nuclear matter can be expressed as
\begin{eqnarray}
\rho(\bar{n})c^{2}/T_{0} &=& a_{0}\bar{n}^{5/3}+b_{0}\bar{n}^{2}+c_{0}\bar{n}^{\gamma +1},
\label{eq:E_Nucl}
\end{eqnarray}
where $\bar{n}=n/n_{0}$ and
\begin{eqnarray}
P(\bar{n})/T_{0} &=& \frac{2}{3}n_{0}a_{1}\bar{n}^{5/3}+n_{0}b_{1}\bar{n}^{2}+\gamma n_{0}c_{1}\bar{n}^{\gamma +1},
\label{eq:P_Nucl}
\end{eqnarray}
for $T_{0}=36.84$ MeV and dimensionless parameters $a_{0}=176.209, b_{0}=-250.992, c_{0}=100.253$. For the upper limit and the lower limit~(the blue dashed lines in Fig.~\ref{eosfig}), $(a_1, b_1, c_1)$ = (1.55468, $-2.50096$, 1.44835) and (1.33832, $-2.0337$, 1.07001) respectively.
For intermediate densities, the stiff, intermediate and soft piecewise-polytrope extensions of the EoS to higher densities from Ref.~\cite{Hebeler_2013}, each with three exponents $\Gamma_1, \Gamma_2$ and $\Gamma_3$, can be written as follows,
\begin{eqnarray}
P(\rho) &=& \kappa_1 \rho ^{\Gamma_1}, \; \text{for} \; \rho_1 \leq \rho \leq \rho_{12},\notag \\
P(\rho) &=& \kappa_2 \rho ^{\Gamma_2}, \; \text{for} \; \rho_{12} \leq \rho \leq \rho_{23},\notag \\
P(\rho) &=& \kappa_3 \rho ^{\Gamma_3}, \; \text{for} \; \rho_{23}\leq \rho \leq \rho_{max}. \notag \\ \label{eospp}
\end{eqnarray}
With mass density $\rho = m n$,
\begin{enumerate}
\item the stiff EoS (red dashed line in Fig.~\ref{eosfig}) has the exponents $(\Gamma_1, \Gamma_2, \Gamma_3) = (4.5, 5.5, 3.0)$ where $(\rho_{12}, \rho_{23}, \rho_{max}) = (1.5\rho_s, 2.0\rho_s, 3.3\rho_s)$ and $(\kappa_1, \kappa_2, \kappa_3)$ = (11.6687, 51.7666, 2.56345).
\item the intermediate EoS has the exponents $(\Gamma_1, \Gamma_2, \Gamma_3) = (4.0, 3.0, 2.5)$ where $(\rho_{12}, \rho_{23}, \rho_{max}) = (3.0\rho_s, 4.5\rho_s, 5.4\rho_s)$ and $(\kappa_1, \kappa_2, \kappa_3)$ = (2.89711, 1.30607, 1.07402).
\item the soft EoS has the exponents $(\Gamma_1, \Gamma_2, \Gamma_3) = (1.5, 6.0, 3.0)$ where $(\rho_{12}, \rho_{23}, \rho_{max}) = (2.5\rho_s, 4.0\rho_s, 7.0\rho_s)$ and $(\kappa_1, \kappa_2, \kappa_3)$ = (0.0321845, 2.63607, 0.572502),
\end{enumerate}
when the pressure and density are in units of $\text{GeV fm}^{-3}$ and $\text{GeV fm}^{-3}c^{-2}$, respectively~(continuity of $P(\rho)$ at the matching densities fixes the density unit as GeV rather than MeV). The density scale is $\rho_{s}c^{2}=150.273$~MeV\,fm$^{-3}$.
These equations of state will be used to construct the $P-\mu$ diagram in Fig.~\ref{pmufig} for thermodynamic comparison with the multiquark phase.
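For concreteness, the piecewise polytropes of Eq.~(\ref{eospp}) can be evaluated as in the Python sketch below (ours; both $P$ and $\rho$ are taken in GeV units, which continuity of $P(\rho)$ at the matching densities confirms):
\begin{verbatim}
RHO_S = 0.150273   # GeV fm^-3 c^-2

EOS = {  # kind: (Gamma_i), (kappa_i), (rho_12, rho_23, rho_max)/rho_s
    "stiff": ((4.5, 5.5, 3.0), (11.6687, 51.7666, 2.56345),
              (1.5, 2.0, 3.3)),
    "intermediate": ((4.0, 3.0, 2.5), (2.89711, 1.30607, 1.07402),
                     (3.0, 4.5, 5.4)),
    "soft": ((1.5, 6.0, 3.0), (0.0321845, 2.63607, 0.572502),
             (2.5, 4.0, 7.0)),
}

def pressure(rho, kind="stiff"):
    """P(rho) in GeV/fm^3 for rho in GeV/fm^3/c^2, up to rho_max."""
    gammas, kappas, bounds = EOS[kind]
    r12, r23, rmax = (x * RHO_S for x in bounds)
    if rho > rmax:
        raise ValueError("density above the validity range")
    i = 0 if rho <= r12 else (1 if rho <= r23 else 2)
    return kappas[i] * rho ** gammas[i]

# continuity check at the first matching density of the stiff EoS
print(pressure(1.5 * RHO_S * 0.999), pressure(1.5 * RHO_S * 1.001))
\end{verbatim}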
\subsection{EoS of SS model for high density}
At high densities inside the NS core, baryons are so tightly compressed that the quarks and gluons of neighbouring baryons overlap; individual quarks and gluons become deconfined from single baryons, and yet the interaction can still be sufficiently strong. The deconfined gluons and quarks can then form multiquark bound states. The multiquarks can possess colour charges in the deconfined phase while keeping the star a colour singlet in totality, similar to an ionized gas of positive and negative electric charges with total neutrality. In the multiquark model of Ref.~\cite{bch}, the colour charge is quantified by the number fraction of hanging radial strings, $n_{s}$. At extreme densities and low temperatures~(less than a trillion Kelvin), the deconfined phase of quarks and gluons should be in the multiquark phase rather than a pure gas of weakly interacting quarks and gluons, for which perturbative QCD~(pQCD) would be applicable.
The multiquark EoS (\ref{eosmq1}), (\ref{eosmq2}) is expressed in dimensionless form. Apart from the colour charge parameter $n_{s}$, there is only one parameter left to determine the entire behaviour of the EoS: the energy density scale $\epsilon_{s}$, which gives the physical density and pressure $\rho \epsilon_{s}, P\epsilon_{s}$. After choosing $\epsilon_{s}$, the corresponding distance scale of the SS model is fixed by $r_{0}=\left( G\epsilon_{s}/c^{4}\right)^{-1/2}$~\cite{bhp}.
The pQCD results for deconfined quarks and gluons from Refs.~\cite{Kurkela,Gorda} are also displayed for comparison in Fig.~\ref{eosfig}.
\subsection{Phase transition between confined baryonic matter and multiquark matter} \label{sectPT}
Under extreme pressure and density in a low-temperature environment~($T \lesssim 10^{12}$ K, the quark-gluon plasma phase transition temperature), baryons are compressed against one another so tightly that the quarks inside begin to move freely among neighbouring baryons. The strong interaction remains strong and possibly nonperturbative. The baryonic matter should then undergo a deconfinement phase transition to multiquark nuclear matter, where the quarks form bound states via the remaining strong interaction in the deconfined vacuum~\cite{Bergman:2007wp,bch,bhp}. Following Ref.~\cite{Hoyos:2016zke}, we compare the free energy~(essentially the negative pressure~\cite{Bergman:2007wp}) by {\it assuming} the {\it onset} value~($\mu_{0}=\mu(n=0)$) of the chemical potential~(per quark) to be the same for the baryonic and multiquark phases,
\begin{equation}
\mu_{mq,0} = \frac{\mu_{b,0}}{3}N_{q}=\mu_{q,0}N_{q}, \label{mona}
\end{equation}
where $N_{q}$ is the number of quarks in each multiquark and $\mu_{q,0}$ is the chemical potential per quark at the onset. Using the nuclear EoS, we set $\mu_{q,0}=308.55$ MeV~\cite{Hoyos:2016zke}. Since the SS model fixes $\mu_{mq,0}$ once the energy density scale $\epsilon_{s}$ is fixed, $N_{q}$ can be calculated subsequently. Under this assumption, the phase transition is first order, since there is a discontinuity in the density between the nuclear matter and multiquark phases.
The transition point can be determined from the $P-\mu$ diagram shown in Fig.~\ref{pmufig}. For chemical potentials above the phase transition value, the pressure of the thermodynamically preferred phase is larger. The multiquark EoS is presented for three choices of the energy density scale, $\epsilon_{s}=23.2037, 26, 28$ GeV/fm$^{3}$, for $n_{s}=0, 0.3$. The colourless multiquark with $n_{s}=0$ is always less thermodynamically preferred than the nuclear EoS. The choices of $\epsilon_{s}$ are in the minimal range of values that give NS masses within the constraints from recent observations. As explained in the subsequent section, the values of $\epsilon_{s}$ are also chosen so that the EoS interpolates reasonably well between the nuclear CET EoS at low densities and pQCD at high densities.
Notably, the multiquark EoS with $\epsilon_{s}=23.2037, 26$ GeV/fm$^{3}$ almost overlaps the stiff nuclear EoS of the CET. With $\epsilon_{s}=26$ GeV/fm$^{3}$ and $n_{s}=0.3$, the multiquark phase is thermodynamically preferred over the stiff nuclear EoS above the transition point at $\mu_{q}=374.0$ MeV. For $\epsilon_{s}=26-28$ GeV/fm$^{3}$, the SS model predicts that, in order to interpolate between the known stiff EoS at low density and a high-density region consistent with the conformal EoS, the core of a massive NS should contain multiquarks, each composed of $N_{q}\simeq 25-30$ quarks, corresponding to roughly $8-10$ baryons.
Fig.~\ref{pmufig} shows that the multiquark phase with $n_{s}=0$ is always less thermodynamically preferred than all of the nuclear equations of state, and therefore we will not consider this possibility further. On the other hand, the multiquark EoS for $n_{s}=0.3$ is almost identical to the stiff nuclear EoS around the transition point, demonstrating that it is a good extension of the stiff nuclear EoS to higher densities~(for $\epsilon_{s}\simeq 25$ GeV/fm$^{3}$, the multiquark and stiff EoS overlap almost completely). For the intermediate and soft nuclear EoS, the multiquark phase is less thermodynamically preferred given the assumption (\ref{mona}). Here and henceforth we consider only the possibility of a NS with a multiquark core connected to a {\it stiff} nuclear crust.
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{pmu}
\caption{$P-\mu$ diagrams of multiquark matter compared to the stiff, intermediate and soft nuclear matter when the onset chemical potential per quark is set to $308.55$ MeV. The energy density scale of the multiquark SS model is set to $\epsilon_{s}=23.2037, 26, 28$~GeV/fm$^{3}$~(three blue dashed lines), respectively.}
\label{pmufig}
\end{figure}
\subsection{Matching of holographic multiquark EoS with low-density nuclear matter EoS}
The results, Figs. 1 and 2 of Ref.~\cite{Annala:2019puf}, suggest the possibility of a double-power-law EoS interpolating between the high- and low-density EoS given by pQCD and CET. One such candidate can be found in the early work on the holographic SS model~\cite{bhp}, where the multiquark phase is shown to dominate at large density and low temperature. By adjusting $\epsilon_{s}=23.2037$ GeV/fm$^{3}$ to give the transition density $\rho_{c}c^{2}=0.8028$ GeV/fm$^{3}$ suggested by the turning point of the EoS in Fig. 1 of Ref.~\cite{Annala:2019puf}, a good interpolating EoS of the $n_{s}=0.3$ multiquark matter given by (\ref{eosmq1}), (\ref{eosmq2}) can be achieved, as shown in Fig.~\ref{eosfig}. The green dashed line is the average empirical EoS connecting the pQCD and nuclear phases. As shown in Sect.~\ref{sectPT}, even though this EoS interpolates well between low and high density, it is not thermodynamically preferred over the nuclear phases suggested by CET. By increasing the energy density scale slightly to $\epsilon_{s}=26$ GeV/fm$^{3}$, giving the transition density $\rho_{c}c^{2}=0.8996$ GeV/fm$^{3}$ also depicted in Fig.~\ref{eosfig}, the multiquark phase becomes thermodynamically preferred over the stiff nuclear phase and still provides a good interpolation between low and high density.
\begin{figure}
\centering
{\includegraphics[width=0.49\textwidth]{PD03} }
\caption{EoS of multiquark interpolating between nuclear matter and extreme density region for $\epsilon_{s}=23.2037, 26$ GeV/fm$^{3}$, notice the density jump at the phase transition.}
\label{eosfig}
\end{figure}
\section{MR diagram of NS with multiquark core} \label{sec-mr}
The Tolman-Oppenheimer-Volkoff (TOV) equations~\cite{tov1,tov2,bhp} are used to determine the mass profile of the NS,
\begin{eqnarray}
\frac{dP}{dr}&&=-\frac{(\rho c^{2} +P)}{2}\frac{8\pi Pr^{3}+2M(r)}{r(r-2M(r))}, \notag \\
\frac{dM(r)}{dr}&&=4\pi \rho r^{2},
\end{eqnarray}
where $M(r)$ is the accumulated mass of the star up to radius $r$. In the determination of the mass-radius diagram shown in Fig.~\ref{mrfig}, we use the multiquark EoS given in (\ref{eosmq1}), (\ref{eosmq2}) for the high-density region. As the density and pressure decrease within the star and reach the transition point with the stiff EoS, the piecewise polytrope EoS (\ref{eospp}) is adopted until the low-density region is reached, where the EoS given in (\ref{eq:P_Nucl}), (\ref{eq:E_Nucl}), and (\ref{EoS7}) is used subsequently. From Fig.~\ref{eosfig}, we focus our consideration on three scenarios: \\
(i) $n_{s}=0.3,\epsilon_{s}=26~(28)$ GeV/fm$^{3}$ with transition to stiff at $\rho_{ms(2)}c^{2} =0.4678~(0.4389)$ GeV/fm$^3$ and $\rho_{mns(2)}c^{2} =0.2891~(0.2734)$ GeV/fm$^3$~(see Fig.~\ref{eosfig} where only $\epsilon_{s}=26$ GeV/fm$^{3}$ case is shown); \\
(ii) pure multiquark star with $n_{s}=0.3$ at $\epsilon_{s}=23.2037$ GeV/fm$^{3}$; \\
(iii) pure multiquark star with $n_{s}=0.3$ at $\epsilon_{s}=26$ GeV/fm$^{3}$. \\
The last two scenarios are hypothetical multiquark stars with no baryonic crust. Scenarios (ii) and (iii) are possible if the central temperature of the star is sufficiently high that the surface temperature is still above the nuclear-multiquark phase transition temperature.
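A minimal TOV integrator in geometric units $G=c=1$, matching the form of the equations above, is sketched below in Python (ours, assuming \texttt{scipy}; \texttt{eos\_rho} is a user-supplied function returning the energy density $\rho(P)$ assembled from the EoS of the previous sections):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def tov_mr(P_c, eos_rho, r_max=30.0):
    """Integrate outwards from central pressure P_c; return (M, R)."""
    def rhs(r, y):
        P, M = y
        rho = eos_rho(P)
        dP = -(rho + P) * (M + 4.0 * np.pi * r**3 * P) \
             / (r * (r - 2.0 * M))
        return [dP, 4.0 * np.pi * rho * r**2]

    surface = lambda r, y: y[0] - 1e-12 * P_c    # stop where P ~ 0
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(rhs, (1e-6, r_max), [P_c, 0.0], events=surface,
                    rtol=1e-8, atol=1e-12)
    return sol.y[1, -1], sol.t[-1]   # mass and radius, geometric units
\end{verbatim}
Scanning the central pressure \texttt{P\_c} then traces out mass-radius curves of the kind shown in Fig.~\ref{mrfig}.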
From Fig.~\ref{mrfig}, for a NS containing a multiquark core with $n_{s}=0.3$ continuing to the stiff EoS, the maximum masses are $\sim 2.2 M_{\odot}$ with radii around 11.8-11.5 km for $\epsilon_{s}=26-28$ GeV/fm$^{3}$; the larger energy density scale corresponds to the smaller radius. For a pure multiquark star with no baryonic crust and $n_{s}=0.3$, the maximum mass for $\epsilon_{s}=23.2037~(26)$ GeV/fm$^3$ is $\sim 2.2~(2.1) M_{\odot}$ with radius around $11.3~(10.65)$ km, respectively. An important prediction of the SS model is the existence of NS with multiquark cores and stiff nuclear crusts in the mass range $1.7-2.2 M_{\odot}$ with radii $14.5-11.5$ km for $\epsilon_{s}=26-28$ GeV/fm$^{3}$, as depicted by the plateau-like black-red curve in the MR diagram of Fig.~\ref{mrfig}. For comparison, the NS masses with $1\sigma$ uncertainties from observations~\cite{Antoniadis,Abbott:GW190425,Abbott:GW170817,Cromartie:2019kug} are also depicted in Fig.~\ref{mrfig}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{MR_star}\\
\includegraphics[width=0.5\textwidth]{Mrho}
\caption{MR diagram and mass-central density relations of NSs and quark stars. The colour labels the nuclear phase at the center of the star. Each point corresponds to a star whose mass profile consists of successive layers of nuclear phases in order of decreasing density: multiquark, polytrope~(stiff), and CET. The hypothetical pure multiquark star has only multiquark layers. Observed NS masses are also presented for comparison.} \label{mrfig}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{MR_core}
\caption{MR diagram of multiquark core for each curve in Figure~\ref{mrfig}. } \label{mcrfig}
\end{figure}
As shown in Fig.~\ref{mcrfig}, the multiquark {\it core} at the maximum mass for $n_{s}=0.3$ multiquark matter continuing to the stiff EoS has mass and radius $1.49~(1.66) M_{\odot}$ and $8.3~(8.7)$ km for $\epsilon_{s}=26~(28)$ GeV/fm$^{3}$, respectively. For the pure multiquark star, the mass and radius of the core are the same as those of the entire star. Note that this multiquark {\it core} contains both high and low density layers governed by (\ref{eosmq2}) and (\ref{eosmq1}), respectively.
In order for the nuclear matter in the core of an NS, or the entire star, to be in the multiquark phase, the central temperature and/or density needs to be sufficiently large. The deconfinement phase transition temperature to the quark-gluon plasma at low density around $n_{0}$ is estimated from heavy-ion collisions at BNL RHIC~\cite{Arsene:2004fa} and CERN ALICE~\cite{ALICE:2017jyt} to be higher than $10^{12}$ K~(the Hagedorn temperature, $150$ MeV or $1.7 \times 10^{12}$ K). However, at larger densities, theoretical models including the holographic SS model suggest the possibility that the QCD vacuum could become deconfined and quarks could form multiquark states while the chiral symmetry is still broken~\cite{Bergman:2007wp, bch}, even at low temperature. Diquarks in the colour superconductivity model~\cite{Alford:2007xm} and other multiquark states~\cite{Jaffe:1976yi} could also form at high density and low temperature. The phase diagram of the SS model in Ref.~\cite{bch} shows a region of moderate temperature~($T<10^{12}$ K) and large $\mu$ where multiquark matter in the deconfined vacuum is thermodynamically preferred, extending down to the low temperature region. Moreover, by the analysis in Section~\ref{sectPT}, the multiquark phase is thermodynamically preferred over the confined stiff nuclear matter for sufficiently large $\mu_{q}$.
The temperature profile of the star can be calculated from the chemical potential profile within the star via the relation~(see e.g. Ref.~\cite{Burikham:2012kn}),
\begin{equation}
\frac{T(r)}{T_{0}} = \frac{\mu(r)}{\mu_{0}},
\end{equation}
where $T_{0}$ and $\mu_{0}$ are the temperature and chemical potential at a reference point, respectively. In Fig.~\ref{Tfig}, the temperature profile is plotted in units of $T_{0}$, with $\mu_{0}=(\rho_{0}+P_{0})/n_{0}$ calculated from the central values via the first law of thermodynamics. At the phase transition point, the deconfinement~(confinement) phase transition temperature between the multiquark and nuclear phases is determined by the chemical potential. The transition values for the NS with $n_{s}=0.3, \epsilon_{s}=26$ GeV/fm$^{3}$ multiquark core and stiff nuclear crust are $T_{\rm dec}=0.6741~T_{0}$ and $\mu_{\rm dec}=374.0$ MeV, respectively. For such a hybrid star~(NS with a multiquark core), the surface temperature is $T_{\rm surf}=0.5643~T_{0}$. For a neutron star with surface temperature $10^{6}$ K, the core temperature would then be around $1.77\times 10^{6}$ K.
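A minimal sketch of this reconstruction is given below (Python; the toy radial profiles are placeholders for the actual TOV output, and the array names are our illustrative assumptions).
\begin{verbatim}
# Sketch: temperature profile from the chemical potential,
# T(r)/T0 = mu(r)/mu0 with mu = (rho + P)/n.
import numpy as np

def temperature_profile(rho, P, n, T0=1.0):
    mu = (rho + P) / n        # chemical potential profile
    return T0 * mu / mu[0]    # normalised to the central value mu0

r = np.linspace(0.0, 1.0, 5)  # radial grid (arbitrary units)
rho = 1.0 - 0.8 * r**2        # made-up monotone profiles
P = 0.3 * (1.0 - r**2)
n = 1.0 - 0.5 * r**2
print(temperature_profile(rho, P, n))
\end{verbatim}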
\begin{figure}[h]
\subfigure[$~n_{s}=0.3,\epsilon_{s}=26$ GeV/fm$^{3}$ multiquark core and stiff nuclear crust. ]
{\includegraphics[width=0.45\textwidth]{Tprof1} \label{Tfig1}}
\subfigure[$~n_{s}=0.3,\epsilon_{s}=23.2037$ GeV/fm$^{3}$ pure multiquark star. ]
{\includegraphics[width=0.45\textwidth]{Tprof2} \label{Tfig2}}
\subfigure[$~n_{s}=0.3,\epsilon_{s}=26$ GeV/fm$^{3}$ pure multiquark star.]
{\includegraphics[width=0.45\textwidth]{Tprof3} \label{Tfig3}}
\caption{Temperature profile of NS with multiquark core and pure multiquark star.}
\label{Tfig}
\end{figure}
The multiquark EoS contains two power laws governing the high and low density regimes; the corresponding multiquark matter is called the multiquark core and crust in Ref.~\cite{bhp}, but to avoid confusion we instead label the two regimes ``mqh'' and ``mql'', respectively, in this work. Each regime gives a different adiabatic index $\gamma$ and sound speed $c_{s}$, as shown in Fig.~\ref{gamfig}. Interestingly, $\gamma \approx 1~(2.5)$ for the high~(low) density multiquark matter, while $c_{s}^{2}>1/3$, violating the conformal bound, in the high density region and most of the low density region. In the high density region $c_{s}^{2}\simeq 0.426$ for $n_{s}=0.3$, a value slightly above the conformal bound obeyed by a typical phase of massless free quarks. The adiabatic index $\gamma$ of the high-density multiquark matter~(mqh) is very close to 1~(again the conformal limit of free quarks), while the low-density multiquark matter~(mql) has $\gamma \approx 2.5$, behaving more like hadronic nuclear matter, but with colour charges and deconfined. On the other hand, the $n_{s}=0$ colourless multiquark matter at high density has $\gamma \simeq 1.5$ and $c_{s}^{2}\lesssim 0.55$.
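Both quantities can be extracted numerically from any tabulated EoS as $\gamma = d\ln P/d\ln\rho$ and $c_{s}^{2} = dP/d(\rho c^{2})$. A minimal sketch follows (Python; the two-power-law toy EoS below only mimics the mql/mqh structure and does not reproduce the quantitative values quoted above).
\begin{verbatim}
# Sketch: adiabatic index gamma = dlnP/dlnrho and sound speed
# c_s^2 = dP/d(rho c^2) from a tabulated EoS (units with c = 1).
import numpy as np

def gamma_and_cs2(rho, P):
    gamma = np.gradient(np.log(P), np.log(rho))
    cs2 = np.gradient(P, rho)
    return gamma, cs2

# toy two-power-law EoS mimicking the mql/mqh structure
rho = np.logspace(-1, 1, 200)
P = np.where(rho < 1.0, 0.1 * rho**2.5, 0.1 * rho)
g, cs2 = gamma_and_cs2(rho, P)
print(g[0], g[-1])  # ~2.5 in the mql regime, ~1 in the mqh regime
\end{verbatim}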
The maximum mass, corresponding radius, central density, and transition density for each variation of stars with multiquark cores are summarized in Table~\ref{tab1}.
\begin{table}
\footnotesize
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
Matter content &$M_{max}$ & $R_{M_{max}}$& $\rho_{0}c^2$ & $\rho_{c}c^2$ & $\rho_{\text{mq\&b}}c^2$\\
inside the star &$(M_{\odot})$ & (km) & \scriptsize (GeV/fm$^{3}$) &\scriptsize (GeV/fm$^{3}$) &\scriptsize (GeV/fm$^{3}$) \\
\hline
&&&&& ($\rho_{\text{ms}}c^2$) \\
&&&&& 0.4678 \\
\scriptsize mq\&stiff &&&&& ($\rho_{\text{mns}}c^2$) \\
$ \epsilon_{s}$ = 26& 2.226 & 11.76 & 1.216 & 0.8996 & 0.2891 \\
\cline{6-6}
&&&&&$\mu_{\text{dec}}$(GeV)\\
&&&&& 0.3740\\
\hline
&&&&& ($\rho_{\text{ms2}}c^2$) \\
&&&&& 0.4389 \\
\scriptsize mq\&stiff &&&&& ($\rho_{\text{mns2}}c^2$) \\
$ \epsilon_{s}$ = 28 & 2.177 & 11.49 & 1.403 & 0.9688 & 0.2734 \\
\cline{6-6}
&&&&&$\mu_{\text{dec}}$(GeV)\\
&&&&& 0.3605\\
\hline
\scriptsize pure mq &&&&&\\
$ \epsilon_{s}$ = 26& 2.111 & 10.66 & 1.396 & 0.8996 & - \\
&&&&&\\
\hline
\scriptsize pure mq&&&&&\\
$ \epsilon_{s}$ = 23.2037 & 2.235 & 11.29 & 1.246 & 0.8028 & - \\
&&&&&\\
\hline
\end{tabular}
\caption{Properties of massive neutron stars with multiquark cores ($n_s = 0.3$ and $\epsilon_{s}$ = 26, 28 GeV fm$^{-3}$) and stiff crust in comparison with pure multiquark stars ($n_s = 0.3$, $\epsilon_{s}$ = 26 GeV fm$^{-3}$ and $\epsilon_{s}$ = 23.2037 GeV fm$^{-3}$, respectively) at maximum masses.}
\label{tab1}
\end{table}
\section{Conclusions and Discussions}\label{sec-con}
The holographic SS model of multiquark nuclear matter has been applied to the inner core of NSs at moderate to low temperatures~(less than a trillion Kelvin). The multiquark EoS is interpolated between the high density region and the low density region where CET is applicable. The transition density $\rho_{c}$ between the power laws in the empirical EoS is fixed once the energy density scale $\epsilon_{s}$ is chosen. The EoS of the multiquark phase with colour-charge fraction $n_{s}=0.3$ interpolates well between the high and low density regions when we set $\epsilon_s = 23.2037-28$ GeV/fm$^{3}$. This energy density scale corresponds to multiquarks with $N_{q}\simeq 25-30$ quarks~(roughly $8-10$ baryons) per multiquark for $n_{s}=0.3$~(Fig.~\ref{pmufig}). Phase transitions from baryonic matter to deconfined multiquark matter have been studied, and it is found that the multiquark phase at, e.g., $n_{s}=0.3, \epsilon_s = 26, 28$ GeV/fm$^{3}$~(generically $>25$ GeV/fm$^{3}$) is thermodynamically preferred over the stiff nuclear phase above the transition points.
As shown in Fig.~\ref{eosfig}, the EoS of the high-density multiquark matter has the same slope as the pQCD EoS, implying that its behavior could be similar to that of free quarks despite being a bound state, while the EoS of the low-density multiquark matter passes through the region where the low-density nuclear EoS is a good approximation. These features imply that the multiquark phase~(which naturally contains high and low density profiles as predicted by the SS model, see Ref.~\cite{bhp}) provides a missing link between the CET and pQCD energy scales. The MR diagrams for the various stiff-crust scenarios demonstrate that an NS with a multiquark core can have mass in the range $1.96-2.23~(1.70-2.17) M_{\odot}$ and radii $14.3-11.8~(14.5-11.5)$ km for $\epsilon_{s}=26~(28)$ GeV/fm$^{3}$, respectively. Note that the higher mass corresponds to the smaller radius.
At higher temperatures, of the order of a few trillion Kelvin, the population of multiquarks should decrease and the deconfined phase would consist mainly of weakly coupled quarks and gluons. Holographic models, including the SS model, predict the multiquark phase to be thermodynamically preferred over the QGP phase~\cite{bch} at moderate to low temperature and high density. In a newly formed NS or exotic quark star, if the core temperature reaches over a few trillion Kelvin, it is possible to have these weakly coupled quarks and gluons in the innermost core, followed by multiquark layers, resulting in an even larger NS mass, most likely above $2 M_{\odot}$.
For aged NSs with lower temperatures, however, we expect only the multiquark phase to exist in the core. As the density decreases with radial distance, the multiquark matter undergoes a phase transition into confined baryonic matter, or may even coexist with it in a mixed phase. For all scenarios that we consider, an NS with a multiquark core could exist in a wide range of masses $M>2.0 M_{\odot}$ with radii around $11.5-14.3$ km for $n_{s}=0.3, \epsilon_{s}=26-28$ GeV/fm$^{3}$. There is a considerable number of observations of NSs with masses above $2 M_{\odot}$, e.g. Refs.~\cite{Clark:2002db,Romani:2012rh,Romani:2012jf,vanKerkwijk:2010mt,Linares:2018ppq,Bhalerao:2012xe,Cromartie:2019kug,Nice:2005fi,Demorest:2010bx,Freire:2007sg,Quaintrell:2003pn}. Massive NSs thus appear to be abundant, and our analyses suggest that they likely contain multiquark cores. In Ref.~\cite{Abbott:2020lqk}, LIGO/Virgo set constraints on the equatorial ellipticities of millisecond pulsars to be less than $10^{-8}$. It would be interesting to explore the deformation of an NS containing a multiquark core and check consistency with its EoS in future work. However, the MR diagram of a millisecond NS is minimally affected by the spin, as demonstrated in Ref.~\cite{Miller:2019cac}.
\begin{acknowledgments}
P.B. is supported in part by the Thailand Research Fund (TRF),
Office of Higher Education Commission (OHEC) and Chulalongkorn University under grant RSA6180002. S.P. is supported in part by the Second Century Fund: C2F PhD Scholarship, Chulalongkorn University.
\end{acknowledgments}
\section{Introduction}
Photonic crystals are periodic arrangements of scattering units (typically, dielectric spheres or rods) that exhibit frequency ranges (band gaps) for which no optical modes exist in the infinite structure and light propagation is forbidden \cite{yablonovitch93,joan08}. Thus, photonic crystals play the same role for light as semiconductor crystals do for electrons. They have numerous promising prospects for applications in optical technologies and, in particular, for guiding of light \cite{joan97,russell03}, lasing \cite{painter99,park04} and quantum optics \cite{lodahl04,faraon08}.
Photonic crystals exist in nature \cite{vigneron12} (e.g., natural opals \cite{sanders64} or wings of some butterflies \cite{argyros02,remo16}) or can be fabricated using modern nanofabrication techniques \cite{wijnhoven98,blanco00,campbell00,marichy16}. However, neither nature nor humans do a perfect job and real-life photonic crystals always have some degree of imperfection: fluctuating sizes or positions of elementary building units, vacancies, interstitial or substitutional impurities, cracks \cite{koenderink05,toninelli08}. Whereas these imperfections do not destroy the band gap provided that they are not too strong, they introduce an interesting new feature in the spectrum: spatially localized optical modes appear in the band gap, especially near its edges \cite{john87}. Localization of eigenmodes of wave equations or of eigenstates of the Schr\"{o}dinger equation by disorder is a ubiquitous phenomenon discovered by Philip Anderson \cite{anderson58} and bearing his name \cite{lagendijk09,segev13}. Anderson localization of electromagnetic waves in general and of light in particular has been predicted by Anderson himself \cite{anderson85} and by Sajeev John \cite{john84}. Later on, it was observed in fully disordered one- \cite{berry97,kemp16}, quasi-one- \cite{chabanov00,chabanov01} and two-dimensional \cite{dalichaouch91,schwartz07} disordered media whereas observing it in three dimensions (3D) turned out to be difficult \cite{sperling16,skipetrov16njp}. Even though Sajeev John proposed a way to facilitate localization of light in 3D by using disordered photonic crystals instead of fully disordered suspensions or powders a long time ago \cite{john87,john91}, no clear experimental realization of this idea has been reported to date. Some signatures of Anderson localization have been observed in reflection of short optical pulses from a disordered photonic crystal \cite{douglass11} although the authors did not claim the observation of Anderson localization.
The idea of facilitating localization of light in 3D by using a photonic structure with a band gap arises from the localization criterion following from the scaling \cite{abrahams79} and the self-consistent \cite{vollhardt80,vollhardt92} theories of localization \cite{john93}:
\begin{eqnarray}
{\cal N}_{\textrm{EM}}(\omega) D_0(\omega) \ell_0^*(\omega) \lesssim \red{\text{const} \sim 1},
\label{loccrit}
\end{eqnarray}
where ${\cal N}_{\textrm{EM}}(\omega)$ is the density of electromagnetic modes (states), $D_0(\omega) = \red{v_{\mathrm{E}} \ell_0^*(\omega)}/3$ is the \red{``bare'' diffusion coefficient of light (i.e., the value that the diffusion coefficient would have in the absence of localization effects)}, \red{$v_{\mathrm{E}}$ is the energy transport velocity \cite{albada91,tiggelen92}}, and \red{$\ell_0^*(\omega)$} is the \red{transport} mean free path \red{in the absence of localization effects}. \red{In a fully disordered isotropic medium without any short- or long-range order, ${\cal N}_{\textrm{EM}}(\omega) \sim k(\omega)^2/v_{\mathrm{E}}$ and} we obtain the standard Ioffe-Regel criterion of localization: $\red{k\ell_0^*} \sim k\ell \lesssim \red{\text{const} \sim 1}$, where \red{$k(\omega)$ is the effective wave number, $\ell$ is the scattering mean free path, and we made use of the fact that $\ell_0^*$ and $\ell$ are of the same order}. This criterion corresponds to very strong scattering with $\ell$ shorter than the wavelength of light. If, however, the density of states ${\cal N}_{\textrm{EM}}(\omega)$ is suppressed with respect to its value in the \red{fully disordered} medium, the criterion (\ref{loccrit}) becomes easier to obey. In a photonic crystal, ${\cal N}_{\textrm{EM}}(\omega) \to 0$ near a band edge and hence localized states are expected to appear for arbitrarily weak disorder \cite{john91}.
Large and dense ensembles of cold atoms constitute a new experimental platform for the investigation of multiple light scattering \cite{labeyrie99,labeyrie03,kaiser05}. The very good knowledge of the properties of individual, isolated atoms and the constantly increasing degree of control of large atomic ensembles make atomic systems ideal candidates for verifying the existing theoretical predictions as well as for going beyond them by playing the role of ``quantum simulators'' \cite{lewenstein07,sanchez10}. However, whereas Anderson localization of matter waves in 3D random optical potentials has been successfully realized \cite{jendr12,semeghini15}, the somewhat reciprocal situation of light localization by scattering on cold atoms turns out to be difficult to implement \cite{kaiser09}. In addition to experimental difficulties of producing cold atomic clouds that are large and dense at the same time, theoretical calculations have pointed out that the vectorial nature of electromagnetic waves and the dipole-dipole interaction between nearby atoms may be a fundamental obstacle for Anderson localization of light \cite{skip14prl,bellando14}. Applying a static magnetic field to suppress the dipole-dipole interactions is a possible way to circumvent this obstacle \cite{skip15prl,skip18prl} but strong fields are required \cite{skip16pra}. An easier way towards light localization by cold atoms may be to arrange atoms in a periodic 3D lattice and enjoy the relaxation of the localization criterion (\ref{loccrit}) near an edge of a photonic band gap.
In this paper, we investigate spatially localized quasimodes that are introduced in an open 3D diamond atomic lattice of finite size by a randomness in atomic positions. Randomly displacing the atoms from their positions in the lattice is different from introducing disorder by randomly removing the atoms---a situation studied in Ref.\ \onlinecite{antezza13}---and allows for varying the strength of disorder while keeping the atom number constant. Thus, we can follow a transition from the perfect photonic crystal for vanishing disorder to a fully disordered system for strong disorder. After discussing the impact of boundary states, we establish that for a moderate amount of disorder $W$, two localization transitions exist near edges of a photonic band gap that the diamond lattice exhibits. A finite-size scaling analysis of one of these transitions yields the precise position of the mobility edge and an estimation of the critical exponent $\nu$ of the localization length. Increasing $W$ eventually leads to the closing of the band gap and the disappearance of localized states. A relation between the band gap formation, Anderson localization, and the near-field dipole-dipole coupling between the atoms is conjectured. Finally, implications of our results to experiments with cold atoms are discussed.
\section{The model}
\label{sec:model}
We consider \red{$N$} identical two-level atoms arranged in a diamond lattice. The lattice is a superposition of two face-centered cubic lattices (lattice constant $a$) with basis vectors $\vec{e}_1 = (0, a/2, a/2)$, $\vec{e}_2 = (a/2, 0, a/2)$, $\vec{e}_3 = (a/2, a/2, 0)$ and $\vec{e}_1 + \vec{e}$, $\vec{e}_2 + \vec{e}$, $\vec{e}_3 + \vec{e}$, where $\vec{e} = (a/4, a/4, a/4)$. A sample of finite size is obtained from the unbounded lattice by keeping only the atoms inside a sphere of diameter $L$ \red{and volume $V = (\pi/6) L^3$} centered at the origin (see the inset of Fig.\ \ref{fig_dos} for a 3D rendering of the resulting sample). Disorder is introduced by displacing each atom by a random distance $\in [0, Wa]$ in a random direction, with $W$ being a dimensionless parameter characterizing the strength of disorder. The atoms have resonance frequencies $\omega_0$ and resonance widths $\Gamma_0$; their ground states have the total angular momentum $J_g = 0$ while their excited states have $J_e = 1$ and are thus three-fold degenerate, with the three excited states having the same energies but different projections $J_z = m$ ($m = 0$, $\pm 1$) of $\vec{J}_e$ on the quantization axis $z$. We have already used such a model of resonant two-level atoms coupled via the electromagnetic field to study random ensembles of atoms in our previous work \cite{skip14prl} where the Hamiltonian of the system was given. The model was generalized to include external dc magnetic \cite{skip15prl,skip18prl} or electric \cite{skip19prb} fields. It has been also used to study photonic crystals that we consider here \cite{antezza13,skip20epj}. Following these previous works, we will study localization properties of quasimodes $\bm{\psi}_m$ of the atomic system found as eigenvectors of a $3N \times 3N$ Green's matrix ${\hat G}$:
\begin{eqnarray}
{\hat G} \bm{\psi}_m = \Lambda_m \bm{\psi}_m,\;\;\;\; m = 1,\ldots,3N.
\label{eigen}
\end{eqnarray}
The matrix ${\hat G}$ describes the coupling between the atoms via the electromagnetic waves (light) and is composed of $N \times N$ blocks of size $3 \times 3$. A block ${\hat G}_{jn}$ gives the electric field created at a position $\vec{r}_n$ of the atom $n$ by an oscillating point dipole at a position $\vec{r}_j$ of the atom $j$ ($j$, $n = 1, \ldots, N$). It has elements
\begin{eqnarray}
G_{jn}^{\mu \nu} &=& i\delta_{jn} \delta_{\mu \nu}
+ (1 - \delta_{jn}) \frac{3}{2}
\frac{e^{i k_0 r_{jn}}}{k_0 r_{jn}}
\nonumber \\
&\times& \left[ P(i k_0 r_{jn}) \delta_{\mu \nu}
+ Q(i k_0 r_{jn})
\frac{r_{jn}^{\mu} r_{jn}^{\nu}}{(r_{jn})^2} \right],
\label{green}
\end{eqnarray}
where $P(x) = 1-1/x+1/x^2$, $Q(x) = -1+3/x-3/x^2$, $\vec{r}_{jn} = \vec{r}_n - \vec{r}_j$, and the indices $\mu$, $\nu = x, y, z$ denote the projections of $\vec{r}_{jn}$ on the axes $x$, $y$, $z$ of the Cartesian coordinate system: $r_{jn}^{x} = x_{jn}$, $r_{jn}^{y} = y_{jn}$, $r_{jn}^{z} = z_{jn}$.
\red{The inverse of the resonant wave number of an isolated atom $k_0 = \omega_0/c$ provides a convenient length scale by which we will normalize all other length scales. Here $c$ is the speed of light in the free space.}
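For the reader's convenience, the construction of ${\hat G}$ from Eq.~(\ref{green}) can be sketched as follows (Python; a minimal illustration with $k_0 = 1$, i.e., all lengths in units of $k_0^{-1}$; the toy random configuration at the end is an assumption, the actual calculations use the displaced diamond lattice described above).
\begin{verbatim}
# Sketch of the 3N x 3N Green's matrix with elements G_{jn}^{mu nu}
# as defined in the text, with k0 = 1 (lengths in units of 1/k0).
import numpy as np

def P_fun(x): return 1.0 - 1.0/x + 1.0/x**2
def Q_fun(x): return -1.0 + 3.0/x - 3.0/x**2

def green_matrix(pos):
    N = pos.shape[0]
    G = np.zeros((3*N, 3*N), dtype=complex)
    for j in range(N):
        G[3*j:3*j+3, 3*j:3*j+3] = 1j * np.eye(3)  # diagonal blocks
        for n in range(j + 1, N):
            rvec = pos[n] - pos[j]
            r = np.linalg.norm(rvec)
            x = 1j * r                             # i k0 r_{jn}
            outer = np.outer(rvec, rvec) / r**2
            block = 1.5 * np.exp(1j*r)/r * (P_fun(x) * np.eye(3)
                                            + Q_fun(x) * outer)
            G[3*j:3*j+3, 3*n:3*n+3] = block        # symmetric in j <-> n
            G[3*n:3*n+3, 3*j:3*j+3] = block
    return G

pos = np.random.rand(50, 3) * 10.0         # toy configuration
Lam = np.linalg.eigvals(green_matrix(pos)) # eigenvalues Lambda_m
\end{verbatim}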
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_dos.pdf}\\
\vspace{-6mm}
\includegraphics[width=\columnwidth]{fig_dos_zoom.pdf}\\
\vspace{-8mm}
\caption{\label{fig_dos}
(a) Density of states of the perfect ($W = 0$, black) and disordered ($W = 0.1$, red; $W = 0.2$, green) photonic crystals; the blue line corresponds to a fully random ensemble of atoms. Averaging is performed over \red{461}, 175 and 82 random configurations for $W = 0.1$, 0.2, and the fully random case, respectively. Vertical dashed lines show band edges. Inset: A 3D rendering of a perfect diamond lattice of atoms. (b) Zoom on the band gap. Yellow shading shows frequency ranges in which we find localized quasimodes for $W = 0.1$.
}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=0.9\textwidth]{fig_ipr_perfect_vs_disordered.pdf}
\vspace{-5mm}
\caption{\label{fig_ipr_perfect}
Eigenvalues of a single realization of the Green's matrix for perfect [left column, panels (a) and (c)] and disordered [right column, panels (b) and (d)] diamond crystals of two different sizes $k_0 L = 40$ (upper row) and 60 (lower row). Each point in the graph corresponds to an eigenvalue and its grey scale represents the IPR of the corresponding eigenvector, from light grey for IPR = 0 (extended eigenvectors) to black for the maximum IPR (different for each panel, most localized eigenvectors). Vertical dashed lines show band edges. Only a part of the eigenvalue spectrum $(\omega-\omega_0)/\Gamma_0 \in [-2,2]$ is shown.
}
\end{figure*}
An eigenvector $\bm{\psi}_m = (\psi_m^{1}, \ldots, \psi_m^{3N})^T$ of the matrix ${\hat G}$ describes the spatial structure of the $m$-th quasimode: $\psi_m^{3(j-1)+\mu}$ gives the $\mu$-th component of the electric field on the atom $j$. The corresponding eigenvalue $\Lambda_m$ yields the eigenfrequency $\omega_m$ and the decay rate $\Gamma_m/2$ of the quasimode:
$\omega_m = \omega_0 - (\Gamma_0/2) \mathrm{Re} \Lambda_m$ and $\Gamma_m/2 = (\Gamma_0/2) \mathrm{Im} \Lambda_m$.
Spatial localization of quasimodes can be quantified by the so-called inverse participation ratio (IPR):
\begin{eqnarray}
\mathrm{IPR}_m = \sum\limits_{j=1}^N \left\{\sum\limits_{\mu=1}^3 \left| \psi_m^{3(j-1)+\mu} \right|^2 \right\}^2,
\label{ipr}
\end{eqnarray}
where we assume that the eigenvectors $\bm{\psi}_m$ are normalized:
\begin{eqnarray}
\sum\limits_{j=1}^N \sum\limits_{\mu=1}^3 \left| \psi_m^{3(j-1)+\mu} \right|^2 = 1.
\label{norm}
\end{eqnarray}
It is easy to see that $\mathrm{IPR}_m = 1$ for a state localized on a single atom and $\mathrm{IPR}_m = 1/N$ for a state that is uniformly delocalized over all $N$ atoms of the system. Generally, $\mathrm{IPR}_m \sim 1/M$ for a state localized on $M$ atoms.
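A minimal sketch of the IPR computation of Eq.~(\ref{ipr}) is given below (Python; it assumes the eigenvectors are stored as the columns of a matrix, as returned by \texttt{numpy.linalg.eig}).
\begin{verbatim}
# Sketch: IPR of each eigenvector of the Green's matrix;
# eigenvectors are the columns of `vecs`.
import numpy as np

def ipr(vecs):
    amp2 = np.abs(vecs)**2
    amp2 /= amp2.sum(axis=0)    # enforce the normalisation condition
    # sum the three polarisation components mu per atom:
    per_atom = amp2.reshape(-1, 3, amp2.shape[1]).sum(axis=1)
    return (per_atom**2).sum(axis=0)  # one IPR per quasimode

# vals, vecs = np.linalg.eig(green_matrix(pos)); print(ipr(vecs))
\end{verbatim}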
The spectral distribution of quasimodes can be characterized by the density of states (DOS) ${\cal N}(\omega)$ defined in an open system as \cite{skip20epj,caze13}
\begin{eqnarray}
{\cal N}(\omega) = \frac{1}{3 N \pi}\sum\limits_{m=1}^{3N} \frac{(\Gamma_m/2)}{(\omega - \omega_m)^2 + (\Gamma_m/2)^2}.
\label{dos}
\end{eqnarray}
${\cal N}(\omega)$ is normalized such that the number of states inside an infinitely narrow frequency interval $d\omega$ is
$dN = 3N {\cal N}(\omega) d\omega$.
\red{
Thanks to such a normalization, ${\cal N}(\omega)$ converges to a limiting shape corresponding to the infinite crystal as the size of the crystal increases \cite{skip20epj}. Note that in our formalism, the number of quasimodes is equal to the size $3N$ of the matrix ${\hat G}$ and hence increases with $N$ for all frequencies, including those inside the band gap. However, as discussed elsewhere \cite{skip20epj}, the quasimodes corresponding to the frequencies inside the band gap are confined near the crystal boundary and hence their number grows proportionally to the crystal surface $\pi L^2 \propto N^{2/3}$. This growth is slower than the growth of the total number of modes and hence the relative weight of these quasimodes tends to zero in the thermodynamic limit $N \to \infty$ and ${\cal N}(\omega) \propto 1/L$ \cite{skip20epj,bin18}.}
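Equation (\ref{dos}) translates directly into a sum of Lorentzians over the complex eigenvalues $\Lambda_m$; a minimal sketch follows (Python; frequencies measured in units of $\Gamma_0$; in practice the result is additionally averaged over random configurations).
\begin{verbatim}
# Sketch: DOS on a frequency grid (units of Gamma_0)
# from the complex eigenvalues Lam.
import numpy as np

def dos(Lam, omega):
    om = -0.5 * Lam.real   # (omega_m - omega_0)/Gamma_0
    hw = 0.5 * Lam.imag    # Gamma_m/2 in units of Gamma_0
    lor = hw / ((omega[:, None] - om)**2 + hw**2)
    return lor.sum(axis=1) / (np.pi * len(Lam))

# omega = np.linspace(-2.0, 2.0, 400); N_omega = dos(Lam, omega)
\end{verbatim}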
\red{In this paper, we will present results for crystals of four different sizes $k_0 L = 30$, 40, 50 and 60 composed of $N = 2869$, 6851, 13331 and 22929 atoms, respectively. These numbers of atoms have been adjusted to maintain the same lattice constant $k_0 a = 3.4$ and the same average atomic number density $\rho/k_0^3 = 0.2$.} The lattice constant is chosen small enough for a band gap to open in the spectrum of the ideal lattice \cite{antezza09pra}
\red{as we illustrate by DOS calculations shown in Fig.\ \ref{fig_dos} for}
the perfect ($W = 0$) and disordered \red{crystals of size $k_0 L = 50$}.
For disordered lattices, DOS has been averaged over many independent random atomic configurations using the Monte Carlo method \cite{binder97}. DOS inside the band gap is different from zero due to the finite size of the considered sample \cite{skip20epj,bin18}. We observe that the band gap narrows when disorder in atomic positions is introduced ($W = 0.1$) and closes for strong enough disorder ($W = 0.2$). No signature of a band gap is found for a fully random system in which the atomic positions $\vec{r}_j$ are chosen randomly inside a sphere without any reference to the periodic diamond structure. Therefore, it turns out that our disordered photonic crystal preserves a band gap only for relatively weak disorder $W < 0.2$.
It is worthwhile to note that DOS ${\cal N}(\omega)$ reflects only the atomic component of elementary excitations of the system comprising the atoms and the electromagnetic field. Thus, low ${\cal N}(\omega)$ does not necessarily correspond to a small number of excitations at a given frequency $\omega$ but can simply mean that the atomic subsystem is weakly involved and the excitations look very much like freely propagating photons. This typically happens far from the atomic resonance, for $|\omega - \omega_0| \gg \Gamma_0$, where the coupling of light with atoms is inefficient. The absence of free-field solutions that have no atomic component for frequencies inside the band gap has been demonstrated previously \cite{klugkist06,antezza09pra}. A gap in ${\cal N}(\omega)$ thus corresponds to a gap in the total density of states and a gap in the density of electromagnetic modes ${\cal N}_{\textrm{EM}}(\omega)$ entering the localization criterion (\ref{loccrit}), even though ${\cal N}(\omega) \ne {\cal N}_{\textrm{EM}}(\omega)$.
\red{
In addition to DOS ${\cal N}(\omega)$, another interesting quantity is the {\em local} density of states (LDOS) ${\cal N}(\omega, \vec{r})$. In a photonic crystal of finite size, LDOS exhibits rapid spatial variations within each unit cell of the crystal and slow overall evolution with the distance to the boundaries \cite{leistikow11,mavidis20}. Disorder introduces fluctuations of LDOS and the statistics of the latter may serve as a criterion for Anderson localization \cite{schubert10}. However, calculation of LDOS for our model would require finding eigenvectors $\bm{\psi}_m$ of the matrix ${\hat G}$ which is a much more time-consuming computational task than finding the eigenvalues $\Lambda_m$ that are needed to calculate ${\cal N}(\omega)$ [see Eq.\ (\ref{dos})]. Even though we present some results for $\bm{\psi}_m$ in Figs.\ \ref{fig_ipr_perfect}--\ref{fig_state} below, their statistical analysis including the calculation of the average LDOS $\langle {\cal N}(\omega, \vec{r}) \rangle$ is beyond the scope of this work.
}
\section{Localized states inside the band gap}
\label{sec:states}
It follows from Fig.\ \ref{fig_dos}(b) that some quasimodes cross \red{over} the edges of the band gap when disorder is introduced in the photonic crystal (compare DOS corresponding to $W = 0$ and $W = 0.1$). In order to study the spatial localization properties of these modes, we show quasimode eigenfrequencies $\omega$ and decay rates $\Gamma$ together with their IPR for the perfect diamond crystal and a single realization of the disordered crystal in Fig.\ \ref{fig_ipr_perfect}. For the perfect crystal [left column of Fig.\ \ref{fig_ipr_perfect}, panels (a) and (c)], the vast majority of the modes both inside and outside the band gap are extended and have $\mathrm{IPR} \sim 1/N$. The distribution of quasimodes on the frequency-decay rate plane changes only slightly upon increasing the size of the system from $k_0L = 40$ to $k_0L = 60$ [compare panels (a) and (c) of Fig.\ \ref{fig_ipr_perfect}]. In contrast, the disordered photonic crystal exhibits some localized modes with appreciable IPR near band edges and in particular near the upper band edge, see Fig.\ \ref{fig_ipr_perfect}(b) and (d). These modes have decay rates (life times) that are significantly smaller (longer) than the decay rates (life times) of any modes of the perfect crystal. In addition, the number of such localized modes increases and their decay rates decrease significantly when the disordered crystal gets bigger [compare panels (b) and (d) of Fig.\ \ref{fig_ipr_perfect}]. Such a combination of spatial localization with small decay rates and the scaling with the sample size is typical for disorder-induced quasimode localization \cite{skip14prl,skip16prb}.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_ipr_30.pdf}\\
\vspace{-5mm}
\includegraphics[width=\columnwidth]{fig_cm_30.pdf}
\vspace{-10mm}
\caption{\label{fig_ipr_cm}
Eigenvalues of a single realization of the Green's matrix for a disordered diamond crystal of size $k_0 L = 30$. Each point in the graph corresponds to an eigenvalue and its grey scale represents the IPR (a) or the center of mass $r_{\mathrm{CM}}$ (b) of the corresponding eigenvector.
\red{The spatial structure of the eigenvectors corresponding to the two eigenvalues indicated by arrows in panel (a) is illustrated in Fig.\ \ref{fig_state}.}
Vertical dashed lines show band edges. Only a part of the eigenvalue spectrum $(\omega-\omega_0)/\Gamma_0 \in [-2,2]$ is shown.
}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_state_loc.pdf}\\
\includegraphics[width=\columnwidth]{fig_state_ext.pdf}
\vspace{-5mm}
\caption{\label{fig_state}
\red{Visualization of eigenvectors (quasimodes) corresponding to the eigenvalues indicated by arrows in Fig.\ \ref{fig_ipr_cm}(a). A quasimode $\bm{\psi}_m$ is represented by $N$ red spheres centered at the locations $\vec{r}_j$ ($j = 1, \ldots, N$) of the $N$ atoms and having radii proportional to the intensities
$I_m^j = \sum_{\mu=1}^{3} | \psi_m^{3(j-1)+\mu} |^2$
of the quasimode on the atom. The quasimode (a) is spatially localized and has a relatively high IPR whereas the quasimode (b) is spatially extended. Grey spheres in both panels visualize the spherical region occupied by the disordered photonic crystal.}
}
\end{figure}
In addition to extended modes everywhere in the spectrum, isolated localized modes appear in the middle of the band gap of the perfect crystal [see Fig.\ \ref{fig_ipr_perfect}(a) and (c)]. Their $\mathrm{IPR} \sim 5 \times 10^{-2}$ is small but still considerably larger than $1/N \sim 10^{-4}$ expected for extended modes. Such modes do not disappear and become even more numerous in the disordered crystal [see Fig.\ \ref{fig_ipr_perfect}(b) and (d)]. They differ from the modes near band edges by their much larger decay rates that are virtually independent of the crystal size. Our previous work suggests that all modes in the middle of the band gap of a photonic crystal are confined near the crystal boundary, which may explain their $\mathrm{IPR} \propto 1/N^{2/3} \gg 1/N$ \cite{skip20epj}. In the presence of disorder, some of these modes may, in addition, be restricted to a small part of the sample surface \cite{maximo19}, which may explain their larger IPR. To confirm this explanation, we compute the center of mass of a mode $\bm{\psi}_m$ as
\begin{eqnarray}
\vec{r}_{\mathrm{CM}}^{(m)} = \sum\limits_{j=1}^{N} \vec{r}_j \left[ \sum\limits_{\mu=1}^{3} |\psi_m^{3(j-1)+\mu}|^2 \right].
\label{rcm}
\end{eqnarray}
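A minimal sketch of Eq.~(\ref{rcm}) follows (Python; \texttt{pos} and \texttt{vecs} are assumed to hold the atomic positions and the eigenvector columns, respectively, as in the sketches above).
\begin{verbatim}
# Sketch: centre of mass of each quasimode.
import numpy as np

def center_of_mass(pos, vecs):
    amp2 = np.abs(vecs)**2
    amp2 /= amp2.sum(axis=0)
    per_atom = amp2.reshape(pos.shape[0], 3, -1).sum(axis=1)
    return per_atom.T @ pos   # one 3-vector r_CM per mode

# r_cm = np.linalg.norm(center_of_mass(pos, vecs), axis=1)
\end{verbatim}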
Figure \ref{fig_ipr_cm} shows that the modes in the middle of the band gap, including those having large IPR, tend to have the absolute value of their center of mass $r_{\mathrm{CM}}$ of the order of the sample radius $L/2$. These modes are therefore confined at the sample boundary, as we anticipated. The confinement at the boundary explains the relatively large decay rates of these modes and the weak dependence of the decay rates on the sample size. Although the surface modes discussed above may appear to play an important role in the calculations presented in this work, this is due to the relatively small sizes $k_0 L = 30$--60 of the considered atomic samples, limited by the computational constraints to which our numerical calculations are subjected. In the limit $k_0 L \to \infty$, relevant for the analysis of modes localized by disorder in the bulk, surface modes play no role. In finite samples accessible to numerical calculations, the impact of surface modes can be minimized by using the scaling analysis presented in the next section.
\red{The need for a scaling analysis is also due to the absence of a univocal relation between the decay rate $\Gamma$ of a quasimode and its localization properties. Indeed, some of the black points in Fig.\ \ref{fig_ipr_cm}(a) correspond to much larger $\Gamma$ than some of the grey points, showing that the IPR and $\Gamma$ are not directly related. However, a relation can be established between the scaling of (normalized) $\Gamma$ with the sample size $L$ and the spatial localization of quasimodes at a given frequency.} Surface states do not follow the same scaling with the sample size as the modes localized in the bulk, which provides a way of discriminating between these two types of modes.
\red{
Similarly to the panels (b) and (d) of Fig.\ \ref{fig_ipr_perfect}, Fig.\ \ref{fig_ipr_cm}(a) shows that quasimodes with large IPR appear inside the band gap of the photonic crystal due to disorder. The spatial structure of these spatially localized quasimodes is very different from that of the extended quasimodes with frequencies outside the band gap, as we illustrate in Fig.\ \ref{fig_state}.
}
\section{Finite-size scaling}
\label{sec:scaling}
The finite-size scaling analysis is a way to access the behavior of an infinitely large system from the experimental or numerical data available for finite-size systems only. It is a common approach for the analysis of phase transitions \cite{privman90,binder97} and has been widely used to characterize Anderson localization transitions in electronic \cite{mackinnon81,shklovskii93,slevin01,slevin14}, optical \cite{sheikhan09,skip18prl} and mechanical \cite{pinski12epl,pinski12jpcm} systems. Very generally, one chooses a quantity (let us denote it by $\Omega$) that is supposed to take two very different values (say, 0 and $\infty$) for the infinitely large system in the two different phases. The behavior of the quantity $\Omega$ is then studied as a function of sample size $L$, and regions of the parameter space are identified in which $\Omega$ increases or decreases with $L$. A point (for a 1D parameter space), a line (2D), or a surface (3D) separating these regions is identified as a boundary between the two phases at which $\Omega$ is independent of $L$. Moreover, it often turns out that even when the parameter space of the physical system under consideration is multidimensional, all the parameters can be combined into a single one that is the only relevant one near the phase transition point. In this situation, known as `single-parameter scaling' \cite{mackinnon81}, the critical exponents of the transition can be estimated from the behavior of $\Omega$ with $L$ for finite $L$.
In the context of Anderson localization, the (dimensionless) electrical conductance $g$ of a sample of size $L$ was identified as the \red{most} relevant quantity to consider: $\Omega = g$ \cite{abrahams79}. Obviously, the conductance of a 3D metallic cube of side $L$ in which all the electronic eigenstates are extended, $g \propto L$, grows with $L$ whereas one expects a decreasing conductance $g \propto \exp(-L/\xi)$ if the electronic eigenstates are localized at the scale of localization length $\xi$ and the sample is an (Anderson) insulator. We thus see that in the limit of $L \to \infty$, $g \to \infty$ if the eigenstates are extended and $g \to 0$ if they are spatially localized. In addition, one expects $g$ to be independent of $L$ at the critical point \cite{abrahams79}. This is, by the way, the essence of the Thouless criterion of Anderson localization $g \sim \mathrm{const}$ \cite{thouless77,abrahams79}, where `const' is a number of order unity.
The conceptual picture described above needs some adjustments when it comes to its application to physical reality. Indeed, in a disordered system, $g$ is a random quantity and it is not clear how exactly its scaling with the sample size should be understood \cite{shapiro86}. The simplest option of analyzing its average value $\langle g \rangle$ turned out to be not always appropriate because $\langle g \rangle$ may be dominated by rare realizations of disorder with large $g$ \cite{shapiro86,cohen88}. Another, more intelligent guess is to use the average of the logarithm of $g$, $\langle \ln g \rangle$. This indeed allows one to obtain reasonable results \cite{slevin01} but has the weakness of being somewhat arbitrary as a choice: why $\langle \ln g \rangle$ and not $\langle (\ln g)^2 \rangle$, $\langle (\ln g)^3 \rangle$ or the mean value of some other function of $g$? Although averaging different functions of $g$ may yield identical results for the critical properties of the localization transition in some models \cite{slevin01}, it is not so for the model of point scatterers considered here \cite{skip16prb}. This is why studying the full probability distribution function $P(g)$ instead of statistical moments of $g$ or $\ln g$ is necessary \cite{shapiro86,cohen88}.
\red{Conductance $g$ and its probability distribution function $P(g)$ are not the only quantities that can be used for the scaling analysis of the Anderson transition. Alternatives include the distribution of eigenvalue (level) spacings \cite{shklovskii93} or the multifractal spectrum \cite{rodriguez10} as the most prominent examples. Note that although initially proposed for Hermitian systems \cite{shklovskii93}, the finite-size scaling of spacings between eigenvalues has been recently extended to the non-Hermitian case \cite{tzor20,huang20} and thus can, in principle, be applied to analyze open disordered systems as the one considered in this work. However, $g$ and $P(g)$ still remain the most simple and computationally accessible quantities to analyze.}
Conductance, defined as the ratio of the electric current to the voltage that causes it, is a notion proper to electronics and seems impossible to generalize to light. However, Thouless noticed that if one divides the typical decay rate $\Gamma/2$ of quasimodes of an open quantum or wave system by the average spacing between quasimode frequencies $\Delta \omega$, the resulting `Thouless conductance' is equal to the electrical conductance $g$ of a metal wire: $(\Gamma/2)/\Delta\omega = g$ \cite{thouless77}. The advantage of the Thouless definition is that it can be readily generalized to any waves, independently of any electrical currents or potential differences in the considered physical system. In our open, finite-size photonic crystal we define
\begin{eqnarray}
g_m = \frac{\Gamma_m/2}{\langle |\omega_m - \omega_{m-1}| \rangle}
= \frac{\mathrm{Im}\Lambda_m}{\langle |\mathrm{Re}\Lambda_m - \mathrm{Re}\Lambda_{m-1}| \rangle},
\label{cond}
\end{eqnarray}
where the eigenfrequencies $\omega_m$ are assumed to be ordered. We note that in a closed system the matrix ${\hat{G}}$ would be Hermitian and its eigenvalues real. Then the denominator of Eq.\ (\ref{cond}) would be equal to $1/[3N {\cal N}(\omega)]$. However, in the open system that we consider, the relation between the average spacing between eigenfrequencies $\omega_m$ and DOS is only approximate because the definition of DOS (\ref{dos}) involves decay rates of quasimodes as well. In practice, we can still approximately write
\begin{eqnarray}
g_m \simeq \frac{\Gamma_m}{2} {\cal N}(\omega_m) 3N.
\label{cond2}
\end{eqnarray}
Using this definition instead of Eq.\ (\ref{cond}) would barely modify the results following from the finite-size scaling analysis below because neither $\langle |\omega_m - \omega_{m-1}| \rangle^{-1}$ nor ${\cal N}(\omega)$ exhibit singularities at the localization transition points.
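A minimal sketch of Eq.~(\ref{cond}) is given below (Python; the local averaging of the level spacing over a window of \texttt{win} neighbours is our illustrative choice, not a prescription from the text).
\begin{verbatim}
# Sketch: Thouless conductance. Eigenvalues are sorted by
# eigenfrequency and the mean level spacing is estimated
# locally over `win` neighbours.
import numpy as np

def thouless_g(Lam, win=20):
    order = np.argsort(Lam.real)
    re, im = Lam.real[order], Lam.imag[order]
    spacing = np.abs(np.diff(re))
    mean_sp = np.convolve(spacing, np.ones(win)/win, mode="same")
    return im[1:] / mean_sp   # g_m for m = 2, ..., 3N

# g = thouless_g(Lam); bin by omega = -0.5*Lam.real for P(ln g)
\end{verbatim}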
In a disordered photonic crystal, the Thouless conductance defined by Eq.\ (\ref{cond}) is a random quantity and at fixed scatterer density $\rho$ and disorder strength $W$, its statistical properties can be characterized by a probability density function $P(\ln g; \omega, L)$. Here we choose to work with $\ln g$ instead of $g$ because $g$ varies in a rather wide range. The probability density is parameterized by the frequency $\omega$ of the quasimodes and the sample size $L$. We estimate $P(\ln g; \omega, L)$ for different $\omega$ around the upper edge of the band gap observed in Fig.\ \ref{fig_dos} by numerically diagonalizing many independent random realizations of the matrix ${\hat G}$ for different sizes $L$ of the disordered photonic crystal. Figure \ref{fig_distr} shows the results for $W = 0.1$ and a particular frequency $\omega = \omega_0 - 0.44 \Gamma_0$ for which the so-called Harald Cram\'{e}r's distance between probability density functions corresponding to the smallest and largest $L$ is minimized (see the inset of Fig.\ \ref{fig_distr}). The Harald Cram\'{e}r's distance is
\begin{eqnarray}
{\cal D}(\omega) &=& \int\limits_{-\infty}^{\infty} d(\ln g)
\left| P(\ln g; \omega, L = 30 k_0^{-1})
\right. \nonumber \\
&-& \left. P(\ln g; \omega, L = 60 k_0^{-1}) \right|^2.
\label{dif}
\end{eqnarray}
Interestingly, the frequency $\omega$ for which ${\cal D}(\omega)$ is minimal also corresponds to the frequency for which distributions $P(\ln g; \omega, L)$ corresponding to different $L$ tend to coincide for small $g$, see the main panel of Fig.\ \ref{fig_distr}. Following our previous work \cite{skip16prb}, we identify this relative $L$-independence of $P(\ln g; \omega, L)$ as a signature of a critical point of a localization transition (also called a mobility edge).
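A minimal sketch of Eq.~(\ref{dif}) using histogram estimates of the two probability densities on a common grid follows (Python; the number of bins is an illustrative assumption).
\begin{verbatim}
# Sketch: distance between two estimated densities of ln g.
import numpy as np

def pdf_distance(lng_a, lng_b, bins=60):
    lo = min(lng_a.min(), lng_b.min())
    hi = max(lng_a.max(), lng_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    p_a, _ = np.histogram(lng_a, bins=edges, density=True)
    p_b, _ = np.histogram(lng_b, bins=edges, density=True)
    return np.sum((p_a - p_b)**2) * (edges[1] - edges[0])

# D(omega): apply to the ln g samples of the k0 L = 30 and 60
# crystals in each frequency window; locate the minimum over omega.
\end{verbatim}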
\red{The probability density of conductance near the transition from extended to localized states has been extensively studied in the past for both quasi-1D \cite{muttalib99,froufe02} and 3D \cite{markos93,markos99,muttalib05} disordered systems without band gaps. For small $g$, our $P(\ln g; \omega, L)$ exhibits a tail decreasing to zero as $g \to 0$ in agreement with the previous prediction \cite{markos99}. However, in contrast to the expectations \cite{markos93,markos99,muttalib05}, our $P(\ln g; \omega, L)$ does not have a smooth, size-independent shape for large $g$.} We attribute this fact to the following reason. The realistic physical model of two-level atoms arranged in a diamond lattice that we consider may exhibit other physical phenomena in addition to the eigenmode localization near band edges. These phenomena may be due, for example, to the collective interaction between atoms (sub- \cite{guerin16,weiss18,moreira19} and superradiance \cite{araujo16,cottier18}) or to the specific structure of their spatial arrangement (potential topological phenomena \cite{ryu09,takahashi17}). Without having any relation to quasimode localization, these phenomena may cause some particular features of $P(\ln g; \omega, L)$ and exhibit some $L$-dependence. Some of these features may disappear in the limit of $k_0 L \to \infty$ but it is impossible to claim such a disappearance from our calculations performed for finite $k_0 L = 30$--60, which are likely to be insufficient to clearly observe the behavior expected in the limit of $k_0 L \to \infty$. For example, we see from Fig.\ \ref{fig_distr} that $P(\ln g; \omega, L)$ exhibits a pronounced peak at $\ln g \gtrsim 5$. \red{The peak} shifts to larger $g$ and reduces in magnitude as $L$ increases. This peak corresponds to superradiant states with short lifetimes which always exist in a finite-size system but whose statistical weight decreases with $L$. It is likely that the peak would vanish in the limit of $L \to \infty$ which is, however, inaccessible to our calculations.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_distributions.pdf}\\
\vspace{-7mm}
\caption{\label{fig_distr}
Probability density of the logarithm of the Thouless conductance $g$ at the critical point of the localization transition for different sizes of the disordered crystal: $k_0 L = 30$ (black), 40 (red), 50 (green), 60 (blue). The numbers of random realizations of the matrix ${\hat G}$ used for different sizes are \red{2200}, \red{900}, \red{461} and \red{180}, respectively. All eigenvalues within a frequency interval of width $0.01 \Gamma_0$ around $\omega - \omega_0 = -0.44 \Gamma_0$ are used to estimate $P(\ln g; \omega, L)$. Probability densities corresponding to different sizes coincide for small $g$; the grey shaded area below $P(\ln g; \omega, L)$ illustrates the notion of the $q$-th percentile $\ln g_q$ for the fifth percentile $q = 0.05$. Inset: the distance ${\cal D}(\omega)$ between probability densities corresponding to $k_0 L = 30$ and 60 attains a minimum at the critical point $(\omega-\omega_0)/\Gamma_0 \simeq -0.44$.
\red{The step of frequency discretization is $0.01 \Gamma_0$ for this figure.}}
\end{figure}
We will use the small-$g$ part of $P(\ln g; \omega, L)$ that becomes $L$-independent at $\omega \simeq \omega_0 - 0.44 \Gamma_0$ (see Fig.\ \ref{fig_distr}) to quantify the localization transition. The finite-size scaling analysis of $P(\ln g; \omega, L)$ can be conveniently performed by analyzing its percentiles $\ln g_q$ \cite{slevin03}. The $q$-th percentile $\ln g_q$ is defined by the relation:
\begin{eqnarray}
q = \int\limits_{-\infty}^{\ln g_q} P(\ln g; \omega, L) d(\ln g)
\label{perc}
\end{eqnarray}
illustrated in Fig.\ \ref{fig_distr} for $q = 0.05$ (fifth percentile). The independence of the small-$g$ part of $P(\ln g; \omega, L)$ from $L$ implies that $\ln g_q$ should be $L$-independent for small $q$ as well. Visual inspection of Fig.\ \ref{fig_distr} suggests that $q = 0.05$ is roughly the maximal value of $q$ for which the $L$-independence of $P(\ln g; \omega, L)$ can be assumed. For larger $q$, the dashed vertical line in Fig.\ \ref{fig_distr} would shift to the right and enter the range of $\ln g$ in which the $P(\ln g; \omega, L)$ corresponding to different $L$ are clearly different. The grey shaded area $q$ to the left of the dashed line would then be ill-defined.
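In practice, the percentile of Eq.~(\ref{perc}) is simply a quantile of the sample of $\ln g$ values collected in a given frequency window, as in the minimal sketch below (Python; the bootstrap used here for the error bar is our illustrative choice, the text does not specify how the error bars of $\ln g_q$ are obtained).
\begin{verbatim}
# Sketch: q-th percentile of ln g with a bootstrap error bar.
import numpy as np

def ln_g_percentile(lng, q=0.05, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    est = np.quantile(lng, q)
    boot = [np.quantile(rng.choice(lng, lng.size), q)
            for _ in range(n_boot)]
    return est, np.std(boot)
\end{verbatim}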
\begin{figure*}[t]
\includegraphics[width=0.9\textwidth]{fig_scaling.pdf}
\vspace{-5mm}
\caption{\label{fig_scaling}
(a) Fifth percentile $\ln g_{q = 0.05}$ of the Thouless conductance as a function of frequency $\omega$ for four different sizes $k_0 L$ of the disordered photonic crystal. Very large error bars in the range $(\omega-\omega_0)/\Gamma_0 \in (-0.58, -0.54)$ are not shown. Vertical dashed lines show the band edges. Panels (b) and (c) zoom on the spectral ranges in which $\ln g_{q = 0.05}$ drops near the lower and upper band edges, respectively. (d) Finite-size scaling analysis of the localization transition taking place at $\omega = \omega_c \simeq \omega_0 - 0.44\Gamma_0$ where curves corresponding to different crystal sizes cross at a single point $\{ (\omega_c-\omega_0)/\Gamma_0, \ln g_q^{(c)} \}$. Solid lines represent a joint polynomial fit of Eq.\ (\ref{fit3}) with $m = n = 3$ to the numerical data, dashed lines show their extrapolation beyond the range of data $\ln g_q \in [\ln g_q^{(c)} - \delta(\ln g_q), \ln g_q^{(c)} + \delta(\ln g_q)]$ used for the fit. $\delta(\ln g_q) = 2$ for this figure. The inset shows the best-fit values of the critical exponent $\nu$ for $q = 0.01$--$0.05$ with error bars corresponding to the standard deviation, the grey area showing the 95\% confidence interval, and the dashed horizontal line indicating the average of $\nu$ over $q$.}
\end{figure*}
We have computed and analyzed \red{the} percentiles $\ln g_q$ for $q = 0.01$--0.05 and present the results for $q = 0.05$ in Fig.\ \ref{fig_scaling}. The results for smaller $q$ are similar but exhibit stronger fluctuations and larger error bars due to poorer statistics. As discussed above, crossings between $\ln g_q$ corresponding to different $L$ are potential signatures of localization transitions. Figure \ref{fig_scaling}(a) suggests that there are two pairs of such crossings, a pair near the lower edge of the band gap and another pair near the upper edge. Panels (b) and (c) zoom on the corresponding frequency ranges. Let us discuss the behavior with increasing frequency $\omega$. First, a transition to localized states can be identified around $(\omega - \omega_0)/\Gamma_0 \simeq -1.015$ where a common crossing of $\ln g_q$ corresponding to $k_0 L = 40$, 50 and 60 takes place. The line corresponding to $k_0 L = 30$ does not pass through this common crossing point, most probably because this sample size is insufficient to observe the expected large-sample behavior. $\ln g_q$ remains a decreasing function of $L$ for $(\omega - \omega_0)/\Gamma_0 \gtrsim -1.015$ and up to $(\omega - \omega_0)/\Gamma_0 \simeq -0.97$. This is consistent with the appearance of states localized in the bulk of the disordered crystal at frequencies near a band edge (see Figs.\ \ref{fig_ipr_perfect} and \ref{fig_ipr_cm}). The states with frequencies in the middle of the band gap, $-0.97 \lesssim (\omega - \omega_0)/\Gamma_0 \lesssim -0.57$ in Fig.\ \ref{fig_scaling}(a), appear as relatively localized according to their IPR in Figs.\ \ref{fig_ipr_perfect} and \ref{fig_ipr_cm} but show a scaling behavior that identifies them as extended (i.e., $\ln g_q$ grows with $L$). This is consistent with their surface nature: indeed, surface states are restricted to the boundary of the sample and hence the number of atoms on which they have significant amplitudes grows as $L^2$ instead of $L^3$ for extended states in the bulk. Thus, they have larger IPR as compared to the extended states in the bulk, but this IPR still decreases with $L$ (as IPR $\propto 1/L^2$ instead of $1/L^3$). This decrease is reflected in the growth of $\ln g_q$ shown in Fig.\ \ref{fig_scaling}(a). A second band of localized states arises near the upper edge of the band gap, for $-0.57 \lesssim (\omega - \omega_0)/\Gamma_0 \lesssim -0.44$.
Our results for $\ln g_q(\omega, L)$ around $(\omega - \omega_0)/\Gamma_0 \simeq -0.44$ are smooth and have sufficiently small error bars to allow for a quantitative analysis of the transition from localized to extended states. We apply the finite-size scaling procedure to analyze small-$q$ percentiles of $g$ in the framework of the single-parameter scaling hypothesis \cite{slevin03}. The latter postulates that in the vicinity of the localization transition point, $\ln g_q$ is a function of a single parameter $L/\xi(\omega)$, where $|\xi(\omega)|$ is the localization length on the one side from the mobility edge $\omega_c$ and the correlation length on the other side: $\ln g_q(\omega, L) = F_q[L/\xi(\omega)]$. Assuming that the divergence of $\xi(\omega)$ at the transition is power-law, we represent $\xi(\omega)$ as
\begin{eqnarray}
\xi(\omega) = \left[ \sum\limits_{j=1}^{m} A_j w^j \right]^{-\nu}
\label{fit0}
\end{eqnarray}
near $w = (\omega - \omega_c)/\omega_c = 0$. Here $A_j$ are constants and $\nu$ is the critical (localization length) exponent.
We thus can write
\begin{eqnarray}
\ln g_q(\omega, L) = F_q[L/\xi(\omega)] = {\cal F}_q[\psi(\omega, L)]
\label{fit1}
\end{eqnarray}
with a scaling variable
\begin{eqnarray}
\psi(\omega, L) = \left[ \frac{L}{\xi(\omega)} \right]^{1/\nu} = L^{1/\nu} \sum\limits_{j=1}^{m} A_j w^j.
\label{fit2}
\end{eqnarray}
Finally, the scaling function ${\cal F}_q(\psi)$ is expanded in Taylor series:
\begin{eqnarray}
{\cal F}_q(\psi) = \ln g_q^{(c)} + \sum\limits_{j=1}^{n} B_j \psi^j,
\label{fit3}
\end{eqnarray}
where $\ln g_q^{(c)}$ is the critical value of $\ln g_q$ independent of $L$.
Fits of Eq.\ (\ref{fit3}) to the numerical data are performed with $\omega_c$, $\ln g_q^{(c)}$, $\nu$, $A_j$ ($j = 1, \ldots, m$), and $B_j$ ($j = 1, \ldots, n$) as free fit parameters. The orders $m$ and $n$ of the expansions (\ref{fit2}) and (\ref{fit3}) are chosen large enough to ensure that the $\chi^2$ statistic
\begin{eqnarray}
\chi^2 = \frac{1}{N_{\mathrm{data}}} \sum\limits_{j=1}^{N_{\mathrm{data}}}
\frac{\{ {\cal F}_q[\psi(\omega_j, L)] - \ln g_q^{(j)} \}^2}{\sigma_j^2}
\label{chi2}
\end{eqnarray}
is of the order 1. Here $N_{\mathrm{data}}$ is the number of data points $\{ \omega_j, \ln g_q^{(j)} \}$ and $\sigma_j$ are statistical errors of $\ln g_q^{(j)}$ shown by error bars in Fig.\ \ref{fig_scaling}. We only fit the numerical data in the range $\ln g_q \in [\ln g_q^{(c)} - \delta(\ln g_q), \ln g_q^{(c)} + \delta(\ln g_q)]$ around the critical value $\ln g_q^{(c)}$ estimated in advance by looking for the minimum of the sum of squares of differences between points corresponding to different $L$.
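A minimal sketch of such a joint fit is given below (Python with SciPy; for brevity the expansion orders are truncated to $m = n = 1$ instead of the $m = n = 3$ used in the text, and the pooled data arrays are placeholders).
\begin{verbatim}
# Sketch of the joint single-parameter-scaling fit,
# truncated to first order in the expansions.
import numpy as np
from scipy.optimize import curve_fit

def scaling_model(X, omega_c, lng_c, nu, A1, B1):
    omega, L = X                     # pooled over all crystal sizes
    w = (omega - omega_c) / omega_c  # reduced frequency
    psi = L**(1.0 / nu) * A1 * w     # scaling variable
    return lng_c + B1 * psi          # scaling function

# omega_d, L_d, lng_d, sig_d = ...   # pooled data points (not shown)
# popt, pcov = curve_fit(scaling_model, (omega_d, L_d), lng_d,
#                        sigma=sig_d, p0=[-0.44, 1.0, 1.0, 1.0, 1.0],
#                        absolute_sigma=True)
# omega_c, nu = popt[0], popt[2]
\end{verbatim}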
A joint fit to the numerical data corresponding to four different values of $L$ \red{and $q = 0.05$} is shown in Fig.\ \ref{fig_scaling}(d). It yields $(\omega_c - \omega_0)/\Gamma_0 = \red{-0.4401 \pm 0.0003}$ and $\nu = \red{0.94 \pm 0.02}$ as the best fit parameters. We repeated the fits for other values of $q$ in the range from 0.01 to 0.05 with the same frequency resolution $0.005 \Gamma_0$ as in Fig.\ \ref{fig_scaling}(d) [see the inset of Fig.\ \ref{fig_scaling}(d) for the best-fit $\nu$] and with a twice finer resolution
\red{and $\delta(\ln g_q) = 1$ instead of $\delta(\ln g_q) = 2$ in Fig.\ \ref{fig_scaling}(d). In addition, we varied the orders $m$ and $n$ of the series expansions (\ref{fit2}) and (\ref{fit3}) from 1 to 3 and introduced an additional, irrelevant scaling variable \cite{slevin14}.} All fits yield consistent values of $(\omega_c - \omega_0)/\Gamma_0$ in the range \red{[$-0.441$, $-0.436$]}. The best-\red{fit} values of the critical exponent are more scattered but remain in the range $\nu = 0.8$--1.1, with large uncertainties up to 20\% for the narrower data range $\delta(\ln g_q) = 1$.
\section{Discussion}
\label{sec:disc}
Whereas the position of the mobility edge found from the finite-size scaling analysis agrees well with the estimation following from the analysis of $P(\ln g; \omega, L)$ (see Fig.\ \ref{fig_distr}), the value of the critical exponent $\nu$ turns out to be well below $\nu_{\text{AM}} \simeq 1.57$ found numerically for the Anderson model (AM) in the 3D orthogonal symmetry class and believed to be universal for disorder-induced localization transitions in 3D systems in the absence of any particular symmetry breaking mechanisms \cite{slevin14}. Cold-atom experiments mimicking the so-called quasiperiodic kicked rotor model indeed yielded values of $\nu$ compatible with $\nu_{\text{AM}}$ \cite{chabe08}, but $\nu \lesssim 1$ significantly different from $\nu_{\text{AM}}$ were reported in low-temperature electron transport experiments in doped semiconductors \cite{thomas85,itoh04}. Recently, values of $\nu \lesssim 1$ have been also found in numerical simulations and attributed to the differences between the physics of real materials and that of the paradigmatic Anderson model and, in particular, to the hybridization of conduction and impurity bands \cite{carnio19}. In our optical problem, the impurity band (i.e., the modes appearing in the band gap due to disorder $W \ne 0$) is not clearly separated from the band of propagating modes (i.e., the modes in the bands of the perfect crystal) either (see Fig.\ \ref{fig_dos}). This may be a reason for the value of the critical exponent $\nu$ different from $\nu_{\text{AM}}$. Other possible reasons may include a strong anisotropy of optical properties of a photonic crystal near a band edge due to the fact that the first modes that become allowed upon crossing a band edge propagate only in certain directions, and, of course, the vector nature of light of which the full impact on Anderson localization still remains to be understood.
To determine the precise value of $\nu$ and to obtain a better estimate of its uncertainty, more accurate calculations are required. Unfortunately, such calculations are difficult to perform using our approach. Indeed, the approach is based on the diagonalization of large $3N \times 3N$ non-Hermitian matrices ${\hat G}$ and has the advantage of yielding the whole spectrum of a single realization of an open disordered system at once. The downsides are that (i) the approach does not allow for focusing on a particular frequency range at a lower computational cost and (ii) studying large systems ($N \gtrsim 10^4$) is computationally expensive. Because the localization transition takes place in a narrow frequency range, only a small fraction of the eigenvalues obtained by the numerical diagonalization of ${\hat G}$ is actually used for the estimation of $\nu$.
Indeed, in Fig.\ \ref{fig_scaling}(d) we have chosen to analyze the behavior of $\ln g_q$ within an interval $\ln g_q^{(c)} \pm 2$, which restricts the number of eigenvalues of ${\hat G}$ used in the calculations of $\omega_c$ and $\nu$ to less than 1\% of the total number of eigenvalues. Narrowing the interval of considered $\ln g_q$ only decreases the fraction of useful eigenvalues, whereas expanding this interval and using more eigenvalues would amount to leaving the critical regime and hence is not desirable. Thus, significantly increasing the statistical accuracy of the calculations requires large amounts of computation. Although this drawback of our approach is general and complicates the analysis of fully random ensembles of atoms as well \cite{skip16prb,skip18prl}, its impact is amplified here by the particular narrowness of the frequency range in which the localization transition takes place and by the low DOS in this range. Indeed, for scalar waves in a random ensemble of point scatterers studied in Ref.\ \onlinecite{skip16prb}, $\ln g_{q=0.05}$ grows from $\ln g_{q=0.05}^{(c)} - 1$ to $\ln g_{q=0.05}^{(c)} + 1$ in a frequency range $\delta\omega/\Gamma_0 \simeq 0.08$, whereas in the photonic crystal studied here the same growth takes place within $\delta\omega/\Gamma_0 \simeq 0.02$ [see Fig.\ \ref{fig_scaling}(d)]. In addition, the DOS of the fully random system has no particular features in the transition region, whereas in the photonic crystal the localization transition takes place near a band edge where the DOS is quite low [see Fig.\ \ref{fig_dos}(b)]. These factors limit the statistical accuracy of our numerical data and make the high-precision estimation of $\nu$ a heavy computational task.
The frequency range in which the quasimodes are localized can be broadened, and the DOS in this range raised, by increasing the strength of disorder $W$. However, the room for increasing $W$ without closing the gap and losing localization altogether is rather limited. As we show in Fig.\ \ref{fig_dos}, the gap closes already for $W = 0.2$, and this closing is accompanied by the disappearance of states localized due to disorder. We illustrate this in Fig.\ \ref{fig_scaling_random}(a), where the fifth percentile of the conductance is shown as a function of frequency for $W = 0.2$ and the same sizes $L$ of the disordered photonic crystal as in Fig.\ \ref{fig_scaling}. In contrast to the latter figure, no crossings between the lines $\ln g_q(\omega,L)$ occur in Fig.\ \ref{fig_scaling_random}(a), signaling the absence of localization transitions. Moreover, the values of $\ln g_q(\omega,L)$ in Fig.\ \ref{fig_scaling_random}(a) are rather high: $\ln g_q(\omega,L) > 2$ for all $\omega$. This means that at any frequency, fewer than 5\% of the $g$ values obtained for different atomic configurations are smaller than $\exp(2) \approx 7$, which is incompatible even with the ``weakest'' form of the Thouless localization criterion requiring that some typical value of $g$ ($\langle g \rangle$, $\exp(\langle \ln g \rangle)$, median $g$, etc.) be less than 1. Finally, another signature of the absence of localization is the monotonic growth of $\ln g_q$ with $L$ at all frequencies, indicating that most probably $\ln g_q \to \infty$ in the limit $L \to \infty$, as it should for spatially extended modes.
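As an aside, the percentile statistic used throughout this section is straightforward to compute. A minimal sketch (in Python, with an assumed data layout in which the quasimode frequencies and conductances of all disorder realizations are gathered in flat arrays) reads:
\begin{verbatim}
import numpy as np

def ln_g_percentile(omega, ln_g, bin_edges, q=0.05):
    """q-th percentile of ln g in each frequency bin."""
    idx = np.digitize(omega, bin_edges) - 1
    out = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        vals = ln_g[idx == b]
        if vals.size:
            out[b] = np.percentile(vals, 100 * q)
    return out

# Curves ln g_q(omega, L) like those discussed above follow by
# evaluating this for each system size L and looking for crossings.
\end{verbatim}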
\begin{figure}[t]
\includegraphics[width=\columnwidth]{fig_scaling_02.pdf}\\
\vspace*{-5mm}
\includegraphics[width=\columnwidth]{fig_scaling_random.pdf}
\vspace{-10mm}
\caption{\label{fig_scaling_random}
Fifth percentile $\ln g_{q = 0.05}$ of the Thouless conductance as a function of frequency $\omega$ for different diameters $L$ of (a) a disordered crystal with disorder strength $W = 0.2$ and (b) a fully disordered spherical ensemble of resonant atoms. The average number density of atoms is the same as in the photonic crystal analyzed in Fig.\ \ref{fig_scaling}. Vertical dashed lines show the band edges of the ideal crystal. The absence of crossings between curves corresponding to different $L$ confirms the absence of localization transitions in these systems.}
\end{figure}
Further increase of the disorder strength $W$ beyond $W = 0.2$ does not modify the situation qualitatively, as the behavior of the system approaches that of a fully random ensemble of atoms studied previously \cite{skip14prl}. The fully random limit is illustrated in Fig.\ \ref{fig_scaling_random}(b), which exhibits the same characteristic features as Fig.\ \ref{fig_scaling_random}(a) (absence of crossings between different curves, large values of $\ln g_q$, and its monotonic growth with $L$) and hence confirms the previously discovered absence of localization of light in the fully random system \cite{skip14prl}.
The presence of localization only at weak disorder highlights the important differences between localization phenomena in disordered crystals and in fully random media. As has been widely discussed in the literature, starting from the pioneering works of Sajeev John \cite{john87,john91,john93}, localization in a photonic crystal takes place due to an \textit{interplay} of order and disorder, in contrast to localization in a fully random medium, which is due to disorder only. Whereas localized states may appear in a 3D random medium only when the strength of disorder exceeds some critical value, even weak disorder introduces spatially localized modes in the band gap of a disordered photonic crystal, and the notion of critical disorder does not exist. However, the possibility of reaching localization at arbitrarily weak disorder is counterbalanced by the narrowness of the frequency ranges inside the band gap in which the density of states is large enough to allow for the observation of localization of light in an experiment or a numerical simulation. Increasing disorder widens the relevant frequency ranges but also tends to close the band gap and hence to suppress the `order' part of the interplay between order and disorder. A compromise is reached at some intermediate disorder strength that is sufficient to significantly affect wave propagation at frequencies near band edges but not large enough to close the band gap. For the atomic crystal considered in this work, such a compromise seems to be reached around $W = 0.1$, for which the band gap remains open (see Fig.\ \ref{fig_dos}) while localized states become visible (see Fig.\ \ref{fig_ipr_perfect}).
The disappearance of localized modes with increasing disorder strength $W$ provides additional insight into the reasons behind the absence of Anderson localization of light in a completely random 3D ensemble of point scatterers. Indeed, recent work \cite{skip19prb,sgrignuoli20prb} has confirmed the initial suggestion \cite{skip14prl} that the resonant dipole-dipole coupling between scatterers impedes the formation of spatially localized optical modes in 3D. This explanation seems to be supported by the fact that localized modes do arise in a photonic crystal, where the distance $\Delta r$ between any two scatterers (atoms) is always larger than a certain minimal distance ($a\sqrt{3}/4$ for a diamond crystal with a lattice constant $a$, considered in this work) and hence the strength of the dipole-dipole coupling between scatterers, which scales as $1/\Delta r^3$, is bounded. Increasing $W$ enhances the chance for two atoms to come closer, the minimum possible distance between atoms being $(\sqrt{3}/4 - 2W)a$ in our model. The probability for two neighboring atoms to get infinitely close because of disorder becomes different from zero for $W \geq \sqrt{3}/8 \simeq 0.22$. This estimate of the disorder strength at which dipole-dipole interactions should become particularly strong is reasonably close to the approximate value $W \simeq 0.2$ for which localized modes disappear [see Fig.\ \ref{fig_scaling_random}(a)] and the band gap closes (see Fig.\ \ref{fig_dos}). The closeness of the two values suggests a relation between the near-field dipole-dipole interactions, the photonic band gaps, and the spatial localization of optical modes, although the exact nature of this relation still remains to be established. Although our analysis supports the arguments based on Eq.\ (\ref{loccrit}) suggesting that the underlying crystalline structure facilitates the localization phenomenon due to the suppression of DOS near band edges, it also highlights the importance of yet another feature of a crystal---the existence of a minimal distance between two scattering units (atoms or, more generally, ``particles''). At the same time, the impact of the crystalline structure of the atomic lattice on the spatial localization of optical modes does not reduce to keeping neighboring atoms far apart from each other. One consequence of the crystalline structure is the fact that in our photonic crystal the localized modes are red-shifted with respect to the atomic resonance frequency ($\omega < \omega_0$), in contrast to the blue-shifted localized modes that arise in a fully disordered ensemble of atoms in a strong magnetic field \cite{skip15prl,skip18prl}.
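The geometric estimate above is elementary and can be checked in a few lines (a sketch in Python; units are assumed such that the lattice constant is $a = 1$):
\begin{verbatim}
import numpy as np

d0 = np.sqrt(3) / 4        # nearest-neighbor distance, diamond lattice
for W in (0.1, 0.2, np.sqrt(3) / 8):
    d_min = max(d0 - 2 * W, 0.0)   # each atom displaced by at most W*a
    print(f"W = {W:.3f}: minimal possible distance = {d_min:.3f} a")
# The minimal distance vanishes at W = sqrt(3)/8 ~ 0.217, i.e., close
# to the disorder strength W ~ 0.2 at which the band gap closes.
\end{verbatim}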
A final remark concerns the spatially extended quasimodes in the middle of the band gap, corresponding to large $\ln g_{q=0.05} \gtrsim 2$ between $(\omega-\omega_0)/\Gamma_0 \simeq -0.97$ and $-0.57$ in Fig.\ \ref{fig_scaling}. As we have illustrated already in Fig.\ \ref{fig_ipr_cm}, most of these quasimodes are bound to the surface of the crystal. Their statistical weight is thus expected to decrease with $L$ roughly as the surface-to-volume ratio $1/L$, which tends to zero when $L \to \infty$ but remains significant in our calculations restricted to rather small $L$. Nevertheless, we clearly see from Figs.\ \ref{fig_scaling}(a--c) that the frequency range in the middle of the bandgap where $\ln g_{q=0.05}$ takes large values $\ln g_{q=0.05} \gtrsim 2$ and remains a globally growing function of $L$, shrinks as $L$ increases. No transition point where curves $\ln g_q$ corresponding to different $L$ cross can be identified around $(\omega-\omega_0)/\Gamma_0 \simeq -0.97$ or $-0.57$, which is especially clear in Fig.\ \ref{fig_scaling}(b) whereas less obvious in Fig.\ \ref{fig_scaling}(c) due to much stronger fluctuations of the numerical data. We note that the above picture of surface modes playing less and less important role as $L$ increases is certainly only a rough approximation to the complete explanation of the evolution of the spectrum in the middle of the band gap. Nontrivial features that are already seen from our results and call for explanation include the nonmonotonous behavior of $\ln g_q$ with $L$ near the high-frequency end of the interval $-0.97 \lesssim (\omega-\omega_0)/\Gamma_0 \lesssim -0.57$ [note the red line that crosses the green line around $(\omega-\omega_0)/\Gamma_0 \simeq -0.7$ in Fig.\ \ref{fig_scaling}(a)] and much stronger fluctuations around $(\omega-\omega_0)/\Gamma_0 \simeq -0.57$ than around $(\omega-\omega_0)/\Gamma_0 \simeq -0.97$ [compare Figs.\ \ref{fig_scaling}(c) and (b)]. Unfortunately, a study of these puzzling features is difficult to perform using our numerical method because it mobilizes significant computational power to obtain the full spectrum of the system of which only a very small fraction [i.e., a small number of eigenvalues $\Lambda_m$ of the matrix (\ref{green})] fall in the band gap where the density of states is low.
\section{Conclusions}
\label{sec:concl}
We performed a thorough theoretical study of the localization of light in a 3D disordered photonic crystal made of two-level atoms. The atoms are first arranged in a diamond lattice with a lattice constant $a$ and are then slightly displaced in random directions by random distances up to $Wa$. We show that spatially localized quasimodes appear near the edges of the band gap of the ideal crystal when the disorder strength is $W = 0.1$ or smaller, whereas $W = 0.2$ or larger leads to the closing of the band gap and the disappearance of localized states. The finite-size scaling analysis of the transition between extended and localized states near the high-frequency edge of the band gap suggests that the critical (localization-length) exponent of the transition is in the interval $\nu = 0.8$--1.1, which is different from $\nu_{\text{AM}} \simeq 1.57$ corresponding to the Anderson transition of the 3D orthogonal universality class, to which the investigated transition might be expected to belong because of the absence of any particular symmetry-breaking mechanisms and, in particular, the preserved time-reversal symmetry.
From the practical standpoint, arranging atoms in a diamond lattice may be a realistic alternative to subjecting them to strong magnetic fields in order to reach localization of light in cold-atom systems. Indeed, atomic lattices can be readily designed by loading atoms into optical potentials created by interfering laser beams with carefully adjusted phases and propagation directions \cite{greiner02,anderlini07}. Some degree of disorder may arise in such lattices due to experimental imperfections; ways to create additional, controlled disorder have been extensively explored in recent years \cite{sanchez10}. The calculations presented in this work provide quantitative estimates of the disorder strengths and frequency ranges for which localized quasimodes should appear in lattices of cold atoms featuring a $J_g = 0 \to J_e = 1$ transition. Examples of appropriate chemical elements, for vapors of which laser-cooling technologies are readily available, include strontium (Sr) and ytterbium (Yb). Multiple scattering of light in large ensembles of Sr atoms has already been reported \cite{bidel02}, and high atomic number densities have been reached in experiments with Yb \cite{takasu03}. In addition, some of our conclusions may hold for atomic species with a more complicated level structure, which may be easier to manipulate and control in an experiment (e.g., rubidium). This opens a way towards the experimental observation of the phenomena reported in this work.
\vspace*{-4mm}
\begin{acknowledgments}
\vspace*{-3mm}
All the computations presented in this paper were performed using the Froggy platform of the CIMENT infrastructure (\href{https://ciment.ujf-grenoble.fr}{{\tt https://ciment.ujf-grenoble.fr}}), which is supported by the Rhone-Alpes region (grant CPER07\verb!_!13 CIRA) and the Equip@Meso project (reference ANR-10-EQPX-29-01) of the program {\it Investissements d'Avenir} supervised by the {\it Agence Nationale de la Recherche}.
\end{acknowledgments}
\vspace*{-5mm}
\section{Introduction}
Condensed matter offers a fascinating test bed to explore different concepts in non-relativistic and relativistic quantum field theories~\cite{Coleman:2003ku}, with some prominent examples being massless Dirac quasiparticles in graphene~\cite{Novoselov:2005es}, Majorana fermions in superconductors~\cite{Fu:2008gu, Mourik:2012je, Rokhinson:2012ep}, and anyons in two-dimensional electron gases~\cite{Bartolomei:2020gs}. In ultrathin ferromagnets, chiral interactions of the Dzyaloshinskii-Moriya form~\cite{Dzyaloshinsky:1958vq, Moriya:1960go, Moriya:1960kc, Fert:1980hr, Crepieux:1998ux, Bogdanov:2001hr} allow for the existence of skyrmions~\cite{Bogdanov:1989vt, Bogdanov:1994bt}, which are topological soliton solutions to a nonlinear field theory bearing resemblance to a model for mesons and baryons proposed by Skyrme~\cite{Skyrme:1961vo, Skyrme:1962tr}. While these two-dimensional particles have been actively studied for their potential in information storage applications~\cite{Kiselev:2011cm, Sampaio:2013kn}, their original ties to nucleons have been revisited through three-dimensional extensions called hopfions~\cite{Sutcliffe:2017da}, which also provide an intriguing connection to Kelvin's proposal for a vortex theory of atoms~\cite{Thomson:1867}.
Pairs of skyrmions and antiskyrmions, their antiparticle counterpart, can be generated in a variety of ways, such as nucleation under local heating~\cite{Koshibae:2014fg}, homogeneous spin currents~\cite{Stier:2017ic, EverschorSitte:2018bn}, and surface acoustic waves~\cite{Yokouchi:2020cl}. Pairs also appear in ultrathin chiral ferromagnets with frustrated exchange interactions when the magnetization dynamics is driven by spin-orbit torques (SOTs)~\cite{Ritzmann:2018cc}. While both skyrmions and antiskyrmions are metastable states in such systems~\cite{Leonov:2015iz, Lin:2016hh, Rozsa:2017ii}, their motion can be qualitatively different under spin-orbit torques~\cite{Ritzmann:2018cc}. In particular, an antiskyrmion driven beyond its Walker limit can shed skyrmion-antiskyrmion pairs, much like the vortex-antivortex pairs produced during vortex core reversal~\cite{VanWaeyenberge:2006io}, which are then driven apart by the SOTs. Because such nonlinear processes are observed to involve a variety of creation and annihilation events involving particles and antiparticles, the intriguing analogies with high-energy physics compel us to explore whether this system could offer any insight, albeit tangential, into the more general question of matter-antimatter asymmetry in the universe. After all, the Sakharov conditions for baryogenesis~\cite{Sakharov:1967}, namely, baryon number violation, charge conjugation and combined charge conjugation-parity violation, and out-of-equilibrium interactions, appear to be naturally fulfilled in the aforementioned case: no conservation laws exist for the number of skyrmions and antiskyrmions, the Dzyaloshinskii-Moriya interaction (DMI) breaks chiral symmetry and lifts the degeneracy between skyrmion and antiskyrmions, and dissipative torques (spin-orbit and Gilbert damping) representing nonequilibrium processes play a crucial role in pair generation.
In this paper, we examine theoretically the microscopic processes leading to an imbalance in the number of skyrmions and antiskyrmions produced as a result of SOT-driven antiskyrmion dynamics. The remainder of this paper is organized as follows. In Sec. II, we describe the atomistic model used and the dynamics simulated. Section III discusses the main scattering processes that occur between an antiskyrmion and the generated skyrmion-antiskyrmion pair. Detailed phase diagrams of the generation processes are presented in Sec. IV, where the role of the SOTs and material parameters such as the strength of the Dzyaloshinskii-Moriya interaction and polarization angle are discussed. In Sec. V, we present the minimum-energy paths for two scattering processes. Finally, some discussion and concluding remarks are given in Sec. VI.
\section{Model and method}
The system studied is illustrated in Fig.~\ref{fig:geometry}(a).
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure01}
\caption{(a) Film geometry illustrating the Pd/Fe bilayer on an Ir(111) substrate, with schematic illustrations of a skyrmion $s$ and antiskyrmion $\bar{s}$. $\mathbf{B}$ is the applied field and $\theta_p$ is the angle associated with the spin polarization vector $\mathbf{p}$. (b) Phase diagram of antiskyrmion dynamics under fieldlike (FL) and dampinglike (DL) spin-orbit torques~\cite{Ritzmann:2018cc}.
}
\label{fig:geometry}
\end{figure}
Following Refs.~\onlinecite{Romming:2013iq, Dupe:2014fc, Ritzmann:2018cc}, we consider a ferromagnetic PdFe bilayer on an Ir(111) substrate, which hosts the skyrmions $s$ and antiskyrmions $\bar{s}$. We assume an electric current flows through the substrate in the film plane, resulting in a spin current, generated by the spin Hall effect, that flows in the $z$ direction and is polarized along $\mathbf{p}$, characterized by the angle $\theta_p$ measured from the $x$ axis. A magnetic field $\mathbf{B}$ is applied along the $z$ direction, which defines the uniform background state of the PdFe system. We model the magnetic properties of the PdFe film with a hexagonal lattice of magnetic moments $\mathbf{m}_i$, one atomic layer in thickness, whose dynamics is solved by time integration of the Landau-Lifshitz equation with Gilbert damping and spin-orbit torques,
\begin{eqnarray}
\frac{d \mathbf{m} }{dt} = -\frac{1}{\hbar} \mathbf{m} \times \mathbf{B_{\mathrm{eff}}} + \alpha \mathbf{m} \times \frac{d \mathbf{m} }{dt} + \nonumber \\ \beta_\mathrm{FL} \mathbf{m} \times \mathbf{p} + \beta_\mathrm{DL} \mathbf{m} \times \left( \mathbf{m} \times \mathbf{p} \right),
\label{eq:LLG}
\end{eqnarray}
where $\alpha = 0.3$ is the damping constant and $\hbar \beta_\mathrm{FL}$ and $\hbar \beta_\mathrm{DL}$ characterize the strength of the fieldlike (FL) and dampinglike (DL) contributions of the spin-orbit torques, respectively, in meV. The effective field, $\mathbf{B}_i^{\mathrm{eff}}=-\partial \mathcal{H}/\partial \mathbf{m}_i$, is expressed here in units of energy and is derived from the Hamiltonian $\mathcal{H}$,
\begin{eqnarray}
\mathcal{H} = -\sum_{\langle ij \rangle} J_{ij} \mathbf{m}_i \cdot \mathbf{m}_j - \sum_{\langle ij \rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{m}_i \times \mathbf{m}_j \right) \nonumber \\ - \sum_{i} K m_{i,z}^2 - \sum_{i} \mathbf{B} \cdot \mu_\mathrm{s}\mathbf{m}_i.
\label{eq:Hamiltonian}
\end{eqnarray}
The first term is the Heisenberg exchange interaction, which includes coupling up to ten nearest neighbors and involves frustrated exchange: $J_1 = 14.73$, $J_2=-1.95$, $J_3=-2.88$, $J_4=0.32$, $J_5=0.69$, $J_6=0.01$, $J_7=0.01$, $J_8=0.13$, $J_9=-0.14$, and $J_{10}=-0.28$, where all $J_{ij}$ are given in meV. The second term is the DMI between nearest neighbors, with $\mathbf{D}_{ij}$ oriented along $\hat{\mathbf{r}}_{ij} \times \hat{\mathbf{z}}$ and $ \| \mathbf{D}_{ij} \| = 1.0$ meV. The third term describes a uniaxial anisotropy along the $z$ axis with $K = 0.7$ meV. The fourth term represents the Zeeman interaction with the applied field $\mathbf{B}$, where we take $\mu_s = 2.7\mu_\mathrm{B}$ for iron. The material parameters are obtained from first-principles calculations of the layered system in Fig.~\ref{fig:geometry}(a)~\cite{Dupe:2014fc}. We note that the applied field of 20 T is only slightly greater than the critical field $B_c$, $B=1.06 B_c$, below which the magnetic ground state comprises a skyrmion lattice phase. Under these conditions, \emph{both} isolated skyrmions and antiskyrmions are metastable states due to the frustrated exchange interactions, with skyrmions being energetically favored by the DMI.
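As an illustration of how Eq.~(\ref{eq:Hamiltonian}) enters the dynamics, the effective field $\mathbf{B}_i^{\mathrm{eff}}=-\partial \mathcal{H}/\partial \mathbf{m}_i$ can be sketched as follows (Python; the data layout and variable names are assumed for illustration and do not correspond to the actual simulation code):
\begin{verbatim}
import numpy as np

def effective_field(i, m, neighbors, J, D, K, mu_s, B):
    """B_eff (in meV) acting on moment i; m has shape (N, 3).

    neighbors[i] lists the neighbors j of moment i; J[i] holds the
    matching exchange constants and D[i] the DMI vectors D_ij,
    oriented for the ordered pair (i, j)."""
    field = mu_s * B + np.array([0.0, 0.0, 2.0 * K * m[i, 2]])
    for j, Jij, Dij in zip(neighbors[i], J[i], D[i]):
        field += Jij * m[j]           # Heisenberg exchange
        field += np.cross(m[j], Dij)  # DMI: d/dm_i [D.(m_i x m_j)]
    return field
\end{verbatim}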
Figure~\ref{fig:geometry}(b) represents the phase diagram, indicating different dynamical regimes under SOTs for a system in which the initial state comprises a single isolated antiskyrmion (the ``seed''). The linear, deflected, and trochoidal regimes denote the motion involving single-particle dynamics, while annihilation represents the region in which the seed loses its metastability. The focus here is on $s\bar{s}$ pair generation, which predominantly occurs under small fieldlike torques and large dampinglike torques. We simulated the dynamics in a variety of system sizes $L \times L$ with periodic boundary conditions, with $L$ ranging from 100 to 800 in order to mitigate finite-size effects that primarily involve collisions from generated particles re-entering the simulation area. The time integration of Eq.~(\ref{eq:LLG}) was performed using the Heun method with a time step of 1 fs.
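For concreteness, one Heun predictor-corrector step for Eq.~(\ref{eq:LLG}) can be sketched as follows (Python; the implicit Gilbert term has been brought to the explicit Landau-Lifshitz form, and the renormalization of $|\mathbf{m}|$ after each substep is a common stabilization choice rather than a statement about the actual implementation):
\begin{verbatim}
import numpy as np

def rhs(m, B_eff, p, alpha, beta_fl, beta_dl, hbar=1.0):
    """Explicit dm/dt for one unit moment m (3-vector)."""
    G = -B_eff / hbar + beta_fl * p + beta_dl * np.cross(m, p)
    mxG = np.cross(m, G)          # all torques in Eq. (1) are m x G
    return (mxG + alpha * np.cross(m, mxG)) / (1.0 + alpha**2)

def heun_step(m, dt, *args):
    # NB: in a full simulation B_eff must be recomputed for the
    # predictor configuration; it is kept fixed here for brevity.
    k1 = rhs(m, *args)
    m_pred = m + dt * k1
    m_pred /= np.linalg.norm(m_pred)
    k2 = rhs(m_pred, *args)
    m_new = m + 0.5 * dt * (k1 + k2)
    return m_new / np.linalg.norm(m_new)
\end{verbatim}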
\section{Scattering processes}
The propensity for the initial seed $\bar{s}$ to produce particles ($s$) and antiparticles ($\bar{s}$) is determined by the scattering processes that immediately follow the formation of the $s\bar{s}$ pair, which depend on the strengths of $\beta_\mathrm{FL}$ and $\beta_\mathrm{DL}$. Three key scattering processes are illustrated in Fig.~\ref{fig:pairprocess} for $\theta_p = 0$.
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure02}
\caption{Main scattering processes following pair generation from the seed $\bar{s}$ under SOT. (a) Maximal production, minimal asymmetry process $(N=2,\eta=0)$ leading to proliferation in which the generated $s\bar{s}$ pair splits and collision between the seed and generated $\bar{s}$ conserves skyrmion number. (b) $(N=2,\eta=0)$ process leading to premature saturation or stasis, where collision between the seed and generated $\bar{s}$ proceeds through a transient $Q=-2$ state ($2\bar{s}$) before decaying to an $\bar{s}\bar{s}$ bound pair, preventing further generation. (c) Minimal production, maximal asymmetry process ($N=1,\eta =1$) in which the generated $s\bar{s}$ pair splits and collision between the seed and generated $\bar{s}$ is inelastic, leading to annihilation of seed $\bar{s}$. Crosses denote the point of reference in the film plane and the color map indicates the charge density $q$ of a unit cell. Arrows are shown for moments for which $\sqrt{m_{i,x}^2+m_{i,y}^2} > 0.9$, and the open circles denote the approximate position of the core.}
\label{fig:pairprocess}
\end{figure}
The different processes illustrated typically occur for specific ranges of fieldlike and dampinglike parameters, as will be discussed later. We use a color map based on the local topological (skyrmion) charge density $q$, which is computed from three neighboring moments $\mathbf{m}_{i}, \mathbf{m}_{j}, \mathbf{m}_{k}$ as~\cite{Bottcher:2019hf}
\begin{equation}
q_{ijk} = -\frac{1}{2\pi} \tan^{-1}\left[ \frac{\mathbf{m}_{i} \cdot \left(\mathbf{m}_{j} \times \mathbf{m}_{k} \right)}{1+ \mathbf{m}_{i} \cdot \mathbf{m}_{j} + \mathbf{m}_{i}\cdot \mathbf{m}_{k} + \mathbf{m}_{j}\cdot \mathbf{m}_{k}} \right].
\end{equation}
This represents the contribution from half a unit cell. We use $Q$ to denote the total charge, which represents a sum over $q_{ijk}$, and we adopt the convention where $Q=1$ for $s$ and $Q =-1$ for $\bar{s}$. The processes are characterized by their potential for particle production, measured by $N = N_s + N_{\bar{s}}$, which denotes the total numbers of skyrmions ($N_s$) and antiskyrmions ($N_{\bar{s}}$) produced from the initial antiskyrmion, and by the asymmetry in this production, which is measured by the parameter $\eta = (N_s - N_{\bar{s}})/N$. We consider only processes for which $N>0$ (the seed $\bar{s}$ is not included in this count). In Fig.~\ref{fig:pairprocess}(a), a maximal production ($N=2$), minimal asymmetry ($\eta=0$) scattering process leading to the proliferation of particles is shown for $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL}) = (0.02,1.5)$ meV. An $s\bar{s}$ pair nucleates from the tail of the $\bar{s}$ seed as it undergoes trochoidal motion, which then splits and is followed by a number-conserving collision between the two $\bar{s}$ particles. The $s$ particle escapes the zone of nucleation, and the two $\bar{s}$ particles become new sources of $s\bar{s}$ pair generation. In this scenario, $s$ and $\bar{s}$ are produced in equal numbers, and the process continues indefinitely but can be slowed by annihilation processes, which become more probable as the density of particles increases. In Fig.~\ref{fig:pairprocess}(b), a similar $N=2,\eta = 0$ process is shown for $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL}) = (0.1,1.35)$ meV, but here, the scattering between the two $\bar{s}$ results in a transient higher-order $Q=-2$ antiskyrmion state ($2\bar{s}$), which subsequently decays into an $\bar{s}\bar{s}$ bound pair that executes a rotational motion about its center of mass. As a result, further pair generation is suppressed. Figure~\ref{fig:pairprocess}(c) illustrates a minimal production ($N = 1$), maximal asymmetry ($\eta = 1$) process at $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL}) = (0.13,1.1)$ meV, where the scattering between the seed and generated $\bar{s}$ results in a non-conserving process in which the seed $\bar{s}$ is annihilated; this takes place via the creation and annihilation of a meron-antimeron pair~\cite{Desplat:2019dn}. This scattering event leaves the generated $s$ to propagate away and the surviving $\bar{s}$ to restart the process.
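The topological bookkeeping used here ($q$, $Q$, and the split into positive and negative contributions) is simple to implement. A minimal sketch (in Python, assuming the hexagonal lattice has been triangulated so that each triangle $(i,j,k)$ contributes half a unit cell; the two-argument arctangent resolves the branch of $\tan^{-1}$ in the expression for $q_{ijk}$ above):
\begin{verbatim}
import numpy as np

def q_triangle(mi, mj, mk):
    num = np.dot(mi, np.cross(mj, mk))
    den = 1 + np.dot(mi, mj) + np.dot(mi, mk) + np.dot(mj, mk)
    return -np.arctan2(num, den) / (2 * np.pi)

def topological_charges(m, triangles):
    q = np.array([q_triangle(m[i], m[j], m[k]) for i, j, k in triangles])
    Q_s, Q_sbar = q[q > 0].sum(), q[q < 0].sum()
    return q.sum(), Q_s, Q_sbar  # total Q and both sign contributions
\end{verbatim}
Counting $N_s$ and $N_{\bar{s}}$ additionally requires clustering the charge density into individual cores, which we do not sketch here.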
Examples of the growth rates are given in Fig.~\ref{fig:genrate}, where $Q(t)$ is shown for the three cases presented in Fig.~\ref{fig:pairprocess}.
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure03}
\caption{Representative examples of different growth regimes of the total skyrmion charge $Q$ for three values of $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL})$. (a) Proliferation, (0.02, 1.5) meV. (b) Stasis or premature saturation, (0.13, 1.1) meV. (c) Linear growth, (0.1, 1.35) meV.}
\label{fig:genrate}
\end{figure}
The data are obtained from simulations of a $500 \times 500$ system over $0.1$ ns with $\theta_p = 0$. Above this timescale, propagating particles can reenter the simulation geometry through the periodic boundary conditions, which results in spurious collisions and annihilation events. $Q_s$ and $Q_{\bar{s}}$ are found by summing separately over the contributions from $q_{ijk}>0$ and $q_{ijk}<0$, respectively. Figure~\ref{fig:genrate}(a) illustrates the growth where the process in Fig.~\ref{fig:pairprocess}(a) dominates and a proliferation of particles can be seen. Unlike the single event in Fig.~\ref{fig:pairprocess}(a), the growth in Fig.~\ref{fig:genrate}(a) also comprises processes such as those described in Figs.~\ref{fig:pairprocess}(b) and \ref{fig:pairprocess}(c), which results in an overall asymmetry in the production and a finite topological charge that increases with time. When the seed immediately undergoes the scattering process in Fig.~\ref{fig:pairprocess}(b), the generation stops for all future times, and a stasis regime is found [Fig.~\ref{fig:genrate}(b)]. Such processes can also occur after a certain time interval following proliferation, which results in premature saturation. Cases in which the scattering process in Fig.~\ref{fig:pairprocess}(c) repeats periodically result in an approximately linear growth in the number of skyrmions, as shown in Fig.~\ref{fig:genrate}(c).
\section{Generation phase diagrams}
A ($\beta_\mathrm{FL},\beta_\mathrm{DL}$) phase diagram of the skyrmion production and asymmetry is presented in Fig.~\ref{fig:phasediag}(a).
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure04}
\caption{(a) ($\beta_\mathrm{FL},\beta_\mathrm{DL}$) phase diagram illustrating the total number of skyrmions and antiskyrmions produced over 0.1 ns, where $N$ is represented by the circle size on a logarithmic scale and the asymmetry parameter is shown on a linear color scale. (b) $N$ and (c) $\eta$ as a function of DL torques for the proliferation regime (for different FL torques). (d) $N$ as a function of DL torques for linear growth ($\eta = 1$).}
\label{fig:phasediag}
\end{figure}
As for Fig.~\ref{fig:genrate}, the data were obtained after simulating 0.1 ns on a $500 \times 500$ spin system with periodic boundary conditions and $\theta_p = 0$. The size of the circles represents $N$ on a logarithmic scale, while the color code represents $\eta$ on a linear scale. Three different regimes can be identified visually as the strength of $\beta_\mathrm{FL}$ is increased. For low values of $\beta_\mathrm{FL}$ (primarily $\hbar\beta_\mathrm{FL} \lesssim 0.07$ meV), we observe a regime in which proliferation dominates where large numbers of $s$ and $\bar{s}$ are generated, which is mainly driven by the process in Fig.~\ref{fig:pairprocess}(a). Both $N$ and $\eta$ increase with the dampinglike torques in this regime, as shown in Figs.~\ref{fig:phasediag}(b) and \ref{fig:phasediag}(c), respectively, which can be understood from the fact that $\beta_\mathrm{DL}$ represents a nonconservative torque that transfers spin angular momentum into the system. For intermediate values of $\beta_\mathrm{FL}$ (primarily $0.08 \lesssim \hbar\beta_\mathrm{FL} \lesssim 0.11$ meV), a linear growth regime is seen which is characterized by $\eta \simeq 1$ and moderate values of $N$. As for the proliferation regime, the rate of production in the linear regime increases with $\beta_\mathrm{DL}$ as shown in Fig.~\ref{fig:phasediag}(d). Finally, for large values of $\beta_\mathrm{FL}$ (primarily $\hbar\beta_\mathrm{FL} \gtrsim 0.13$ meV) and close to the boundaries of the generation phase, we observe predominantly a stasis regime where generation stops after the nucleation of a single $s\bar{s}$ pair and the formation of a bound $\bar{s}\bar{s}$ state, as shown in Fig.~\ref{fig:pairprocess}(b).
The roles of DMI and the spin polarization angle are shown in Fig.~\ref{fig:thetad}, where $(\theta_p,D_{ij})$ phase diagrams for $N$ and $\eta$ are presented for the three distinct dynamical regimes discussed above: proliferation [(0.02, 1.5) meV, Fig.~\ref{fig:thetad}(a)], linear growth [(0.1, 1.35) meV, Fig.~\ref{fig:thetad}(b)], and stasis [(0.13, 1.1) meV, Fig.~\ref{fig:thetad}(c)], where the numbers in parentheses indicate values of $(\hbar \beta_\mathrm{FL},\hbar \beta_\mathrm{DL})$.
\begin{figure}
\centering\includegraphics[width=8.5cm]{Figure05}
\caption{($\theta_p,D_{ij}$) phase diagram illustrating the total number of skyrmions and antiskyrmions produced over 0.1 ns, where $N$ is represented by the circle size on a logarithmic scale and $\eta$ is shown on a linear color scale for (a) $(\beta_\mathrm{FL},\beta_\mathrm{DL}) =$ (0.02, 1.5) meV, (b) (0.1, 1.35) meV, and (c) (0.13, 1.1) meV. (d) $\eta$ and $N$ as a function of $D_{ij}$ for the case in (a).}
\label{fig:thetad}
\end{figure}
A weak dependence on $\theta_p$ can be seen. This arises from the interplay between the SOT-driven dynamics of the antiskyrmion helicity, which possesses twofold rotational symmetry about the antiskyrmion core in its rest state, and the underlying hexagonal lattice structure, which introduces a weak lattice potential that arises because of the compact nature of the core~\cite{Ritzmann:2018cc}. Variations in the magnitude of $D_{ij}$, on the other hand, lead to greater changes in the qualitative behavior, where transitions between stasis, linear growth, and proliferation can be seen as $D_{ij}$ is increased for all three base cases considered. This behavior is exemplified in Fig.~\ref{fig:thetad}(d), where $N$ and $\eta$ are shown as a function of $D_{ij}$ for the cases shown in Fig.~\ref{fig:thetad}(a). These results also suggest that a finite threshold for $D_{ij}$ is required for pair generation to take place, a threshold that is also dependent on the strength of the SOT applied.
\section{Minimum-energy paths for merging and annihilation processes}
We can note that both stasis and proliferation states can be found at the phase boundaries. This results from the fact that the scattering processes in Figs.~\ref{fig:pairprocess}(b) and \ref{fig:pairprocess}(c) involve nearly identical energy barriers (in the absence of SOTs), where only slight differences in the relative helicities of the scattering $\bar{s}$ states determine the outcome. To see this, we look at minimum-energy paths (MEPs) on the multidimensional energy surface defined by the Hamiltonian in Eq.~(\ref{eq:Hamiltonian}) at $\beta_\mathrm{FL}=\beta_\mathrm{DL}=0$. We use the geodesic nudged elastic band method (GNEB)~\cite{Bessarab:2015method} to compute the MEPs, for which intermediate states of the system along the reaction coordinate are referred to as images.
\begin{figure*}[hbt]
\centering\includegraphics[width=17.5cm]{Figure06}
\caption{(a), (b) Minimum-energy paths for the merging of the $\bar{s}\bar{s}$ pair into (a) a $2\bar{s}$ state and (b) an $\bar{s}$ state. The image indices are given in the bottom left corner. (c), (d) The associated energy profiles along the (normalized) reaction coordinate, where (c) corresponds to the path that results in the $2\bar{s}$ state and (d) to the path that results in the $\bar{s}$ state. The total topological charge remains constant at $Q=-2$ in (c), while its variation with the reaction coordinate is shown in (d). The inset in (c) shows the saddle point configuration (image 7), where the dashed arrows indicate the reference axis along which the clockwise (CW) or counterclockwise (CCW) Bloch states are defined and through which the merging of $\bar{s}$ occurs. The inset in (d) represents an expanded view of the region around the energy barrier.}
\label{fig:meps}
\end{figure*}
First, the MEP for the merging into a higher-order $2\bar{s}$ state is shown in Fig.~\ref{fig:meps}(a), where the image index is shown in the bottom left corner. The corresponding energy profile along the reaction coordinate is shown in Fig.~\ref{fig:meps}(c). This path resembles the mechanism identified in Fig.~\ref{fig:pairprocess}(b), which, under SOTs, subsequently results in the formation of a bound $\bar{s}\bar{s}$ pair and suppresses generation. The initial state (A) in the GNEB method is set as a pair of metastable, isolated $\bar{s}$ states, where both $\bar{s}$ have the same helicity. The antiskyrmions then undergo a rotation of helicity, during which the total energy increases, to reach a higher-energy configuration at image 6. The next image, image 7, corresponds to the barrier top, in the form of a saddle point, and precedes the merging. At the saddle point, the antiskyrmions come into contact from the side and join through their counterclockwise and clockwise rotating Bloch axes, respectively, with a helicity difference of about $\pi$ rad. The corresponding energy barrier is found to be $\Delta E = 1.089 J_1$, where $J_1 = 14.73$ meV is the exchange constant for the Heisenberg interaction between nearest neighbors and is employed here as a characteristic energy scale. Subsequent images correspond to the merging into the final metastable $2\bar{s}$ state via the antiskyrmions' Bloch axes, accompanied by a decrease in the total energy of the system. The total topological charge remains constant throughout this process.
Next, we describe the path corresponding to the merging of the $\bar{s}\bar{s}$ pair into a single $\bar{s}$ via a process that does not conserve the total topological charge. The MEP is shown in Fig.~\ref{fig:meps}(b), with the corresponding energy profile shown in Fig.~\ref{fig:meps}(d). This mechanism resembles the process presented in Fig.~\ref{fig:pairprocess}(c), through which an inelastic collision of two antiskyrmions results in the destruction of the seed and leads to a linear growth in the number of skyrmions. Similar to the mechanism described above, the initial state is set as a pair of isolated, metastable $\bar{s}$ states, where both $\bar{s}$ have the same helicity. From there, the helicities of the antiskyrmions rotate as the energy increases, until the system reaches the barrier top at image 6. This state is very similar to the saddle point of the MEP in Fig.~\ref{fig:meps}(a), with, once more, a corresponding energy barrier of $\Delta E = 1.089 J_1$. However, the difference in the helicities appears, in this case, to be slightly smaller than $\pi$ rad. The following images correspond to the merging into a metastable single $\bar{s}$ state. This involves the destruction of one unit of negative topological charge, which occurs via the nucleation of a meron of charge $Q=+\frac{1}{2}$ at image 8. This is accompanied by a sharp decrease in the total energy of the system, as well as a drop in the total negative topological charge from $-2$ to $-1$. The meron then annihilates with the extra antimeron of charge $Q=-\frac{1}{2}$, thus leaving a single $\bar{s}$ state of charge $Q=-1$ at image 9, accompanied by a further drop in the total energy.
The above results show that, in the generation regime, the scattering processes undergone by the $\bar{s}$ seed closely resemble the minimum-energy paths at zero SOT. Additionally, we find that the paths for the merging of the $\bar{s}\bar{s}$ pair into either a $2\bar{s}$ state or an $\bar{s}$ state traverse very similar saddle points, where only a small relative difference in the helicities appears to determine the fate of the final state. The associated energy barriers are practically identical and relatively low, of the order of $J_1$. This weak differentiation between the saddle points is in line with the fact that the boundaries of the phase diagram in Fig.~\ref{fig:phasediag} are not sharp and that small variations in the applied torques are sufficient to transition between the stasis and linear growth regimes.
\section{Discussion and concluding remarks}
With the frustrated exchange and in the absence of dipolar interactions, setting $D_{ij}$ to zero restores the chiral symmetry between skyrmions and antiskyrmions, where SOTs result in circular motion with opposite rotational sense for $s$ and $\bar{s}$~\cite{Leonov:2015iz, Lin:2016hh, Zhang:2017iv, Ritzmann:2018cc}. While the focus here has been on the consequences of generation from an antiskyrmion seed, the choice of an anisotropic form of the Dzyaloshinskii-Moriya interaction, i.e., one that energetically favors antiskyrmions over skyrmions~\cite{Nayak:2017hv, Hoffmann:2017kl, Camosi:2018eu, Raeliarijaona:2018eg, Jena:2020db}, would result in the opposite behavior whereby skyrmion seeds would lead to pair generation and proliferation of antiskyrmions over skyrmions~\cite{Ritzmann:2018cc}.
Naturally, dipolar interactions are present in real materials, and their role has not been considered in the present study. This is justified for the following reasons. First, the long-range nature of dipolar interactions becomes apparent only as the film thickness is increased, i.e., beyond several nanometers. The system considered here is one atomic layer thick, which represents the limit in which the dipolar interaction is well described by a local approximation that results in a renormalization of the perpendicular magnetic anisotropy constant. Second, dipolar interactions favor a Bloch-like state for skyrmions and modify the energy dependence of the helicity for antiskyrmions. However, these corrections would be almost negligible in comparison with the strength of the frustrated exchange and Dzyaloshinskii-Moriya interactions considered. Finally, the inclusion of dipolar interactions would not suppress the Walker-like transition of the antiskyrmion dynamics, which results in pair generation.
In summary, we have presented results from atomistic spin dynamics simulations of skyrmion-antiskyrmion generation processes that result from the SOT-driven dynamics of an initial antiskyrmion state. Three fundamental scattering processes are identified, namely, elastic collisions, double-antiskyrmion bound states, and antiskyrmion annihilation, which form the basis of more complex generation processes leading to stasis, linear growth, and proliferation of particles. We investigated how the strength of the spin-orbit torques, including the orientation of the spin polarization with respect to the lattice, and the DMI constant affect the generation processes. Overall, the asymmetry in the production of particles and antiparticles from a given seed is driven by the strength of the chiral symmetry breaking, here measured by $D_{ij}$, and the nonequilibrium torques leading to pair generation, here characterized by $\beta_\mathrm{DL}$. Last, we investigated the paths of minimum energy at zero SOT for the two fundamental scattering processes that respectively lead to the stasis and linear growth regimes. We found that these resemble the processes undergone by the seed under SOT, and that the two mechanisms involve extremely similar saddle points, which explains the lack of sharp boundaries between the two regimes.
\begin{acknowledgments}
This work was supported by the Agence Nationale de la Recherche under Contract No. ANR-17-CE24-0025 (TOPSKY), the Deutsche Forschungsgemeinschaft via TRR 227, and the University of Strasbourg Institute for Advanced Study (USIAS) via a fellowship within the French national program ``Investment for the Future'' (IdEx-Unistra).
\end{acknowledgments}
\section{Acknowledgements}
We thank Prof. Jungseek Hwang and Dr. Jae Hyun Yun for useful discussions on the extended Drude analysis, and Dr. J. D. Denlinger for fruitful discussions on the CEF effect. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No.2015R1A2A1A15051540), the Institute for Basic Science in Korea (Grant No. IBS-R009-D1) and the Supercomputing Center/Korea Institute of Science and Technology Information with supercomputing resources including technical support (KSC-2016-C1-0003).
\section{APCP - Alternating Phase Carr-Purcell sequence}
We can achieve longer coherence times --- and further learn about the nature of the magnetic-like fluctuations that limit the coherence --- with Carr-Purcell sequences \cite{PhysRev.94.630}. A schematic of the sequence is shown in Fig. \ref{fig:RawDataAndPulseSequences}.
Applying a standard Carr-Purcell sequence --- in which all $\pi$ pulses are in phase --- resulted in the loss of signal after a small number of pulses ($\lesssim 10$). We attribute this to inaccuracies in the $\pi$ pulses, which build up over successive pulses.
To reduce the problems introduced by imperfect pulses, we use the alternating-phase Carr-Purcell sequence (APCP): the phase of every other $\pi$ pulse was shifted by $180^{\circ}$ to minimize error accumulation \cite{slichter1990}.
T$_2$ was measured by two methods: the first is as previously described and shown in Fig. \ref{fig:RawDataAndPulseSequences}.
In the second method we simply monitor the polarization signal as a function of time during the APCP sequence. Because each $\pi$ pulse rotates the spins through the pole of the Bloch sphere, we are able to effectively measure the readout amplitude at the time of each APCP pulse, allowing for much more rapid data acquisition \cite{PhysRev.94.630}.
Both methods gave identical results for T$_2$ to within our experimental error.
Typical data is shown in Fig. \ref{fig:APCPraw}. The coherence time is significantly longer than what was observed for Hahn echo. However, we note that the decay is poorly described by an exponential (which would appear as a straight line on the log-linear scale of Fig. \ref{fig:APCPraw}).
This is not surprising: we expect an inhomogeneous distribution of trapping sites in the sample and consequently a distribution of decoherence rates \cite{upadhyay2016longitudinal}. We model this as a distribution of exponential decay curves; for simplicity we assume a flat distribution of decay rates from zero to some maximum rate. The resulting function is fit to the data (as shown in Fig. \ref{fig:APCPraw}) to determine that maximum rate. In Fig. \ref{fig:APCPraw}, we see slight discrepancies between the model and the data at very short and at very long times. The short-time discrepancy is likely due to a long tail of decay rates (missed by our model's sharp cutoff); the long-time discrepancy indicates that the distribution does not actually remain constant as the decay rate goes to zero. In the remainder of the paper, we take T$_2$ to be the inverse of the average decay rate.
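Concretely, a flat distribution of decay rates $r \in [0, R]$ yields a signal $S(t) = A\,(1 - e^{-Rt})/(Rt)$, with average rate $R/2$ and hence T$_2 = 2/R$. A minimal fitting sketch (in Python; variable names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def flat_rate_model(t, A, R):
    x = np.maximum(R * t, 1e-12)       # avoid 0/0 at t = 0
    return A * (-np.expm1(-x)) / x     # A * (1 - exp(-R t)) / (R t)

# popt, pcov = curve_fit(flat_rate_model, t_data, signal_data,
#                        p0=(1.0, 100.0))
# T2 = 2.0 / popt[1]                   # inverse of the average rate
\end{verbatim}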
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\linewidth]{fig_APCP_raw.pdf}
\caption{
Data taken for Rb atoms with an alternating-phase Carr-Purcell sequence, as discussed in the text, taken at a magnetic field of 45~G, with a 13.25~$\mu$s delay between $\pi$-pulses.
\label{fig:APCPraw}
}
\end{center}
\end{figure}
The APCP T$_2$ shows a strong dependence on the time delay $\tau$ between the $\pi$ pulses, as seen in Figure \ref{fig:APCP_T2}.
T$_2$ increases with increasing APCP frequency up to the maximum frequency we were able to explore (limited by the duration of our $\pi$ pulses).
During APCP, the superposition is most sensitive to perturbations at a frequency of $\frac{1}{2\tau}$ (using the notation of Fig. \ref{fig:RawDataAndPulseSequences}) and harmonics \cite{RevModPhys.89.035002}.
The data of Fig. \ref{fig:APCP_T2} indicates that the stochastic magnetic-like fluctuations limiting the Hahn-echo T$_2$ are primarily at frequencies $\lesssim 1$~kHz.
Whether longer T$_2$ times could be obtained at even higher APCP frequencies is an open question.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\linewidth]{fig_APCP_T2_vs_f.pdf}
\caption{
Measured APCP T$_2$ vs $\pi$-pulse repetition rate for different sample conditions. All measurements are for the $F=3$, $m_F = 0, -1$ superposition of $^{85}$Rb.
The ``normal'' samples have orthohydrogen fractions in the range of $3 \times 10^{-5}$ to $5 \times 10^{-5}$
and rubidium densities from $5 \times 10^{16}$~cm$^{-3}$ to $1 \times 10^{17}$~cm$^{-3}$.
The ``high ortho'' sample has an orthohydrogen fraction of $1.3 \times 10^{-3}$.
The ``high Rb'' sample has a total rubidium density of $4 \times 10^{17}$~cm$^{-3}$.
The data shown is an average of measurements from multiple samples, each measured over multiple days. The error bars represent the standard deviations of those measurements (where available; the points missing error bars are expected to have comparable fractional variations). The variation is due to both sample reproducibility and to changes that occur over time, as discussed in the text.
\label{fig:APCP_T2}
}
\end{center}
\end{figure}
Figure \ref{fig:APCP_T2} shows the measured coherence times for both our highest-purity samples and for samples with elevated rubidium and orthohydrogen densities.
These lower-purity samples show a measurable decrease in T$_2$.
We model the decoherence at the highest $\pi$-pulse repetition rate, under the assumption that the decoherence rate is linear in both rubidium density and orthohydrogen fraction. The data from the impure crystals indicates that a significant fraction --- but not all --- of the decoherence in our highest-purity samples is from the rubidium dopants and orthohydrogen impurities. We speculate that the remaining decoherence comes from the pulse sequence itself.
One source of errors in the pulse sequence is off-resonant coupling out of our two-level system to other Zeeman levels. We observe a reduction in our T$_1$ when we run the APCP pulse sequence at high repetition rates. This effect is more significant (and leads to shorter T$_2$'s) at lower magnetic fields, where the frequency splitting between different $m_F$ transitions is smaller.
For all the samples probed, T$_2$ measurements taken the first day after sample growth are consistently shorter than on subsequent days, and T$_2$ is often observed to continue to slowly increase on a timescale of weeks.
We speculate this is due to the conversion of orthohydrogen to parahydrogen inside our matrix after the sample is grown. Ortho-para conversion in the solid phase has been observed, but under the conditions of our experiment the timescale for conversion in an undoped crystal is much too long to play a significant role \cite{schmidt1974diffusion, RevModPhys.52.393, shevtsov2000quantum}. Paramagnetic impurities --- such as the rubidium atoms themselves --- are also known to act as a catalyst for ortho-para conversion. However, at the rubidium densities employed in this work, one would expect negligible catalysis of the bulk on the timescale of days \cite{shevtsov2000quantum}. Consistent with this expectation, we see no spectroscopic signs of a significant decrease in the average orthohydrogen fraction after the sample is grown. We speculate that the rubidium atoms are converting some of their nearest-neighbor orthohydrogen molecules (which would be precisely those orthohydrogen molecules that play the most important role in limiting T$_2$), causing the orthohydrogen fraction in the \emph{local} environment of each rubidium atom to decrease over time.
The long coherence times demonstrated under the APCP protocol make rubidium atoms in parahydrogen very promising for AC magnetic field sensing (at a frequency chosen by the APCP sequence). If detection techniques allow one to efficiently measure single atoms \cite{chambers2019imaging}, a single-atom quantum sensor could be developed.
This would enable single-molecule NMR experiments \cite{taylor2008high}. Single nitrogen vacancy (NV) sensors in solid diamond have already demonstrated NMR detection of nearby single $^{13}$C nuclear spins within the diamond \cite{zhao2012sensing, kolkowitz2012sensing, taminiau2012detection, abobeih2018one}. However, the detection of molecules is more difficult: without a method to implant molecules of interest inside the bulk diamond, the molecules must instead be attached to the surface.
Unfortunately, the surface is associated with magnetic field noise and significantly reduced NV coherence times \cite{myers2014probing, kim2015decoherence, de2017tailoring, myers2017double, PhysRevX.9.031052}. Parahydrogen, on the other hand, allows for gentle introduction of molecular species into the bulk during sample growth \cite{Momose1998, yoshioka2006infrared, tam:1926}.
We propose that rubidium could be used to make single-molecule NMR measurements of nearby molecules co-trapped within the parahydrogen matrix.
At a bias magnetic field of 110~G (as was used for the data in Fig. \ref{fig:APCP_T2}), the precession frequencies for $^{13}$C and $^{1}$H would be $1 \times 10^5$~Hz and $5 \times 10^5$~Hz respectively.
Following the protocol of Ref. \cite{zhao2012sensing}, one can detect nuclear spins using a APCP sequence with $\pi$-pulses at twice the precession frequency.
This is slightly outside the pulse frequency range explored in this work, but we expect it is straightforward to achieve with higher-power RF electronics.
Assuming we are able to efficiently detect the spin state of a \emph{single} rubidium atom, if we scale the results of Ref. \cite{zhao2012sensing} using the coherence times measured in this work, we would expect to be able to sense a single proton at a distance of 10~nm within 1~s.
With the addition of field gradients, this could be extended to perform magnetic resonance imaging of the structure of single molecules, as was previously proposed for NV centers \cite{PhysRevX.5.011001, staudacher2013nuclear, mamin2013nanoscale, sushkov2014magnetic}. Nuclear spin imaging at the single-nucleus level would be of tremendous value for understanding biochemistry and for applications in medicine and drug development.
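As a quick consistency check of these numbers (a sketch in Python; the gyromagnetic ratios are standard tabulated values, not taken from this work):
\begin{verbatim}
gamma = {"1H": 42.577e6, "13C": 10.708e6}  # gyromagnetic ratios, Hz/T
B = 110e-4                                  # 110 G in tesla
for nucleus, g in gamma.items():
    print(nucleus, f"precession at {g * B:.3g} Hz")
# -> ~4.7e5 Hz for 1H and ~1.2e5 Hz for 13C, consistent with the
#    values quoted above; the APCP pi-pulse rate is twice these.
\end{verbatim}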
In future work, we hope to move from the ensemble measurements presented here to the single-atom measurements needed for single-molecule NMR and MRI.
Our longest measured APCP T$_2$ time, for our best sample, was 0.1~s.
This is over an order of magnitude longer than has been achieved with near-surface NV centers to date \cite{myers2014probing, kim2015decoherence, de2017tailoring, myers2017double, PhysRevX.9.031052}.
In future work it may be possible to achieve longer spin coherence times with the use of more sophisticated dynamical decoupling pulse sequences \cite{barry2019sensitivity} and with the growth of higher-purity samples. It may also be possible to achieve greater magnetic field sensitivity with nonclassical superposition states \cite{PhysRevB.100.024106}.
\section*{Acknowledgements}
This material is based upon work supported by the National Science Foundation under Grants No. PHY-1607072 and PHY-1912425.
We gratefully acknowledge helpful conversations with Amar C. Vutha.
\section{Introduction}\label{sec:introduction}
Score following is a fundamental task in MIR and the basis for applications such as automatic accompaniment \cite{Raphael10_MusicPlusOneML_ICML, Cont10_ScoreMusic_IEEE-TPAMI}, automatic page turning \cite{ArztWD08_PageTurning_ECAI} or the synchronization of live performances to visualizations \cite{ArztFGGGW15_AIConcertgebouw_IJCAI, ProckupGHK13_Orchestra_IEEE-MM}. These applications require a real-time capable system that aligns a musical performance to a symbolic score representation in an online fashion.
To solve this, existing systems either require a computer-readable score representation (e.\,g.\, extracted using Optical Music Recognition (OMR)\cite{CalvoHP19_UnderstandingOMR_CRR}) or rely on fixed-size (small) snippets of sheet images.
Models from the latter category are by design only capable of handling fixed-sized excerpts of the sheet image due to a limited action space to predict the next position in the score.
This is a severe constraint, as the sheet image snippet has to (at least partly) correspond to the incoming audio excerpt. If it does not match the audio anymore (due to some tracking error), no proper prediction can be formed.
To overcome this limitation, we attempt score following directly in the full sheet image, enabling the system to observe the whole page at any given time. This makes the problem significantly more challenging, e.\,g.,\, due to repetitive musical score structures, compared to locally constrained systems that only look at snippets.
To the best of our knowledge, we present the first system that requires neither OMR nor any other form of score pre-processing and directly follows musical performances in full sheet images in an end-to-end fashion.\footnote{Code and data will be made available on-line: \url{https://github.com/CPJKU/audio_conditioned_unet}}
Specifically, we formulate score following as a \textit{referring image segmentation task} and introduce an appropriate model architecture in \Sect{sec:image_seg}, including a conditioning mechanism in the form of a Feature-wise Linear Modulation (FiLM) layer \cite{PerezSDVDC18_FILM_AAAI} as a central building block.
In \Sect {sec:experiments}, we demonstrate the system on polyphonic piano music, taken from the MSMD dataset \cite{DorferHAFW18_MSMD_TISMIR}. To analyze its generalization capabilities, we also test it on real musical performances taken from the MSMD test split, in \Sect{sec:real_perf}. The results will show that our model outperforms current state-of-the-art image based trackers in terms of alignment precision, but also that it currently lacks robustness across different audio conditions.
\begin{figure*}[t]
\centering
\includegraphics[width=0.78\textwidth]{task.pdf}
\caption{\small{
Score Following Task as modelled in this work: Given a score (sheet image) and an incoming musical performance (audio), the goal is to predict the current location in the score in real time. The audio is fed to the tracker frame by frame and the system processes the last 40 frames to predict the position for the latest frame (the \textit{Target Frame} marked red in (a)).
The ground truth (bounding box around current score position; see (a)) is given as a binary segmentation mask. The system predicts a probability for each pixel to correspond to the current audio; thus it highlights those regions that are most likely to match the correct location. Ideally, this should be only a single region. However, in (b) we see that such a prediction is not perfect: while the highest probability is assigned to the correct position in staff four, there is also a small likelihood in the last staff, as the notes are the same for both locations. To predict the location correctly, the system needs to consider the whole audio up to the current point in time, which motivates our design choices introduced in \Sect{sec:image_seg}.}
}
\label{fig:sf_task}
\end{figure*}
\section{Related Work}
\label{sec:related_work}
Score following approaches can be broadly categorized into those that rely on the presence of a computer-readable score representation, such as MusicXML or MIDI, and those that try to do without such a symbolic representation.
In the former category, techniques like Dynamic Time Warping (DTW) \cite{Arzt16_MusicTracking_PhD, ArztFGGGW15_AIConcertgebouw_IJCAI, Dixon05_ODTW_IJCAI} and Hidden Markov Models (HMMs) \cite{Cont06_ScoreFollowingViaNmfAndHMM_ICASSP, OrioLS03_scorefollowing_NIME, NakamuraCCOS15_ScoreFollowingSemiHMM_ISMIR} are applied to achieve robust and reliable tracking results. The main issue with these approaches is the need for computer-readable score representations, which must either be created manually in a tedious and time consuming process, or automatically extracted using OMR. In the OMR case, the faithfulness of the symbolic score to what is depicted on the sheet image strongly depends on the quality of the OMR system, which may introduce errors that impede the actual score following task. Empirical evidence for this claim was published in \cite{HenkelBDW19_ScoreFollowingRL_TISMIR}, where a DTW-based score following system that relied on MIDI scores extracted via an OMR system had difficulties tracking synthetically created test data.
Several recent publications deal with the latter category, and investigate score following in the context of non-computer-readable score representations, represented as raw sheet images. In \cite{DorferAW16_ScoreFollowDNN_ISMIR}, the authors propose a multi-modal deep neural network to predict the position within a sheet snippet based on a short audio excerpt. In \cite{DorferHW18_ScoreFollowingAudioSheet_ISMIR} and \cite{HenkelBDW19_ScoreFollowingRL_TISMIR}, score following is formulated as a reinforcement learning (RL) problem, where the RL agent's task is to adapt its reading speed in an unrolled sheet image, conditioned on an audio excerpt.
One of the limitations of all these methods is that they require the score to be represented in an unrolled form, i.\,e.,\, staves need to be detected in the sheet image, cut out and presented to the score following system in sequence.
To overcome this, \cite{HenkelKW19_AudioConditionedUNet_WORMS} introduced a system that directly infers positions within full sheet images for monophonic piano music. However, the temporal aspect of score following was neglected altogether --- based on an audio excerpt, all possible matching positions in the full sheet image are highlighted, including those that were already played --- making it interesting preliminary work, but not yet a practical score following system.
In the following we build upon their foundation and incorporate long term audio context, proposing the first fully capable score following system that works on entire sheet images, without needing any pre-processing steps.
\section{Score Following as a Referring Image Segmentation Task}\label{sec:image_seg}
Similarly to \cite{HenkelKW19_AudioConditionedUNet_WORMS}, we model score following as an image segmentation task --- more specifically, as a \textit{referring} image segmentation task. In computer vision, the goal of referring image segmentation is to identify a certain region or object within an image based on some language expression \cite{MaoHTCYM16_GenCompObjects_CVPR, YeRLW19_CrossModalAttention_CVPR}. It shows similar characteristics as the multi-modal approach to score following --- we want to locate the position in the sheet image that the incoming audio refers to, meaning we treat the audio as the language expression, and the score image as the entity to reason about.
More precisely, our modeling setup is as follows: based on the incoming musical performance up to the current point in time, the task of the model is to predict a segmentation mask for the given score that corresponds to this currently played music, as shown in \Fig{fig:sf_task}. The ground truth for this task is chosen to be a region around the current position in the score with a fixed width and a height depending on the height of the staff. While the size of this mask can be arbitrarily chosen, we define it such that it provides a meaningful learning target first and foremost.
A challenging question arising with such a setup is how to combine the different input modalities, audio and score. While \cite{DorferAW16_ScoreFollowDNN_ISMIR, DorferHW18_ScoreFollowingAudioSheet_ISMIR, HenkelBDW19_ScoreFollowingRL_TISMIR} learn a latent representation for both input modalities which are subsequently concatenated and further processed, we follow the direction of \cite{HenkelKW19_AudioConditionedUNet_WORMS} instead, and employ a \textit{conditioning mechanism} that directly modulates the activity of feature detectors that process the score image.
The former setup would be problematic due to the increasing number of parameters. Furthermore, this design is able to retain the fully-convolutional property of our model, i.\,e.,\, if desired one could process sheet images of arbitrary resolution.\footnote{Note that while we do not investigate this further and work with fixed-sized sheets, this could be useful in a real world application.}
In contrast to \cite{HenkelKW19_AudioConditionedUNet_WORMS}, we apply the conditioning mechanism on top of a recurrent layer to provide a longer temporal context. This permits the audio input up until the current point in time to guide the segmentation towards the corresponding position in the score image. We argue that it is necessary for this task to have such a long temporal audio context in order to form more reliable predictions.
For example, it is common to have repeating note patterns in the score spanning over a longer period of time in the audio. Existing trackers that use only a fixed-size audio input are not able to distinguish between such patterns, if they exceed the given audio context.
\subsection{Feature-wise Linear Modulation}
The Feature-wise Linear Modulation (FiLM) layer is a simple linear affine transformation of feature maps, conditioned on an external input \cite{PerezSDVDC18_FILM_AAAI}. The purpose of using this layer is to directly interfere with the learned representation of the sheet image by modulating its feature maps, assisting the convolutional neural network to focus only on those parts that are required for a correct segmentation. In our case, the external input $\mathbf{z}$ is the hidden state of a recurrent layer that takes as input an encoded representation of an audio excerpt. This encoded representation is created by a neural network, e.\,g.,\, as depicted in \Table{tab:spec_net}.
The FiLM layer itself is defined as
\begin{equation}
f_{\text{FiLM}}(\mathbf{x}) = \mathbf{s}(\mathbf{z}) \cdot \mathbf{x} + \mathbf{t}(\mathbf{z}),
\end{equation}
where $\mathbf{s}(\cdot)$ (for scaling) and $\mathbf{t}(\cdot)$ (for translation) are arbitrary vector-valued functions implemented as neural networks. Their values depend on the conditioning vector $\mathbf{z}$, and together they define an affine transform of the tensor $\mathbf{x}$ which refers to the collection of feature maps of a particular convolutional layer, after normalization. The affine transformation is performed per feature map, meaning that each feature map $k$ is scaled and translated by $s_k(\cdot)$ and $t_k(\cdot)$, respectively, with $k$ identifying the $k$-th output of the two functions. The number of output values for $\mathbf{s}(\cdot)$ and $\mathbf{t}(\cdot)$ is the same as the number of feature maps contained in $\mathbf{x}$, denoted by $K$ (cf. \Fig{fig:film_layer}).
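To make the transformation concrete, the following minimal PyTorch-style sketch implements a FiLM layer; it is illustrative only, and the module and variable names are our own rather than taken from any released implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Per-feature-map affine modulation conditioned on z."""
    def __init__(self, num_feature_maps, cond_dim):
        super().__init__()
        # s(.) and t(.) realized as single linear layers.
        self.scale = nn.Linear(cond_dim, num_feature_maps)
        self.shift = nn.Linear(cond_dim, num_feature_maps)

    def forward(self, x, z):
        # x: (batch, K, H, W) feature maps, z: (batch, cond_dim)
        s = self.scale(z)[:, :, None, None]  # broadcast over H, W
        t = self.shift(z)[:, :, None, None]
        return s * x + t
\end{verbatim}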
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{film_layer.pdf}
\caption{\small{
Sketch of the FiLM layer \cite{PerezSDVDC18_FILM_AAAI}. The layer scales and translates the feature maps $\mathbf{x}$ using learned functions $\mathbf{s}(\mathbf{z})$ and $\mathbf{t}(\mathbf{z})$, respectively. $\mathbf{z}$ is an additional, external input carrying the conditioning information.
}}
\label{fig:film_layer}
\end{figure}
\subsection{Model Architecture}\label{sec:model_architecture}
\begin{figure*}[t]
\centering
\includegraphics[width=0.80\textwidth]{architecture.pdf}
\caption{\small{Audio-Conditioned U-Net architecture. Each block (A-I) consists of two convolutional layers with ELU activation and layer normalization.
The FiLM layer is placed before the last activation function. The spectrogram encoding given by the output of the network shown in \Table{tab:spec_net} is passed through a recurrent layer. The hidden state of this recurrent layer is then used for conditioning in the FiLM layer. Each symmetric block has the same number of filters starting with 8 in block A and increasing with depth to 128 in block E.}}
\label{fig:architecture}
\end{figure*}
\begin{table}
\centering
\small
\begin{tabular}{cc}
\toprule
\textbf{Audio (Spectrogram)} $78 \times 40$ \\
\midrule
2 x ( Conv(3, 1, 1)-24 - LN - ELU ) - MP(2) \\
2 x ( Conv(3, 1, 1)-48 - LN - ELU ) - MP(2) \\
2 x ( Conv(3, 1, 1)-96 - LN - ELU ) - MP(2) \\
2 x ( Conv(3, 1, 1)-96 - LN - ELU ) - MP(2) \\
Conv(1, 0, 1)-96 - LN - ELU \\
Dense(32) - LN - ELU\\
\midrule
\end{tabular}
\caption{\small{The context-based encoder used for the experiments. Conv($f$, $p$, $s$)-$k$ denotes a convolutional layer with $k$ $f \times f$ kernels, padding of $p$ and stride $s$. We use layer normalization (LN) \cite{BAKH16_LayerNorm_arxiv}, ELU activation \cite{ClevertUH15_ELU_ICLR} and max pooling (MP) with a pool size of $2 \times 2$. The output of the last layer is fed into a LSTM as shown in \Fig{fig:architecture}. The network resembles the one used in \cite{DorferHAFW18_MSMD_TISMIR}.}}
\label{tab:spec_net}
\end{table}
Our model is based on a U-Net architecture similar to the one used in \cite{HajicDWP18_OMRUNET_ISMIR}~for detecting musical symbols in sheet images.
U-Nets were originally introduced for medical image segmentation, to segment an image into different parts by classifying each pixel into either foreground or background \cite{RonnebergerFB15_UNET_MICCAI}. This fits naturally with our interpretation of the score following task as a process of segmenting the sheet image into a region that corresponds to the current position in the audio input and labelling everything else as background.
The overall architecture, shown in \Fig{fig:architecture}, resembles the one proposed in \cite{HenkelKW19_AudioConditionedUNet_WORMS}, with several important differences. Based on the empirical findings reported in \cite{HenkelKW19_AudioConditionedUNet_WORMS}, we decide to incorporate conditioning information from the audio input in blocks B-H, leaving only blocks A and I without conditioning. However, we substitute the transposed convolutions in the decoder part of the network with bilinear upsampling operations with a factor of two, followed by a $1\times1$ convolution, both aimed at reducing checkerboard artifacts in the segmentation \cite{OdenaDO16_DeConvCheckerboard_DISTILL}. Due to the small batch size used during training (cf. \Sect{sec:experimental_setup}) as well as the presence of the recurrent layer, we replace batch normalization with layer normalization \cite{BAKH16_LayerNorm_arxiv}.
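For illustration, such a decoder step can be written as follows (a sketch; the function name and channel arguments are ours):
\begin{verbatim}
import torch.nn as nn

def up_block(in_channels, out_channels):
    # Bilinear upsampling by a factor of two followed by a 1x1
    # convolution, replacing a transposed convolution to reduce
    # checkerboard artifacts in the segmentation.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="bilinear",
                    align_corners=False),
        nn.Conv2d(in_channels, out_channels, kernel_size=1),
    )
\end{verbatim}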
For deriving the conditioning information from the audio input, we test two different spectrogram encoders. One takes a spectrogram snippet with a length of 40 frames, corresponding to two seconds of audio; the spectrogram is processed by the network shown in \Table{tab:spec_net}, which is roughly similar to the one used in \cite{DorferHAFW18_MSMD_TISMIR}. The other version takes as input a \textit{single spectrogram frame}, using a dense layer with 32 units, layer normalization and ELU activation function. The output of the encoders is fed to an
LSTM \cite{HochreiterS97_LSTM_NeuralComp} layer with 128 units
and its hidden state is then used as the external input $\mathbf z$ in the FiLM layers.
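The conditioning path can be sketched as follows (illustrative PyTorch code with our own variable names; the sizes follow the text):
\begin{verbatim}
import torch
import torch.nn as nn

# Spectrogram encoding -> LSTM -> hidden state used as the external
# conditioning input z for the FiLM layers.
lstm = nn.LSTM(input_size=32, hidden_size=128, batch_first=True)
enc = torch.randn(4, 16, 32)  # (batch, sequence length, encoding)
out, (h, c) = lstm(enc)
z = h[-1]                     # (batch, 128): conditioning vector
\end{verbatim}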
\section{Experiments}\label{sec:experiments}
To study the properties of our proposed approach, we conduct experiments on polyphonic piano music provided by the MSMD \cite{DorferHAFW18_MSMD_TISMIR} dataset. While this section is mainly concerned with comparing data augmentation and different architectures, later on, in \Sect{sec:real_perf}, we will investigate the generalization capabilities of the system on 16 real piano recordings from the MSMD test split. We will also compare the proposed system to the related approaches described in \cite{HenkelBDW19_ScoreFollowingRL_TISMIR}, which we use as baselines.
\subsection{Data}
We use the \emph{Multi-modal Sheet Music Dataset (MSMD)} \cite{DorferHAFW18_MSMD_TISMIR}, a standard dataset for such evaluations, comprising polyphonic piano music from various composers including Bach, Mozart, and Beethoven. The sheet music is typeset with Lilypond\footnote{\url{http://lilypond.org/}}
and the audio tracks are synthesized from MIDI using Fluidsynth\footnote{\url{http://www.fluidsynth.org/}} together with a piano sound font.
The original MSMD splits used by \cite{HenkelBDW19_ScoreFollowingRL_TISMIR} encompass 354 train, 19 validation and 94 test pieces. Although the precise alignments between audio and sheet music in this dataset are created automatically, some of the pieces turned out to contain alignment errors. We manually identified and fixed most of these errors, including wrongly aligned notes and missing or wrongly detected staves. One piece from the train set was removed because we were not able to fix it. Thus, the cleaned dataset consists of 353 train, 19 validation, and 94 test pieces, which will be made publicly available.
If a piece consists of several pages, each page is treated as a single piece and the original MIDI information is partitioned accordingly.\footnote{This is mainly done to facilitate the training procedure. In an application, this could be solved by
some simple `hack' that turns pages when the score follower reaches the end of a page.}
Altogether, we have 945 train, 28 validation and 125 test pages.
The rendered score images have a resolution of $1181\times835$ pixels, are downscaled by a factor of three to $393\times278$ pixels, and are used as the input to the U-Net. Preliminary tests showed that the downscaling does not significantly impact the performance, and benefits the speed of the training process. For the ground truth annotations, we rely on the automatic notehead alignment described in \cite{DorferHAFW18_MSMD_TISMIR}.
The notehead alignments yield $(x,y)$ coordinate pairs in the sheet image, which are further adjusted for our purposes such that the $y$ coordinates correspond to the middle of the staff the respective note belongs to. Given these coordinates, we create a binary mask with a width of 10 pixels and an adaptive height depending on the height of the staff (see \Fig{fig:sf_task}). The task of the U-Net is now to infer a segmentation mask given the image of the score together with the conditioning information derived from the audio input. Note that in theory it should be possible to directly predict $x$ and $y$ coordinates instead of a segmentation mask, however as shown in \cite{LiuLMPFSY18_CoordConv_NeurIPS} this is a much harder task, and we were not able to achieve acceptable performance so far, even using their proposed \emph{CoordConv} layer.
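For concreteness, the ground-truth mask can be constructed roughly as follows (a sketch; the argument names are ours and boundary handling at the image edges is omitted):
\begin{verbatim}
import numpy as np

def ground_truth_mask(img_shape, x, y_center, staff_height, width=10):
    # Binary target: a box of fixed width (10 px) centred at the
    # current x position, with height adapted to the staff.
    mask = np.zeros(img_shape, dtype=np.float32)
    half_h, half_w = staff_height // 2, width // 2
    mask[y_center - half_h : y_center + half_h,
         x - half_w : x + half_w] = 1.0
    return mask
\end{verbatim}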
The audio is sampled at 22.05~kHz and processed at a frame rate of 20 frames per second. The DFT is computed for each frame with a window size of 2048 samples and then transformed with a semi-logarithmic filterbank covering frequencies between 60~Hz and 6~kHz, yielding 78 log-frequency bins. Lastly, the spectrogram bins are standardized to zero mean and unit variance. The audio conditioning network is presented either with 40 consecutive frames (two seconds of audio) or a single frame at a time. We use the \verb|madmom| Python library for all signal-processing-related computations \cite{BoeckKSKW16_Madmom_ACMMM}.
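The following numpy/scipy sketch approximates this pipeline; the triangular, log-spaced filters are only a rough stand-in for madmom's semi-logarithmic filterbank, so constants and details differ from our actual feature extraction:
\begin{verbatim}
import numpy as np
from scipy.signal import stft

def spectrogram_features(audio, sr=22050, fps=20, n_fft=2048,
                         fmin=60.0, fmax=6000.0, n_bins=78):
    hop = int(round(sr / fps))  # ~20 frames per second
    _, _, Z = stft(audio, fs=sr, nperseg=n_fft,
                   noverlap=n_fft - hop, padded=False)
    mag = np.abs(Z)             # (n_fft//2 + 1, n_frames)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)

    # Triangular filters with log-spaced centre frequencies.
    centres = np.geomspace(fmin, fmax, n_bins + 2)
    fb = np.zeros((n_bins, len(freqs)))
    for i in range(n_bins):
        lo, c, hi = centres[i], centres[i + 1], centres[i + 2]
        up = (freqs - lo) / (c - lo)
        down = (hi - freqs) / (hi - c)
        fb[i] = np.clip(np.minimum(up, down), 0.0, None)

    spec = np.log1p(fb @ mag)   # log-frequency, log-magnitude
    # Standardize each bin to zero mean and unit variance.
    spec = (spec - spec.mean(axis=1, keepdims=True)) \
           / (spec.std(axis=1, keepdims=True) + 1e-8)
    return spec                 # (78, n_frames)
\end{verbatim}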
\subsection{Baselines and Evaluation Measures}
\label{sec:baselines}
In the following, we will present a series of experiments, comparing the new proposed full-page tracking system to several baselines (in order to better understand the importance of some of our design choices) as well as to related state-of-the-art approaches from the literature.
First, we evaluate two different spectrogram encoders, as introduced in \Sect{sec:model_architecture}, vis-\`a-vis a baseline version of our system that does not have the capability to summarize all the audio up to the current point in time, i.\,e.,\, that does not have memory in the form of an RNN. We do this in order to obtain empirical evidence for our argument that having access to long term temporal information is highly beneficial for obtaining good approximate solutions to the score following task. The two different encoders are denoted as context-based (CB) and frame-based (FB), using 40 spectrogram frames and a single frame, respectively. The baseline without temporal context uses the CB encoder and replaces the RNN layer with a fully connected layer of the same size. In the following this baseline will be denoted as NTC (no temporal context).
The \textit{evaluation measures} used for this comparison are of a geometric kind (bounding box pixel error and distance on the printed score page), in order to focus on the new challenge of full-page orientation: we measure the pixel-wise evaluation metrics \emph{Precision}, \emph{Recall} and $F_1$-score that were also used in \cite{HenkelKW19_AudioConditionedUNet_WORMS}, and the mean and median alignment error between ground truth and prediction in centimeters, both with the network output thresholded at 0.5. To calculate the alignment error between the ground truth and the predicted probability mask (recall \Fig{fig:sf_task}), we compute the center of mass over all pixels for both masks and take the Euclidean distance between the two centers, which gives the alignment error in pixels. Given a resolution of 72 dpi, the error is converted to centimeters using a factor of 0.0352 cm/pixel, under the assumption that the score image is printed on a sheet of DIN A4 paper.
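In code, this computation looks roughly as follows (a sketch with our own function names; it assumes at least one pixel survives the threshold):
\begin{verbatim}
import numpy as np
from scipy.ndimage import center_of_mass

PX_TO_CM = 0.0352  # 72 dpi rendering on DIN A4 paper

def alignment_error_cm(pred_probs, gt_mask, threshold=0.5):
    # Centers of mass of the thresholded prediction and of the
    # ground-truth mask, then the Euclidean distance in pixels.
    pred = (pred_probs >= threshold).astype(float)
    yp, xp = center_of_mass(pred)
    yg, xg = center_of_mass(gt_mask.astype(float))
    return float(np.hypot(yp - yg, xp - xg)) * PX_TO_CM
\end{verbatim}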
In the second experiment we compare our system to alternative state-of-the-art approaches from the literature:
the first approach is based on an OMR system that extracts symbolic MIDI information from the sheet image. The symbolic MIDI information is then synthesized to audio. The system subsequently computes chroma features with a feature rate of 20~Hz from both the synthesized and the performance audio, and applies audio-to-audio alignment using a form of online DTW \cite{Mueller15_FMP_SPRINGER}.
This is the method described in \cite{HenkelBDW19_ScoreFollowingRL_TISMIR} and will be abbreviated as OMR in the upcoming result table.
The second and third approach, described in \Sect{sec:related_work}, are a multi-modal localization network (MM-Loc) \cite{DorferAW16_ScoreFollowDNN_ISMIR} and a Reinforcement Learning (RL) agent \cite{DorferHW18_ScoreFollowingAudioSheet_ISMIR, HenkelBDW19_ScoreFollowingRL_TISMIR}, both working with sheet image snippets.
The \textit{evaluation measure} for this will be of a more music-related kind (temporal tracking error in the performance), reflecting the intended purpose of the systems (score following), and permitting a direct comparison with alternative methods. Similarly to \cite{Dixon05_ODTW_IJCAI, Arzt16_MusicTracking_PhD}, we compute, for each note onset, the absolute time difference between prediction and ground truth. We set 5 threshold values, ranging from 0.05 to 5 seconds, and report the cumulative percentage of notes tracked with an error up to the given threshold.
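Computing this measure is straightforward; a minimal sketch (the function name is ours):
\begin{verbatim}
import numpy as np

def cumulative_tracked(onset_errors_sec,
                       thresholds=(0.05, 0.10, 0.50, 1.00, 5.00)):
    # Percentage of note onsets whose absolute timing error is at
    # most each threshold.
    errs = np.abs(np.asarray(onset_errors_sec))
    return {t: 100.0 * float(np.mean(errs <= t)) for t in thresholds}
\end{verbatim}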
Given the ground truth alignment from note onsets to the corresponding notehead coordinates in the sheet image, we can interpolate from the predicted positions in the sheet image back to the time domain.
This is straightforward for MM-Loc and the RL agent, because they both already use an unrolled score derived from the ground truth, whereas the proposed method requires further processing.
We first need to compute the center of mass of the segmented region to obtain $x,y$ coordinates. We map the $y$ coordinate to the closest staff, and apply a similar interpolation as before in an unrolled score to get the time difference between the predicted and actual position in the score.
For evaluating the OMR baseline we face a problem that has already been noted in \cite{HenkelBDW19_ScoreFollowingRL_TISMIR} --- we do not have the required ground truth alignment between the OMR-extracted score and the performance. Given that only onset positions are evaluated, we are justified in assuming a perfect alignment between score and audio if, for each unit of time in the audio, a constant distance in the score sheet is travelled. If the OMR system makes no errors, the alignment between OMR score and performance is a diagonal in the DTW global cost matrix, correcting the overall tempo difference by a linear factor. As in \cite{HenkelBDW19_ScoreFollowingRL_TISMIR}, we evaluate the OMR-based system by measuring the offset of the actual tracking position relative to the perfect alignment.
\subsection{Experimental Setup} \label{sec:experimental_setup}
All models are trained using the same procedure.
We optimize the \emph{Dice} coefficient loss \cite{MilletariNA16_VNet_3DV}, which is more suitable than e.\,g.,\, \emph{binary cross-entropy}, as we are facing an imbalanced segmentation problem with far more unimportant background pixels than regions of interest.
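For concreteness, a common soft formulation of the Dice loss reads as follows (our own sketch, not necessarily identical to the exact implementation used here):
\begin{verbatim}
import torch

def dice_loss(pred, target, eps=1e-7):
    # Soft Dice loss for a batch of probability masks.
    # pred, target: (batch, H, W), with pred in [0, 1].
    inter = (pred * target).sum(dim=(1, 2))
    denom = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()
\end{verbatim}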
To optimize this target we use Adam \cite{KingmaB14_Adam_ICLR} with default parameters, an initial learning rate of $10^{-4}$ and $L^2$ weight decay with a factor of $10^{-5}$. If the conditioning architecture involves an LSTM, we use a batch size of 4 and a sequence length of 16. For the audio conditioning model without a temporal context we use a batch size of 64. The weights of the network are initialized orthogonally \cite{SaxeMG13_OrthogonalInit_arxiv} and the biases are set to zero. If the loss on the validation set does not decrease for 5 epochs, we halve the learning rate, and we stop training altogether when the validation loss does not decrease over a period of 10 epochs or the maximum number of 100 epochs is reached. The model parameters with the lowest validation loss are used for the final evaluation on the test set. Similar to \cite{HenkelKW19_AudioConditionedUNet_WORMS}, we perform data augmentation in the image domain by shifting the score images along the $x$ and $y$ axes.
To investigate whether tempo augmentation improves model performance, we train all models without tempo augmentation as well as with 7 different tempo change factors ranging from 0.5 up to 1.5.
\subsection{Results}
In \Table{tab:architecture_comparison}, we compare different conditioning architectures, no long term temporal context (NTC), a context of 40 frames (CB) and a single frame (FB) in combination with an LSTM, respectively. We observe that the NTC model has the lowest performance, both in terms of the pixel-wise measures, as well as in terms of its alignment error. A possible reason for this could be ambiguities in the sheet image, since audio excerpts could match several positions in the score.
The results for CB and FB support our initial claim that a long term temporal context is required for this task. While both models achieve a good performance, CB outperforms FB in all measures. On average, the alignment error is around 1.25 cm and the median is at 0.51 cm, meaning that half of the time our model is less than 0.51 cm away from the true position. Furthermore, we observe that tempo augmentation improves the results for all models.
In \Table{tab:method_comparison}, we compare our best model from \Table{tab:architecture_comparison} to several baselines from the literature in terms of the cumulative percentage of onsets that are tracked with an error below a given threshold. We observe that the context-based proposed model (CB) outperforms all baselines except for the highest threshold. This suggests that our method is very precise on one hand, but on the other hand is not able to track all onsets with a timing error below five seconds.
\begin{table}
\centering
\small
\begin{tabular}{lcccccc}
\multicolumn{7}{c}{MSMD (125 test pages)} \\\toprule
& \textbf{TA} & \textbf{P} & \textbf{R} & $\mathbf{F_1}$ & $\mathbf{\overline{d}_{cm}}$ & $\mathbf{\widetilde{d}_{cm}}$ \\\midrule
\multirow{2}{*}{NTC} & \xmark & 0.696 & 0.665 & 0.678 & 3.70 & 2.37\\
& \cmark & 0.770 & 0.740 & 0.754 & 2.78 & 1.61\\\midrule
\multirow{2}{*}{CB}& \xmark & 0.810 & 0.790 & 0.799 & 1.62 & 0.73 \\
& \cmark & \textbf{0.854} & \textbf{0.835} & \textbf{0.843} & \textbf{1.25} & \textbf{0.51} \\\midrule
\multirow{2}{*}{FB} & \xmark & 0.790 & 0.768 & 0.778 & 1.82 & 1.21 \\
& \cmark & 0.820 & 0.816 & 0.816 & 1.58 & 0.80 \\\midrule
\end{tabular}
\caption{\small{ Different conditioning architectures with/without tempo augmentation (TA): no temporal context (NTC), context-based (CB) and frame-based (FB). For each model the parameters with lowest validation loss are chosen for evaluation on the test set. Measures: pixel-wise precision (P), recall (R) and $F_1$, and mean ($\overline{d}_{cm}$) and median ($\widetilde{d}_{cm}$) of alignment error in centimeters.
}}
\label{tab:architecture_comparison}
\end{table}
\begin{table}
\centering
\small
\begin{tabular}{lcccc}
\multicolumn{5}{c}{MSMD (125 test pages)} \\\toprule
\textbf{Err. [sec]} & OMR\cite{HenkelBDW19_ScoreFollowingRL_TISMIR} & MM-Loc \cite{Dorfer18_MusicTracking_PhD} & RL\cite{HenkelBDW19_ScoreFollowingRL_TISMIR} & CB \\\midrule
$\leq 0.05$ & 44.7\% & 44.6\% & 40.9\% & \textbf{73.3\%} \\
$\leq 0.10$ & 51.9\% & 49.2\% & 43.3\% & \textbf{74.7\%} \\
$\leq 0.50$ & 76.0\% & 82.2\% & 79.7\% & \textbf{85.2\%} \\
$\leq 1.00$ & 85.0\% & 86.0\% & 87.8\% & \textbf{88.5\%} \\
$\leq 5.00$ & \textbf{97.4\%} & 92.0\% & 97.2\% & 93.7\%\\\midrule
\end{tabular}
\caption{\small{ Our best model (CB)
vs.~existing baselines, in terms of onsets tracked with an error below a given threshold.
For the RL agent we report the average over 10 runs due to its stochastic policy. In contrast to \cite{HenkelBDW19_ScoreFollowingRL_TISMIR}, OMR, MM-Loc and RL do not stop tracking if they fall out of a given tracking window. }}
\label{tab:method_comparison}
\end{table}
\section{Real Performances}\label{sec:real_perf}
To test the generalization capabilities of the system under real recording conditions, we evaluate our best model on the 16 piano recordings (corresponding to 25 score pages) from the MSMD test split introduced in \cite{HenkelBDW19_ScoreFollowingRL_TISMIR}, for which we also manually corrected some of the alignments.
We compare again to the baselines introduced in \Sect{sec:baselines}, which are likewise evaluated using the corrected alignments.
In line with \cite{HenkelBDW19_ScoreFollowingRL_TISMIR}, we compare four different settings with increasing difficulty.
The first is the same synthetic setting as in \Sect{sec:experiments}.
The second setting uses the performance MIDI synthesized with the same piano synthesizer used during training. The third uses the audio of the ``direct out'' audio output of the ``Yamaha AvantGrand N2'' hybrid piano used for recording, and the last one uses the audio recorded via a room microphone.
\Table{tab:eval_methods_rp} summarizes the results. Overall, we observe that the proposed system (CB) achieves more precise results in terms of time difference (i.e., higher percentages for the tighter error thresholds) in three out of four settings. For the last setting we observe a worse performance, which indicates that our model has possibly overfit to the synthesized audio and is not yet robust enough. OMR yields very robust results in all scenarios, which is possibly due to the used chroma features. While the results are not as precise, it outperforms the other methods for higher threshold values.
A possible explanation for this is that our model has more freedom, being able to perform big jumps on the sheet image, which increases the possibility of large errors. Models relying on sheet snippets are not designed to perform such jumps and thus cannot make very extreme errors.
Furthermore, our model is more sensitive to the audio representation fed into the conditioning mechanism, as it influences the convolutional filters in multiple layers that process the sheet image.
Overall, we assume that this is an issue of the synthetic dataset which can be tackled by training on more diverse performances and a more robust audio model for the conditioning mechanism.
\begin{table}[ht]
\centering
\small
\begin{tabular}{lcccc}
\toprule
\textbf{Err. [sec]} & OMR \cite{HenkelBDW19_ScoreFollowingRL_TISMIR} & MM-Loc \cite{Dorfer18_MusicTracking_PhD} & RL \cite{HenkelBDW19_ScoreFollowingRL_TISMIR} & CB \\\midrule
\multicolumn{5}{r}{Original MIDI Synthesized (Score = Performance)} \\
\midrule
$\leq 0.05$ & 37.1\% & 41.6\% & 36.5\% & \textbf{69.8\%} \\
$\leq 0.10$ & 46.1\% & 44.2\% & 38.2\% & \textbf{70.6\%} \\
$\leq 0.50$ & 74.9\% & 77.6\% & 72.9\% & \textbf{80.6\%} \\
$\leq 1.00$ & \textbf{86.8\%} & 79.9\% & 79.8\% & 82.4\%\\
$\leq 5.00$ & \textbf{99.6\%} & 90.3\% & 96.5\% & 89.1\% \\
\midrule
\multicolumn{5}{r}{Performance MIDI Synthesized} \\
\midrule
$\leq 0.05$ & 28.9\% & 47.2\% & 23.4\% & \textbf{56.5\%} \\
$\leq 0.10$ & 39.8\% & 49.0\% & 24.8\% & \textbf{58.1\%} \\
$\leq 0.50$ & 71.7\% & \textbf{83.2\%} & 54.5\% & 80.9\% \\
$\leq 1.00$ & 83.4\% & \textbf{86.1\%} & 64.0\% & 84.4\% \\
$\leq 5.00$ & \textbf{98.8\%} & 96.0\% & 81.2\% & 90.1\% \\
\midrule
\multicolumn{5}{r}{Direct Out} \\
\midrule
$\leq 0.05$ & 22.6\% & 33.8\% & 27.7\% & \textbf{40.0\%} \\
$\leq 0.10$ & 33.0\% & 35.4\% & 29.1\% & \textbf{41.6\%} \\
$\leq 0.50$ & \textbf{70.3\%} & 59.7\% & 60.7\% & 64.2\% \\
$\leq 1.00$ & \textbf{83.9\%} & 63.4\% & 73.3\% & 69.3\% \\
$\leq 5.00$ & \textbf{99.3\%} & 75.3\% & 95.5\% & 81.1\% \\
\midrule
\multicolumn{5}{r}{Room Recording} \\
\midrule
$\leq 0.05$ & \textbf{22.6\%} & 20.7\% & 19.2\% & 9.4\% \\
$\leq 0.10$ & \textbf{32.2\%} & 24.3\% & 20.6\% & 10.5\% \\
$\leq 0.50$ & \textbf{70.2\%} & 54.1\% & 46.6\% & 21.5\% \\
$\leq 1.00$ & \textbf{82.7\%} & 57.3\% & 58.7\% & 26.2\% \\
$\leq 5.00$ & \textbf{97.4\%} & 70.2\% & 89.1\% & 44.3\% \\
\bottomrule
\end{tabular}
\caption{\small{ Comparing best performing model to several baselines on a set of 16 real piano recordings (25 pages) from the MSMD test split. Model evaluation is as described in \Table{tab:method_comparison}, with the difference that for the RL agent we report the average over 50 runs due to its stochastic policy and the smaller sample size.}}
\label{tab:eval_methods_rp}
\end{table}
\section{Discussion and Conclusion}\label{sec:conclusion}
We have proposed the first end-to-end trained score following system that directly works on full sheet images.
The system is real-time capable due to a constant runtime per step, it compares favorably with existing baselines on synthetic polyphonic piano music, and sets the new state of the art for sheet-image-based score following in terms of temporal alignment error.
However, there are still generalization problems for real piano recordings.
While the model shows a much more precise alignment in most scenarios, we see a performance deterioration over different recording conditions.
This will need to be solved in the future, either with a more robust audio model, or a data augmentation strategy that incorporates reverberation effects.
Future work will also require testing on scanned or photographed sheet images, to gauge generalization capabilities of the
system in the visual domain as well. As there is currently no dataset consisting of scanned sheet images with precise notehead to audio alignments, it will be necessary to curate a test set.
The next step towards a system with greater capabilities is to incorporate, either explicitly or implicitly, a mechanism to handle repetitions in the score as well as in the performance.
We assume that the proposed method will be able to acquire this capability quite naturally from properly prepared training data, although we suspect its performance will heavily depend on its implicit encoding of the audio history so far, i.\,e.,\, how large an auditory context the recurrent network is able to store.
\section{Acknowledgments}
This project has received funding from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation program
(grant agreement number 670035, project "Con Espressione").
\section{Introduction}
All graphs in this paper are finite and simple.
A graph is \dfn{intrinsically linked} (IL) if every embedding of it
in $\mathbb{R}^3$ (or, equivalently, $S^3$) contains a nontrivial 2-component link.
A graph is \dfn{linklessly embeddable} if it is not intrinsically linked (nIL).
A nIL graph $G$ is \textit{maxnil} if it is not a proper subgraph of a nIL graph of the same order.
The combined work of Conway and Gordon \cite{CG},
Sachs \cite{Sa} and Robertson, Seymour and Thomas \cite{RST} fully characterized IL graphs: a graph is IL if and only if it contains a graph in the Petersen family as a minor.
The Petersen family consists of seven graphs obtained from $K_6$ by $\nabla Y-$moves and $Y\nabla-$moves, as described in Figure~\ref{fig-ty}.
The $\nabla Y-$move and the $Y\nabla-$move preserve the IL property.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(160, 50)
\put(0,0){\includegraphics[width=2.4in]{fig-ty}}
\end{picture}
\caption{\small $\nabla Y-$ and $Y\nabla-$moves}
\label{fig-ty}
\end{center}
\end{figure}
The property of being maxnil is, in a way, analogous to the property of being maximal planar.
While it is well known that every maximal planar graph with $n$ vertices has $3n-6$ edges, an analogous statement for maxnil graphs does not exist.
For example, start with a maximal planar graph $G$ and add one vertex $v$ together with all the edges from $v$ to the vertices of $G$.
Such a graph is maxnil by \cite{Sa}, and if it has $n$ vertices, then it has $4n-10$ edges.
In fact, $4n-10$ is an upper bound on the number of edges of a maxnil graph on $n$ vertices.
This follows from work of Mader \cite{Ma} who proved that having more than $4n-10$ edges implies the existence of a $K_6$ minor, which implies the graph is IL.
On the other hand, {J{\o}rgensen \cite{J} and Dehkordi and Farr \cite{DF}} constructed maxnil graphs with $n$ vertices and $3n-3$ edges.
J{\o}rgensen's maxnil graphs are obtained from the J{\o}rgensen graph in Figure~\ref{FigJ}(a) by subdividing the highlighted edge incident to the vertex $y$ and then adding edges that connect every new vertex to $u$ and $v$.
We denote the graph obtained this way through $i$ subdivisions by $J_i$, $i\ge1$.
See Figure~\ref{FigJ}(b).
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(400, 150)
\put(0,0){\includegraphics[width=5.4in]{JorgensenGraphPaneled}}
\end{picture}
\caption{\small (a) The J{\o}rgensen graph; (b) The graph $J_i$ in J{\o}rgensen's $3n-3$ family.}
\label{FigJ}
\end{center}
\end{figure}
Recently, Aires \cite{A} found a family of graphs with fewer than $3n-3$ edges.
For each value $n\ge 13$ with $n\equiv3$ (mod 10), he constructed a maxnil graph with $ \frac{14n}{5}-\frac{27}{5}$ edges.
He also proved that if $G$ is a maxnil graph with $n \ge 5$ vertices and $m$ edges, then $m\ge 2n$.
This bound is sharp;
the maxnil graph $Q_{13,3}$ described by Maharry \cite{M}
has 26 edges and 13 vertices.
In Section 2, we present two constructions of maxnil graphs.
The first one is a family of maxnil graphs with $n\ge 10$ vertices and $3n-5$ edges.
This construction builds upon a maxnil graph on 10 vertices and 25 edges and uses edge subdivisions.
The second construction significantly improves on Aires' result on the number of edges.
Using clique sums of copies of $Q_{13,3}$, we construct examples with a smaller ``edge-to-vertex ratio,'' as in the following theorem.\\
\textbf{Theorem.}
For each $n\ge 13$, there exists a maxnil graph $G$ with $n$ vertices and $m < \frac{25 n}{12} - \frac{1}{4}$ edges.\\
In Section 3, we study the properties of maxnil graphs under clique sums.
Some of these results are used in the constructions of Section 2.
We give sufficient and necessary conditions for when the clique sum of two maxnil graphs over $K_2$, $K_3$ or $K_4$ is maxnil.
{ J{\o}rgensen \cite{J} studied clique sums of graphs that are maximal without a $K_6$ minor. We give examples showing that the class of maxnil graphs and the class of graphs that are maximal without a $K_6$ minor are distinct.}
\section{Two families of maxnil graphs}
We note that the J{\o}rgensen graph is 2-apex, i.e., removing the vertices $u$ and $v$ leaves a planar graph $P$.
Furthermore, the embedding of $P$ in $\mathbb{R}^2$
shown in Figure~\ref{FigJ}(a) has no separating cycles,
i.e., for every cycle $C$ in $P$, one of the components of $\mathbb{R}^2 \setminus C$
contains no vertices of $P$.
These properties are generalized in the next lemma, which we use to prove that the graphs in the $3n-5$ family are nIL.
\begin{lemma}
\label{lemma-almost-non-separating}
Let $G$ be a graph with two nonadjacent vertices $u, v$ such that
there exists an embedding $\Sigma$ of $G-\{u,v\}$ in $\mathbb{R}^2$
where for every cycle $C$ in $\Sigma$,
$\mathbb{R}^2 \setminus C$ has a component $X$ such that
$X \cup C$ separates $u$ and $v$
(i.e., every path in $G$ from $u$ to $v$ contains a vertex in $X \cup C$).
Then embedding $u$ as $(0,0,1)$ and $v$ as $(0,0,-1)$
and connecting each of them to its neighbors in $\Sigma$ with straight edges
yields a linkless embedding of $G$ in $\mathbb{R}^3$.
\end{lemma}
\begin{proof}
Let $\Gamma$ denote the embedding of $G$ as described in the lemma,
and let $K \cup K'$ be a 2-component link in $\Gamma$.
We consider two cases.
\textit{Case 1.}
Neither $K$ nor $K'$ contains both $u$ and $v$.
Then we have three subcases:
zero, one, or both of $K$ and $K'$ are in $\Sigma$.
In each of these three subcases it is easy to see that
$K \cup K'$ is a trivial link.
We prove this for one of the three subcases here;
the other two are similar and easier.
Suppose $K$ contains $u$ but not $v$,
and $K' \subset \Sigma$.
Then $K$ consists of two edges incident to $u$
and a path $P \subset \Sigma$.
Connecting $u$ with straight line segments to every point in $P$
gives us a $\Gamma$-panel for $K$.
On the other hand, $K'$ bounds a disk $D$ in $\mathbb{R}^2$.
We isotop $D$, while keeping its boundary fixed,
by pushing its interior slightly below $\mathbb{R}^2$,
to make it disjoint from $K$
(since $K$ contains no points below $\mathbb{R}^2$).
It follows that $K \cup K'$ is a trivial link.
\textit{Case 2.}
One of the link's components, say $K$, contains both $u$ and $v$.
Then $K' \subset \Sigma$.
So $\mathbb{R}^2 \setminus K'$ has two components
such that one of them, $X$, separates $u$ and $v$.
Therefore all vertices of $K$ except $u$ and $v$ lie in $X$.
Now, $K$ has exactly two vertices, call them $a,b$, that are adjacent to $u$,
and two vertices, $c, d$, adjacent to $v$.
Note that $\{a,b\}$ is not necessarily disjoint, or even distinct, from $\{c,d\}$.
Furthermore, $K \cap X$ consists of two components, $P_1$ and $P_2$,
each of which is a path of length zero or greater.
We can assume $a, c \in P_1$ and $b,d \in P_2$.
We consider three subcases.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(440, 189)
\put(0,0){\includegraphics[width=6in]{fig-almost-nonsepai}}
\put(423,125){$\mathbb{R}^2$}
\put(206,125){$\mathbb{R}^2$}
\end{picture}
\caption{\small (a) Configuration for Case 2.1 ; (b) Configuration for Case 2.2}
\label{fig-almost-nonsepai}
\end{center}
\end{figure}
\textit{Case 2.1.}
$a=c$, $b=d$.
Join $a$ to $b$
by an arc $\beta \subset X$ (not necessarily in $\Sigma$),
and then connect each of $u$ and $v$
by straight line segments to every point in $\beta$.
See Figure~\ref{fig-almost-nonsepai}(a).
This gives us a disk bounded by $K$ and disjoint from $K'$.
Similarly to Case~1 above, $K'$ also bounds a disk disjoint from $K$.
Hence $K \cup K'$ is a trivial link.
\textit{Case 2.2.}
$a=c$, $b \ne d$.
Join $a$ to each of $b$ and $d$
by disjoint arcs $\beta$ and $\delta$ respectively, both in $X$,
such that $\beta \cup \delta \cup P_2$ is a simple closed curve.
See Figure~\ref{fig-almost-nonsepai}(b).
Connect each of $u$ and $v$ by straight line segments
to every point in $\beta$ and $\delta$ respectively.
This gives us two disks whose union with
the disk bounded by $\beta \cup \delta \cup P_2$ in $X$
is a disk bounded by $K$ and disjoint from $K'$.
And, as before, $K'$ bounds a disk disjoint from $K$.
Hence $K \cup K'$ is a trivial link.
\textit{Case 2.3.}
$a \ne c$, $b \ne d$.
This case is similar to Case~2.2, except that
we join $a$ to $b$ and $c$ to $d$
by disjoint arcs $\beta$ and $\delta$ in $X$
such that $\beta \cup \delta \cup P_1 \cup P_2$
is a simple closed curve.
\end{proof}
\subsection{The $3n-5$ family} We construct a family of graphs with $n$ vertices and $3n-5$ edges, for $n\ge 10$.
This family is obtained from the graph $G$ pictured in Figure~\ref{Fig1025}(a) through a sequence of subdivisions and edge additions.
The graph $G$ is obtained from the J{\o}rgensen graph by splitting (the opposite of contracting edges) the vertices $a$ and $b$ into the edges $ad$ and $bc$.
See Figures~\ref{FigJ}(a) and~\ref{Fig1025}(a).
With the notation in Figure~\ref{Fig1025}(a), construct the graph $G_1$ by subdividing the edge $xy$ with a new vertex $z_1$, then adding edges $z_1u$ and $z_1v$.
Construct graphs $G_i$, for $i\ge 2$, as follows: subdivide the edge $z_{i-1}y$ of $G_{i-1}$ with a new vertex $z_i$, then add edges $z_iu$ and $z_iv$ to $G_{i-1}$.
Notice that $G_i$ has one more vertex and three more edges than $G_{i-1}$.
The graph $G_i$ has $10+i$ vertices and $25+3i= 3(10+i) -5$ edges.
We note that the graphs $G_i$ can also be
obtained by successive splittings of the vertex $y$
into the edge $y z_i$.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(440, 170)
\put(0,0){\includegraphics[width=6in]{1025Paneled}}
\end{picture}
\caption{\small (a) The graph $G$ is maxnil with 10 vertices and 25 edges; (b) The graph $G_i$ is obtained through $i$ edge subdivisions and edge additions.}
\label{Fig1025}
\end{center}
\end{figure}
\begin{proposition} The graphs $G$ and $G_i$ in Figures~\ref{Fig1025}(a) and~\ref{Fig1025}(b) are linklessly embeddable.
\label{Gnil}
\end{proposition}
\begin{proof}
It is straightforward to check that
these graphs satisfy the hypotheses of Lemma~\ref{lemma-almost-non-separating}
and hence are nIL.
\end{proof}\
\begin{proposition} The graph $G$ in Figure~\ref{Fig1025}(a) is maxnil.
\label{Gmaxnil}
\end{proposition}
\begin{proof} Since $G$ is linklessly embeddable, it remains to show that adding any edge to $G$ gives an IL graph.
We note that by contracting the edges $ad$ and $bc$, we obtain the J{\o}rgensen graph as a minor of $G$.
If an edge $e$ other than $bd$ is added to $G-\{u,v\}$, such an edge is preserved by these two edge contractions.
Thus $G+e$ contains a minor that itself contains the J{\o}rgensen graph with an added edge.
Since the J{\o}rgensen graph is maxnil, $G+e$ is IL.
The same holds if the edge $e=uv$ is added to $G$.
If the edge $bd$ is added, then contracting the edges $dt$, $cz$, $ux$ and $vy$ creates a $K_6$ minor of $G+bd$.
If an edge from $u$ to $G-\{u,v\}$ is added, say $ua$, then contracting the edges $cd$, $dt$, $by$ and $uz$ creates a $K_6$ minor of $G+ua$.
If an edge from $v$ to $G-\{u,v\}$ is added, say $vb$, then contracting the edges $ax$, $cz$, $du$ and $dt$ creates a $K_6$ minor of $G+vb$.
\end{proof}
\begin{proposition} All graphs $G_i$, $i\ge 1$ are maxnil.
\end{proposition}
\begin{proof} Since $G_i$ is linklessly embeddable, it remains to show that adding any edge to $G_i$ gives an IL graph.
Adding any edge $e$ different from $xy$ and disjoint from $\{z_1, z_2, \ldots, z_i\}$ to $G_i$ gives a graph $G_i+e$ that contains $G+e$ as a minor (obtained by contracting the path $xz_1z_2...z_i$).
Since $G$ is maxnil, $G+e$ is IL and so is $G_i+e$.
Adding an edge $e$ that is either $xy$ or has at least one endpoint in $\{z_1, z_2, \ldots, z_i\}$ to $G_i $, gives a graph $G_i+e$ that contains $J_i+e$ as a minor (obtained by contracting the edges $ad$ and $bc$).
Since $J_i$ is maxnil, $J_i+e$ is IL and so is $G_i+e$.
\end{proof}
\subsection{The $Q_{13,3}$ family} A graph $G$ is called \textit{triangular} if each edge of $G$ belongs to at least one triangle.
In a non-triangular graph, an edge that is not part of a triangle is a \textit{non-triangular edge.}
In Section 3, we study the properties of maxnil graphs under the operation of clique sum (defined in Section 3).
For the construction presented in the next theorem we use the result of Lemma~\ref{lemmajoin2} about clique sums of maxnil graphs over $K_2$.
\begin{theorem}
For each $n\ge 13$, there exists a maxnil graph $G$ with $n$ vertices and $m< \frac{25 n}{12} - \frac{1}{4}$ edges.
\label{main}
\end{theorem}
\begin{proof}
The construction is based on the maxnil graph $Q_{13,3}$ described by Maharry \cite{M}. See Figure~\ref{Q133}(a).
This graph has 13 vertices and 26 edges, and it is triangle free.
For each $n$ with $13\le n\le 39$, we construct a set of maxnil graphs with $n$ vertices and $2n$ edges by
adding $n-13$ new vertices, and then
choosing $n-13$ edges in $Q_{13,3}$
and connecting the two endpoints of each of them
to one of the new vertices.
Equivalently, we are taking the clique sum
of $Q_{13,3}$ with $n-13$ disjoint triangles,
over $n-13$ copies of $K_2$.
See Figure~\ref{Q133}(b).
By Lemma \ref{lemmajoin2}, the resulting graph is maxnil.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(330, 155)
\put(0,0){\includegraphics[width=4.8in]{Q133}}
\end{picture}
\caption{\small (a) $Q_{13,3}$ is a maxnil graph with 13 vertices and 26 edges; (b) A maxnil graph with 17 vertices and 34 edges obtained from $Q_{13,3}$ by adding four vertices of degree 2 and eight edges.}
\label{Q133}
\end{center}
\end{figure}
The graph on 39 vertices obtained this way is triangular, so the construction cannot proceed further.
To build graphs with a larger number of vertices we use multiple copies of $Q_{13,3}$ joined along an edge (clique sum over $K_2$).
Consider $k\ge 1$ copies of $Q_{13,3}$ and choose one edge in each copy.
Then join the $k$ graphs together by identifying the
$k$ chosen edges into one edge.
This graph, which we denote by $H_k$, is maxnil (by repeated application of Lemma~\ref{lemmajoin2}) and has $11k+2$ vertices and $25k+1$ edges.
All edges of $H_k$ are non-triangular and adding vertices of degree 2 (as above) along any subset of the edges of $H_k$ gives a maxnil graph.
For $ n \ge 13$, let $k=\lceil \frac{n-3}{36}\rceil$ and
add $n-(11k+2)$ vertices of degree 2 along any $n-(11k+2)$ edges of $H_k$.
With every added vertex of degree 2, the number of edges is increased by 2.
This gives a maxnil graph with $n$ vertices and $m=(25k+1)+2[n-(11k+2)] = 2n+3k-3$ edges.
Moreover,
$$m=2n+ 3 \lceil \frac{n-3}{36}\rceil-3 <2n +3 ( \frac{n-3}{36}+1)-3 =\frac{25n}{12}-\frac{1}{4}.$$
\end{proof}
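As a quick numerical check of the counts in this proof, the following sketch (illustrative only, not part of the mathematical argument) verifies the vertex and edge counts and the stated bound for a few values of $n$:
\begin{verbatim}
from math import ceil

def counts(n):
    # H_k has 11k + 2 vertices and 25k + 1 edges; each added
    # degree-2 vertex contributes one vertex and two edges.
    k = ceil((n - 3) / 36)
    m = (25 * k + 1) + 2 * (n - (11 * k + 2))  # = 2n + 3k - 3
    assert m == 2 * n + 3 * k - 3 and m < 25 * n / 12 - 1 / 4
    return k, m

for n in (13, 39, 40, 100):
    print(n, counts(n))
\end{verbatim}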
\begin{remark}
The above shows there exist maxnil graphs of arbitrarily large order $n$
with an edge-to-vertex ratio of less than $\frac{25}{12} - \frac{1}{4n}$.
Whether this edge-to-vertex ratio can be lowered further is an open question.
\end{remark}
\section{Clique Sums of Maxnil Graphs}
In this section we study the properties of maxnil graphs under taking clique sums.
A set $S \subset V(G)$ is a \dfn{vertex cut set} of a connected graph $G$ if $G-S$ is disconnected.
We say a vertex cut set $S \subset V(G)$ is \dfn{minimal} if no proper subset of $S$ is a vertex cut set of $G$.
A graph $G$ is the \textit{clique sum} of $G_1$ and $G_2$ over $K_t$ if $V(G)=V(G_1)\cup V(G_2)$, $E(G)=E(G_1)\cup E(G_2)$, and the subgraphs induced by $V(G_1)\cap V(G_2)$ in both $G_1$ and $G_2$ are complete of order $t$.
Since the vertices of the clique over which a clique sum is taken is a vertex cut set in the resulting graph, the vertex connectivity of a clique sum over $K_t$ is at most $t$.
For a set of vertices $\{v_1, v_2, \ldots, v_k\} \subset V(G)$, $\big< v_1, v_2, \ldots, v_k\big>_G$ denotes the subgraph of $G$ induced by this set of vertices.
By abuse of notation, the subgraph induced in $G$ by the union of the vertices of subgraphs $H_1, H_2, \ldots, H_k$ is denoted by $\big< H_1,H_2, \ldots, H_k\big>_G$.
Holst, Lov\'asz, and Schrijver \cite{HLS} studied the behavior of the Colin de Verdi\'ere $\mu-$invariant for graphs under clique sums (Theorem 2.10).
Since a graph $G$ is nIL if and only if $\mu(G)\le 4$ (\cite{LS}, \cite{RST2}), their theorem implies the following.
\begin{theorem}[Holst, Lov\'asz, and Schrijver \cite{HLS}]If $G$ is the clique sum over $S$ of two nIL graphs, then $G$ is IL if and only if one can contract two or three components of $G-S$ so that the contracted nodes together with $S$ form a $K_7$ minus a triangle.
\label{ThmHLS}
\end{theorem}
Theorem~\ref{ThmHLS} implies that for $t\le 3$, the clique sum over $K_t$ of nIL graphs is nIL.
While Theorem~\ref{ThmHLS} shows when a clique sum is nIL, it does not establish when a clique sum of maxnil graphs is maxnil.
\begin{lemma} Any maxnil graph is 2-connected.
\label{lemma-2-connected}
\end{lemma}
\begin{proof} Let $G$ be a maxnil graph. If $G$ is disconnected, let $A$ and $B$ denote two of its connected components. Let $a\in V(A)$ and $b\in V(B)$. Then $G+ab$ is a nIL graph, as it can be obtained by performing two consecutive clique sums over $K_1$ of nIL summands, namely
\[G+ab=A\cup_{\{a\}}ab\cup_{\{b\}}(G-A).\] But this contradicts the maximality of $G$.
If the vertex connectivity of $G$ is one, assume $x\in V(G)$ is a cut vertex, that is $G-x=A\sqcup B$, with $A$ and $B$ nonempty, and no edges between vertices of $A$
and vertices of $B$. Let $a\in V(A)$ and $b\in V(B)$ be neighbors of $x$ in $G$.
Then $G+ab$ is nIL, as it can be obtained by performing two consecutive clique sums over $K_2$ of nIL summands.
If $\Delta$ denotes the triangle $axb$,
\[G+ab=\big<A, x\big>_G\cup_{ax}\Delta\cup_{xb}\big<B, x\big>_G.\] But this contradicts the maximality of $G$.
\end{proof}
\begin{lemma} Let $G$ be a maxnil graph with a vertex cut set $S=\{x,y\}$, and let $G_1,G_2,...,G_r$ denote the connected components of $G-S$.
Then
$xy\in E(G)$ and
$\big<G_i, S \big>_G$ is maxnil for all $1\le i \le r$.
\label{cliquesum2}
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma-2-connected}, $x$ and $y$ are distinct
and each of them has at least one neighbor in each $G_i$.
Suppose $xy \not \in E(G)$.
Let $G' = G + xy$ and $G'_i = \big<G_i, S\big>_{G'}$.
Then, for every $i$, $G'_i$ is a minor of $G$
since if we pick a $j \ne i$ and
in $\big<G_i, G_j, S \big>_G$ contract $G_j$ to $x$,
we get a graph isomorphic to $G'_i$.
So $G'_i$ is nIL.
Then, by Theorem~\ref{ThmHLS},
$G' = G'_1 \cup_{xy} \cdots \cup_{xy} G'_r$ is nIL,
contradicting the assumption that $G$ is maxnil.
So $xy \in E(G)$.
For each $i$,
we repeatedly add new edges to $\big<G_i, S \big>_G$, if necessary,
to get a maxnil graph $H_i$.
Then $H := H_1 \cup_{xy} \cdots \cup_{xy} H_r$ is nIL
and contains $G$ as a subgraph,
so $H = G$ and every $\big<G_i, S \big>_G$
is maxnil.
\end{proof}
\begin{lemma}
Let $G_1$ and $G_2$ be maxnil graphs.
Pick an edge in each $G_i$ and label it $e$.
Then $G=G_1\cup_e G_2$ is maxnil
if and only if
$e$ is non-triangular in at least one $G_i$.
\label{lemmajoin2}
\end{lemma}
\begin{proof}
The graph $G$ is nIL by Theorem~\ref{ThmHLS}.
Suppose $e$ is non-triangular in at least one $G_i$, say $G_2$.
To prove $G$ is maxnil, it is enough to show that
for all $b_i \in V(G_i)$,
$G + b_1 b_2$ is IL.
Denote the endpoints of $e$ in $G$ by $x, y$.
By Lemma~\ref{lemma-2-connected}, $G_1$ is 2-connected,
so each of $x, y$ has at least one neighbor in $G_1$.
So if we contract $G_1$ to $b_1$ and then contract $b_1 b_2$ to $b_2$,
we obtain a graph $G'_2$ that
contains $G_2$ as a proper subgraph since
$b_2 x \in E(G'_2)$.
So $G'_2$ is IL since $G_2$ is maxnil.
But $G'_2$ is a minor of $G$, which is nIL,
so we have a contradiction.
To prove the converse, suppose $e $ is triangular in $G_1$ and $G_2$.
Let $t_i \in V(G_i)$ be adjacent to both endpoints of $e$.
Let $K$ be a complete graph on four vertices,
with vertices labeled $x, y, t_1, t_2$.
Denote the triangles induced by $x, y, t_i$
in $K$ and in $G_i$ by $\Delta_i$.
Then by Theorem~\ref{ThmHLS},
$G':= G_1 \cup_{\Delta_1} K \cup_{\Delta_2} G_2$ is nIL.
But $G'$ is isomorphic to $G + t_1 t_2$,
so $G$ is not maxnil.
\end{proof}
\begin{lemma}
Let $G$ be a maxnil graph with vertex connectivity 3 and a vertex cut set $S=\{x,y,z\}$.
Let $G_1,G_2,...,G_r$ denote the connected components of $G-S$.
Then $\big<S\big>_G\simeq K_3$ and
$\big<G_i, S \big>_G$ is maxnil for all $1\le i \le r$.
\label{cliquesum3}
\end{lemma}
\begin{proof}
Suppose $\big<S\big>_G \not \simeq K_3$.
Let $G'$ be the graph obtained from $G$
by adding one or more edges to $\big<S\big>_G$ so that
$S$ induces a triangle $T$ in $G'$.
For $1 \le i \le r$, let $G'_i = \big< G_i, T \big>_{G'}$.
We see as follows that $G'_i$ is nIL.
Pick any $j \ne i$, and in the graph $\big< G_i , G_j , S \big>_G$,
contract $G_j$ to an arbitrary vertex $v$ in $G_j$.
Then $v$ is connected to each of $x, y, z$
since $G$ is 3-connected
and hence each of $x, y, z$ has at least one neighbor in $G_j$.
The graph $M_i$ obtained this way is a minor of $G$, and hence is nIL.
Performing a $\nabla Y$-move on $T\subset G_i'$ we obtain a subgraph of $M_i$.
Since $M_i$ is nIL, so is $G_i'$.
By Theorem~\ref{ThmHLS},
$G' = G'_1 \cup_{T} \cdots \cup_{T} G'_r$ is nIL,
which contradicts the maximality of $G$.
So $T = \big<S\big>_G\simeq K_3$.
To show $\big<G_i, S \big>_G$ is maxnil,
repeatedly add new edges to it, if necessary,
to get a maxnil graph $H_i$.
Then $H := H_1 \cup_{T} \cdots \cup_{T} H_r$ is nIL by Theorem~\ref{ThmHLS} and contains $G$ as a subgraph,
so $H = G$ and every $\big<G_i, S \big>_G$
is maxnil.
\end{proof}
Let $G$ be a graph and let $T=\big<x,y,z,t\big>_G$ be an induced $K_4$ subgraph (\dfn{tetrahedral graph}).
We say $T$ is \textit{strongly separating}
if $G-T$ has at least two connected components $C_1, C_2$
such that every vertex of $T$ has a neighbor in each $C_i$.
\begin{lemma}Let $G_1$, $G_2$ be maxnil graphs and let $G=G_1\cup_{\triangle} G_2$ be the clique sum of $G_1$ and $G_2$ over a $K_3$ subgraph $\Delta=\big<x,y,z\big>_G$.
Assume $\Delta$ is a minimal vertex cut set in $G$.
Then
$G$ is maxnil if and only if for some $i \in \{1,2\}$, every induced $K_4$ subgraph of the form $\big<x,y,z,t \big>_{G_i}$ is strongly separating.
\label{lemmajoin3}
\end{lemma}
\begin{proof}
By Theorem~\ref{ThmHLS}, $G := G_1 \cup_{\Delta} G_2$ is nIL.
Then $G$ is maxnil if and only if for every $t_1\in V(G_1)-V(\Delta)$ and $t_2\in V(G_2)-V(\Delta)$, $G' :=G+t_1t_2$ is IL.
First, suppose for some $i$ at least one of $x,y,z$ is not connected to $t_i$,
say $x t_2 \notin E(G_2)$.
Contracting $G_1 - \{y,z\}$ to $x$ produces $G_2+t_2x$ as a minor of $G'$.
Since $G_2$ is maxnil, this minor is IL, and hence $G'$ is IL, as desired.
So we can assume $\big<x,y,z,t_i \big>_{G_i}$ is a tetrahedral graph for both $i = 1,2$.
Assume every tetrahedral graph in $G_2$ that contains $\Delta$ is strongly separating.
So $G_2-\big<x,y,z,t_2\big>_{G_2}$ has at least two connected components each of which, when contracted to a single vertex, is adjacent to all four vertices $x, y, z, t_2$.
In Figure~\ref{K3clique} these vertices are denoted by $c_1$ and $c_2$.
Now, if the component of $G_1-\Delta$ that contains $t_1$
is contracted to $t_1$,
this vertex too will be adjacent to $x, y, z, t_2$.
So we get a minor of $G$ isomorphic to $K_7$ minus a triangle,
which is IL since it contains a Petersen family graph
(the one obtained by one $\nabla Y$-move on $K_6$) as a minor.
It follows that $G'$ is IL, and therefore $G$ is maxnil.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(200, 120)
\put(0,0){\includegraphics[width=3in]{K3clique}}
\end{picture}
\caption{\small A $K_7$ minus a triangle minor of the graph $G$.}
\label{K3clique}
\end{center}
\end{figure}
To prove the converse,
for $i=1,2$ let $t_i$ be a vertex in $G_i$ such that
$T_i := \big<x,y,z,t_i\big>_{G_i}$ is a tetrahedral graph
that is not strongly separating.
Let $G'=G+t_1t_2$.
Then $G' =
G_1\cup_{T_1} \big<x,y,z,t_1,t_2\big>_{G'}\cup_{T_2} G_2$.
Each of these clique sums is over a $K_4$,
each summand is nIL, and each of $T_1, T_2$ is non-strongly separating;
so, by Theorem~\ref{ThmHLS}, $G'$ is nIL, and hence $G$ is not maxnil.
\end{proof}
Unlike the vertex connectivity 2 and 3 cases, it is not true that a minimal vertex cut set in a 4-connected maxnil graph must be a clique.
The four neighbors of $b$ in the graph depicted in Figure~\ref{Fig1025}(a) form a vertex cut set, but the graph induced by its vertices has exactly 2 edges.
The four neighbors of any vertex in the graph $Q_{13,3}$ in Figure~\ref{Q133}(a) form a discrete vertex cut set.
However, if a maxnil graph $G$ has vertex connectivity 4, the following lemma provides some restrictions on the shape of the subgraph induced by the vertices of any minimal vertex cut set.
\begin{lemma} Let $G$ be a maxnil graph and assume $\{x,y,z,t\}$ is a minimal vertex cut.
Let $S= \big<x,y,z,t\big>_G$.
Then $S$ is either a clique or a subgraph of $C_4$ (a 4-cycle).
\label{vertexcut4}
\end{lemma}
\begin{proof}
Assume that $S$ is neither a clique nor a subgraph of a 4-cycle.
This implies that
if $S$ has a vertex of degree at least 3, then it contains $K_{1,3}$ as a subgraph;
and if every vertex of $S$ has degree less than 3, then $S$ contains $K_3$ as a subgraph.
Below, we consider these two cases separately.
\textit{Case 1.}
$S$ has a $K_3$ subgraph.
We can assume that $x, y, z$ induce a triangle in $G$. If $G-S$ has at least three connected components, contracting each of them to a single node would produce a minor of $G$ isomorphic to $K_7$ minus a triangle, contradicting that $G$ is nIL.
So $G-S=G_1\sqcup G_2$, with $G_1$ and $G_2$ each connected.
Since $\{x,y,z,t\}$ is a minimal vertex cut set in $G$,
each of $x, y, z, t$ has at least one neighbor in each $G_i$, $i=1,2$.
Contracting $\big<G_i,t\big>_G$ to $t$ produces a minor of $G$, denoted by $G'_i$, which must be nIL.
Since $xyz$ is a triangle and each of $x, y, z$ has at least one neighbor in each $G_i$,
$\{x,y,z,t\}$ induces a clique $T$ in both $G'_1$ and $G'_2$.
By \cite{HLS}, the clique sum $G'=G'_1\cup _{T} G'_2$ is nIL since $G'-T=G_1\sqcup G_2$; but $G'$ strictly contains $G$ as a subgraph, a contradiction.
\textit{Case 2.}
$S$ has a $K_{1,3}$ subgraph.
We can assume that $t$ is adjacent to $x$, $y$ and $z$ in $G$. If $G-S$ had at least three connected components, contracting each of them to a single node would produce a minor of $G$ containing a subgraph isomorphic to $K_{3,3,1}$, making $G$ IL, a contradiction.
So $G-S=G_1\sqcup G_2$, with $G_1$ and $G_2$ connected.
For $i=1,2$, contracting each of $G_i$ to a single node $t_i$, deleting the edge $t_it$, deleting any existing edges of $\big<x,y,z \big>_G$, and then performing a $Y\nabla$-move at $t_i$, produces a nIL graph, denoted by $G'_i$.
Let $G'=G'_1\cup _{K_4}G'_2$ be the clique sum
over the complete graph with vertices $x,y,z,t$.
By Theorem~\ref{ThmHLS}, $G'$ is nIL since $G'-S=G_1\sqcup G_2$;
but $G'$ strictly contains $G$ as a subgraph, a contradiction.
\end{proof}
\begin{lemma} Let $G=G_1\cup_SG_2$ be the clique sum of maxnil graphs $G_1$ and $G_2$ over $S=\big<x,y,z,t\big>_G\simeq K_4$.
Assume $S$ is a minimal vertex cut set in $G$.
Then $G$ is maxnil if and only if, in both $G_1$ and $G_2$, $S$ is not strongly separating.
\label{lemmajoin4}
\end{lemma}
\begin{proof}
If $S$ is strongly separating in $G_1$ or $G_2$, then $G-S$ has at least three connected components, and contracting each of them to a single node produces a minor isomorphic to $K_7$ minus a triangle, which is IL; hence $G$ is not maxnil.
If, in both $G_1$ and $G_2$, $S$ is not strongly separating,
then $G - S$ has only two connected components.
Contracting each of the two components to a single node produces $K_6$ minus an edge as a minor (not $K_7$ minus a triangle); hence $G$ is nIL by Theorem~\ref{ThmHLS}.
Adding an edge between a vertex in $G_1-S$ and a vertex $G_2-S$ and contracting $G_1-S$ and $G_2-S$ to single nodes produces a $K_6$ minor.
It follows that $G$ is maxnil in this case.
\end{proof}
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(200, 120)
\put(0,0){\includegraphics[width=3.3in]{K5clique}}
\end{picture}
\caption{\small A maxnil graph that is a clique sum over $K_5$.}
\label{K5clique}
\end{center}
\end{figure}
The graph $G$ of Figure~\ref{K5clique} is maxnil since $G - u$ is a maximal planar graph. If $S=\big<x, y, z, t, u\big>_G$, $G_1=\big<a,x, y, z, t, u\big>_G$, and $G_2=\big<b,x, y, z, t, u\big>_G$, then $S \simeq K_5$, $G_1\simeq G_2 \simeq K_6^-$ ($K_6$ minus one edge), and $G=G_1\cup_S G_2$.
This shows it is possible for the clique sum of two maxnil graphs over $S\simeq K_5$ to be nIL (and maxnil).
However, no clique $S$ of order 5 can be a minimal vertex cut set in a nIL graph $G$, since then any connected component of $G-S$ would form a $K_6$-minor together with $S$, which would imply $G$ is IL.
For $t \ge 6$, any clique sum over $K_t$ is IL since $K_6$ is IL.
\vspace*{.1in}
J{\o}rgensen studied clique sums of graphs that are maximal without a $K_6$ minor \cite{J}.
These are graphs that contain no $K_6$ minor and such that the addition of any edge creates a $K_6$ minor.
The class of maxnil graphs and the class of graphs that are maximal without a $K_6$ minor are not the same, as shown in the following proposition.
\begin{figure}[htpb!]
\begin{center}
\begin{picture}(400, 200)
\put(0,0){\includegraphics[width=5.8in]{fig-telescope3d+}}
\put(300,70){\small $H=G\setminus\{v,w\}$}
\end{picture}
\caption{\small {A maxnil graph $G$ (left) that is not maximal without a $K_6$ minor is obtained by adding two vertices to a plane triangulation with nine vertices (right)}}
\label{fig-telescope}
\end{center}
\end{figure}
{ \begin{proposition}
The graph in Figure \ref{fig-telescope} is maxnil, and it is not maximal without a $K_6$ minor.
\label{noK6}
\end{proposition}
\begin{proof}
The graph $G$ in Figure \ref{fig-telescope} is obtained by adding vertices $v$ and $w$ to the plane triangulation $H$: vertex $v$ connects to all nine vertices of $H$ and vertex $w$ connects to vertices $a$, $b$ and $c$ of $H$.
The graph $H+v$ is maxnil, since it is a cone over a maximal planar graph \cite{Sa}.
The graph $G$ is the clique sum over $K_3=\big<a, b, c\big>_G$ of maxnil graphs $H+v$ and $K_4=\big<a,b,c,w\big>_G$.
The graph $\big<a,b,c,v\big>_{H+v}$ is the only induced $K_4$ subgraph in $H+v$ containing $a,b$ and $c$ and it is strongly separating in $H+v$.
So, by Lemma \ref{lemmajoin3}, $G$ is maxnil; in particular it has no $K_6$ minor.
The graph $G+vw$ is a clique sum over $K_4=\big<a,b,c,v\big>_G$ of graphs $H+v$ and $K_5=\big<a,b,c,v,w\big>$, both of which are $K_6$ minor free.
Hence, by \cite{J}, $G+vw$ is $K_6$ minor free, so $G$ is not maximal without a $K_6$ minor.
\end{proof}}
\begin{remark} { Starting with the graph $G$ in Proposition \ref{noK6}, one can construct graphs $G_n$ with $n\ge 11$ vertices that are maxnil but not maximal without a $K_6$ minor.
Take $G_{11}=G$ and construct $G_{11+k}$ from $G$ by triangulating the disk bounded by the triangle $efg$ with $k$ new vertices and adding edges between $v$ and these new vertices.
The argument used in the proof of Proposition \ref{noK6} shows that $G_n$, $n\ge 11$, is maxnil but not maximal without a $K_6$ minor.
We conjecture $n=11$ is the minimal order of a graph with this property, i.e., every maxnil graph with $n \le 10$ vertices is maximal without a $K_6$ minor.}
\end{remark}
\bibliographystyle{amsplain}
\section{Introduction}
Gapless {fermionic} quasiparticles with linear spectrum protected by topology arise in many {condensed matter systems} in three dimensions \cite{NielsenNinomiya83, CallanHarvey85, Volovik03, Horava05, WanEtAl11}. In particular, accidental crossings of two inversion ($P$) or time-reversal ($T$) breaking bands at the Fermi energy lead to {stable} quasirelativistic particles with low-energy dispersion analogous to relativistic Weyl fermions \cite{Herring37, AbrisokovBenelavskii71}. Fourfold degenerate crossings with Dirac-like low-energy excitations occur for combined $P,T$ (and/or other similar protecting) symmetries \cite{BalatskyEtAl14, ArmitageEtAl18}. Similarly, in chiral superconductors and superfluids with gap nodes, Majorana-Weyl excitations arise at low energy \cite{Volovik85, Volovik1986a, Volovik1986b, Volovik90, Volovik03, ReadGreen00}.
By a very general theorem from topology \cite{Horava05}, the low-energy linear theory near the {three-dimensional} Fermi point node takes universally the ($\gamma$-matrix) form of a quasirelativistic Weyl/Dirac spectrum, with the precise form of the metric and other background fields depending on the microscopic details. It is then of interest to study the detailed form of this emergent Dirac operator with an explicit cutoff and compare to fundamental, Lorentz invariant fermions. Following this logic, the concept of so-called {momentum space} pseudo gauge fields \cite{Volovik03, ShapourianEtAl15, CortijoEtAl15, Landsteiner16, Fujimoto16, PikulinEtAl16, GrushinEtAl16, SukhachovEtAl17, SukhachovEtAl18, SukhachovEtAl18b, IlanEtAl19} and ``emergent" spacetime \cite{Volovik1986a, Volovik1986b, Volovik90, Volovik03, ReadGreen00, MesarosEtAl10, Son13, BanerjeeEtAl14, NissinenVolovik17, WeststromOjanen17, NissinenVolovik2018, GolanStern18, Nissinen2019, LiangOjanen19, WilsonEtAl20, JafariEtAl20} in non-relativistic condensed matter systems has emerged, where the low-energy fermions can experience background fields of various physical origins, similar to what appears for spin-1/2 (or even higher spin) fermions on curved spacetimes in general relativity or its non-relativistic generalizations with non-relativistic coordinate invariance.
Notably, in the low-energy quasilinear theory, the local Fermi velocities form emergent tetrads which determine the geometry of the conical dispersion. The tetrads, and their field-strength torsion, couple to the quasiparticle momentum effectively as in gravity. The effects of such fields in non-relativistic systems, appearing at finite density $\mu_F$ and Fermi momentum $p_F$, are expected to be very different from their relativistic counterparts appearing at $p=0$. Amongst other things, the system at finite Fermi or crystal momentum is then charged under the field strength of these geometric background fields \cite{MesarosEtAl10, JuricicEtAl12, ParrikarEtAl14, ShapourianEtAl15, PachosEtAl20}. In three spatial dimensions, this corresponds to the anomalous translational symmetry for chiral fermions, leading to axial anomalies in the system from momentum space translations \cite{Nissinen2019, Burkov20}. For other relevant condensed matter considerations of this anomaly, see e.g. \cite{VolovikMineev81, Volovik84, Volovik1986a, CombescotDombre86, BalatskiiEtAl86, Volovik87, Volovik95, BevanEtAl97, Volovik03, ZyuzinBurkov12, SonYamamoto12, Zahed12, ZhouEtAl13, SonSpivak13, BasarKharzeevZahed2013, Landsteiner2014, LucasEtAl2016, GoothEtAl17, ArmitageEtAl18}. In this paper we point out that geometric (gravitational) contributions to the chiral anomaly, second order in gradients, are expected in generic inhomogeneous condensed matter Weyl systems with momentum space fields (background spacetimes), since inhomogeneous deformations lead to torsion.
More generally, the appearance of tetrad background fields in condensed matter Weyl systems is built into the low-energy theory, thus opening the possibility of simulating Riemann-Cartan (or Newton-Cartan) spacetimes for the low-energy fermions. In the case of non-trivial background torsion, the so-called chiral gravitational Nieh-Yan anomaly can appear \cite{NiehYan82, NiehYan82b}. In contrast to the axial anomaly with gauge fields, this anomaly depends on a non-universal UV cutoff parameter $\Lambda$, with canonical dimensions of momentum. While the status of the torsional contribution in relativistic systems has long been debated \cite{Yajima96, ChandiaZanelli97, ObukhovEtAl97, Soo99, PeetersWaldron99, Comments, KubotaEtAl01}, the appearance of this term in non-relativistic condensed matter systems with an explicit UV cutoff for the Weyl physics is a priori plausible \cite{ParrikarEtAl14, Nissinen2019}. Aspects of the gravitational anomaly in condensed matter have been considered e.g. in \cite{Zahed12, ZhouEtAl13, SunWan14, ParrikarEtAl14, PalumboPachos16, MaranerPachosPalumbo18, FerreirosEtAl19, CopettiLandsteiner19, Nissinen2019, Copetti20, Stone2019b}, including Weyl/Dirac fermions in superfluids, superconductors and semimetals. The dimensional hierarchy and descent relations of the torsional anomaly were recently analyzed in Ref. \cite{Stone2019b} from a Hamiltonian perspective in a relativistic model. Nevertheless, it seems that no explicit value of the cutoff parameter has been discussed in detail, except in the recent paper \cite{Nissinen2019} by one of the present authors. In the simplest possible terms, the non-universal anomaly UV scale originates from the regime of validity of the quasirelativistic linear spectrum and the associated anomalous transport; experimentally, this is just the low-energy scale below which the Taylor expansion close to the node holds \cite{Nissinen2019}. Generalizing this, the NY anomaly non-universally probes the chiral spectrum and transport, well-defined only at low energies, and conversely, the chiral branches merge in some left-right asymmetric way with the other bands, as required by global consistency and symmetries. Indeed, at face value, the spectrum and spectral flow can be terminated in a multitude of inequivalent ways. If the system is anisotropic, the interplay of different scales in the system becomes essential, as evidenced by the consideration of the anomaly in e.g. Newton-Cartan geometry with quadratic spectrum along a preferred direction or at finite temperature (see below).
Here we will further argue for {the torsional anomaly} term using the simplest {computational} apparatus for the chiral and axial anomaly: adiabatic spectral flow in the presence of torsional Landau levels \cite{NielsenNinomiya83, Volovik85}. In this context, the torsional LLs appeared implicitly already in Refs. \cite{Volovik85, BalatskiiEtAl86} and more recently in topological semimetals in \cite{ParrikarEtAl14} in {comparison with} Pauli-Villars regularization of Lorentz invariant fermions. On the other hand, such a relativistic regularization scheme is at best only an approximation in condensed matter systems, since the linear Weyl regime applies to low-energies with an explicit cutoff scale. This linear regime can be anisotropic and, furthermore, is continuously connected with the non-relativistic regime with quadratic dispersion. {Moreover, as discussed in this paper, the role of the spectral flow is drastically altered by the finite node momentum as compared to relativistic fermions.}
The role of momentum space pseudo gauge fields, with momentum dependent axial charge, also becomes evident in the geometric framework for the axial anomaly. Importantly, it is incorrect to assume the universal U(1) axial anomaly for such gauge fields, since the effective momentum space description has a finite regime of validity. To the best of our knowledge, this fact has been overlooked thus far. Related to the momentum dependence in the anomaly, the UV scale can be supplemented by an infrared (IR) temperature scale of thermal fluctuations, in contrast to, say, U(1) gauge fields. With some caveats, this IR anomaly becomes universal due to the universality of thermal fluctuations close to the node. The thermal torsional anomaly and the associated currents were recently considered in Ref. \cite{NissinenVolovik2019}. Contributions to the torsional NY anomaly at finite temperature were further discussed in \cite{ImakiYamamoto19, Stone2019, NissinenVolovik19b, LiangOjanen19b, Imaki20} for relativistic fermions at $p=0$. The closely related role of torsion in viscoelastic thermal transport has also been studied e.g. in \cite{Shitade14, BradlynRead15, GromovAbanov15, Sekine16}. Here we will mostly focus on the non-universal UV contribution at zero temperature. For completeness, we comment on thermal effects from non-zero temperature gradients, which point to still new types of anisotropic torsional anomaly terms not present in systems with Lorentz invariance.
The rest of this paper is organized as follows. Section \ref{sec:spacetimes} discusses the low-energy Weyl Hamiltonian and the associated geometry in condensed matter systems from the perspective of emergent background spacetimes. The following Section \ref{sec:torsional_LLs} reviews the relativistic torsional anomaly and spectral flow argument, focusing on the extension to finite node momentum and the comparison with the anomaly for U(1) gauge fields presented in Appendix \ref{sec:appendix_EM}. Sec. \ref{sec:chiral} discusses the torsional anomaly in chiral superfluids and superconductors, where it can be matched with experiment \cite{Volovik95, BevanEtAl97, Nissinen2019}. This is followed by a model of $T$-breaking strained semimetals in Sec. \ref{sec:WSM}. We also briefly discuss the role of torsion in the presence of thermal gradients in Sec. \ref{sec:thermal}. We conclude with a comparison to previous results in Sec. \ref{sec:comparison}, followed by the conclusions and outlook.
\section{Weyl fermions in condensed matter and relativistic systems}\label{sec:spacetimes}
\subsection{Weyl fermions in condensed matter}
We consider a {fermionic} system with broken time-reversal symmetry ($T$) or inversion ($P$). {In the vicinity of a generic degenerate crossing at $\vek{p}_W$, ignoring all other bands, the $2\times2$ Hamiltonian is $H = \sigma^a H_a$ in terms of the unit and Pauli matrices $\sigma^a$, $a=0,1,2,3$. This leads to the expansion}
\begin{align}
H(\vek{p}) = \sigma^a e_a^{i}(p-p_W)_{i} + \cdots \label{eq:HWeyl},
\end{align}
where
\begin{align}
e_a^i = \frac{\partial H_a}{\partial p_i}\bigg \vert_{p=p_W}. \label{eq:Taylor_tetrad}
\end{align}
The expansion is, of course, valid for $\abs{\vek{p}-\vek{p}_W}\ll p_W$ since the remainder is of the order of $\abs{\vek{p}-\vek{p}_W}^2$. This provides an explicit cutoff for the linear Weyl regime that is, nevertheless, continuously connected with the non-relativistic quadratic dispersing spectrum and the other bands.
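For orientation, the simplest isotropic example (a sketch in our notation, not tied to a specific material) is $H_a = v_F (p-p_W)_a$ for $a=1,2,3$, for which Eq. \eqref{eq:Taylor_tetrad} gives
\begin{align}
H(\vek{p}) = v_F\, \sigma^i (p-p_W)_i, \quad e^i_a = v_F\, \delta^i_a,
\end{align}
i.e. the tetrads reduce to an isotropic Fermi velocity and the dispersion is the cone $\omega = \pm v_F \abs{\vek{p}-\vek{p}_W}$; generic $e^i_a$ then simply shear and rescale this cone.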
The existence of the Weyl node degeneracy is protected by topology in a finite parameter region, since there are three parameters and three constraints \cite{Herring37, AbrisokovBenelavskii71, Volovik03, Horava05}. Via rotations and scalings, $\tilde{p}_a = e^i_a p_i$, the Hamiltonian becomes the right- or left-handed relativistic Weyl Hamiltonian, at Fermi momentum $\tilde{p}_W$,
\begin{align}
\tilde{H}(\vek{p}) = \chi \sigma^a (\tilde{p}-\tilde{p}_W)_a
\end{align}
where $\chi=\pm 1 = \text{sgn}(\det e^i_a)$ is the chirality, defined as the direction of (pseudo)spin with respect to the propagation momentum. The band energies are $E =(\tilde{p}-\tilde{p}_W)_0\pm \abs{\tilde{\vek{p}}-\tilde{\vek{p}}_W}$. The role of the coefficients $e^\mu_a$ is simply to determine the (anisotropic) Fermi velocities of the conical dispersion $\omega^2 = -g^{ij}(p-p_W)_i (p-p_W)_j$ via the (inverse) metric
\begin{align}
g^{ij} = -\sum_{a,b=0,1,2,3} e^i_a e^j_b \delta^{ab} \equiv -e^i_a e^j_b \delta^{ab} \label{eq:cone_metric}
\end{align}
where the Einstein summation convention for repeated latin and greek indices will be henceforth assumed. The spatial tetrad $e_a^i$ is extended to a non-degenerate matrix $e^{\mu}_a$ by considering the operator $\sigma^a e_a^{\mu}i\partial_{\mu} =i \partial_t - H(\vek{p})$ with $\mu=t,x,y,z$. In particular, the coefficient $e^\mu_0=\{1,v^i\}$ is non-trivial in type-II Weyl semimetals and {in} superfluids and superconductors with superflow. The case with non-zero spatial $e^{t}_a$, $a=1,2,3$ was considered in \cite{NissinenVolovik17}. These break different symmetries, while the spacelike tetrads {transform} like gauge potentials corresponding to axial magnetic and electric fields. While the Hamiltonian \eqref{eq:HWeyl} is usually analyzed for translationally invariant systems, it remains valid for weak deformations. This can be seen {in} any consistent gradient expansion scheme, e.g. the semi-classical gradient expansion of the BdG Hamiltonian for superconductors/superfluids, or the Schrieffer-Wolff transformation for Bloch Hamiltonians \cite{WeststromOjanen17, LiangOjanen19}.
{We conclude that} the Hamiltonian \eqref{eq:HWeyl} has striking similarity to relativistic fermions coupled to non-trivial background geometry or gravity, {albeit with some important caveats}. More precisely, if we consider the low-energy Weyl fermion $\Psi_W$ in terms of the original excitations $\Psi$, {we see}
\begin{align}
\Psi(\vek{x},t) = e^{i \vek{p}_W \cdot \vek{x}} \Psi_W(\vek{x},t), \label{eq:momentum_rotation}
\end{align}
which, however, corresponds to the anomalous (chiral) rotations in the system, thus making the finite node momentum $p_W$ very important. In the rest of the paper, we will explicitly consider the anomaly implied by \eqref{eq:momentum_rotation} in the presence of non-trivial background fields $e^{\mu}_a(x)$, from Eq. \eqref{eq:Taylor_tetrad}, after reviewing the necessary {background geometry} in the next section. {U(1) gauge fields are assumed to be absent}. We will focus here on $T$-breaking systems, where in the simplest case one finds Weyl nodes of opposite chirality at $\pm \vek{p}_W$, whereas for inversion $P$ breaking systems one has at minimum four Weyl points, which are invariant under $T$ and map non-trivially to themselves under inversion.
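As a simple check of conventions, acting with the momentum operator on Eq. \eqref{eq:momentum_rotation} gives
\begin{align}
\hat{p}_i \Psi = e^{i \vek{p}_W \cdot \vek{x}}\left(\hat{p}_i + p_{W,i}\right)\Psi_W,
\end{align}
so the Weyl fermion $\Psi_W$ carries the kinetic momentum measured from the node, as in Eq. \eqref{eq:HWeyl}. Conversely, a coordinate-dependent node $p_{W,i}(x)$ cannot be removed by a single-valued phase redefinition, which is the sense in which Eq. \eqref{eq:momentum_rotation} is an anomalous chiral rotation.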
\subsection{Quasirelativistic fermions}
We briefly summarize quasirelativistic fermions on curved Riemann-Cartan spacetimes here; see e.g. \cite{NiehYan82, ObukhovEtAl97, ChandiaZanelli97, ParrikarEtAl14}. These spacetimes are defined via an orthonormal frame $e^a = e^a_{\mu}dx^\mu$, giving rise to the metric as in Eq. \eqref{eq:cone_metric}, and a (matrix) spin-connection $\hat{\omega}_{\mu} dx^{\mu}$, both of which couple to the Dirac (and Weyl) equations. Informally, $e^a_{\mu}$ is a spacetime ``translation gauge field'', while $\hat{\omega}$ is the gauge connection corresponding to local (Lorentz) rotations. See e.g. \cite{NiehYan82b}.
As discussed above and in the Introduction, analogous fields arise in the low-energy Weyl Hamiltonian close to the nodes in condensed matter systems on flat space, giving rise to emergent spacetimes for the low-energy fermions. These are, however, not strictly relativistic in the sense that the emergent metric does not follow from the locally Lorentz invariant spacetimes of general relativity, but rather from the microscopic non-relativistic UV theory at low energy. This is what we mean by quasirelativistic and emergent.
Note that the spin-connection is, strictly speaking, the gauge field of a local symmetry entering the Dirac operator; therefore its emergence requires the corresponding local symmetry. It arises, for example, in chiral superconductors and superfluids due to the combined local U(1) symmetry of gauge and orbital rotations \cite{LiuCross79, GolanStern18, Nissinen2019}. The tetrad and connection fields give rise to the torsion $T^a=de^a+(\hat{\omega} \wedge e)^a$ and curvature $\hat{R} = d\hat{\omega}+\hat{\omega} \wedge \hat{\omega}$ field strength tensors that equivalently characterise the spacetime. From the tetrad one can derive the spacetime metric, which enters as a secondary object, in contrast to usual Riemannian spacetimes where the connection is symmetric and uniquely fixed by the metric.
In terms of equations, the basic quantities are the tetrad $e^a_{\mu}$ and coordinate connection $\Gamma_{\mu\nu}^{\lambda}$. The former is the {metric matrix square-root}
\begin{align}
g_{\mu\nu} = e^a_{\mu} e^b_{\nu} \eta_{ab}, \quad e_{a}^\mu e_{b}^{\nu} \eta^{ab} = g^{\mu\nu}
\end{align}
by defining a local orthonormal frame, in terms of $\eta_{ab}= \textrm{diag}(1,-1,-1,-1)$. Now tensors $X^{a\cdots \mu \cdots}_{b\cdots \nu \cdots}$ can carry local orthonormal (Lorentz) indices and coordinate indices; the two bases can be transformed into one another by contracting with $e^a_{\mu}$ or the inverse $e^{\mu}_a$. The connection consistent with such basis changes, defined by $\nabla_{\nu} e^a_{\mu} = 0$, has two parts, one for local orthonormal indices and one for coordinate indices, and is metric compatible. This connection determines geometric parallel transport in the system. Without loss of generality it can be written as
\begin{align}
\omega^a_{\mu b} = e^a_{\lambda} e^{\nu}_{b} \Gamma^{\lambda}_{\mu\nu} - e^a_{\nu}\partial_{\mu} e^{\nu}_b \label{eq:spin-connection},
\end{align}
where $\Gamma_{\mu\nu}^{\lambda}$ is the coordinate connection with torsion
\begin{align}
T_{\mu\nu}^{\lambda} = \Gamma^{\lambda}_{\mu\nu} - \Gamma^{\lambda}_{\nu \mu}.
\end{align}
The connection can be decomposed in terms of torsion as
\begin{align}
\Gamma^{\lambda}_{\mu\nu} = \mathring{\Gamma}^{\lambda}_{\mu\nu} + C^{\lambda}_{\mu \nu},
\end{align}
where $\mathring{\Gamma}^{\lambda}_{\mu\nu} = \frac{1}{2}g^{\lambda\rho}(\partial_{\mu}g_{\nu\rho} +\partial_{\nu}g_{\mu\rho}-\partial_{\rho}g_{\mu\nu})$ is the Christoffel connection fully determined from the metric and $C^\lambda_{\mu\nu} = \frac{1}{2} (T^{\lambda}_{\ \mu\nu} + T_{\mu \ \nu}^{\ \lambda} - T^{\ \ \lambda}_{\mu \nu})$ is the contorsion tensor.
The low-energy quasirelativistic Weyl fermion theory is, in the chiral Dirac fermion basis $\psi = \left(\begin{matrix} \psi_L & \psi_R \end{matrix}\right)^T$, where $\psi_{R,L}$ are Weyl fermions and $\gamma^a = \overline{\sigma}^a \oplus \sigma^{a}$ with $\overline{\sigma}^a = (1,-\sigma^i)$,
\begin{align}
S_{D} = \int d^4 x e~ \frac{1}{2}\overline{\psi}\gamma^a (e^{\mu}_a i D_{\mu } - p_{Wa})\psi + \textrm{ h.c.} ~. \label{eq:Dirac_action}
\end{align}
where $e \equiv \det e^a_{\mu}$ and $D_{\mu}$ is the covariant derivative corresponding to the canonical momentum
\begin{align}
D_{\mu} = \partial_{\mu} - \frac{i}{4} \omega_{\mu}^{ab} \sigma_{ab} - i q A_{\mu}
\end{align}
where $\sigma_{ab} = \frac{i}{2}[\gamma_a,\gamma_b]$ and $A_{\mu}$ is a U(1) gauge potential with charge $q$. They enter the covariant derivative or canonical momentum due to local Lorentz (rotation) and gauge symmetries. For the emergent spin-connection to exist, the local rotation symmetry has to be dynamically generated; see Sec. \ref{sec:chiral} and \cite{Nissinen2019}. Importantly for our applications, the quantity $p_{Wa} = (\mu_W, \vek{p}_W )$ is the shift of the Weyl (or Dirac) node at chemical potential $\mu_W = e_0^\nu p_{W\nu}$ and $\vek{p}_{Wa} = e^i_a p_{Wi}$ in momentum space. The magnitude of the latter is a UV parameter that is fixed (up to small deformations) in the low-energy theory.
\subsection{Anisotropic Newton-Cartan fermions}\label{sec:Newton-Cartan}
A related concept to the Riemann-Cartan spacetime of Eq. \eqref{eq:Dirac_action} is an anisotropic version of a non-relativistic Newton-Cartan (NC) spacetime. In the latter, we single out a Newtonian time and, in our case, a preferred spatial direction with quadratic dispersion, in contrast to the linear Riemann-Cartan case. In what follows, in Secs. \ref{sec:chiral} and \ref{sec:WSM}, this preferred direction is along the Weyl node separation, with uniaxial symmetry and anisotropic scaling. Compared to the standard NC case, there is an additional gauge symmetry corresponding to U(1) number conservation and a local Milne boost symmetry along the anisotropy direction \cite{Son13, BanerjeeMukherjee18, CopettiLandsteiner19, Copetti20}. These will both be gauge fixed to zero and will be applied mostly in the case of the chiral superconductor/superfluid, where they are naturally absent for Majorana-Weyl fermions. With the time coordinate fixed, the symmetries of the NC spacetime then correspond to the generalized Galilean transformations $x^i \to x^i +\xi^i(x,t)$ \cite{DuvalEtAl85, Son13, ObersEtAl14, BanerjeeEtAl14, WilsonEtAl20}.
The metric is
\begin{align}
g_{\mu\nu} = n_{\mu}n_{\nu} + h_{\mu\nu}
\end{align}
where now $n_\mu$ is a \emph{spacelike} vector, {$e^a_{\mu}$ a (degenerate) tetrad with metric $h_{\mu\nu}$ restricted to the orthogonal subspace}, with $e^0_\mu = \delta^0_\mu$ representing Newtonian time,
\begin{align}
h_{\mu\nu} = \eta^{ab} e^{a}_{\mu}e_{\nu}^b, \quad a,b =0,1,2,
\end{align}
with inverses
\begin{align}
n_{\mu}\ell^{\mu} = 1, \quad e^a_{\mu}\ell^{\mu} =0, \quad e^a_{\mu} e^{\mu}_b = \delta^a_b,\quad a=0,1,2.
\end{align}
The connection and torsion follow as
\begin{align}
\Gamma^{\lambda}_{\mu\nu} = \mathring{\Gamma}^\lambda_{\mu\nu}[h] + \ell^{\lambda}\partial_{\mu}n_{\nu},
\end{align}
from the condition that $\mathcal{L}_{\ell} h_{\mu\nu} = 0$, equivalent to $\nabla_{\mu}n_\nu = \nabla_{\lambda}h_{\mu\nu}=0$. The torsion is given as
\begin{align}
T^3_{\mu\nu} \equiv n_\lambda T^{\lambda}_{\mu\nu} = -\partial_{\mu}n_{\nu} + \partial_{\nu}n_{\mu}
\end{align}
and the standard spin-connection perpendicular to $\ell^\mu$, $\mathring{\omega}_{\mu\nu}[h]$, {as in} Eq. \eqref{eq:spin-connection}, amounting to local rotation symmetry along $\ell^\mu$. The fact that $n_{\mu}$ is covariantly constant is natural, since it can be identified with the direction corresponding to non-zero Weyl node separation in e.g. $T$-breaking Weyl systems.
We discuss in Sec. \ref{sec:chiral} the Landau level problem of Majorana-Weyl fermions corresponding to such a spacetime, with the (right-handed Weyl) action
\begin{align}
S_{W} = \int d^4x \sqrt{g}\, \psi^\dagger [\tau^a c_{\perp}e_a^\mu i \partial_\mu - \tau^3\epsilon(i \partial_\ell)] \psi + \textrm{ h.c.} \label{eq:NC_fermion}
\end{align}
where $\epsilon(i\partial_\ell) = -\partial_\ell^2/(2m)-\mu_F$ in the anisotropic direction with $\partial_\ell = \ell^{\mu}\partial_{\mu}$, corresponding to the non-relativistic dispersion and degenerate metric $\ell^{\mu}\ell^{\nu} = g^{\mu\nu}-h^{\mu\nu}$. In this case the relative anisotropy of the two terms is $c_{\perp}/c_{\parallel} = mc_{\perp}/p_F$, where $p_F = \sqrt{2m\mu_F}$ and $c_{\parallel}=v_F$ is the Fermi velocity. This NC model can be matched to the results discussed in \cite{Nissinen2019}. Note that a very similar model with Lifshitz anisotropy was considered in \cite{CopettiLandsteiner19}, and the ensuing torsional anomalies for momentum transport in \cite{Copetti20}. For a semimetal under strain, the model in Sec. \ref{sec:WSM} is correspondingly anisotropic, but the precise connection to a specific NC model and its symmetries remains to be worked out in full detail.
\section{Torsional anomalies and Landau levels}\label{sec:torsional_LLs}
\subsection{Torsional Nieh-Yan anomaly}
Now consider Weyl fermions coupled to a tetrad with non-zero torsion and curvature with the U(1) gauge fields set to $A_{\mu} = A_{5 \mu} = 0$, {see however Appendix \ref{sec:appendix_EM}}. As for the U(1) gauge fields, or gravitational fields represented by the metric $g_{\mu\nu}$, the Weyl fermions are anomalous in the presence of non-zero torsion (and curvature).
We focus on a pair of complex fermions of opposite chirality with currents $j^{\mu}_{\pm}$. The (covariant) torsional anomaly for the axial current $j^{\mu}_5 = j^\mu_{+}-j^\mu_{-}$ is \cite{Yajima96, ObukhovEtAl97, ChandiaZanelli97, Soo99, PeetersWaldron99}
\begin{align}
\partial_{\mu} (e j^{\mu}_5) &= \frac{\Lambda^2}{4\pi^2} (T^a \wedge T_a - e^a \wedge e^b \wedge R_{ab}) \nonumber \\
&\phantom{=} + \frac{1}{192\pi^2}\textrm{tr}(R\wedge R) \label{eq:NYanomaly} \\
&= \frac{\Lambda^2}{4\pi^2} \epsilon^{\mu\nu\lambda\rho}\left(\frac{1}{4}T^a_{\mu\nu}T_{a\lambda\rho} - \frac{1}{2}e^a_{\mu}e^b_{\nu}R_{ab\lambda\rho}\right) + O(\partial^4). \nonumber
\end{align}
For a discussion of the relativistic torsional anomaly term, we refer to \cite{NiehYan82, NiehYan82b, ObukhovEtAl97, ChandiaZanelli97, Comments}, and for applications in topological condensed matter systems, to \cite{SunWan14, ParrikarEtAl14, FerreirosEtAl19, Nissinen2019, Stone2019, Copetti20, LiangOjanen19b}. For the mixed terms between torsion and U(1) gauge potentials, see e.g. \cite{KubotaEtAl01}; since we focus on the anomaly contribution arising solely from the geometry (tetrads), we will not consider them. Ref. \cite{FerreirosEtAl19} also considered novel ``axial'' tetrads $e^a_{\mu R} \neq e^a_{\mu L}$ at the two Weyl nodes $R,L$, with a (vector-like) $T^5$ appearing as in Eq. \eqref{eq:U1_anomaly_eqs}. We will require $e_R = \pm e_L$, which is actually a rather strong constraint, essentially allowing only (improper) rotations that can be gauged away. In the chiral Weyl superfluid/superconductor or the minimal time-reversal breaking semimetal, $e_R =-e_L$, but this is just the chirality of the nodes and is built into the axial nature of torsion. Intriguingly, the trace part of torsion arises as the gauge field of local Weyl scalings, but, being non-unitary, this comes with a complex gauge coupling \cite{NiehYan82}. The presence of different (chiral) tetrad couplings and overall symmetry considerations would be highly interesting for e.g. parity breaking and other non-minimal Weyl systems with several nodes, some of which coincide in momentum space.
To conclude this section, we note the following salient properties of the NY anomaly term: i) Despite appearances, it is given by the difference of topological terms, albeit in five dimensions \cite{ChandiaZanelli97}. ii) The NY anomaly term is of second order in gradients and is therefore the leading contribution from the background geometry in linear response. iii) The UV cutoff is isotropic in momentum space by (local) Lorentz invariance but multiplies the geometric term, which can be anisotropic. In condensed matter applications we do not expect Lorentz invariance, so in principle non-isotropic anomaly coefficients can arise (see e.g. Sec. \ref{sec:thermal}). iv) The NY term has contributions from torsion and curvature, dictated by the local exactness $d(e^a \wedge T_a) = T^a \wedge T_a -e^a\wedge e^b \wedge R_{ab}$. The two contributions are a priori independent before the geometry (the torsionful connection) is fixed; the anomaly is therefore physical input for the spacetime geometry or connection \cite{Nissinen2019}. In more pragmatic terms, the anomaly coefficient $\Lambda^2$ can be computed in the case $\hat{\omega}_\mu = 0$, although the constraints of a consistent spacetime geometry should be kept in mind.
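For completeness, the local exactness quoted in (iv) follows in two lines from the Cartan structure equation $T^a = de^a + \hat{\omega}^a{}_b \wedge e^b$ and the Bianchi identity $dT^a = R^a{}_b \wedge e^b - \hat{\omega}^a{}_b \wedge T^b$:
\begin{align}
d(e^a \wedge T_a) &= de^a \wedge T_a - e^a \wedge dT_a \nonumber \\
&= T^a \wedge T_a - e^a \wedge e^b \wedge R_{ab},
\end{align}
where the spin-connection terms cancel pairwise by the antisymmetry of $\hat{\omega}_{ab}$.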
\subsection{Quasirelativistic fermions and torsional Landau levels}
Now we proceed to compute the torsional NY anomaly in non-relativistic systems utilizing the Landau level argument. To set the stage and avoid confusion before presenting our main results, we briefly review (quasi)relativistic torsional Landau levels with linear spectrum, see e.g. \cite{ParrikarEtAl14}. The computation of the Landau levels is close to, and inspired by, the spectral flow obtained in \cite{Volovik85, BalatskiiEtAl86} for momentum space gauge fields at $p_W\neq 0$. Similar considerations for $p=0$ can be found in \cite{Stone2019, Stone2019b}.
The Weyl particles are governed by the effective Hamiltonian
\begin{align}
H_{\rm W} = \frac{1}{2}\sigma^a e^{i}_{a}(\hat{p}_i - p_{W,i}) + \textrm{h.c.}
\end{align}
where $\vek{p}_W$ is the location of the Weyl point. Due to the lack of protecting symmetries (namely at least broken $P$ or $T$) the shift vector
\begin{align}
p_{W,\mu} = (\mu_W, \vek{p}_W)
\end{align}
is necessarily non-zero for the existence of the Weyl point. However, we will focus on the $T$-breaking case with two nodes of opposite chirality at $\pm\vek{p}_W$ and assume that $\mu_W$ is zero unless otherwise specified.
In this section, we assume that the coordinate dependence of the Hamiltonian arises solely from the tetrad $e^\mu_a(x)$, while the location of the node, $p_{Wa}$, is assumed to be constant. Note that the coordinate momentum $p_{W\mu} \equiv e^a_\mu p_{Wa}$ can still vary, and when $T^a_{\mu\nu} \neq 0$ there is non-zero torsion. Torsional LLs arise when, say, $\frac{1}{2}\epsilon^{ijk}T^3_{jk} = T_B\unitvec{z}^i$ is constant, with the other torsion components and the spin connection vanishing. We discuss later, in Secs. \ref{sec:chiral} and \ref{sec:WSM}, how to make the identification between the low-energy emergent gravitational fields and microscopic background fields in specific examples.
\subsubsection{Torsional Landau levels}
Specifically, the {assumed} (semi-classical) tetrads $e^a = e^a_{\mu} dx^{\mu}$ and the inverse $e_a = e^{\mu}_a \partial_{\mu}$ are, following \cite{Volovik85, BalatskiiEtAl86, ParrikarEtAl14},
\begin{align}
e^0 &= dt, \quad e^{1} = dx, \quad e^{2} = dy, \quad e^3 = dz-T(y)dx \nonumber \\
e_0 &= \partial_t,\quad e_{1} = \partial_x+T(y)\partial_z, \quad e_2 = \partial_y, \quad e_3 = \partial_z .\label{eq:torsion_tetrad}
\end{align}
Now we compute the spectrum of the Weyl fermions in the presence of a constant torsional magnetic field, taking $T(y)=T^3_B y$ in Eq. \eqref{eq:torsion_tetrad}. The corresponding metric is
\begin{align}
g_{\mu\nu}dx^{\mu}dx^{\nu} &= \eta_{ab}e^a e^{b} \nonumber \\
&= dt^2-(1+T(y)^2)dx^2-dy^2\\
&\phantom{=}-2 T(y) dx dz-dz^2 . \nonumber
\end{align}
The torsion is given by $T^3_{\mu\nu} = \partial_\mu e^3_\nu-\partial_{\nu}e^3_{\mu}$, i.e. $T^3 = de^3 = \partial_y T(y)\, dx \wedge dy$, so that $T^3_{xy} = \partial_y T(y) = T_B^3$. In analogy with the electromagnetic tensor, we will call $\frac{1}{2} \varepsilon^{ijk} T_{jk}^a$ and $T^a_{0i}$ torsional magnetic and electric fields, respectively.
The Weyl Hamiltonian couples to the non-trivial vierbein as ($\chi$ being the chirality)
\begin{align}
\label{eq:hamT}
H_\ch =& \frac{\ch}{2} \sigma^a e_a^i \hat{p}_i +\textrm{ h.c.} \nonumber \\
=& \ch\begin{bmatrix}\hat{p}_z && \hat{p}_x+\hat{p}_z T_{B}^3 y - i\hat{p}_y\\ \hat{p}_x+\hat{p}_zT_{B}^3 y + i\hat{p}_y && -\hat{p}_z \end{bmatrix}.
\end{align}
As usual, the energy eigenvalues are obtained from squaring the Hamiltonian
\begin{align*}
H^2 &= \sigma^a e^i_a \hat{p}_i\, e^j_b \sigma^b \hat{p}_j = e^i_a e^j_b \sigma^a \sigma^b \hat{p}_i \hat{p}_j + e^i_a\sigma^a\sigma^b \{\hat{p}_i,e^j_b\}\hat{p}_j \\
& = e^i_a e^j_b (-\eta^{ab}+i\epsilon^{abc}\sigma^c) \hat{p}_i \hat{p}_j + \frac{iT_{B}^3}{2}[\sigma^2,\sigma^1] \hat{p}_z \\
& = -g^{ij}\hat{p}_i\hat{p}_j - T_{B}^3\sigma^3\hat{p}_z \\
& = \hat{p}_y^2 + \hat{p}_z^2 + (\hat{p}_x + T_{B}^3 \hat{y}\hat{p}_z)^2 - T_{B}^3\sigma^3 \hat{p}_z.
\end{align*}
We see \eqref{eq:hamT} is equivalent to a LL problem in a magnetic field [Eq. \eqref{eq:Hmag} for \(B^z = T_{B}^3\) and \(e = p_z\) in Appendix \ref{sec:appendix_EM}]. With those identifications, the spectrum is consequently [from Eq. \eqref{eq:relEMspectrum}]:
\begin{align}
\label{eq:tllspectrum}
E(p_z) = \begin{cases}\pm \sqrt{p_z^2+2|p_zT_{B}^3 |n}, \quad n\geq1 \\ \text{sgn}(T_{B}^3 )\ch|p_z|, \quad n = 0. \end{cases}
\end{align}
The lowest Landau level (LLL) is chiral and unpaired with the simple eigenfunctions, $\sigma^3=\pm1$,
\begin{align}
\Psi_{\sigma^3}(x,p_x,p_z) \sim e^{i (p_x x+p_z z)} e^{\pm(p_x y-p_z T_B y^2/2)} \label{eq:LLL_gaussian}
\end{align}
where the (pseudo)spin or helicity is determined by $\text{sgn}(p_zT_B)$. We stress that the shape of the spectrum is in general also modified due to the momentum replacing the electric charge: left-handed states now disperse as \(E<0\) and right-handed states as \(E>0\) (or vice versa, depending on the sign of the field), see Fig. \ref{fig:relativistic_TLL}.
\begin{figure}
\centering
\includegraphics[width=220pt]{Kuvaajat/TLL_rel_spectrum_occupied.pdf}
\caption{Dispersion of left-handed (LLL in blue) and right-handed (LLL in red) Weyl fermions at $p_W=0$ under a torsional magnetic field.}
\label{fig:relativistic_TLL}
\end{figure}
\subsubsection{Spectral flow and anomaly}
Analogously to the Landau level calculation with electromagnetic fields, we may turn on a constant torsional electric field parallel to \(T_{B}^3 \) by introducing time dependence in the vierbein as \(e_z^3 = 1+T_{E}^3 t\), with $T_{E}^3 t \ll 1$. Then $e^z_3 = (1+T_{E}^3 t)^{-1} \approx 1-T_{E}^3 t$. This induces the adiabatic time dependence $\partial_t p_z = (\partial_t e^3_z) p_3$, analogous to the Lorentz force, which leads to spectral flow of states through the momentum-dependent torsional electric field. The number currents in the vicinity of the node $p_z = e^3_z p_3 = p_{Wz}=0$ are, for both chiralities,
\begin{align}
\label{eq:tllcurrent}
e j^0_\chi(t) &= \frac{T_{B}^3 }{2\pi} \int_{-\Lambda}^{\Lambda}\frac{dp^3}{2\pi}|p_z| \nonumber \\
&= - \Lambda^2\frac{T_{B}^3 (1+T_{E}^3 t)}{4\pi^2} = -\Lambda^2\frac{T^3_{xy} e_z^3}{4\pi^2},
\end{align}
where a cutoff \(\Lambda\) has been introduced to regularize the momentum dependent current density {and spectrum}. We see that for $E<0$, particles flow below the cutoff, whereas for $E>0$, holes flow above the cutoff, see Fig. \ref{fig:relativistic_spectral_flow}. Then, taking into account the fact that the tensorial current density is modified by the volume element $e d^4 x$ in the presence of torsion, see e.g. \cite{Soo99, BradlynRead15},
\begin{align}
\partial_t(e j^0_{\chi}) &= \mp \Lambda^2\frac{T^3_{xy}\, \partial_t e_z^3}{4\pi^2} = \mp \Lambda^2\frac{T_{B}^3 T_{E}^3 }{4\pi^2} \nonumber\\
&= \mp\frac{\Lambda^2}{32\pi^2}\levic T^3_{\mu\nu} T^3_{\rho\sigma}, \label{eq:spectral_flow_anomaly}
\end{align}
from holes or particles moving above or below the cutoff, respectively, depending on the direction of the torsional electric field. This is the vacuum regularization that was {also} used in Ref. \onlinecite{ParrikarEtAl14} in the sense $n_{\rm vac} =\sum_{\abs{E_n}\leq \Lambda} \text{sgn}(E_n)$, where an additional factor of one half was present, presumably due to comparison with anomaly inflow from five dimensions. Generalizing this to a fully covariant expression, see the Appendix \ref{sec:appendix_EM}, gives
\begin{align}
\frac{1}{e}\partial_\mu(ej^\mu_{5}) = \frac{1}{e}\frac{\Lambda^2}{16\pi^2}\levic T^3_{\mu\nu} T^3_{\rho\sigma}, \label{eq:j_spectral_flow}
\end{align}
and in particular $\partial_{\mu} (ej^\mu)=0$, as required. We discuss the relativistic vacuum and the spectral flow leading to Eq. \eqref{eq:j_spectral_flow}, as compared to nodes at finite momenta and axial U(1) fields, further in the next section.
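As a consistency check on the numerical factors (a sketch for the field configuration above), the only nonvanishing torsion components are $T^3_{xy} = T_{B}^3$ and $T^3_{tz} = \partial_t e^3_z = T_{E}^3$, so that
\begin{align}
\frac{\Lambda^2}{16\pi^2}\levic T^3_{\mu\nu} T^3_{\rho\sigma} = \frac{\Lambda^2}{16\pi^2}\, 8\, T^3_{tz} T^3_{xy} = \frac{\Lambda^2 T_{B}^3 T_{E}^3}{2\pi^2},
\end{align}
matching the sum of the two spectral-flow rates \eqref{eq:spectral_flow_anomaly}, which contribute with opposite signs to $j^{\mu}$ but add in $j^{\mu}_5$.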
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{Kuvaajat/TLL_spectral_flow_2.pdf}
\caption{Relativistic spectral flow at $k=0$ in the presence of torsion, with the adiabatic transfer of states. The dashed line indicates the location of the cutoff $\Lambda$.}
\label{fig:relativistic_spectral_flow}
\end{figure}
\subsubsection*{Torsional anomaly for \(p_W \neq 0\)}
If we now displace the Weyl nodes in the relativistic case \eqref{eq:hamT} by \(p_z = \pm p_{W}\) in momentum space, corresponding to a $T$-breaking Weyl system, the spectrum \eqref{eq:tllspectrum} takes the form
\begin{align}
E(p_z) = \begin{cases}\pm \sqrt{(p_z\pm
p_{W})^2+2|p_zT_{B}^3 |n}, \quad n\geq1 \\ \text{sgn}(\ch p_zT_{B}^3 )(p_z\pm p_{W}), \quad n = 0. \end{cases}
\end{align}
The lowest, chiral Landau level looks exactly like that of a Weyl fermion in an axial magnetic field, Eq. \eqref{eq:displacedham}. Higher levels are distorted due to the effective charge carried by the particles being their momentum. See Fig. \ref{fig:pseudotorsion}.
\begin{figure}[h]
\centering
\includegraphics[width=250pt]{Kuvaajat/TLL-condensed-specflow.pdf}
\caption{Left-handed Weyl particles at $k_z = k_0$ (LLL in red) and right-handed Weyl holes at $k_z = -k_0$ (LLL in blue) under a torsional magnetic field. Spectral flow is indicated with the arrows.}
\label{fig:pseudotorsion}
\end{figure}
Since the node is at finite momentum $p_W\neq 0$, the spectral flow summation is now centered around $p_W$, with cutoffs at $p_W \pm \Lambda'$, where $\Lambda'$ is a cutoff set e.g. by the validity of the linear spectrum. For notational convenience and comparison with Eq. \eqref{eq:j_spectral_flow}, we write the momentum cutoff as $\Lambda' = \frac{\Lambda_{\rm rel}^2}{2} p_W$, where we expect $\frac{\Lambda_{\rm rel}^2}{2} \ll 1$, this being the dimensionless ratio of the cutoff of the linear spectrum to $p_W$. The spectral flow results in the expression, where particles and holes simply add at the two nodes,
\begin{align}
\frac{1}{e}\partial_\mu(ej^\mu_{5}) = \frac{1}{e}\frac{p_W^2 \Lambda_{\rm rel}^2}{16\pi^2}\levic T^3_{\mu\nu} T^3_{\rho\sigma}
\end{align}
which shows that the NY anomaly cutoff is proportional to the node momentum $p_W$, and is small by a factor $\Lambda^2_{\rm rel}\ll 1$ corresponding to the validity of the linear Weyl approximation.
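Explicitly, paralleling Eq. \eqref{eq:tllcurrent} (a sketch of the counting), the LLL density at the node at $+p_W$ integrates to
\begin{align}
e j^0_{+} = \frac{T_{B}^3}{4\pi^2} \int_{p_W-\Lambda'}^{p_W+\Lambda'} dp^3\, \abs{p_z} \simeq \frac{T_{B}^3\, 2\Lambda' p_W}{4\pi^2} = \frac{T_{B}^3\, \Lambda_{\rm rel}^2\, p_W^2}{4\pi^2},
\end{align}
and the torsional electric field pumps these states exactly as before; adding the equal, opposite-chirality contribution from the node at $-p_W$ reproduces the coefficient $p_W^2 \Lambda_{\rm rel}^2/16\pi^2$ above.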
\subsubsection{Comparison of torsion to U(1) fields}
From Figs. \ref{fig:relativistic_TLL} and \ref{fig:pseudotorsion}, we see that the spectrum of torsional LLs resembles the LL spectrum of charged particles in U(1) axial and vector fields, with the momentum-dependent coupling to torsion kept in mind; see Appendix \ref{sec:appendix_EM} for a review of the U(1) case for comparison. It is well-known that the contribution of torsion for complex chiral Weyl fermions can be equivalently cast in terms of the axial gauge field $\gamma^5 S^{\mu} \equiv \gamma^5 \varepsilon^{\mu\nu\lambda\rho} T_{\nu\lambda\rho}$ corresponding to the totally antisymmetric torsion, see e.g. \cite{ChandiaZanelli97, Soo99}. We stress that while the spectral equivalence of torsional and U(1) LLs is of course expected, the physical appearance of the anomaly is drastically different: the density of states of the LLs depends on momentum, and thus the dimensionful coefficient $\Lambda^2$ and the need for an explicit UV cutoff appear. Similarly, the physics of Figs. \ref{fig:relativistic_spectral_flow} and \ref{fig:pseudotorsion} is completely different, although both arise from spectral flow in momentum space under torsion.
On this note, although the relativistic result \eqref{eq:spectral_flow_anomaly} is familiar, there still seems to be confusion in the literature about the role of torsional Landau levels in momentum space and the validity of the NY anomaly due to the explicit UV cutoff. For relativistic Weyl fermions with Lorentz invariance up to arbitrary scales, the spectral flow is symmetric around $p=0$, leading to the conclusion that the anomaly can indeed cancel. This follows simply from the observation that, in the absence of Lorentz symmetry breaking at high energy, no net transfer of occupied and empty states in the vacuum takes place during the adiabatic spectral flow, cf. Fig. \ref{fig:relativistic_spectral_flow}. The net transfer of $j_5$ requires a left-right asymmetric regularization at the scale $\Lambda$, with chirality disappearing above that scale, maintaining $\partial_{\mu}j^{\mu}=0$ \cite{ParrikarEtAl14}; alternatively, at the very least, there is a divergence as $\Lambda\to\infty$. In contrast, for quasirelativistic Weyl fermions at finite node momentum and with an explicit cutoff to the Weyl spectrum, the spectral flow can terminate due to the non-relativistic corrections at the cutoff scale $\Lambda^2_{\rm rel}$, also implying that chirality is no longer well-defined there, leading to a net transport of states and momenta relative to the vacuum (and of other quantum numbers of the Weyl fermions, if present). A related fact is that it is the momentum that plays the role of the chiral charge, and it remains physically well-defined irrespective of the scale. We also note that the flow is composed of particles and antiparticles (holes) at the two different nodes. It would be interesting to study the detailed role of the breakdown of the relativistic spectrum and spectral flow numerically, following Ref. \onlinecite{SukhachovEtAl18}; there, only the charge density at finite chemical potential from the node was analyzed, corresponding to Fig. \ref{fig:B5E}, and the expected deterioration away from the Weyl node was verified.
\section{Chiral Weyl superfluids and superconductors}\label{sec:chiral}
Now we discuss the role of the torsional anomaly in $p$-wave superfluids and superconductors with gap nodes and associated Weyl-Majorana quasiparticles \cite{Volovik84, Volovik90, VollhardtWoelfle, ReadGreen00, PalumboPachos16, MaranerPachosPalumbo18}. Close to the nodes, the Fermi energy is tuned to the Weyl point due to the existence of the $p+ip$ pairing amplitude. The chiral anomaly is related to the non-conservation of momentum in the condensate and normal state quasiparticles \cite{BevanEtAl97}. The relation of {this to the} torsional gravitational anomaly and the LL spectral flow was briefly pointed out in Ref. \cite{Nissinen2019}. Earlier related work can be found in \cite{Volovik85, Volovik1986b, BalatskiiEtAl86, CombescotDombre86, Volovik90, KobayashiEtAl18, IshiharaEtAl19}.
The spinless gap amplitude, with equal spin pairing understood, takes the form
\begin{align}
\Delta(\vek{p}) = \frac{\Delta_0}{p_F} (\unitvec{m}+i \unitvec{n}),
\end{align}
where $c_{\perp} = \Delta_0/p_F$ has units of velocity. The direction $\unitvec{l}= \unitvec{m}\times \unitvec{n}$ is a low-energy Goldstone variable of the condensate. At low energy, the direction of $\unitvec{l}$ can fluctuate, and there is a combined U(1) gauge symmetry \cite{LiuCross79} in the $\unitvec{m}-\unitvec{n}$ plane, leading to the Mermin-Ho relations between $\unitvec{l}$ and $\vek{v}_s$ \cite{MerminHo76, VollhardtWoelfle, Volovik03}. In the following, we focus on the Landau levels and torsion, keeping the magnitudes of $p_F$ and $\Delta_0$ fixed. Related to this, for superconductors the end results apply to the case where the EM potential $A_{\mu}=0$, which amounts to working in the gauge where $\mathbf{v}_s - \vek{A} \to \mathbf{v}_s$. In the following computations we will set $\mathbf{v}_s = 0$ as well, since this corresponds to the case with torsion only; see Ref. \onlinecite{Nissinen2019} for the general case with superfluid velocity. The orientation of the orthonormal triad, and hence $\unitvec{l}$, can still rotate in the torsional textures.
Considering first the simple homogeneous case, the linearization of the BdG Hamiltonian takes the form of a Weyl Hamiltonian close to the nodes of $E(\vek{p})$ at $\vek{p}=\mp p_F\unitvec{l}$,
\begin{align}
H_{\rm BdG}(\hat{\vek{p}}) &= \left(\begin{matrix} \epsilon(\hat{\vek{p}}) & \frac{1}{2}\{\hat{\vek{p}},\Delta(\vek{p})\} \\ \frac{1}{2}\{\hat{\vek{p}},\Delta^{\dagger}(\hat{\vek{p}})\} & -\epsilon(-\vek{p})\end{matrix}\right) \\
&\approx \pm \tau^a e^i_a(p_i \mp p_{F,i}) .\nonumber
\end{align}
Note that the BdG excitations are Majorana, $\Phi^{\dagger}(\vek{p}) = \tau^1 \Phi(-\vek{p})$, as expected in a BCS paired system. Here we have taken the normal state dispersion $\epsilon(\vek{p}) = \frac{p^2-p^2_F}{2m}$, where $m$ is the $^3$He atom mass. The tetrads are
\begin{align}
e^i_1 = c_{\perp}\unitvec{m}, \quad e^i_{2} = -c_{\perp} \unitvec{n},\quad e^i_{3} =- c_{\parallel}\unitvec{l}, \label{eq:3HeA_tetrad}
\end{align}
where $c_{\parallel} \equiv \frac{p_F}{m} = v_F$. Henceforth, to conform with relativistic notation, we will work with dimensionless tetrads in units of $c_{\parallel} = 1$. The quasiparticle dispersion is $E(\vek{p})=\pm \sqrt{\epsilon(\vek{p})^2 + \vert\Delta(\vek{p})\vert^2} \approx \pm \sqrt{c_\parallel^2 q_{\parallel}^2+c_{\perp}^2 q_{\perp}^2}$, with $\vek{q} = \vek{p}-\vek{p}_F$ for the Weyl quasiparticles. The linear expansion is valid when $\abs{\vek{p}-\vek{p}_F} \ll p_F$, which provides an explicit cutoff for the Weyl description, requiring that the remainder
\begin{align}
\frac{1}{2}\frac{\partial^2 \epsilon(\vek{k})}{\partial k^i \partial k^j}\bigg\vert_{\vek{p}_F} (p-p_F)^i (p-p_F)^j = \frac{1}{2m} (\vek{p}-\vek{p}_F)^2 \\
\ll e_a^i (\vek{p}-\vek{p}_F)_i .\nonumber
\end{align}
This leads to the condition, in addition to the trivial $\vert \vek{p}-\vek{p}_F\vert \ll p_F$ from the Taylor expansion of $\epsilon(\vek{p})$, that
\begin{align}
E_{\rm Weyl} \ll m c_{\perp}^2 = \left(\frac{c_{\perp}}{c_{\parallel}}\right)^2 E_{F}.
\end{align}
which will prove important later. In particular, the energy cutoff for the Weyl quasiparticles is anisotropic in momenta $\vek{q} = \vek{p}-\vek{p}_F$ around the Weyl point,
\begin{align}
q_{\perp} \ll \left( \frac{c_{\perp}}{c_{\parallel}} \right)p_F, \quad q_{\parallel} \ll \left( \frac{c_{\perp}}{c_{\parallel}} \right)^2p_F, \label{eq:Weyl_momenta}
\end{align}
if we consider the Weyl fermion system in the case where the background fields couple the parallel and perpendicular directions \cite{Nissinen2019}. This happens in the chiral system since the three directions are coupled by $\unitvec{l} = \unitvec{m} \times \unitvec{n}$ and the corresponding Mermin-Ho relations.
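To give a rough sense of the scales involved (an order-of-magnitude estimate, not taken from the references): in superfluid $^3$He-A one has $c_{\perp}/c_{\parallel} = \Delta_0/(p_F v_F) \sim 10^{-3}$, so that Eq. \eqref{eq:Weyl_momenta} restricts the Weyl description to a thin sliver of momentum space,
\begin{align}
q_{\perp} \lesssim 10^{-3}\, p_F, \qquad q_{\parallel} \lesssim 10^{-6}\, p_F,
\end{align}
making the smallness of the effective anomaly cutoff very concrete.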
\begin{figure}
\centering
\includegraphics[width=200pt]{Kuvaajat/quadratic_red_blue.pdf}
\caption{The torsional LL spectrum for the anisotropic Newton-Cartan model in chiral superfluids/superconductors, with the spectral flow indicated. Note that we have inverted the hole-like right-handed Landau level at $-p_F$ and the spectrum is particle-hole doubled. Overall there is a corresponding factor of 2 from spin degeneracy.}
\label{fig:quadratic_spectrum}
\end{figure}
\subsection{Landau levels in linear approximation}
To compute the LLs in an order parameter texture corresponding to a torsional magnetic field, we can take the ``weak-twist'' texture \(\hat{\mathbf{m}} + i\hat{\mathbf{n}} = \hat{\mathbf{x}} + i\hat{\mathbf{y}} - iT_Bx\hat{\mathbf{z}}\) with \(|T_Bx| \ll 1\), which corresponds to $\l = \hat{\mathbf{z}} + T_Bx\hat{\mathbf{y}}$ \cite{Volovik85, BalatskiiEtAl86, CombescotDombre86}. The BdG Hamiltonian then takes the form
\begin{align}
H_{\rm BdG}& = \begin{bmatrix}
\epsilon(\hat{\vek{p}}) & \frac{1}{2} \{\Delta^i,\hat{p}_i\}\\
\frac{1}{2} \{\Delta^{\dagger i},\hat{p}_i\}& -\epsilon(-\hat{\vek{p}})
\end{bmatrix}
\\ =& \begin{bmatrix}
\epsilon(\hat{p}_x, p_y, p_z) & \frac{\Delta_0}{p_F}[\hat{p}_x + i(p_y-T_Bp_z x)]\\
\frac{\Delta_0}{p_F}[\hat{p}_x - i(p_y-T_Bp_z x )]& -\epsilon(-\hat{p}_x,-p_y, -p_z)
\end{bmatrix}. \nonumber
\end{align}
Near the gap node $\vek{p} = -p_F\l$ we may linearize the operator $\epsilon(\hat{\vek{p}})$ as $\varepsilon_\mathbf{p} \approx -v_F\l \cdot(\hat{\vek{p}} + p_F\l) \approx -v_F(p_z+p_F)$. This leads to
\begin{align}
H_+ = e^i_a\tau^a(p_i-p_F e_i^3) = \tau^a (e^i_a \hat{p}_i - p_F\delta^3_a)
\end{align}
with
\begin{align}
e^i_a = (c_\perp\delta^i_1, -c_\perp[\delta^i_2-T_Bx\delta^i_3], -c_\parallel\delta^i_3),
\end{align}
where we recall that \(c_\parallel \equiv v_F\) and \(c_\perp \equiv \frac{\Delta_0}{p_F}\). This corresponds, up to the sign of the field $T_{B}$ and the choice of tetrad, to the case \eqref{eq:torsion_tetrad} after a rotation in the $x$--$y$ plane.
After moving to scaled coordinates $c_\perp^{-1} x \equiv \Tilde{x}$, $c_\perp^{-1} y \equiv \Tilde{y}$, $c_\parallel^{-1}z \equiv \Tilde{z}$, corresponding to dimensionless and scaled momenta \(p_a \equiv e^i_ap_i\), we can define the annihilation operator \(\hat{a} \equiv \frac{1}{\sqrt{2|T_Bp_z|}}\left[(| T_Bp_z|\Tilde{x} - p_{\Tilde{y}}) + i\hat{p}_{\Tilde{x}} \right]\) to arrive at the Hamiltonian
\begin{align}
H_{p_z<0} = \begin{bmatrix}
p_3+p_F & \sqrt{2|T_Bp_z|}i\hat{a}^\dagger\\
-\sqrt{2|T_Bp_z|}i\hat{a} & -(p_3+p_F)
\end{bmatrix}, \label{eq:H_negative}
\end{align}
which is \eqref{eq:Hmag} after a Galilean boost \(p_3 \to p_3 + p_F\). The eigenstates are then
\begin{equation}
\Psi_{n,p_z<0} = \begin{pmatrix}u_n \phi_n \\ v_n \phi_{n-1}\end{pmatrix}e^{i(p_zz+p_yy)}.
\end{equation}
where $\phi_n \equiv \phi_n(x)$, for $n\geq0$, are harmonic oscillator eigenstates and vanish otherwise. The condition for normalization is \(|u_n|^2 + |v_n|^2 = 1\), corresponding to the BdG particle and hole amplitudes.
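For completeness, squaring \eqref{eq:H_negative} on this ansatz gives, using $\hat{a}^{\dagger}\hat{a}\,\phi_n = n\phi_n$ and $\hat{a}\hat{a}^{\dagger}\,\phi_{n-1} = n\phi_{n-1}$,
\begin{align}
H^2_{p_z<0}\Psi_{n,p_z<0} = \left[(p_3+p_F)^2 + 2\vert T_B p_z\vert n\right]\Psi_{n,p_z<0},
\end{align}
so that $E_n = \pm\sqrt{(p_3+p_F)^2 + 2\vert T_B p_z\vert n}$ for $n\geq1$, while the unpaired lowest level $\Psi_{0} \propto (\phi_0,\,0)^T$ disperses chirally as $E_0 = p_3+p_F$.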
Carrying out a corresponding calculation at the Weyl point $\vek{p} = p_F\l$, we have the Hamiltonian
\begin{equation}
H_{p_z>0} = \begin{bmatrix}
p_3-p_F & -\sqrt{2|T_Bp_z|}i\hat{a}\\
\sqrt{2|T_Bp_z|}i\hat{a}^\dagger & -(p_3-p_F)
\end{bmatrix}, \label{eq:H_positive}
\end{equation}
which can be identified as the left-handed Hamiltonian \(H_- = -e^i_a\tau^a p_i\) after a rotation about \(\l\) such that \(\hat{\mathbf{m}} \to -\hat{\mathbf{m}}\) and \(\hat{\mathbf{n}} \to -\hat{\mathbf{n}}\).
Its eigenstates are
\begin{equation}
\Psi_{n,p_z>0} = \begin{pmatrix}u_n \phi_{n-1} \\ v_n \phi_{n}\end{pmatrix}e^{i(p_zz+p_yy)}.
\end{equation}
Depending on the chirality, i.e. the sign of the momentum at the node, the LLL is either particle- or hole-like, as in Eq. \eqref{eq:LLL_gaussian}. The conclusion is that the spectrum looks like the relativistic spectrum in Fig. \ref{fig:pseudotorsion} when the linear approximation $\epsilon(\vek{p}) \approx \pm c_{\parallel}(p_z \mp p_F)$ is valid, Eq. \eqref{eq:Weyl_momenta}. This corresponds to the spectrum of axial U(1) fields with momentum dependent charge and density of states per LL. The density of states is \eqref{eq:dos} in the scaled coordinates, which gives, with $e^0_{\mu} = \delta^0_{\mu}$,
\begin{align}
j^0 dV = e j^0 d\Tilde{V}= \frac{|p_zT_B|}{4\pi^2} d\Tilde{V}.
\end{align}
\subsection{Anisotropic Newton-Cartan model}
We just showed that a simple order parameter texture in the chiral superfluid or superconductor gives rise to torsional LLs for the low-energy Weyl quasiparticles in the linear regime close to the nodes. We can, however, consider the quadratic dispersion beyond the linear approximation,
\begin{align}
\epsilon(\vek{p}) = \frac{\vek{p}^2}{2m}-\mu_F \to \frac{p_z^2}{2m} -\mu_F, \label{eq:NC_dispersion}
\end{align}
which corresponds to the anisotropic Newton-Cartan (Majorana-Weyl) fermion model in Sec. \ref{sec:Newton-Cartan}.
The above model has the same regime of validity in the chiral superfluid or superconductor as the linear approximation in Eq. \eqref{eq:Weyl_momenta}, since it likewise neglects the perpendicular part of the rotationally invariant normal-state dispersion $\epsilon(\vek{p})$, see also Ref. \onlinecite{Nissinen2019}. The chiral $p$-wave BCS state has the uniaxial anisotropy of Eq. \eqref{eq:NC_dispersion}, however, and this carries over to the low-energy Weyl description in the form of the emergent spacetime. The other benefit of the anisotropic model \eqref{eq:NC_dispersion} is that the LL spectrum can be computed for momenta far from $p_F$, down to $p=0$, corresponding to the filled levels of the non-relativistic Fermi system, which are absent in the relativistic linear model. This is important for the global properties of the chiral spectrum and anomaly. In this way the contribution to the anomalous current from the superfluid vacuum can be analyzed, see Sec. \ref{sec:vacuum_current}.
The spectrum follows simply from Eqs. \eqref{eq:H_negative} and \eqref{eq:H_positive} by the substitution $\mp(p_3\pm p_F) \to \pm\epsilon(\pm p_z)$. Squaring the Hamiltonian, the corresponding eigenvalues at both nodes are
\begin{align}
E_n &= \pm\sqrt{\epsilon(p_z)^2+c_\perp^2|T_Bp_z|2n}, \nonumber \\
E_0 &= \pm \text{sgn}(p_zT_B)\, \epsilon(p_z),
\end{align}
for \(n\geq 1\). The LLL state retains the Gaussian form \eqref{eq:LLL_gaussian}. The normalization condition \(|u_n|^2 + |v_n|^2 = 1\) again holds, and consequently the particle and hole amplitudes are in both cases
\begin{equation}
u_n = \sqrt{\frac{E_n+\epsilon(p_z)}{2E_n}}, \qquad v_n = i\sqrt{\frac{E_n-\epsilon(p_z)}{2E_n}}.
\end{equation}
With $E_0 = \epsilon(p_z)$ we have $v_0 = 0$, meaning that lowest-level particles appear only for \(p_z < 0\). For \(p_z > 0\) we have \(u_0 = 0\), since \(E_0 = -\epsilon(p_z)\), so for positive momenta only holes appear at the lowest level, as we found for the linear model. In this case we must, however, remember that the hole spectrum arises due to the Majorana doubling of the BdG spectrum and is not independently physical. This cancels against the corresponding factor of two from the spin-degeneracy in the Fermi system. This leads to the LL spectrum in Fig. \ref{fig:quadratic_spectrum}.
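As a cross-check, the LL spectrum above can be verified numerically in a truncated harmonic-oscillator basis. The following minimal sketch (with illustrative placeholder parameters, not material values) diagonalizes the $2\times2$ block structure of Eq. \eqref{eq:H_negative} with a generic diagonal term $\epsilon$ and reproduces $E_0 = \epsilon$ and $E_n = \pm\sqrt{\epsilon^2 + 2\vert T_Bp_z\vert n}$:
\begin{verbatim}
import numpy as np

# Truncated oscillator-basis check of the torsional LL spectrum.
# Parameters below are illustrative placeholders, not material values.
N = 40                        # oscillator basis truncation
TB, pz, eps = 0.3, -1.0, 0.2  # torsional field, momentum, diagonal eps(p_z)

ad = np.diag(np.sqrt(np.arange(1, N)), -1)  # creation operator a^dagger
a = ad.conj().T                             # annihilation operator
g = np.sqrt(2*abs(TB*pz))

# H = [[eps, i g a^dagger], [-i g a, -eps]], cf. Eq. (eq:H_negative)
H = np.block([[ eps*np.eye(N), 1j*g*ad],
              [-1j*g*a,       -eps*np.eye(N)]])

E = np.sort(np.linalg.eigvalsh(H))
print(E[N:N+5])  # lowest non-negative levels: eps, then sqrt(eps^2+2|TB pz| n)
print(eps, [np.sqrt(eps**2 + 2*abs(TB*pz)*n) for n in range(1, 5)])
\end{verbatim}
The lowest printed level is the unpaired chiral branch $E_0=\epsilon$, while the rest match the paired $n\geq1$ levels, as in Fig. \ref{fig:quadratic_spectrum}.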
\subsection{Spectral flow, axial density and consistent anomalous vacuum current} \label{sec:vacuum_current}
Now we are equipped to compute the spectral flow resulting from the torsional Landau levels, corresponding to the covariant torsional NY anomaly. For the anisotropic Newton-Cartan model we can also compute the consistent vacuum current of the condensate, since the dispersion takes into account the filled states below the Fermi level, which is not the case for the linear approximation close to the Weyl nodes. For the chiral superfluid (or superconductor) we have to take into account that the particles are Majorana-Weyl, but a factor of two results from the spin-degeneracy.
\subsubsection{Axial density}
The torsional spectral flow leads to the anomalous density as
\begin{align}
e j^{0}_{\pm} = \int_{\mp p_F - \frac{p_F \Lambda^2}{2}}^{\mp p_F + \frac{p_F \Lambda^2}{2}} dp^3\, N_{\rm LL}(p_z) = \pm \frac{p_F^2}{4\pi^2}\left(\frac{c_{\perp}}{c_{\parallel}}\right)^2 T_B e^3_z ,
\end{align}
where the cutoff for the Weyl spectrum is taken at $\Lambda^2 = \left(\frac{c_{\perp}}{c_{\parallel}}\right)^2$, corresponding to Eq. \eqref{eq:Weyl_momenta} with $\frac{1}{2} \ll 1$. Remarkably, the LL result matches the more general torsional contribution to the NY anomaly including curvature, as implied by the anomalous momentum non-conservation in the system found in Ref. \onlinecite{Nissinen2019}. That result was obtained by matching the anomaly on the emergent background spacetime of the chiral $p$-wave system to the corresponding BCS hydrodynamic result for the superfluid. In particular, including the effects of superflow leads to a spin-connection and curvature perpendicular to $\unitvec{l}$, as required by the Mermin-Ho relations \cite{MerminHo76}.
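Explicitly, since the LL density of states $N_{\rm LL}(p_z) = \vert p_z T_B\vert/4\pi^2$ from above varies slowly over the shell of width $p_F\Lambda^2$ around the nodes, the integral evaluates, up to the orientation factor $e^3_z$, as
\begin{align}
e j^0_{\pm} \approx \pm \frac{p_F \vert T_B\vert}{4\pi^2}\,(p_F \Lambda^2) = \pm \frac{p_F^2 \Lambda^2}{4\pi^2}\,\vert T_B \vert,
\end{align}
which with $\Lambda^2 = (c_{\perp}/c_{\parallel})^2$ is the quoted result.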
In the chiral superfluid (or superconductor) the above result holds for both the linear quasirelativistic and the anisotropic Newton-Cartan spacetime, as defined by the tetrad \eqref{eq:3HeA_tetrad}. This simply follows from the fact that the cutoff for the validity of both models coincides with \eqref{eq:Weyl_momenta}. The anisotropic NC model is therefore expected to require the same cutoff as the linear model, since the system is probed also in the perpendicular direction. Morally this happens because $\unitvec{l}=\unitvec{m}\times\unitvec{n}$, making the triad directions mutually dependent \cite{MerminHo76, LiuCross79, Nissinen2019}. Strictly speaking, in the LL model we approximated $\unitvec{l} \approx \unitvec{z}$, which for general non-trivial textures receives higher order corrections \cite{CombescotDombre86}.
\subsubsection{Axial current}
For the non-relativistic anisotropic NC model, on the other hand, we can also compute the anomalous vacuum current, corresponding to the anomalous superfluid momentum carried by the filled states below $p_F$ \cite{Volovik85}. The global spectrum has the correct form, valid also outside the vicinity of the Weyl points. The anomalous momentum current is given by
\begin{align}
\vek{j}_{\rm anom,\parallel} = -2 \int^{p_F}_{0} dp^3 N_{\rm LL}(p_z) p_3 = -\frac{p_F^3}{6\pi^2} \unitvec{l}(\unitvec{l} \cdot \nabla \times \unitvec{l}) \label{eq:vacuum_current}
\end{align}
and since the integral extends down to $p_z=0$, there is no need for a cutoff; see Fig. \ref{fig:quadratic_spectrum}.
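Explicitly, inserting $N_{\rm LL}(p_z) = \vert p_z T_B\vert/4\pi^2$ and carrying out the integral,
\begin{align}
-2\int_0^{p_F} \frac{dp_z}{4\pi^2}\, T_B\, p_z^2 = -\frac{p_F^3}{6\pi^2}\, T_B,
\end{align}
and for the weak-twist texture $\unitvec{l} = \unitvec{z} + T_B x\,\unitvec{y}$ one checks directly that $\unitvec{l}\cdot(\nabla\times\unitvec{l}) = T_B$, reproducing Eq. \eqref{eq:vacuum_current}.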
This is actually the correct hydrodynamic result for the (weak-coupling) BCS system \cite{VolovikMineev81, Volovik85, CombescotDombre86} to lowest order in gradients, since the final answer for the anomalous vacuum current is sensitive only to the $e_3 = \unitvec{l}$ direction, even in the presence of $\vek{v}_s$ (corresponding to curvature in the perpendicular plane). Upon taking the time-derivative of this momentum, the hydrodynamics of the system produces the covariant current implied by the Weyl anomaly. If we assume, without any supporting arguments, that the curvature and torsion contribute to the current \eqref{eq:vacuum_current} as they enter the anomaly Eq. \eqref{eq:NYanomaly}, we get the same result if we apply the cutoff \eqref{eq:Weyl_momenta} as above, even in the linear model. We note that these findings are corroborated by the thermal contribution to the NY anomaly, as found in Ref. \cite{NissinenVolovik2019}. The proper inclusion of curvature also ensures that states far away from the Fermi surface do not contribute to the currents.
These considerations beyond the LL spectral flow aside, what we want to emphasize here is that the current \eqref{eq:vacuum_current} corresponds to the consistent anomaly and can be derived from a corresponding Wess-Zumino term, which should be generalized to torsional spacetimes \cite{Volovik1986c, Balatsky87, PeetersWaldron99, Landsteiner16, KurkovVassilevich18, Stone2019b, Copetti20}. See especially Ref. \cite{Copetti20}, where the consistent and covariant anomalies are discussed in an anisotropic Lifshitz model, closely related to Eq. \eqref{eq:NC_fermion}. We leave the study of the consistent vacuum current from the perspective of gravitational anomalies with torsion for the future.
\section{Strained Weyl semimetals}\label{sec:WSM}
Semimetals with Weyl fermions arise in solid-state systems where the Fermi energy is tuned to a band-crossing in the Brillouin zone \cite{NielsenNinomiya83, WanEtAl11}. The tetrads arise universally as the coefficients of the linear expansion. In this case the fermions are also charged, leading to the possibility of the U(1) anomaly with electric fields \cite{NielsenNinomiya83}. In addition to the tetrads, related effective background (axial) fields of similar origin as in the chiral superconductor can be considered \cite{Volovik03} -- the (constant) shift of the Weyl node in momentum space that leads to the existence of the protected Fermi arc states \cite{Haldane14, Landsteiner16, GrushinEtAl16}. Here we would like to clarify the related but physically distinct torsional contribution to anomalous transport from the tetrads in the presence of elastic strains. In fact, due to the universal coupling of the tetrads to momentum \cite{ParrikarEtAl14, ShapourianEtAl15}, as in gravity, one expects that deformations of the (lattice) geometry lead to effects that probe the Weyl fermions via the background tetrads. This framework correctly takes into account the anomalous physics of the momentum dependent fields; see nevertheless \cite{ZhouEtAl13, SunWan14, Fujimoto16, PikulinEtAl16, GrushinEtAl16, HuangEtAl19, FerreirosEtAl19, Stone2019, HuangBo20}.
We start in a roundabout way, first discussing the low-energy Weyl Hamiltonian and then considering a lattice model for a realistic $T$-breaking material.
\subsection{Bloch-Weyl fermions in crystals}
The low-energy Bloch-Weyl Hamiltonian is of the form \cite{NielsenNinomiya83, WanEtAl11, ArmitageEtAl18}
\begin{align}
h_{\pm}(\vek{k}) &= \pm \frac{\sigma^a}{2} (k_a \mp k_{F,a}) + \textrm{ h.c.} \nonumber\\
&= \pm \frac{\sigma^a}{2} e^{i}_{a}(k_i \mp k_{F,i}) +\textrm{ h.c.} .
\end{align}
where now
\begin{align}
e^i_a = \frac{\partial H_{{\rm TB},a}(\vek{k})}{\partial k_i}\bigg\vert_{\vek{k}_F}
\end{align}
are simply the linear coefficients of the expansion of the underlying (tight-binding) Bloch Hamiltonian $H_{\rm TB}(\vek{k}) = \sigma^a H_{{\rm TB},a}(\vek{k})$ near the Weyl nodes. Before we consider lattice deformations in this model, we remark on the interplay of the tetrads and momentum. The lattice momentum is \cite{ShapourianEtAl15}
\begin{align}
\hat{p}_a = \frac{i}{2a} \sum_{\vek{x}} \left(c_{\vek{x}}^\dagger c_{\vek{x}+\unitvec{a}}- c_{\vek{x}+\unitvec{a}}^\dagger c_{\vek{x}}\right) = \sum_{\vek{k}} \frac{\sin (k_a a)}{a}\, c^{\dagger}_{\vek{k}}c_{\vek{k}} .
\end{align}
Under non-trivial background fields, the Weyl system itself is anomalous under the lattice translation symmetry, $T_{3} = T_{\unitvec{z}}$, corresponding to the conservation of the lattice momentum $\hat{p}_3$,
\begin{align}
T_{\unitvec{z}}^{\dagger} c_{\pm \vek{k}_F} T_{\unitvec{z}} = e^{\pm i a k_{F,z}}c_{\pm \vek{k}_F} \label{eq:lattice_rotation}
\end{align}
which corresponds to an anomalous chiral rotation of the low-energy Weyl fermions at the $T$-breaking nodes $\pm \vek{k}_F$. Here $c^{\dagger}_{\vek{k}}$ creates the state corresponding to the lattice periodic Bloch state $\vert v_{\vek{k}}\rangle = \vert v_{\vek{k}+\vek{K}} \rangle$, with wave function
\begin{align}
\psi_{\vek{k}}(\vek{x}) = e^{i \vek{k}\cdot \vek{x}}v_{\vek{k}}(\vek{x}).
\end{align}
In the presence of elastic deformations corresponding to torsion, i.e. phonons, the anomalous chiral symmetry corresponding to translations is manifested as the non-conservation of (lattice) momentum between the Weyl fermions and the background phonons \cite{Nissinen2019, Burkov20}, as found in superfluid $^3$He-A for the $p+ip$-wave paired Fermi liquid \cite{Volovik03}. See also \cite{CortijoEtAl15, FerreirosEtAl19, NissinenVolovikPRR19, Copetti20}.
\subsection{Elastic deformations}
Now consider general lattice deformations. The original unstrained lattice momenta entering the Weyl Hamiltonian are represented as $k_a$, and the deformed lattice momenta are given as $k_i = e^{\ a}_i k_a$ in the laboratory coordinate system, where $e^{\ a}_{i} \neq \delta^a_i$ to first order in the strains. These couple as expected in the continuum model, as long as we take the lattice model into account properly, as we now recall following \cite{ShapourianEtAl15}; see also \cite{FerreirosEtAl19}. We have the continuum linear strain,
\begin{align}
e^{\ a}_i = \delta^a_i + w^{\ a}_i &= \delta^a_{i}+\partial_i u^a \nonumber\\
e_{\ a}^i = \delta^i_a - w^i_{\ a} &= \delta_a^{i}-\partial_j u^b \delta_{ab} \delta^{ij} \label{eq:continuum}
\end{align}
where $u^a/a \ll 1$ in terms of the lattice constant $a$. This means that $k_{F,a}$ is held fixed, whereas $k_{F,i}$, with $\delta k_{F,i} = w_i^{\ a} k_{F,a}$, is deformed in the laboratory coordinates. On the lattice this becomes
\begin{align}
k_a \to k_a -w_{\ a}^i \frac{\sin k_i a}{a} \approx e_{\ a}^i k_i, \nonumber \\
k_i \to k_i + w^{\ a}_i \frac{\sin k_a a}{a} \approx e_{i}^{\ a}k_a . \label{eq:lattice}
\end{align}
where $w_{\ a}^i = \partial_j u^{b} \delta_{ab}\delta^{ij}$ is defined above; in the last approximation the linear approximation for the strain, as well as $k_i a \ll 1$ close to the $\Gamma$-point, is used. In addition we assume that we work at low frequencies corresponding to acoustic phonons, below the Debye energy \cite{ShapourianEtAl15}.
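As a minimal numerical illustration of Eqs. \eqref{eq:continuum} and \eqref{eq:lattice} (with made-up strain values), the lattice substitution indeed reduces to the continuum tetrad contraction for $k_i a \ll 1$ and small $w$:
\begin{verbatim}
import numpy as np

a = 1.0                             # lattice constant
w = np.array([[0.00, 0.00, 0.02],   # hypothetical w_i^a = d_i u^a
              [0.00, 0.00, 0.01],
              [0.00, 0.00, 0.03]])
k = np.array([0.05, 0.02, 0.08])    # momenta with k a << 1

k_lat = k + w @ (np.sin(k*a)/a)     # lattice substitution, Eq. (eq:lattice)
k_cont = (np.eye(3) + w) @ k        # continuum k_i = e_i^a k_a, Eq. (eq:continuum)
print(k_lat, k_cont)                # agree to O((ka)^2, w^2)
\end{verbatim}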
\subsection{Lattice model}
In general, a model for a $T$-breaking Weyl semimetal consists of layered 2D Wilson fermions tuned to a zero-energy crossing in three dimensions \cite{Volovik03, SukhachovEtAl18}. For a model of this kind pertaining to a real material, Ref. \cite{PikulinEtAl16} considered a time-reversal invariant $k\cdot p$ model close to the $\Gamma$-point, where the Weyl nodes are at finite momenta, corresponding to four momenta in the Brillouin zone, the minimum for a $P$-breaking system. While the $k\cdot p$ model is realistic, it is more convenient to work with an explicit model with a lattice regularization that produces the same results. In terms of a tight-binding model, they considered
\begin{align}
H_{\rm lat}(\vek{k}) = \epsilon(\vek{k}) + \left(\begin{matrix} h_{\rm lat}(\vek{k}) & \\ & -h_{\rm lat}(\vek{k}) \end{matrix}\right), \label{eq:H_latt}
\end{align}
where we focus on the time-reversal odd block $h_{\rm lat}(\vek{k})$ of the $T$-invariant model \cite{Volovik03, PikulinEtAl16, SukhachovEtAl18},
\begin{align}
h_{\rm lat}(\vek{k}) = t_z(M - \sum_{i=x,y,z} c_{i} \cos k_i a) \sigma^3 \\
+ (t_x \sin k_xa ) \sigma^1 + (t_y \sin k_ya) \sigma^2 . \nonumber
\end{align}
For $-1<\frac{M-c_x-c_y}{c_z}<1$ the model $h_{\rm lat}(\vek{k})$ has Weyl points at
\begin{align}
\pm a\vek{k}_F = (0,0,\pm \arccos \frac{M-c_x-c_y}{c_z}),
\end{align}
otherwise it is gapped. The dimensionful tetrads are
\begin{align}
e^i_a(\pm \vek{k}_{F}) = a(t_x, t_y, \pm t_zc_z \sin a k_{F,z})\delta^i_a.
\end{align}
Inversion symmetry $P$ acts as $h_{\rm lat}(\vek{k}) \to \sigma^z h_{\rm lat}(-\vek{k}) \sigma^z$. For simplicity we set $c_z=1$, $c_{x,y} = c_{\perp}$, $t_{x,y} = t_{\perp}$ and assume uniaxial symmetry along $\unitvec{z}$ in the following. We expect \eqref{eq:lattice} to hold for the Weyl semimetal model Eq. \eqref{eq:H_latt}, originating from the $k\cdot p$ model close to the $\Gamma$-point.
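As a concrete sanity check of the node positions and tetrads quoted above, one can extract $e^i_a = \partial H_a/\partial k_i$ numerically from the lattice model. The short sketch below (with hypothetical parameter values chosen only to satisfy the gaplessness condition) does this by finite differences:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative parameters: c_z = 1, c_{x,y} = c_perp, t_{x,y} = t_perp
a, M, cp, tz, tp = 1.0, 2.5, 1.0, 1.0, 1.0

def h_lat(k):
    d3 = tz*(M - cp*np.cos(k[0]*a) - cp*np.cos(k[1]*a) - np.cos(k[2]*a))
    return tp*np.sin(k[0]*a)*sx + tp*np.sin(k[1]*a)*sy + d3*sz

kF = np.arccos(M - 2*cp)/a            # Weyl nodes at (0, 0, +-kF)
node = np.array([0.0, 0.0, kF])

def tetrad(k0, d=1e-6):
    e = np.zeros((3, 3))              # e[i, a] = dH_a/dk_i at the node
    for i in range(3):
        dk = np.zeros(3); dk[i] = d
        dH = (h_lat(k0 + dk) - h_lat(k0 - dk))/(2*d)
        for j, s in enumerate((sx, sy, sz)):
            e[i, j] = np.real(np.trace(s @ dH))/2   # H_a = Tr(sigma_a H)/2
    return e

print(np.round(tetrad(node), 6))      # numerical e^i_a
print(a*tp, a*tp, a*tz*np.sin(kF*a))  # analytic diagonal, cf. text
\end{verbatim}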
For this tetrad we can moreover ignore the difference between lattice and coordinate indices, with $u_{ij} = \frac{1}{2}(\partial_i u_j + \partial_j u_i) + O(u^2)$ the symmetric lattice strain. The strain induces the deformation considered in Refs. \cite{CortijoEtAl15} and \cite{PikulinEtAl16, GrushinEtAl16},
\begin{align}
\delta h_{\rm lat}(\vek{k}) =& - t_z \beta_{\rm el}u_{zz} \sigma^3 \cos ak_z \nonumber\\
&+ t_{\perp}\beta_{\rm el}(u_{xz} \sigma^1+u_{yz} \sigma^2) \sin ak_z
\end{align}
which gives
\begin{align}
\delta e^i_a = a t_z \beta_{\rm el} u_{ii} \delta_a^i \sin (k_Fa) + at_{\perp} \beta_{\rm el} \sum_{i' \neq i} u_{ii'}\delta^{i'}_a \cos (k_F a)
\end{align}
where $\beta_{\rm el}$ is the Gr\"uneisen parameter. Restricting to strains coupling to the axis of the Weyl node orientation, with the approximation $ak_F\ll 1$,
\begin{align}
e_a^z \to at_z (1+\beta_{\rm el}u_{zz})\delta_{a3} + a t_{\perp} \sum_{j=x,y} \beta_{\rm el}u_{zj}\delta_{a}^j, \nonumber \\
\delta e_3^z = at_z \beta_{\rm el} u_{zz}, \quad \delta e^z_1 = at_\perp \beta_{\rm el} u_{zx}, \quad \delta e^z_2 = a t_{\perp} \beta_{\rm el} u_{zy}.
\end{align}
This has the (dimensionless) inverse tetrad, up to the neglected terms $O(u^2)$ in strains,
\begin{align}
e^1_i &= \unitvec{x}, \quad e^2_i = \unitvec{y}, \nonumber\\
e^3_i &= \unitvec{z}-\beta_{\rm el}\left(u_{zx},\left(\tfrac{t_z}{t_\perp}\right) u_{zy},\left(\tfrac{t_z}{t_\perp}\right)u_{zz}\right) .
\end{align}
This is what we expected, based on the corresponding universal continuum limit \eqref{eq:continuum} and the lattice substitution \eqref{eq:lattice} coupling to geometry, apart from the (non-universal) couplings $\beta_{\rm el}$, $\left(\tfrac{t_z}{t_\perp}\right) $ between the phonons and electrons of the lattice model \cite{ShapourianEtAl15}.
Now, in the presence of a non-homogeneous strain-induced tetrad component $e^3_z$ depending on the coordinates and time, torsion $T^3_{\mu\nu}$ and the associated spectral flow arise. The Landau level arguments of Secs. \ref{sec:torsional_LLs} and \ref{sec:chiral} apply for a torsional magnetic field from $u_{zx,zy}(x,y)$ (in the ``symmetric gauge'') and an adiabatic electric field from $u_{zz}(t)$, as in \cite{PikulinEtAl16, GrushinEtAl16}.
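For instance, dropping the non-universal factors $t_z/t_\perp$ for brevity and writing the strained cotetrad as $e^3 = dz - \beta_{\rm el}(u_{zx}\,dx + u_{zy}\,dy)$, the associated torsional magnetic field is
\begin{align}
T^3_{xy} = \partial_x e^3_y - \partial_y e^3_x = -\beta_{\rm el}\left(\partial_x u_{zy} - \partial_y u_{zx}\right),
\end{align}
which is constant for linear-in-coordinate shear profiles $u_{zx}(x,y)$, $u_{zy}(x,y)$, precisely the torsional LL configuration discussed above.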
\subsection{Torsional density of states in anomalous transport}
Armed with the geometric background fields corresponding to a torsional magnetic field, we can consider the anomaly resulting from the chiral rotation \eqref{eq:lattice_rotation}. The linear Weyl model is valid up to the approximation
\begin{align}
t_z\Big(M - \sum_{i=x,y,z} c_{i} \cos k_i a\Big) \approx& \pm t_z a \sin (k_Fa)\,(k_z \mp k_F) \nonumber\\
&+ \frac{t_za^2}{2} \Big[c_{\perp}(k_x^2+k_y^2) +(k_z \mp k_F)^2\Big] \nonumber\\
\approx&\ t_z a\, e_3^i(k_i-k_{F,i}) = (t_za \sin k_Fa)\, q_{z},
\end{align}
where the last approximation is restricted by the ignored quadratic remainder terms in the expansion. Apart from the trivial $q_z \ll k_F \ll 1/a$, we also require
\begin{align}
c_{\perp}(2 - \cos q_x a - \cos q_ya)
\approx& \frac{c_{\perp} a^2}{2} (q_x^2 + q_y^2) = \frac{c_{\perp} a^2}{2}q_\perp^2\\
\ll& \frac{t_x}{t_z}a q_x + \frac{t_y}{t_z}a q_y = \frac{t_{\perp}}{t_z}a q_{\perp},
\end{align}
leading to the constraint $q_{\perp} \ll \frac{2 t_\perp }{c_\perp a t_z}$, meaning
\begin{align}
E_{\rm Weyl} \ll \frac{t_\perp^2}{c_{\perp} t_z},
\end{align}
for the perpendicular direction. We work in the regime where $-1<M-2c_{\perp}<1$ and $\cos k_Fa = M-2c_{\perp} \approx 1$. For the effects of any torsional anomaly from the magnetic strain, we can simply evaluate the chiral densities at the nodes,
\begin{align}
n_{\pm}(\Lambda) = ej^0_{\pm} = \int_{\pm k_F(1-\frac{\Lambda^2}{2})}^{\pm k_F(1+\frac{\Lambda^2}{2})} dk^3 N_{\rm LL}(k_z) \nonumber \\
=\mp \frac{k_F^2 \Lambda^2}{4\pi^2}\beta_{\rm el}\left(\tfrac{t_z}{t_\perp}\right)T_B e^3_z .
\end{align}
It is interesting to recall that for the chiral superfluid, while strictly one must have $\Lambda^2 \ll 1$ since $q_{z} \ll k_F$, we found that the cutoff was parametrically high, ``$\frac{1}{2} \ll 1$'', in terms of the validity of the Weyl description. There, however, due to the orthonormal triad, the perpendicular direction also couples to the transport, with the cutoff of Eq. \eqref{eq:Weyl_momenta}, which in real $^3$He-A is actually $\sim 10^{-6}\, p_F$.
For the semimetal, the case $q_z \sim \frac{t_{\perp }}{t_{z} \sin k_Fa}q_{\perp} \ll k_F$ arises if we assume that general strain field configurations couple isotropically to the perpendicular directions. Plugging in real parameters, we expect that e.g. for Cd$_3$As$_2$, $t_\perp \sim t_z \sin k_Fa$ \cite{PikulinEtAl16}. Another option would be to consider the Newton-Cartan model with the quadratic spectrum $M-2c_{\perp}-\cos k_za$ along the Weyl node direction with uniaxial strain only, with the constraint $q_z \ll k_F$. The same model with different parameters also applies to the Dirac semimetal Na$_3$Bi, see \cite{PikulinEtAl16} and references therein.
Independent of whether a torsional electric field $\partial_t e^3_z \neq 0$ or an electric field $E^z$ drives the spectral flow, as in Figs. \ref{fig:B5E} and \ref{fig:B5E5}, this leads to a suppression of the anomalous density proportional to $\Lambda^2$, corresponding to the validity of the linear Weyl approximation, as compared to the Fermi wavevector $k_F$ and the pseudo gauge field in momentum space \cite{PikulinEtAl16, GrushinEtAl16}. We note that this reduction of the anomalous axial density is simply due to the momentum dependent density of states. This, as we have explained, naturally follows from the tetrads and torsion coupling to momenta, and should be contrasted with a U(1) gauge field and a constant density of states, as dictated by the universal minimal coupling and the topology of U(1) gauge fields.
\section{Thermal effects}\label{sec:thermal}
Finally, we briefly recall and discuss thermal contributions to the torsional anomaly. There are two possible effects: i) the small but finite temperature enters the NY anomaly as the scale of thermal fluctuations in momentum space; these are analyzed in \cite{NissinenVolovik2019, NissinenVolovik19b, Stone2019}; ii) there is a related finite thermal gradient in the system, and one computes the thermal response via Luttinger's fictitious gravitational field \cite{Luttinger64}. We note that non-zero time-like torsion for the Luttinger spacetime implies a non-single-valued time coordinate in the fictitious gravitational field \cite{BradlynRead15}. See also \cite{Stone12, GromovAbanov15, Sekine16, RyuEtAl17, ChernodubEtAl18, KobayashiEtAl18}.
Here we focus on the effects of a thermal gradient; the induced currents can be computed by coupling the system to a fictitious spacetime metric, following Luttinger \cite{Luttinger64}. Specifically, we assume a thermal gradient
\begin{align}
\nabla \sigma = -\frac{1}{T}\nabla T
\end{align}
which is equivalent to a weak gravitational potential $g_{00} = 1+2\sigma$ in the system. The perturbation $\delta g_{00}$ couples to the Hamiltonian (energy current) $T^{00}$. In units where the velocity of propagation is $v=1$, the metric is
\begin{align}
ds^2 &= e^{+2\sigma}dt^2 - \delta_{ij}dx^i dx^j \\
&\approx (1+2\sigma)dt^2 - \delta_{ij}dx^i dx^j
\end{align}
from which the linear response to the thermal gradient $\sigma$ can be calculated \cite{Luttinger64}. This can be generalized to a metric
\begin{align}
ds^2 &= e^{2\sigma}(dt+e^{-\sigma}N_i dx^i)^2 - \delta_{ij}dx^i dx^j \nonumber\\
&= e^0_{\mu}e^{0}_{\nu} dx^{\mu}dx^{\nu} - \delta_{ij} d x^i dx^j,
\end{align}
now with a small gravimagnetic potential \cite{Volovik03, RyuEtAl17}
\begin{align}
A_{\mu}^{\rm g} = (e^{\sigma},N_i) \approx (1+\sigma, N_i) \equiv e^0_{\mu},
\end{align}
where $N_i$ describes a velocity field in the units where $v=1$. The gravitational thermal potential then satisfies \cite{Volovik03, RyuEtAl17, KhaidukovZubkov2018}
\begin{align}
-\frac{1}{T}\partial_i T = \partial_i \sigma - \partial_t N_i, \label{eq:gravimagnetic}
\end{align}
whence the tetrads read
\begin{align}
e^0_{\mu} &= (e^{\sigma}, N_i), \quad e^a_\mu =\delta^{a}_{\mu}, \quad a=1,2,3 \\
e^{\mu}_0 &= (e^{-\sigma},0), \quad e^{\mu}_a = (-e^{-\sigma}N_a,\delta^{i}_{a}), \quad a=1,2,3.
\end{align}
In this case Eq. \eqref{eq:gravimagnetic} becomes
\begin{align}
-\frac{1}{T}\partial_i T = \partial_i \sigma - \partial_t N_i = \partial_{i}e^{0}_{t} - \partial_{t}e^0_{i} = T^{0}_{i t},
\end{align}
where $T^0_{\mu\nu}= \partial_{\mu}e^0_{\nu}-\partial_{\nu}e^0_{\mu}$ is the temporal torsion, assuming zero temporal spin-connection $\omega^0_{\mu b} \equiv 0$. One then expects the possibility of anomalous transport in terms of the combination of a thermal gradient and the vorticity $T^0_{ij} = \partial_i N_j -\partial_j N_i$ of the velocity field $N_i(x)$, as in the chiral vortical (and magnetic) effect \cite{KhaidukovZubkov2018, ImakiYamamoto19}.
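For instance, for a rigid rotation $N_i = (\vek{\Omega}\times\vek{x})_i$ at uniform temperature, the temporal torsion reduces to
\begin{align}
T^0_{ij} = \partial_i N_j - \partial_j N_i = 2\epsilon_{ijk}\Omega^k,
\end{align}
the gravimagnetic analogue of a constant magnetic field, which is precisely the setting of the chiral vortical effect.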
Now, similarly as we expect a momentum density at the Weyl node, $(P^{\mu})_{\rm node} = \Pi^{t\mu} = p_F e_3^{i}\delta_{i}^\mu\, e j^{0}_5$ \cite{Nissinen2019}, for Weyl systems at finite $p_{Wa}=p_F\delta_{3a}$, or since $T^{0\mu} = e e^{\mu}_a T^{t a}$,
\begin{align}
e \Pi^{t 3}= \frac{p_F^3 \Lambda^2}{16\pi^2} e^3_\mu e_3^i \delta_i^{\mu} \epsilon^{0\nu\lambda\rho} e^3_{\nu} T^3_{\lambda\rho}
\end{align}
we expect an energy density of the form
\begin{align}
J^{t}_{\epsilon} = eT^{t}_{\ 0} = p_F e j^0_5= \frac{p_F T^2}{12v^2} \epsilon^{tijk} e_{i}^0 T^0_{jk}
\end{align}
where $T^{\mu}_{\ a} \equiv \frac{1}{e}\frac{\delta S}{\delta e^a_\mu}$. The anomaly of this current would be proportional to $T\nabla T$ and is indeed reminiscent of the chiral vortical effect \cite{GromovAbanov15, KhaidukovZubkov2018}. We can also expect mixed terms, in the sense that there should be a corresponding energy current from \emph{both} the momentum density and the thermal current, $\partial_t e^i_3 \neq 0$, at the node:
\begin{align}
J^i_{\epsilon} = e T^i_{\ 0} = \frac{p_F T^2}{6v^2} \epsilon^{0ijk} e^3_j T^0_{0k} + \frac{p_F T^2}{12v^2} \epsilon^{0ijk} e_t^0 T^{3}_{jk} ;
\end{align}
these ``mixed" contributions to the currents were identified and discussed in Ref. \cite{LiangOjanen19b}.
The message we want to convey here is that one can indeed expect anisotropic and ``mixed'' contributions to the torsional anomalies, in the sense that the Lorentz invariant $\Lambda^2\eta_{ab}$ is replaced by a generalized anisotropic tensor $\Lambda_a \Lambda_b$, in various condensed matter systems depending on the symmetries, perturbations and cutoffs. We leave the detailed discussion of such thermal gravitational contributions for the future; see however \cite{Stone2019, LiangOjanen19b} and the general discussion in \cite{NissinenVolovik2019}.
\section{On the relation of emergent torsion and pseudo gauge fields} \label{sec:comparison}
Here we summarize our findings in relation to the earlier literature, where the momentum space field corresponding to the shift of the node is often considered as an axial gauge field \cite{Volovik85, Volovik03, CortijoEtAl15, Fujimoto16, PikulinEtAl16, GrushinEtAl16, SukhachovEtAl17, HuangEtAl19, FerreirosEtAl19, IlanEtAl19}. We note that torsion can be shown to enter as an axial gauge field constructed from the totally antisymmetric torsion, $\gamma^5S^{\mu} =\epsilon^{\mu\nu\lambda\rho}T_{\nu\lambda\rho}$ \cite{ChandiaZanelli97, Soo99}, coupling to the momentum. This is essentially what we found in Secs. \ref{sec:torsional_LLs} and \ref{sec:chiral} with the momentum dependent LL density of states. The LL calculation and the anomaly itself should be performed by taking this momentum dependence into account, as we have done here.
How are tetrads with torsion otherwise different from the momentum gauge field? The symmetries corresponding to the tetrads are translations, which for finite node momenta, requisite for condensed matter Weyl fermions, correspond to the anomalous chiral symmetry. There is no local gauge symmetry corresponding to the Berry curvature in momentum space. On the other hand, the geometric formulation is suited for such translation symmetries and reveals the background geometry of the spacetime emerging from the node \cite{Horava05}. The overall geometry can be made consistent with the non-relativistic symmetries away from the Weyl node for a finite momentum range. For the anomalous axial density and anomaly, this leads to the parametric suppression compared to the U(1) anomaly and the UV-scale $p_W$. The phenomenological implications of this are significant, even without the theoretical recourse to the emergent geometry.
We also note that Ref. \cite{FerreirosEtAl19} discusses torsion (and the conservation of momentum) in strained semimetals in terms of a model with both the axial gauge field from the node and the tetrad from elastic deformations. While such a ``splitting'' between low-energy and high-energy momenta is in principle allowed, it makes the consideration of the momentum dependent anomalies more involved, with the danger of double counting. The momentum anomaly (without EM gauge fields) should be proportional to $k_W \partial_{\mu}(e j^{\mu}_5)$, as found in \cite{Nissinen2019}.
The original paper on elastic deformations, Ref. \cite{ShapourianEtAl15}, takes an explicitly geometric viewpoint which nicely connects with the strain-induced tetrad formalism proposed here. In the simplest possible terms, we start with the Weyl (or Dirac) Hamiltonian in flat space with the small deformation $e^i_a = \delta^i_a+\delta e_a^i$,
\begin{align}
H_{+} = \sigma^a(\hat{k}_a - k_{Wa}) &\to \frac{\sigma^a}{2} e^i_a (\hat{k}_i - k_{Wi}) + \textrm{ h.c.} \nonumber\\
&= \frac{\sigma^a}{2} (e_a^i k_i - k_{Wa}) + \textrm{ h.c.} \\
&\approx \frac{\sigma^a}{2} ([\delta_a^i + \delta e^i_a] q_i + k_W\delta e^i_a) + \textrm{ h.c.} \nonumber
\end{align}
where now $k_W \delta e^i_a =-k_W \delta e^a_i$ is the momentum space gauge field in the Hamiltonian with (almost) constant tetrads \cite{Volovik85, BalatskiiEtAl86, ShapourianEtAl15, PikulinEtAl16, GrushinEtAl16, FerreirosEtAl19}. The right-hand side is the Hamiltonian in coordinate (or laboratory) space, which is the one we have experimental access to, and is deformed with respect to the orthonormal frame of $k_a$. We see that the momentum $\hat{k}_i$ couples to $e^{i}_a$, as expected, and the shift is essentially constant in the Hamiltonian, in the sense that $k_{Fa}$ is constant, corresponding to the undeformed case, irrespective of the deformation. At the same time, the laboratory value changes, though, as $k_{Fi} = e^a_i k_{Fa}$. In the examples we considered, in the chiral superfluid and superconductor we explicitly have $k_{F,i}=p_F e^3_i$, giving $k_{Fa} = p_F\delta^3_a$. Similarly, for the strained semimetal we consider the originally unstrained lattice Fermi wave vector $k_{Fa}(x) \to k'_{Fa}(x+u) \approx k_{Fa}(x) + \partial_i u^a k_{Fa}(x) \equiv e_i^a k_{Fa}$ under strain $x' = x+u$, giving Eq. \eqref{eq:continuum} as expected.
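Schematically, for the chiral superfluid with $k_{Wa} = p_F\delta^3_a$, the last line identifies the momentum space (axial) gauge field as
\begin{align}
A^5_i \equiv k_W\, \delta e^3_i = p_F\, \delta e^3_i ,
\end{align}
whose ``magnetic'' field strength $\epsilon^{ijk}\partial_j A^5_k = \frac{p_F}{2}\epsilon^{ijk}T^3_{jk}$ is $p_F$ times the torsion of the deformed tetrad, as anticipated in Secs. \ref{sec:torsional_LLs} and \ref{sec:chiral}.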
What this means more generally is that $\nabla k_{Fa}=0$ in terms of the connection corresponding to the emergent spacetime, as discussed in Sec. \ref{sec:spacetimes}. In fact, this is one of the requirements for the consistent assignment of the low-energy geometry. On the other hand, all the torsional spacetimes we considered are in some sense abelian (or gravitoelectromagnetic), since the relevant fields can be identified as abelian gauge fields in momentum space, amounting to what was called the ``minimal coupling'' trick in \cite{ParrikarEtAl14,ShapourianEtAl15}. Even in this case, however, the gravitational character is still evident in the momentum dependent charge and density of the LLs, as expected for a gravitational response coupling to momentum and energy densities, including thermal effects.
\section{Conclusions and outlook}\label{sec:conclusions}
In this paper, we have argued for the emergence of non-zero torsional anomalies in Weyl (and Dirac) systems with simple Landau level arguments. In particular, we were motivated by the possibility of non-zero torsional Nieh-Yan anomalies in condensed matter systems with an explicit cutoff and the lack of relativistic Lorentz symmetries. For the anomaly, the spectral flow in the presence of torsion clearly renders non-zero results for Weyl nodes at finite momentum. Although obtained with specific simple field configurations corresponding to torsion with Landau level spectra, the results are expected to generalize covariantly in terms of the relevant spatial symmetries of the system. We discussed two idealized spacetimes related to these symmetries: the linear Riemann-Cartan spacetime and the anisotropic Newton-Cartan spacetime with quadratic dispersion.
We also briefly discussed thermal torsion via Luttinger's fictitious spacetime, since mixed anomalies can be expected already from the inclusion of thermal gradients. This connects to gravitational anomalies and transport in general \cite{NissinenVolovik2019}. Also related are the recent results on universal anomaly coefficients in linear-response thermal transport connected to gravitational anomalies \cite{Landsteiner11, LoganayagamSurowka12, JensenEtAl13, Landsteiner2014, LucasEtAl2016, StoneKim18}. From the non-universal torsional anomaly, via e.g. the momentum dependent LL density of states, the expected gravitational anomaly polynomials at finite temperature arise already at the level of linear response from the universality of the IR thermal fluctuations \cite{NissinenVolovik2019}. Moreover, we expect that emergent tetrads with coordinate dependence arise rather generally in any Weyl system, making sense of evaluating the linear response to them, even in flat space.
We clarified the relation between momentum space pseudo gauge fields and the emergent tetrads. It is important to realize that the spectral, or Hamiltonian, correspondence between torsion and U(1) magnetic fields, e.g. in a Landau level problem, is not yet enough for the anomalies to match in general. The simple LL spectral flow argument is enough to identify the non-universal cutoff appearing in the NY anomaly term. The message is that the low-energy tetrads and geometry couple to the momentum in a universal way, even in lattice models with some caveats \cite{ShapourianEtAl15, CortijoEtAl15}, due to the non-universal coupling of the lattice phonons and fermions as compared to the pure continuum. The UV scales appearing in the termination of anomalous chiral transport from such emergent fields, related to the Fermi-point momentum $p_W$ and the regime of validity of the effective Weyl/Dirac description, are naturally understood from the geometric perspective. In the presence of both independent U(1) fields and momentum space tetrads we should also expect many mixed terms, as studied e.g. in \cite{KubotaEtAl01, ParrikarEtAl14}. The mixed torsional anomalies should also be carefully reconsidered with regard to finite node momentum, where we again expect differences from relativistic fermions. On this note, our results for the anomaly at finite momentum are in contrast to \cite{HuangBo20}, where a model with torsion is compared to a relativistic model at $p=0$ with pseudo gauge fields, without consideration of the node momentum coupling to the torsion or the cutoff of the quasirelativistic dispersion.
More formally, what we did amounts to applying the $K$-theory theorem of Horava \cite{Horava05} to the geometry of specific Weyl nodes in three dimensions, keeping track of the UV symmetries and scales in the problem for the precise form of the emergent geometry and the fields coupling to the quasiparticles. The topology only guarantees the effectively Dirac-like spectrum, with everything else depending on the microscopics.
Many interesting avenues remain in the geometric description of topological condensed matter systems with gapless fermions, including nodal line systems \cite{NissinenVolovik2018, Schnyder20}. It would be extremely interesting to study the gravitational anomalies in Weyl and Dirac systems from the global symmetry perspective with many Weyl nodes, taking into account the relevant space group symmetries \cite{CortijoEtAl15, Manes12, JuricicEtAl12, SlagerEtAl13, RaoBradlyn20}. More generally, the appearance of low-energy quasirelativistic fermions with exotic geometric backgrounds within feasible experimental reach is expected to give more insight also into the physics of relativistic gravitational anomalies with torsion \cite{ChandiaZanelli97}, although the symmetries and the status of the background fields are dramatically different.
\emph{Acknowledgements. ---} We thank Z.-M. Huang for correspondence on his work, and T. Ojanen and P.O. Sukhachov for discussions. Finally, we especially thank G.E. Volovik for discussions, support and collaborations on related subjects. This work has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement no. 694248).
\section{Introduction}
Gapless fermionic quasiparticles with linear spectrum protected by topology arise in many condensed matter systems in three dimensions \cite{NielsenNinomiya83, CallanHarvey85, Volovik03, Horava05, WanEtAl11}. In particular, accidental crossings of two inversion ($P$) or time-reversal ($T$) breaking bands at the Fermi energy lead to stable quasirelativistic particles with low-energy dispersion analogous to relativistic Weyl fermions \cite{Herring37, AbrisokovBenelavskii71}. Fourfold degenerate crossings with Dirac-like low-energy excitations occur for combined $P,T$ (and/or other similar protecting) symmetries \cite{BalatskyEtAl14, ArmitageEtAl18}. Similarly, in chiral superconductors and superfluids with gap nodes, Majorana-Weyl excitations arise at low energy \cite{Volovik85, Volovik1986a, Volovik1986b, Volovik90, Volovik03, ReadGreen00}.
By a very general theorem from topology \cite{Horava05}, the low-energy linear theory near a three-dimensional Fermi point node universally takes the ($\gamma$-matrix) form of a quasirelativistic Weyl/Dirac spectrum, with the precise form of the metric and other background fields depending on the microscopic details. It is then of interest to study the detailed form of this emergent Dirac operator with an explicit cutoff and to compare with fundamental, Lorentz invariant fermions. Following this logic, the concepts of so-called momentum space pseudo gauge fields \cite{Volovik03, ShapourianEtAl15, CortijoEtAl15, Landsteiner16, Fujimoto16, PikulinEtAl16, GrushinEtAl16, SukhachovEtAl17, SukhachovEtAl18, SukhachovEtAl18b, IlanEtAl19} and ``emergent'' spacetime \cite{Volovik1986a, Volovik1986b, Volovik90, Volovik03, ReadGreen00, MesarosEtAl10, Son13, BanerjeeEtAl14, NissinenVolovik17, WeststromOjanen17, NissinenVolovik2018, GolanStern18, Nissinen2019, LiangOjanen19, WilsonEtAl20, JafariEtAl20} have emerged in non-relativistic condensed matter systems, where the low-energy fermions can experience background fields of various physical origins, similar to what appears for spin-1/2 (or even higher spin) fermions on curved spacetimes in general relativity or its non-relativistic generalizations with non-relativistic coordinate invariance.
Notably, in the low-energy quasilinear theory, the local Fermi velocities form emergent tetrads which determine the geometry of the conical dispersion. The tetrads, and their field strength torsion, couple to the quasiparticle momentum effectively as in gravity. The effects of such fields in non-relativistic systems, appearing at finite density $\mu_F$ and Fermi momentum $p_F$, are expected to be very different from their relativistic counterparts appearing at $p=0$. Amongst other things, the system at finite Fermi or crystal momentum is then charged under the field strength of these geometric background fields \cite{MesarosEtAl10, JuricicEtAl12, ParrikarEtAl14, ShapourianEtAl15, PachosEtAl20}. In three spatial dimensions, this corresponds to the anomalous translational symmetry for chiral fermions, leading to axial anomalies in the system from momentum space translations \cite{Nissinen2019, Burkov20}. For other relevant condensed matter considerations of this anomaly, see e.g. \cite{VolovikMineev81, Volovik84, Volovik1986a, CombescotDombre86, BalatskiiEtAl86, Volovik87, Volovik95, BevanEtAl97, Volovik03, ZyuzinBurkov12, SonYamamoto12, Zahed12, ZhouEtAl13, SonSpivak13, BasarKharzeevZahed2013, Landsteiner2014, LucasEtAl2016, GoothEtAl17, ArmitageEtAl18}. In this paper we point out that geometric (gravitational) contributions to the chiral anomaly, second order in gradients, are expected in generic non-homogeneous condensed matter Weyl systems with momentum space fields (background spacetimes), due to inhomogeneous deformations leading to torsion.
More generally, the appearance of the tetrad background fields in condensed matter Weyl systems is built into the low-energy theory, thus opening the possibility of simulating Riemann-Cartan (or Newton-Cartan) spacetimes for the low-energy fermions. In the case of non-trivial background torsion, the so-called chiral gravitational Nieh-Yan anomaly can appear \cite{NiehYan82, NiehYan82b}. In contrast to the axial anomaly with gauge fields, this anomaly depends on a non-universal UV cutoff parameter $\Lambda$, with canonical dimensions of momentum. While the status of the torsional contribution in relativistic systems has long been debated \cite{Yajima96, ChandiaZanelli97, ObukhovEtAl97, Soo99, PeetersWaldron99, Comments, KubotaEtAl01}, the appearance of this term in non-relativistic condensed matter systems with an explicit UV cutoff for the Weyl physics is a priori plausible \cite{ParrikarEtAl14, Nissinen2019}. Aspects of the gravitational anomaly in condensed matter have been considered in e.g. \cite{Zahed12, ZhouEtAl13, SunWan14, ParrikarEtAl14, PalumboPachos16, MaranerPachosPalumbo18, FerreirosEtAl19, CopettiLandsteiner19, Nissinen2019, Copetti20, Stone2019b}, including Weyl/Dirac fermions in superfluids, superconductors and semimetals. The dimensional hierarchy and descent relations of the torsional anomaly were recently analyzed in Ref. \cite{Stone2019b} from a Hamiltonian perspective in a relativistic model. Nevertheless, an explicit value of the cutoff parameter seems not to have been discussed in detail, except in the recent paper \cite{Nissinen2019} by one of the present authors. In the simplest possible terms, the non-universal anomaly UV scale originates from the regime of validity of the quasirelativistic linear spectrum and the associated anomalous transport: it is just the scale of validity of the Taylor expansion close to the node, which is experimentally a low-energy scale in the system \cite{Nissinen2019}. Generalizing this, the NY anomaly non-universally probes the chiral spectrum and transport, well-defined only at low energies, and conversely, the chiral branches merge in some left-right asymmetric way with the other bands, as required by global consistency and symmetries. Indeed, at face value, the spectrum and spectral flow can be terminated in a multitude of inequivalent ways. If the system is anisotropic, the interplay of different scales in the system becomes essential, as evidenced by the consideration of the anomaly in e.g. Newton-Cartan geometry with a quadratic spectrum along a preferred direction, or at finite temperature (see below).
Here we further argue for the torsional anomaly term using the simplest computational apparatus for the chiral and axial anomaly: adiabatic spectral flow in the presence of torsional Landau levels \cite{NielsenNinomiya83, Volovik85}. In this context, the torsional LLs appeared implicitly already in Refs. \cite{Volovik85, BalatskiiEtAl86} and more recently for topological semimetals in \cite{ParrikarEtAl14}, in comparison with the Pauli-Villars regularization of Lorentz invariant fermions. On the other hand, such a relativistic regularization scheme is at best only an approximation in condensed matter systems, since the linear Weyl regime applies at low energies with an explicit cutoff scale. This linear regime can be anisotropic and, furthermore, is continuously connected with the non-relativistic regime with quadratic dispersion. Moreover, as discussed in this paper, the role of the spectral flow is drastically altered by the finite node momentum as compared to relativistic fermions.
The role of momentum space pseudo gauge fields, with momentum dependent axial charge, also becomes evident in the geometric framework for the axial anomaly. Importantly, it is incorrect to assume the universal U(1) axial anomaly for such gauge fields, since the effective momentum space description has a finite regime of validity. To the best of our knowledge, this fact has been overlooked thus far. Related to the momentum dependence in the anomaly, the UV scale can be supplemented by an infrared (IR) temperature scale of thermal fluctuations, in contrast to, say, U(1) gauge fields. With some caveats, this IR anomaly becomes universal due to the universality of thermal fluctuations close to the node. The thermal torsional anomaly and the associated currents were recently considered in Ref. \cite{NissinenVolovik2019}. Contributions to the torsional NY anomaly at finite temperature were further discussed in \cite{ImakiYamamoto19, Stone2019, NissinenVolovik19b, LiangOjanen19b, Imaki20} for relativistic fermions at $p=0$. The closely related role of torsion in viscoelastic thermal transport has also been studied, e.g. in \cite{Shitade14, BradlynRead15, GromovAbanov15, Sekine16}. Here we mostly focus on the non-universal UV contribution at zero temperature. For completeness, we comment on thermal effects from non-zero temperature gradients, which point to still new types of anisotropic torsional anomaly terms not present in systems with Lorentz invariance.
The rest of this paper is organized as follows. Section \ref{sec:spacetimes} discusses the low-energy Weyl Hamiltonian and the associated geometry in condensed matter systems from the perspective of emergent background spacetimes. The following Section \ref{sec:torsional_LLs} reviews the relativistic torsional anomaly and the spectral flow argument, focusing on the extension to finite node momentum and the comparison with the anomaly for U(1) gauge fields presented in Appendix \ref{sec:appendix_EM}. Section \ref{sec:chiral} discusses the torsional anomaly in chiral superfluids and superconductors, where it can be matched with experiment \cite{Volovik95, BevanEtAl97, Nissinen2019}. This is followed by a model of $T$-breaking strained semimetals in Sec. \ref{sec:WSM}. We also briefly discuss the role of torsion in the presence of thermal gradients in Sec. \ref{sec:thermal}. We conclude with a comparison to previous results in Sec. \ref{sec:comparison} and with the conclusions and outlook of our results.
\section{Weyl fermions in condensed matter and relativistic systems}\label{sec:spacetimes}
\subsection{Weyl fermions in condensed matter}
We consider a fermionic system with broken time-reversal ($T$) or inversion ($P$) symmetry. In the vicinity of a generic degenerate crossing at $\vek{p}_W$, ignoring all other bands, the $2\times2$ Hamiltonian is $H = \sigma^a H_a$ in terms of the unit and Pauli matrices $\sigma^a$, $a=0,1,2,3$. This leads to the expansion
\begin{align}
H(\vek{p}) = \sigma^a e_a^{i}(p-p_W)_{i} + \cdots \label{eq:HWeyl},
\end{align}
where
\begin{align}
e_a^i = \frac{\partial H_a}{\partial p_i}\bigg \vert_{p=p_W}. \label{eq:Taylor_tetrad}
\end{align}
The expansion is, of course, valid for $\abs{\vek{p}-\vek{p}_W}\ll p_W$ since the remainder is of the order of $\abs{\vek{p}-\vek{p}_W}^2$. This provides an explicit cutoff for the linear Weyl regime that is, nevertheless, continuously connected with the non-relativistic quadratic dispersing spectrum and the other bands.
The existence of the Weyl node degeneracy is protected by topology in a finite region, since there are three parameters and three constraints \cite{Herring37, AbrisokovBenelavskii71, Volovik03, Horava05}. Via rotations and scalings, $\tilde{p}_a = e^i_a p_i$, the Hamiltonian becomes the right- or left-handed relativistic Weyl Hamiltonian, at Fermi momentum $\tilde{p}_W$,
\begin{align}
\tilde{H}(\vek{p}) = \chi \sigma^a (\tilde{p}-\tilde{p}_W)_a
\end{align}
where $\chi=\pm 1 = \text{sgn}(\det e^i_a)$ is the chirality, defined as the direction of (pseudo)spin with respect to the propagation momentum. The band energies are $E =(\tilde{p}-\tilde{p}_W)_0\pm \abs{\tilde{\vek{p}}-\tilde{\vek{p}}_W}$. The role of the coefficients $e^\mu_a$ is simply to determine the (anisotropic) Fermi velocities of the conical dispersion $\omega^2 = -g^{ij}(p-p_W)_i (p-p_W)_j$ via the (inverse) metric
\begin{align}
g^{ij} = -\sum_{a,b=1,2,3} e^i_a e^j_b \delta^{ab} \equiv -e^i_a e^j_b \delta^{ab} \label{eq:cone_metric}
\end{align}
where the Einstein summation convention for repeated Latin and Greek indices is henceforth assumed. The spatial tetrad $e_a^i$ is extended to a non-degenerate matrix $e^{\mu}_a$ by considering the operator $\sigma^a e_a^{\mu}i\partial_{\mu} =i \partial_t - H(\vek{p})$ with $\mu=t,x,y,z$. In particular, the coefficient $e^\mu_0=(1,v^i)$ is non-trivial in type-II Weyl semimetals and in superfluids and superconductors with superflow. The case with non-zero spatial $e^{t}_a$, $a=1,2,3$, was considered in \cite{NissinenVolovik17}. These break different symmetries, while the spacelike tetrads transform like gauge potentials corresponding to axial magnetic and electric fields. While the Hamiltonian \eqref{eq:HWeyl} is usually analyzed for translationally invariant systems, it remains valid for weak deformations. This can be seen in any consistent gradient expansion scheme, e.g. the semi-classical gradient expansion of the BdG Hamiltonian for superconductors/superfluids, or the Schrieffer-Wolff transformation for Bloch Hamiltonians \cite{WeststromOjanen17, LiangOjanen19}.
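As a concrete example, the tetrad \eqref{eq:3HeA_tetrad} of the chiral superfluid of Sec. \ref{sec:chiral} gives the inverse metric
\begin{align}
g^{ij} = -\left(c_\perp^2\, \unitvec{m}^i\unitvec{m}^j + c_\perp^2\, \unitvec{n}^i\unitvec{n}^j + c_\parallel^2\, \unitvec{l}^i\unitvec{l}^j\right),
\end{align}
so that $\omega^2 = -g^{ij}q_iq_j = c_\parallel^2 q_\parallel^2 + c_\perp^2 q_\perp^2$, the anisotropic Weyl cone of that system.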
We conclude that the Hamiltonian \eqref{eq:HWeyl} has a striking similarity to relativistic fermions coupled to a non-trivial background geometry or gravity, albeit with some important caveats. More precisely, if we consider the low-energy Weyl fermion $\Psi_W$ in terms of the original excitations $\Psi$, we see that
\begin{align}
\Psi(\vek{x},t) = e^{i \vek{p}_W \cdot \vek{x}} \Psi_W(\vek{x},t), \label{eq:momentum_rotation}
\end{align}
which, however, corresponds to the anomalous (chiral) rotations in the system, thus making the finite node momentum $p_W$ very important. In the rest of the paper, we will explicitly consider the anomaly implied by \eqref{eq:momentum_rotation} in the presence of non-trivial background fields $e^{\mu}_a(x)$, from Eq. \eqref{eq:Taylor_tetrad}, after reviewing the necessary background geometry in the next section. U(1) gauge fields are assumed to be absent. We focus here on $T$-breaking systems, where in the simplest case one finds Weyl nodes of opposite chirality at $\pm \vek{p}_W$, whereas for inversion $P$ breaking systems one has at minimum four Weyl points, which are invariant under $T$ and map non-trivially to themselves under inversion.
\subsection{Quasirelativistic fermions}
We briefly summarize quasirelativistic fermions on curved Riemann-Cartan spacetimes, see e.g. \cite{NiehYan82, ObukhovEtAl97, ChandiaZanelli97, ParrikarEtAl14}. These spacetimes are defined via an orthonormal frame $e^a = e^a_{\mu}dx^\mu$, giving rise to the metric as in \eqref{eq:cone_metric}, and a (matrix) spin-connection $\hat{\omega}_{\mu} dx^{\mu}$, both of which couple to the Dirac (and Weyl) equations. Informally, $e^a_{\mu}$ is a spacetime ``translation gauge field'', while $\hat{\omega}$ is the gauge connection corresponding to local (Lorentz) rotations, see e.g. \cite{NiehYan82b}.
As discussed above and in the Introduction, analogous fields arise in the low-energy Weyl Hamiltonian close to the nodes in condensed matter systems on flat space, giving rise to emergent spacetimes for the low-energy fermions. These are, however, not strictly relativistic in the sense that the emergent metric does not follow from the locally Lorentz invariant spacetimes implied by general relativity, but rather from the microscopic non-relativistic UV theory at low energy. This is what we call quasirelativistic and emergent.
Note that the spin-connection is, strictly speaking, the gauge field of a local symmetry entering the Dirac operator; its emergence therefore requires the corresponding local symmetry. Notwithstanding, it arises, e.g., in chiral superconductors and superfluids due to the combined local U(1) symmetry corresponding to gauge and orbital rotation symmetry \cite{LiuCross79, GolanStern18, Nissinen2019}. The tetrad and connection fields give rise to the torsion $T^a=de^a+(\hat{\omega} \wedge e)^a$ and curvature $\hat{R} = d\hat{\omega}+\hat{\omega} \wedge \hat{\omega}$ field strength tensors that equivalently characterize the spacetime. From the tetrad one can derive the spacetime metric, which enters as a secondary object, in contrast to the usual Riemannian spacetimes where the connection is symmetric and uniquely fixed by the metric.
In terms of equations, the basic quantities are the tetrad $e^a_{\mu}$ and the coordinate connection $\Gamma_{\mu\nu}^{\lambda}$. The former is the matrix square root of the metric,
\begin{align}
g_{\mu\nu} = e^a_{\mu} e^b_{\nu} \eta_{ab}, \quad e_{a}^\mu e_{b}^{\nu} \eta^{ab} = g^{\mu\nu},
\end{align}
defined with respect to a local orthonormal frame, in terms of $\eta_{ab}= \textrm{diag}(1,-1,-1,-1)$. Tensors $X^{a\cdots \mu \cdots}_{b\cdots \nu \cdots}$ can carry both local orthonormal (Lorentz) indices and coordinate indices; the two bases are transformed into each other by contracting with $e^a_{\mu}$ or the inverse $e^{\mu}_a$. The connection consistent with such basis changes, defined by $\nabla e^a_{\mu} = 0$, has two parts, one for local orthonormal indices and one for coordinate indices, and is metric compatible. The connection determines geometric parallel transport in the system. Without loss of generality it can be written as
\begin{align}
\omega^a_{\mu b} = e^a_{\lambda} e^{\nu}_{b} \Gamma^{\lambda}_{\mu\nu} - e^{\nu}_b\partial_{\mu} e^{a}_{\nu} \label{eq:spin-connection},
\end{align}
where $\Gamma_{\mu\nu}^{\lambda}$ is the coordinate connection with torsion
\begin{align}
T_{\mu\nu}^{\lambda} = \Gamma^{\lambda}_{\mu\nu} - \Gamma^{\lambda}_{\nu \mu}.
\end{align}
The connection can be decomposed in terms of torsion as
\begin{align}
\Gamma^{\lambda}_{\mu\nu} = \mathring{\Gamma}^{\lambda}_{\mu\nu} + C^{\lambda}_{\mu \nu},
\end{align}
where $\mathring{\Gamma}^{\lambda}_{\mu\nu} = \frac{1}{2}g^{\lambda\rho}(\partial_{\mu}g_{\nu\rho} +\partial_{\nu}g_{\mu\rho}-\partial_{\rho}g_{\mu\nu})$ is the Christoffel connection fully determined from the metric and $C^\lambda_{\mu\nu} = \frac{1}{2} (T^{\lambda}_{\ \mu\nu} + T_{\mu \ \nu}^{\ \lambda} - T^{\ \ \lambda}_{\mu \nu})$ is the contorsion tensor.
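Since $\mathring{\Gamma}^{\lambda}_{\mu\nu}$ is symmetric in $\mu\nu$, the antisymmetric part of the connection, $\Gamma^{\lambda}_{[\mu\nu]} = C^{\lambda}_{[\mu\nu]} = \frac{1}{2}T^{\lambda}_{\ \mu\nu}$, is carried entirely by the contorsion, consistent with the definition of the torsion above.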
The low-energy quasirelativistic Weyl fermion theory is, in the chiral Dirac fermion basis $\psi = \left(\begin{matrix} \psi_L & \psi_R \end{matrix}\right)^T$, where $\psi_{R,L}$ are Weyl fermions and $\gamma^a = \overline{\sigma}^a \oplus \sigma^{a}$ with $\overline{\sigma}^a = (1,-\sigma^i)$,
\begin{align}
S_{D} = \int d^4 x e~ \frac{1}{2}\overline{\psi}\gamma^a (e^{\mu}_a i D_{\mu } - p_{Wa})\psi + \textrm{ h.c.} ~. \label{eq:Dirac_action}
\end{align}
where $e \equiv \det e^a_{\mu}$ and $D_{\mu}$ is the covariant derivative corresponding to the canonical momentum
\begin{align}
D_{\mu} = \partial_{\mu} - \frac{i}{4} \omega_{\mu}^{ab} \sigma_{ab} - i q A_{\mu}
\end{align}
where $\sigma_{ab} = \frac{i}{2}[\gamma_a,\gamma_b]$ are the Lorentz spin generators and $A_{\mu}$ is a U(1) gauge potential with charge $q$. They enter the covariant derivative, or canonical momentum, due to the local Lorentz (rotation) and gauge symmetries. For the emergent spin-connection to exist, the local rotation symmetry has to be dynamically generated; see Sec. \ref{sec:chiral} and \cite{Nissinen2019}. Importantly for our applications, the quantity $p_{Wa} = (\mu_W, \vek{p}_W )$ is the shift of the Weyl (or Dirac) node in momentum space, at chemical potential $\mu_W = e_0^\nu p_{W\nu}$ and with $p_{Wa} = e^i_a p_{Wi}$. The magnitude of the latter is a UV parameter that is fixed (up to small deformations) in the low-energy theory.
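As a simple illustration, assuming flat tetrads $e^{\mu}_a = \delta^{\mu}_a$ with $\omega^{ab}_{\mu} = A_{\mu} = 0$, the right-handed sector of \eqref{eq:Dirac_action} reduces to a Weyl fermion with the Hamiltonian
\begin{align}
h_{+} = \sigma^i(\hat{p}_i - p_{W,i}) - \mu_W ,
\end{align}
anticipating the form used in Sec. \ref{sec:torsional_LLs} below.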
\subsection{Anisotropic Newton-Cartan fermions}\label{sec:Newton-Cartan}
A related concept to the Riemann-Cartan spacetime of \eqref{eq:Dirac_action} is an anisotropic version of a non-relativistic Newton-Cartan (NC) spacetime. In the latter, we single out a Newtonian time and, in our case, a preferred spatial direction with quadratic dispersion, in contrast to the linear Riemann-Cartan case. In what follows in Secs. \ref{sec:chiral} and \ref{sec:WSM}, this preferred direction is along the Weyl node separation, with uniaxial symmetry and anisotropic scaling. Compared to the standard NC case, there is an additional gauge symmetry corresponding to U(1) number conservation and a local Milne boost symmetry along the anisotropy direction \cite{Son13, BanerjeeMukherjee18, CopettiLandsteiner19, Copetti20}. These will both be gauge fixed to zero, and we will apply the construction mostly to the chiral superconductor/superfluid, where they are naturally absent for Majorana-Weyl fermions. With the time coordinate fixed, the symmetries of the NC spacetime then correspond to the generalized Galilean transformations $x^i \to x^i +\xi^i(x,t)$ \cite{DuvalEtAl85, Son13, ObersEtAl14, BanerjeeEtAl14, WilsonEtAl20}.
The metric is
\begin{align}
g_{\mu\nu} = n_{\mu}n_{\nu} + h_{\mu\nu}
\end{align}
where now $n_\mu$ is a \emph{spacelike} vector and $e^a_{\mu}$ a (degenerate) tetrad giving the metric $h_{\mu\nu}$ restricted to the orthogonal subspace, with $e^0_\mu = \delta^0_\mu$ representing Newtonian time,
\begin{align}
h_{\mu\nu} = \eta_{ab} e^{a}_{\mu}e_{\nu}^b, \quad a,b =0,1,2,
\end{align}
with inverses
\begin{align}
n_{\mu}\ell^{\mu} = 1, \quad e^a_{\mu}\ell^{\mu} =0, \quad e^a_{\mu} e^{\mu}_b = \delta^a_b,\quad a=0,1,2.
\end{align}
The connection and torsion follow as
\begin{align}
\Gamma^{\lambda}_{\mu\nu} = \mathring{\Gamma}^\lambda_{\mu\nu}[h] + \ell^{\lambda}\partial_{\mu}n_{\nu},
\end{align}
from the condition that $\mathcal{L}_{\ell} h_{\mu\nu} = 0$, equivalent to $\nabla_{\mu}n_\nu = \nabla_{\lambda}h_{\mu\nu}=0$. The torsion is given as
\begin{align}
T^3_{\mu\nu} \equiv n_\lambda T^{\lambda}_{\mu\nu} = \partial_{\mu}n_{\nu} - \partial_{\nu}n_{\mu}
\end{align}
and the standard spin-connection perpendicular to $\ell^\mu$, $\mathring{\omega}_{\mu\nu}[h]$, is as in Eq. \eqref{eq:spin-connection}, corresponding to local rotation symmetry about $\ell^\mu$. The fact that $n_{\mu}$ is covariantly constant is natural, since it can be identified with the direction of the non-zero Weyl node separation in, e.g., $T$-breaking Weyl systems.
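Explicitly, antisymmetrizing the connection gives $\Gamma^{\lambda}_{\mu\nu} - \Gamma^{\lambda}_{\nu\mu} = \ell^{\lambda}(\partial_{\mu}n_{\nu} - \partial_{\nu}n_{\mu})$, since the Christoffel part $\mathring{\Gamma}^{\lambda}_{\mu\nu}[h]$ is symmetric; contracting with $n_{\lambda}$ and using $n_{\lambda}\ell^{\lambda} = 1$ reproduces $T^3_{\mu\nu}$ above.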
We discuss in Sec. \ref{sec:chiral} the Landau level problem of Majorana-Weyl fermions corresponding to such a spacetime, with the (right-handed Weyl) action
\begin{align}
S_{W} = \int d^4x \sqrt{g}\, \psi^\dagger [\tau^a c_{\perp}e_a^\mu i \partial_\mu - \tau^3\epsilon(i \partial_\ell)] \psi + \textrm{ h.c.} \label{eq:NC_fermion}
\end{align}
where $\epsilon(i\partial_\ell) = -\partial_\ell^2/(2m)-\mu_F$ is the dispersion in the anisotropic direction, with $\partial_\ell = \ell^{\mu}\partial_{\mu}$ and the degenerate metric $\ell^{\mu}\ell^{\nu} = g^{\mu\nu}-h^{\mu\nu}$. In this case the relative anisotropy of the two terms is $c_{\perp}/c_{\parallel} = mc_{\perp}/p_F$, where $p_F = \sqrt{2m\mu_F}$ and $c_{\parallel}=v_F$ is the Fermi velocity. This NC model can be matched to the results discussed in \cite{Nissinen2019}. Note that a very similar model with Lifshitz anisotropy was considered in \cite{CopettiLandsteiner19}, and the ensuing torsional anomalies for momentum transport in \cite{Copetti20}. For a semimetal under strain, the model in Sec. \ref{sec:WSM} is correspondingly anisotropic, but the precise connection to a specific NC model and its symmetries remains to be worked out in full detail.
\section{Torsional anomalies and Landau levels}\label{sec:torsional_LLs}
\subsection{Torsional Nieh-Yan anomaly}
Now consider Weyl fermions coupled to a tetrad with non-zero torsion and curvature, with the U(1) gauge fields set to $A_{\mu} = A_{5 \mu} = 0$; see however Appendix \ref{sec:appendix_EM}. As for U(1) gauge fields, or gravitational fields represented by the metric $g_{\mu\nu}$, the Weyl fermions are anomalous in the presence of non-zero torsion (and curvature).
We focus on a pair of complex fermions of opposite chirality with currents $j^{\mu}_{\pm}$. The (covariant) torsional anomaly for the axial current $j^{\mu}_5 = j^\mu_{+}-j^\mu_{-}$ is \cite{Yajima96, ObukhovEtAl97, ChandiaZanelli97, Soo99, PeetersWaldron99}
\begin{align}
\partial_{\mu} (e j^{\mu}_5) &= \frac{\Lambda^2}{4\pi^2} (T^a \wedge T_a - e^a \wedge e^b \wedge R_{ab}) \label{eq:NYanomaly}\\
&\phantom{=} + \frac{1}{192\pi^2}\textrm{tr}(R\wedge R) \nonumber\\
&= \frac{\Lambda^2}{4\pi^2} \epsilon^{\mu\nu\lambda\rho}\left(\frac{1}{4}T^a_{\mu\nu}T_{a\lambda\rho} - \frac{1}{2}e^a_{\mu}e^b_{\nu}R_{ab\lambda\rho}\right) + O(\partial^4). \nonumber
\end{align}
For a discussion of the relativistic torsional anomaly term, we refer to \cite{NiehYan82, NiehYan82b, ObukhovEtAl97, ChandiaZanelli97, Comments}, and for applications in topological condensed matter systems, to \cite{SunWan14, ParrikarEtAl14, FerreirosEtAl19, Nissinen2019, Stone2019, Copetti20, LiangOjanen19b}. For the mixed terms between torsion and U(1) gauge potentials, see e.g. \cite{KubotaEtAl01}; since we focus on the anomaly contribution due solely to the geometry (tetrads), we will not consider them. Ref. \cite{FerreirosEtAl19} also considered novel ``axial'' tetrads $e^a_{\mu R} \neq e^a_{\mu L}$ at the two Weyl nodes $R,L$, with a (vector-like) $T^5$ appearing as in Eq. \eqref{eq:U1_anomaly_eqs}. We will require $e_R = \pm e_L$, but this is actually a rather strong constraint, basically allowing only (improper) rotations that can be gauged away. In the chiral Weyl superfluid/superconductor or the minimal $T$-breaking semimetal, $e_R =-e_L$, but this is just the chirality of the nodes and is built into the axial nature of torsion. Intriguingly, the trace part of torsion arises as the gauge field of local Weyl scalings, but, being non-unitary, it comes with a complex gauge coupling \cite{NiehYan82}. The presence of different (chiral) tetrad couplings and the overall symmetry considerations would be highly interesting for, e.g., parity breaking and other non-minimal Weyl systems with several nodes, some of which coincide in momentum space.
To conclude this section, we note the following salient properties of the NY anomaly term: i) Despite appearances, it is given by the difference of topological terms, albeit in five dimensions \cite{ChandiaZanelli97}. ii) The NY anomaly term is of second order in gradients and is therefore the leading contribution from the background geometry in linear response. iii) The UV cutoff is isotropic in momentum space by (local) Lorentz invariance, but it multiplies the geometric term, which can be anisotropic. In condensed matter applications we do not expect Lorentz invariance, so in principle non-isotropic anomaly coefficients can arise (see e.g. Sec. \ref{sec:thermal}). iv) The NY term has contributions from both torsion and curvature, dictated by the local exactness $d(e^a \wedge T_a) = T^a \wedge T_a -e^a\wedge e^b \wedge R_{ab}$. The two contributions are a priori independent before the geometry (the torsionful connection) is fixed; the anomaly therefore provides physical input on the spacetime geometry or connection \cite{Nissinen2019}. In more pragmatic terms, the anomaly coefficient $\Lambda^2$ can be computed in the case $\hat{\omega}_\mu = 0$, although the constraints of a consistent spacetime geometry should be kept in mind.
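The gradient counting behind ii) is worth spelling out: the tetrad is dimensionless, so the torsion $T^a = de^a + (\hat{\omega}\wedge e)^a$ carries one gradient and the curvature two, giving schematically
\begin{align}
T^a\wedge T_a \sim e^a \wedge e^b \wedge R_{ab} \sim O(\partial^2), \quad \textrm{tr}(R\wedge R) \sim O(\partial^4),
\end{align}
which is why the first two terms in \eqref{eq:NYanomaly} require the dimensionful coefficient $\Lambda^2$, while the purely gravitational term enters with a dimensionless coefficient.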
\subsection{Quasirelativistic fermions and torsional Landau levels}
Now we proceed to compute the torsional NY anomaly in non-relativistic systems utilizing the Landau level argument. To set the stage and remove confusion before presenting our main results, we briefly review (quasi)relativistic torsional Landau levels with linear spectrum, see e.g. \cite{ParrikarEtAl14}. The computation of the Landau levels is close to, and inspired by, the spectral flow obtained in \cite{Volovik85, BalatskiiEtAl86} for momentum space gauge fields at $p_W\neq 0$. Similar considerations for $p_W=0$ can be found in \cite{Stone2019, Stone2019b}.
The Weyl particles are governed by the effective Hamiltonian
\begin{align}
H_{\rm W} = \frac{1}{2}\sigma^a e^{i}_{a}(\hat{p}_i - p_{W,i}) + \textrm{h.c.}
\end{align}
where $\vek{p}_W$ is the location of the Weyl point. Due to the lack of protecting symmetries (namely, at least $P$ or $T$ is broken), the shift vector
\begin{align}
p_{W,\mu} = (\mu_W, \vek{p}_W)
\end{align}
is necessarily non-zero for the Weyl point to exist. We will focus on the $T$-breaking case with two nodes of opposite chirality at $\pm\vek{p}_W$ and assume that $\mu_W$ is zero unless otherwise specified.
In this section, we assume that the coordinate dependence of the Hamiltonian arises solely from the tetrad $e^\mu_a(x)$, while the location of the node, $p_{Wa}$, is assumed constant. Note that the coordinate momentum $p_{W\mu} \equiv e^a_\mu p_{Wa}$ can still vary, and when $T^a_{\mu\nu} \neq 0$ there is non-zero torsion. Torsional LLs arise when, say, $\frac{1}{2}\epsilon^{ijk}T^3_{jk} = T_B\unitvec{z}^i$ is constant, with the other torsion components and the spin connection vanishing. We discuss later, in Secs. \ref{sec:chiral} and \ref{sec:WSM}, how to make the identification between the low-energy emergent gravitational fields and the microscopic background fields in specific examples.
\subsubsection{Torsional Landau levels}
Specifically, the assumed (semi-classical) tetrads $e^a = e^a_{\mu} dx^{\mu}$ and their inverses $e_a = e^{\mu}_a \partial_{\mu}$ are, following \cite{Volovik85, BalatskiiEtAl86, ParrikarEtAl14},
\begin{align}
e^0 &= dt, \quad e^{1} = dx, \quad e^{2} = dy, \quad e^3 = dz-T(y)dx \nonumber \\
e_0 &= \partial_t,\quad e_{1} = \partial_x+T(y)\partial_z, \quad e_2 = \partial_y, \quad e_3 = \partial_z .\label{eq:torsion_tetrad}
\end{align}
Now we compute the spectrum of the Weyl fermions in the presence of a constant torsional magnetic field, corresponding to the profile $T(y)=T^3_B y$. The corresponding metric is
\begin{align}
g_{\mu\nu}dx^{\mu}dx^{\nu} &= \eta_{ab}e^a e^{b} \nonumber \\
&= dt^2-(1+T(y)^2)dx^2-dy^2\\
&\phantom{=}+2 T(y) dx\, dz-dz^2 . \nonumber
\end{align}
The torsion is given by $T^3_{\mu\nu} = \partial_\mu e^3_\nu-\partial_{\nu}e^3_{\mu}$, or $T^3 = de^3 = \partial_y T(y)\, dx \wedge dy$, i.e. $T^3_{xy} = \partial_y T(y) =T_B^3$. In analogy with the electromagnetic tensor, we will call $\frac{1}{2} \varepsilon^{ijk} T_{jk}^a$ and $T^a_{0i}$ the torsional magnetic and electric fields, respectively.
The Weyl Hamiltonian couples to the non-trivial vierbein as ($\chi$ being the chirality)
\begin{align}
\label{eq:hamT}
H_\ch =& \frac{\ch}{2} \sigma^a e_a^i \hat{p}_i +\textrm{ h.c.} \nonumber \\
=& \ch\begin{bmatrix}\hat{p}_z & \hat{p}_x+\hat{p}_z T_{B}^3 y - i\hat{p}_y\\ \hat{p}_x+\hat{p}_zT_{B}^3 y + i\hat{p}_y & -\hat{p}_z \end{bmatrix}.
\end{align}
As usual, the energy eigenvalues are obtained by squaring the Hamiltonian; the antisymmetric part $i\epsilon^{abc}\sigma^c$ drops out of the symmetric first term, and only the commutator $[\hat{p}_y,T(y)] = -iT_{B}^3$ contributes to the spin-dependent term:
\begin{align*}
H^2 &= \sigma^a e^i_a \hat{p}_i\, \sigma^b e^j_b \hat{p}_j = e^i_a e^j_b \sigma^a \sigma^b \hat{p}_i \hat{p}_j + \sigma^a\sigma^b e^i_a [\hat{p}_i,e^j_b]\hat{p}_j \\
& = e^i_a e^j_b (-\eta^{ab}+i\epsilon^{abc}\sigma^c) \hat{p}_i \hat{p}_j + \sigma^2\sigma^1(-iT_{B}^3) \hat{p}_z \\
& = -g^{ij}\hat{p}_i\hat{p}_j - T_{B}^3\sigma^3\hat{p}_z \\
& = \hat{p}_y^2 + \hat{p}_z^2 + (\hat{p}_x + T_{B}^3 \hat{y}\,\hat{p}_z)^2 - T_{B}^3\sigma^3 \hat{p}_z.
\end{align*}
We see that \eqref{eq:hamT} is equivalent to a LL problem in a magnetic field [Eq. \eqref{eq:Hmag} of Appendix \ref{sec:appendix_EM} with \(B^z = T_{B}^3\) and charge \(e = p_z\)]. With these identifications, the spectrum follows from Eq. \eqref{eq:relEMspectrum}:
\begin{align}
\label{eq:tllspectrum}
E(p_z) = \begin{cases}\pm \sqrt{p_z^2+2|p_zT_{B}^3 |n}, \quad n\geq1 \\ \text{sgn}(T_{B}^3 )\ch|p_z|, \quad n = 0. \end{cases}
\end{align}
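The level structure \eqref{eq:tllspectrum} is straightforward to tabulate; the following minimal numerical sketch (our illustration only, with arbitrary parameter values and units $\hbar = v = 1$) evaluates the branches plotted in Fig. \ref{fig:relativistic_TLL}:
\begin{verbatim}
# Minimal sketch: torsional Landau levels E_n(p_z) of a Weyl
# fermion; T_B and the momentum grid are assumed/arbitrary values.
import numpy as np

T_B = 1.0     # torsional magnetic field
chi = +1      # chirality of the node

def E_n(pz, n, branch=+1):
    if n == 0:  # unpaired chiral lowest Landau level
        return np.sign(T_B) * chi * np.abs(pz)
    return branch * np.sqrt(pz**2 + 2.0 * np.abs(pz * T_B) * n)

pz = np.linspace(-2.0, 2.0, 401)
levels = [E_n(pz, n) for n in range(4)]
# Note: the n >= 1 level spacing grows like sqrt(|pz| T_B), since
# the effective charge coupling to the torsional field is pz itself.
\end{verbatim}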
The lowest Landau level (LLL) is chiral and unpaired, with the simple eigenfunctions ($\sigma^3=\pm1$)
\begin{align}
\Psi_{\sigma^3}(x,p_x,p_z) \sim e^{i (p_x x+p_z z)} e^{\pm(p_x y-p_z T_B y^2/2)} \label{eq:LLL_gaussian}
\end{align}
where the (pseudo)spin or helicity is determined by $\text{sgn}(p_zT_B)$, as required by normalizability of the Gaussian. We stress that the shape of the spectrum is in general also modified due to the momentum replacing the electric charge: left-handed states now disperse as \(E<0\) and right-handed states as \(E>0\) (or vice versa, depending on the sign of the field), see Fig. \ref{fig:relativistic_TLL}.
\begin{figure}
\centering
\includegraphics[width=220pt]{Kuvaajat/TLL_rel_spectrum_occupied.pdf}
\caption{Dispersion of left-handed (LLL in blue) and right-handed (LLL in red) Weyl fermions at $p_W=0$ under a torsional magnetic field.}
\label{fig:relativistic_TLL}
\end{figure}
\subsubsection{Spectral flow and anomaly}
Analogously to the Landau level calculation with electromagnetic fields, we may turn on a constant torsional electric field parallel to \(T_{B}^3 \) by introducing time dependence to the vierbein as \(e_z^3 = 1+T_{E}^3 t\) with $T_{E}^3 t \ll 1$. Then $e^z_3 = (1+T_{E}^3 t)^{-1} \approx 1-T_{E}^3 t$. This induces the adiabatic time dependence $\partial_t p_z = (\partial_t e^3_z) p_3$, analogous to the Lorentz force, which leads to spectral flow of states driven by the momentum dependent torsional electric field. The number currents in the vicinity of the node $p_z = e^3_z p_3 = p_{Wz}=0$ are, for both chiralities,
\begin{align}
\label{eq:tllcurrent}
e j^0_\chi(t) &= \frac{T_{B}^3 }{2\pi} \int_{-\Lambda}^{\Lambda}\frac{dp^3}{2\pi}|p_z| \nonumber \\
&= - \Lambda^2\frac{T_{B}^3 (1+T_{E}^3 t)}{4\pi^2} = -\Lambda^2\frac{T^3_{xy} e_z^3}{4\pi^2},
\end{align}
where a cutoff \(\Lambda\) has been introduced to regularize the momentum dependent current density and spectrum; here $\int_{-\Lambda}^{\Lambda}dp^3\,\vert p_z\vert = e^3_z\Lambda^2$, so the density tracks the instantaneous tetrad $e^3_z$, and the overall sign corresponds to counting the filled negative-energy states. We see that for $E<0$, particles flow below the cutoff, whereas for $E>0$, holes flow above the cutoff, see Fig. \ref{fig:relativistic_spectral_flow}. Then, taking into account the fact that the tensorial current density is modified by the volume element $e\, d^4 x$ in the presence of torsion, see e.g. \cite{Soo99, BradlynRead15},
\begin{align}
\partial_t(e j^0_{\chi}) &= \mp \Lambda^2\frac{T^3_{xy}\, \partial_t e_z^3}{4\pi^2} = \mp \Lambda^2\frac{T_{B}^3 T_{E}^3 }{4\pi^2} \nonumber\\
&= \mp\frac{\Lambda^2}{32\pi^2}\levic T^3_{\mu\nu} T^3_{\rho\sigma}, \label{eq:spectral_flow_anomaly}
\end{align}
from holes or particles moving above or below the cutoff, respectively, depending on the direction of the torsional electric field. This is the vacuum regularization that was also used in Ref. \onlinecite{ParrikarEtAl14}, in the sense $n_{\rm vac} =\sum_{\abs{E_n}\leq \Lambda} \text{sgn}(E_n)$, where an additional factor of one half was present, presumably due to comparison with anomaly inflow from five dimensions. Generalizing this to a fully covariant expression (see Appendix \ref{sec:appendix_EM}) gives
\begin{align}
\frac{1}{e}\partial_\mu(ej^\mu_{5}) = \frac{1}{e}\frac{\Lambda^2}{16\pi^2}\levic T^3_{\mu\nu} T^3_{\rho\sigma}, \label{eq:j_spectral_flow}
\end{align}
and in particular $\partial_{\mu} (ej^\mu)=0$, as required. We discuss the relativistic vacuum and the spectral flow leading to \eqref{eq:j_spectral_flow}, as compared to nodes at finite momenta and axial U(1) fields, further in the next section.
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{Kuvaajat/TLL_spectral_flow_2.pdf}
\caption{Relativistic spectral flow at $k=0$ in the presence of torsion, with the adiabatic transfer of states. The dashed line indicates the location of the cutoff $\Lambda$. }
\label{fig:relativistic_spectral_flow}
\end{figure}
\subsubsection*{Torsional anomaly for \(p_W \neq 0\)}
If we now displace the Weyl nodes of the relativistic case \eqref{eq:hamT} to \(p_z = \pm p_{W}\) in momentum space, corresponding to a $T$-breaking Weyl system, the spectrum \eqref{eq:tllspectrum} takes the form
\begin{align}
E(p_z) = \begin{cases}\pm \sqrt{(p_z\pm
p_{W})^2+2|p_zT_{B}^3 |n}, \quad n\geq1 \\ \text{sgn}(\ch p_zT_{B}^3 )(p_z\pm p_{W}), \quad n = 0. \end{cases}
\end{align}
The lowest, chiral Landau level looks exactly like that of a Weyl fermion in an axial magnetic field, Eq. \eqref{eq:displacedham}. Higher levels are distorted, since the effective charge carried by the particles is their momentum. See Fig. \ref{fig:pseudotorsion}.
\begin{figure}[h]
\centering
\includegraphics[width=250pt]{Kuvaajat/TLL-condensed-specflow.pdf}
\caption{Left-handed Weyl particles at $k_z = k_0$ (LLL in red) and right-handed Weyl holes at $k_z = -k_0$ (LLL in blue) under a torsional magnetic field. Spectral flow is indicated with the arrows.}
\label{fig:pseudotorsion}
\end{figure}
Since the node is at finite momentum $p_W\neq 0$, the spectral flow summation is likewise centered at $p_W$, with cutoffs at $p_W \pm \Lambda'$, where $\Lambda'$ is set e.g. by the validity of the linear spectrum. For notational convenience and comparison with Eq. \eqref{eq:j_spectral_flow}, we parametrize the momentum cutoff as $\Lambda' = \frac{\Lambda_{\rm rel}^2}{2} p_W$, where $\frac{\Lambda_{\rm rel}^2}{2}\ll 1$ is the dimensionless ratio of the cutoff of the linear spectrum to $p_W$. The spectral flow, where particles and holes simply add at the two nodes, results in the expression
\begin{align}
\frac{1}{e}\partial_\mu(ej^\mu_{5}) = \frac{1}{e}\frac{p_W^2 \Lambda_{\rm rel}^2}{16\pi^2}\levic T^3_{\mu\nu} T^3_{\rho\sigma}
\end{align}
which shows that the NY anomaly cutoff is proportional to the node momentum $p_W$ and is small by the factor $\Lambda^2_{\rm rel}\ll 1$ characterizing the validity of the linear Weyl approximation.
\subsubsection{Comparison of torsion to U(1) fields}
From Figs. \ref{fig:relativistic_TLL} and \ref{fig:pseudotorsion}, we see that the spectrum of torsional LLs resembles the LL spectrum of charged particles in U(1) axial and vector fields, keeping in mind the momentum dependent effective charge coupling to torsion. See Appendix \ref{sec:appendix_EM} for a complete review of the U(1) case for comparison. It is well known that the contribution of torsion for complex chiral Weyl fermions can be equivalently cast in terms of the axial gauge field $\gamma^5 S^{\mu} \equiv \gamma^5 \varepsilon^{\mu\nu\lambda\rho} T_{\nu\lambda\rho}$ corresponding to the totally antisymmetric torsion, see e.g. \cite{ChandiaZanelli97, Soo99}. We stress that while the spectral equivalence of torsional and U(1) LLs is of course expected, the physical appearance of the anomaly is drastically different: the density of states of the LLs depends on momentum, whence the dimensionful coefficient $\Lambda^2$ and the need for an explicit UV cutoff. Similarly, the physics of Figs. \ref{fig:relativistic_spectral_flow} and \ref{fig:pseudotorsion} is completely different, although both arise from spectral flow in momentum space under torsion.
On this note, although the relativistic result \eqref{eq:spectral_flow_anomaly} is familiar, there still seems to be confusion in the literature about the role of torsional Landau levels in momentum space and the validity of the NY anomaly with its explicit UV cutoff. For relativistic Weyl fermions with Lorentz invariance up to arbitrary scales, the spectral flow is symmetric around $p=0$, leading to the conclusion that the anomaly can indeed cancel: in the absence of Lorentz symmetry breaking at high energy, no net transfer between occupied and empty states of the vacuum takes place during the adiabatic spectral flow, cf. Fig. \ref{fig:relativistic_spectral_flow}. A net transfer of $j_5$ requires a left-right asymmetric regularization at the scale $\Lambda$, with chirality disappearing above that scale while maintaining $\partial_{\mu}j^{\mu}=0$ \cite{ParrikarEtAl14}; alternatively, at the very least, there is a divergence as $\Lambda\to\infty$. In contrast, for quasirelativistic Weyl fermions at finite node momentum, with an explicit cutoff to the Weyl spectrum, the spectral flow can terminate due to the non-relativistic corrections at the cutoff scale $\Lambda^2_{\rm rel}$, which also implies that chirality is no longer well-defined there; this leads to a net transport of states and momenta relative to the vacuum (and of other quantum numbers of the Weyl fermions, if present). A related fact is that it is the momentum that plays the role of chirality here, and the momentum remains physically well-defined irrespective of the scale. We also note that the flow is composed of particles and antiparticles (holes) at the different nodes. It would be interesting to study the detailed role of the breakdown of the relativistic spectrum and spectral flow numerically, following Ref. \onlinecite{SukhachovEtAl18}; there, only the charge density at finite chemical potential from the node is analyzed, corresponding to Fig. \ref{fig:B5E}, and the expected deterioration away from the Weyl node is verified.
\section{Chiral Weyl superfluids and superconductors}\label{sec:chiral}
Now we discuss the role of the torsional anomaly in $p$-wave superfluids and superconductors with gap nodes and the associated Weyl-Majorana quasiparticles \cite{Volovik84, Volovik90, VollhardtWoelfle, ReadGreen00, PalumboPachos16, MaranerPachosPalumbo18}. Close to the nodes, the Fermi energy is tuned to the Weyl point due to the existence of the $p+ip$ pairing amplitude. The chiral anomaly is related to the non-conservation of momentum between the condensate and the normal state quasiparticles \cite{BevanEtAl97}. The relation of this to the torsional gravitational anomaly and the LL spectral flow was briefly pointed out in Ref. \cite{Nissinen2019}. Earlier related work can be found in \cite{Volovik85, Volovik1986b, BalatskiiEtAl86, CombescotDombre86, Volovik90, KobayashiEtAl18, IshiharaEtAl19}.
The spinless gap amplitude, with equal spin pairing understood, takes the vector form
\begin{align}
\vek{\Delta} = \frac{\Delta_0}{p_F} (\unitvec{m}+i \unitvec{n}),
\end{align}
where $c_{\perp} = \Delta_0/p_F$ has units of velocity. The direction $\unitvec{l}= \unitvec{m}\times \unitvec{n}$ is a low-energy Goldstone variable of the condensate. At low energy, the direction of $\unitvec{l}$ can fluctuate, and there is a combined U(1) gauge symmetry \cite{LiuCross79} in the $\unitvec{m}$-$\unitvec{n}$ plane, leading to the Mermin-Ho relations between $\unitvec{l}$ and $\vek{v}_s$ \cite{MerminHo76, VollhardtWoelfle, Volovik03}. In the following, we focus on the Landau levels and torsion, keeping the magnitudes of $p_F$ and $\Delta_0$ fixed. Relatedly, for superconductors the end results apply to the case of vanishing electromagnetic potential $A_{\mu}=0$, which amounts to working in the gauge where $\mathbf{v}_s - \vek{A} \to \mathbf{v}_s$. In the following computations we set $\mathbf{v}_s = 0$ as well, since this corresponds to the case with torsion only; see Ref. \onlinecite{Nissinen2019} for the general case with superfluid velocity. The orientation of the orthonormal triad, and thus $\unitvec{l}$, can still rotate in the torsional textures.
Considering first the simple homogeneous case, the linearization of the BdG Hamiltonian takes the form of a Weyl Hamiltonian close to the nodes of $E(\vek{p})$ at $\vek{p}=\mp p_F\unitvec{l}$,
\begin{align}
H_{\rm BdG}(\hat{\vek{p}}) &= \left(\begin{matrix} \epsilon(\hat{\vek{p}}) & \frac{1}{2}\{\hat{\vek{p}},\vek{\Delta}\} \\ \frac{1}{2}\{\hat{\vek{p}},\vek{\Delta}^{\dagger}\} & -\epsilon(-\hat{\vek{p}})\end{matrix}\right) \\
&\approx \pm \tau^a e^i_a(p_i \mp p_{F,i}) .\nonumber
\end{align}
Note that the BdG excitations are Majorana, $\Phi^{\dagger}(\vek{p}) = \tau^1 \Phi(-\vek{p})$, as expected in a BCS paired system. Here we have taken the normal state dispersion $\epsilon(\vek{p}) = \frac{p^2-p^2_F}{2m}$, where $m$ is the $^3$He atom mass. The tetrads are
\begin{align}
e^i_1 = c_{\perp}\unitvec{m}, \quad e^i_{2} = -c_{\perp} \unitvec{n},\quad e^i_{3} =- c_{\parallel}\unitvec{l}, \label{eq:3HeA_tetrad}
\end{align}
where $c_{\parallel} \equiv \frac{p_F}{m} = v_F$. Henceforth, to conform with relativistic notation, we will work with dimensionless tetrads in units where $c_{\parallel} = 1$. The quasiparticle dispersion is $E(\vek{p})=\pm \sqrt{\epsilon(\vek{p})^2 + \vert\vek{\Delta}\cdot\vek{p}\vert^2} \approx \pm \sqrt{c_\parallel^2 q_{\parallel}^2+c_{\perp}^2 q_{\perp}^2}$, with $\vek{q} = \vek{p}-\vek{p}_F$ for the Weyl quasiparticles. The linear expansion is valid when $\abs{\vek{p}-\vek{p}_F} \ll p_F$, which provides an explicit cutoff for the Weyl description, requiring that the remainder
\begin{align}
\frac{1}{2}\frac{\partial^2 \epsilon(\vek{k})}{\partial k^i \partial k^j}\bigg\vert_{\vek{p}_F} (p-p_F)^i (p-p_F)^j = \frac{1}{2m} (\vek{p}-\vek{p}_F)^2 \\
\ll e_a^i (\vek{p}-\vek{p}_F)_i .\nonumber
\end{align}
This leads to the condition, in addition to the trivial $\vert \vek{p}-\vek{p}_F\vert \ll p_F$ from the Taylor expansion of $\epsilon(\vek{p})$, that
\begin{align}
E_{\rm Weyl} \ll m c_{\perp}^2 = \left(\frac{c_{\perp}}{c_{\parallel}}\right)^2 E_{F},
\end{align}
which will prove important later. In particular, the energy cutoff for the Weyl quasiparticles is anisotropic in momenta $\vek{q} = \vek{p}-\vek{p}_F$ around the Weyl point,
\begin{align}
q_{\perp} \ll \left( \frac{c_{\perp}}{c_{\parallel}} \right)p_F, \quad q_{\parallel} \ll \left( \frac{c_{\perp}}{c_{\parallel}} \right)^2p_F, \label{eq:Weyl_momenta}
\end{align}
if we consider the Weyl fermion system in the case where the background fields couple the parallel and perpendicular directions \cite{Nissinen2019}. This happens in the chiral system since the three directions are coupled by $\unitvec{l} = \unitvec{m} \times \unitvec{n}$ and the corresponding Mermin-Ho relations.
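For orientation, in $^3$He-A the anisotropy is extreme: with $c_{\perp}/c_{\parallel}\sim 10^{-3}$, Eq. \eqref{eq:Weyl_momenta} restricts the Weyl description to roughly $q_{\perp}\lesssim 10^{-3}p_F$ and $q_{\parallel}\lesssim 10^{-6}p_F$, as we recall again in Sec. \ref{sec:WSM}.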
\begin{figure}
\centering
\includegraphics[width=200pt]{Kuvaajat/quadratic_red_blue.pdf}
\caption{The torsional LL spectrum for the anisotropic Newton-Cartan model in chiral superfluids/superconductors, with the spectral flow indicated. Note that we have inverted the hole-like right-handed Landau level at $-p_F$, and the spectrum is particle-hole doubled. Overall there is a corresponding factor of 2 from spin degeneracy.}
\label{fig:quadratic_spectrum}
\end{figure}
\subsection{Landau levels in linear approximation}
To compute the LLs in an order parameter texture corresponding to a torsional magnetic field, we can take the ``weak-twist'' texture \(\hat{\mathbf{m}} + i\hat{\mathbf{n}} = \hat{\mathbf{x}} + i\hat{\mathbf{y}} - iT_Bx\hat{\mathbf{z}}\) with \(|T_Bx| \ll 1\), which corresponds to $\unitvec{l} = \hat{\mathbf{z}} + T_Bx\hat{\mathbf{y}}$ \cite{Volovik85, BalatskiiEtAl86, CombescotDombre86}. The BdG Hamiltonian then takes the form
\begin{align}
H_{\rm BdG}& = \begin{bmatrix}
\epsilon(\hat{\vek{p}}) & \frac{1}{2} \{\Delta ^i,\hat{p}_i\}\\
\frac{1}{2} \{\Delta^\dagger\phantom{.}^i,\hat{p}_i\}& -\epsilon(-\hat{\vek{p}})
\end{bmatrix}
\\ =& \begin{bmatrix}
\epsilon(\hat{p}_x, p_y, p_z) & \frac{\Delta_0}{p_F}[\hat{p}_x + i(p_y-T_Bp_z x)]\\
\frac{\Delta_0}{p_F}[\hat{p}_x - i(p_y-T_Bp_z x )]& -\epsilon(-\hat{p}_x,-p_y, -p_z)
\end{bmatrix}. \nonumber
\end{align}
Near the gap node $\vek{p} = -p_F\unitvec{l}$ we may linearize the operator $\epsilon(\hat{\vek{p}})$ as $\epsilon(\hat{\vek{p}}) \approx -v_F\unitvec{l} \cdot(\hat{\vek{p}} + p_F\unitvec{l}) \approx -v_F(p_z+p_F)$. This leads to
\begin{align}
H_+ = e^i_a\tau^a(p_i-p_F e_i^3) = \tau^a (e^i_a \hat{p}_i - p_F\delta^3_a)
\end{align}
with
\begin{align}
e^i_a = (c_\perp\delta^i_1, -c_\perp[\delta^i_2-T_Bx\delta^i_3], -c_\parallel\delta^i_3),
\end{align}
where we remind the reader that \(c_\parallel \equiv v_F\) and \(c_\perp \equiv \frac{\Delta_0}{p_F}\). This corresponds, up to the sign of the field $T_{B}$ and of the tetrad, to the case \eqref{eq:torsion_tetrad} after a rotation in the $x$-$y$ plane.
After moving to scaled coordinates $c_\perp^{-1} x \equiv \Tilde{x}$, $c_\perp^{-1} y \equiv \Tilde{y}$, $c_\parallel^{-1}z \equiv \Tilde{z}$, corresponding to dimensionless and scaled momenta \(p_a \equiv e^i_ap_i\), we can define the annihilation operator \(\hat{a} \equiv \frac{1}{\sqrt{2|T_Bp_z|}}\left[(| T_Bp_z|\Tilde{x} - p_{\Tilde{y}}) + i\hat{p}_{\Tilde{x}} \right]\) to arrive at the Hamiltonian
\begin{align}
H_{p_z<0} = \begin{bmatrix}
p_3+p_F & \sqrt{2|T_Bp_z|}i\hat{a}^\dagger\\
-\sqrt{2|T_Bp_z|}i\hat{a} & -(p_3+p_F)
\end{bmatrix}, \label{eq:H_negative}
\end{align}
which is \eqref{eq:Hmag} after a Galilean boost \(p_3 \to p_3 + p_F\). The eigenstates are then
\begin{equation}
\Psi_{n,p_z<0} = \begin{pmatrix}u_n \phi_n \\ v_n \phi_{n-1}\end{pmatrix}e^{i(p_zz+p_yy)}.
\end{equation}
where $\phi_n \equiv \phi_n(x)$, for $n\geq0$, are harmonic oscillator eigenstates and vanish otherwise. The condition for normalization is \(|u_n|^2 + |v_n|^2 = 1\), corresponding to the BdG particle and hole amplitudes.
Carrying out the corresponding calculation at the Weyl point $\vek{p} = p_F\unitvec{l}$, we have the Hamiltonian
\begin{equation}
H_{p_z>0} = \begin{bmatrix}
p_3-p_F & -\sqrt{2|T_Bp_z|}i\hat{a}\\
\sqrt{2|T_Bp_z|}i\hat{a}^\dagger & -(p_3-p_F)
\end{bmatrix}, \label{eq:H_positive}
\end{equation}
which can be identified as the left-handed Hamiltonian \(H_- = -e^i_a\tau^a p_i\) after a rotation about \(\unitvec{l}\) such that \(\hat{\mathbf{m}} \to -\hat{\mathbf{m}}\) and \(\hat{\mathbf{n}} \to -\hat{\mathbf{n}}\).
Its eigenstates are
\begin{equation}
\Psi_{n,p_z>0} = \begin{pmatrix}u_n \phi_{n-1} \\ v_n \phi_{n}\end{pmatrix}e^{i(p_zz+p_yy)}.
\end{equation}
Depending on the chirality, i.e. the sign of momentum at the node, the LLL is either particle- or hole-like, as in Eq. \eqref{eq:LLL_gaussian}. The conclusion is that the spectrum looks like the relativistic spectrum in Fig. \ref{fig:pseudotorsion} when the linear approximation $\epsilon(\vek{p}) \approx \pm c_{\parallel}(p_z \mp p_F)$ near the nodes is valid, Eq. \eqref{eq:Weyl_momenta}. This corresponds to the spectrum of axial U(1) fields, with momentum dependent charge and density of states per LL. The density of states is \eqref{eq:dos} in the scaled coordinates, which gives, with $e^0_{\mu} = \delta^0_{\mu}$,
\begin{align}
j^0 dV = e j^0 d\Tilde{V}= \frac{|p_zT_B|}{4\pi^2} d\Tilde{V}.
\end{align}
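Equivalently, this is the U(1) Landau level degeneracy $\vert eB\vert/2\pi$ per unit transverse area with the effective charge $e \to p_z$, combined with the one-dimensional measure $dp_z/2\pi$ along the field.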
\subsection{Anisotropic Newton-Cartan model}
We just showed that a simple order parameter texture in the chiral superfluid or superconductor gives rise to torsional LLs for the low-energy Weyl quasiparticles in the linear regime close to the nodes. We can, however, consider the quadratic dispersion beyond the linear approximation,
\begin{align}
\epsilon(\vek{p}) = \frac{\vek{p}^2}{2m}-\mu_F \to \frac{p_z^2}{2m} -\mu_F, \label{eq:NC_dispersion}
\end{align}
which corresponds to the anisotropic Newton-Cartan (Majorana-Weyl) fermion model in Sec. \ref{sec:Newton-Cartan}.
The above model has the same regime of validity in the chiral superfluid or superconductor as the linear approximation, Eq. \eqref{eq:Weyl_momenta}, since it also neglects the rotationally invariant dispersion $\epsilon(\vek{p})$ of the normal state; see also Ref. \onlinecite{Nissinen2019}. The chiral $p$-wave BCS state has the uniaxial anisotropy of Eq. \eqref{eq:NC_dispersion}, however, and this carries over to the low-energy Weyl description in the form of the emergent spacetime. The other benefit of the anisotropic model \eqref{eq:NC_dispersion} is that the LL spectrum can be computed for momenta far from $p_F$, down to $p=0$, corresponding to the filled levels of the non-relativistic Fermi system, which are absent in the relativistic linear model. This is important for the global properties of the chiral spectrum and anomaly. In this way the contribution to the anomalous current from the superfluid vacuum can be analyzed, see Sec. \ref{sec:vacuum_current}.
The spectrum follows simply from Eqs. \eqref{eq:H_negative} and \eqref{eq:H_positive} by the substitution $\mp(p_3\pm p_F) \to \pm\epsilon(\pm p_z)$. From squaring the Hamiltonian, the corresponding eigenvalues at both nodes are
\begin{align}
E_n &= \pm\sqrt{\epsilon(p_z)^2+c_\perp^2|T_Bp_z|2n}, \nonumber \\
E_0 &= \pm \text{sgn}(p_zT_B) \epsilon(p_z).
\end{align}
for \(n\geq 1\). The LLL state retains the Gaussian form \eqref{eq:LLL_gaussian}. Normalization, \(|u_n|^2 + |v_n|^2 = 1\), fixes the particle and hole amplitudes in both cases as
\begin{equation}
u_n = \sqrt{\frac{E_n+\epsilon(p_z)}{2E_n}}, \qquad v_n = i\sqrt{\frac{E_n-\epsilon(p_z)}{2E_n}}.
\end{equation}
With $E_0 = \epsilon(p_z)$ we have $v_0 = 0$, meaning that the lowest level particles appear only for \(p_z < 0\). For \(p_z > 0\), \(u_0 = 0\) when \(E_0 = -\epsilon(p_z)\), so for positive momenta only holes appear at the lowest level, as we found for the linear model. In this case we must, however, remember that the hole spectrum arises due to the Majorana doubling of the BdG spectrum and is not independently physical; this cancels against the corresponding factor of two from spin degeneracy in the Fermi system. This leads to the LL spectrum in Fig. \ref{fig:quadratic_spectrum}.
\subsection{Spectral flow, axial density and consistent anomalous vacuum current} \label{sec:vacuum_current}
Now we are equipped to compute the spectral flow resulting from the torsional Landau levels, corresponding to the covariant torsional NY anomaly. For the anisotropic Newton-Cartan model we can also compute the consistent vacuum current of the condensate, since the dispersion takes into account the filled states below the Fermi level, which is not the case for the linear approximation close to the Weyl nodes. For the chiral superfluid (or superconductor) we have to take into account that the particles are Majorana-Weyl, but a factor of two results from the spin degeneracy.
\subsubsection{Axial density}
The torsional spectral flow leads to the anomalous density
\begin{align}
e j^{0}_{\pm} = \int_{\mp p_F - \frac{p_F \Lambda^2}{2}}^{\mp p_F + \frac{p_F \Lambda^2}{2}} dp^3\, N_{\rm LL}(p_z) = \pm \frac{p_F^2(\frac{c_{\perp}}{c_{\parallel}})^2}{4\pi^2} T_B e^3_z ,
\end{align}
where the cutoff for the Weyl spectrum is taken at $\Lambda^2 = \left(\frac{c_{\perp}}{c_{\parallel}}\right)^2$, corresponding to Eq. \eqref{eq:Weyl_momenta} with the ``$\ll$'' realized as a factor $\frac{1}{2}$. Remarkably, the LL result matches the more general torsional contribution to the NY anomaly including curvature, as implied by the anomalous momentum non-conservation in the system found in Ref. \onlinecite{Nissinen2019}. That result was obtained by matching the anomaly on the emergent background spacetime of the chiral $p$-wave system to the corresponding BCS hydrodynamic result for the superfluid. In particular, including the effects of superflow leads to a spin-connection and curvature perpendicular to $\unitvec{l}$, as required by the Mermin-Ho relations \cite{MerminHo76}.
In the chiral superfluid (or superconductor) the above result holds both for the linear quasirelativistic and for the anisotropic Newton-Cartan spacetime, as defined by the tetrad \eqref{eq:3HeA_tetrad}. This simply follows from the fact that the cutoff for the validity of both models coincides with \eqref{eq:Weyl_momenta}. In this case, therefore, the anisotropic NC model is expected to require the same cutoff as the linear model, since the system is probed also in the perpendicular direction. Morally this happens because $\unitvec{l}=\unitvec{m}\times\unitvec{n}$ makes the triad interdependent \cite{MerminHo76, LiuCross79, Nissinen2019}. Strictly speaking, in the LL model we approximated $\unitvec{l} \approx \unitvec{z}$, which for general non-trivial textures receives higher order corrections \cite{CombescotDombre86}.
\subsubsection{Axial current}
For the non-relativistic anisotropic NC model, however, we can also compute the anomalous vacuum current, corresponding to the anomalous superfluid momentum carried by the filled states below $p_F$ \cite{Volovik85}. The global spectrum has the correct form, valid also outside the vicinity of the Weyl points. The anomalous momentum current is given by
\begin{align}
\vek{j}_{\rm anom,\parallel} = -2 \int^{p_F}_{0} dp^3 N_{\rm LL}(p_z) p_3 = -\frac{p_F^3}{6\pi^2} \unitvec{l}(\unitvec{l} \cdot \nabla \times \unitvec{l}) \label{eq:vacuum_current}
\end{align}
and even extending to $p_z=0$, there is no need for a cutoff; see Fig. \ref{fig:quadratic_spectrum}.
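Explicitly, with $N_{\rm LL}(p_z) = \vert p_z T_B\vert/4\pi^2$ and the factor of two from spin degeneracy, the integral gives $2\int_0^{p_F}dp_z\, p_z^2\, T_B/(4\pi^2) = p_F^3 T_B/6\pi^2$, reproducing the coefficient of \eqref{eq:vacuum_current} upon identifying $T_B$ with $\unitvec{l}\cdot(\nabla\times\unitvec{l})$.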
This is in fact the correct hydrodynamic result for the (weak-coupling) BCS system \cite{VolovikMineev81, Volovik85, CombescotDombre86} to lowest order in gradients, since the final answer for the anomalous vacuum current is sensitive only to the $e_3 = \unitvec{l}$ direction, even in the presence of $\vek{v}_s$ (corresponding to curvature in the perpendicular plane). Upon taking the time derivative of this momentum, the hydrodynamics of the system produces the covariant current implied by the Weyl anomaly. If we assume, without further supporting arguments, that the curvature and torsion contribute to the current \eqref{eq:vacuum_current} as they enter the anomaly Eq. \eqref{eq:NYanomaly}, we get the same result by applying the cutoff \eqref{eq:Weyl_momenta} as above, even in the linear model. We note that these findings are corroborated by the thermal contribution to the NY anomaly, as found in Ref. \cite{NissinenVolovik2019}. The proper inclusion of curvature also ensures that states far away from the Fermi surface do not contribute to the currents.
These considerations beyond the LL spectral flow aside, what we want to emphasize here is that the current \eqref{eq:vacuum_current} corresponds to the consistent anomaly and can be derived from a corresponding Wess-Zumino term, which should be generalized to torsional spacetimes \cite{Volovik1986c, Balatsky87, PeetersWaldron99, Landsteiner16, KurkovVassilevich18, Stone2019b, Copetti20}. See especially \cite{Copetti20}, where the consistent and covariant anomalies are discussed in an anisotropic Lifshitz model closely related to Eq. \eqref{eq:NC_fermion}. We leave the study of the consistent vacuum current from the perspective of gravitational anomalies with torsion for the future.
\section{Strained Weyl semimetals}\label{sec:WSM}
Semimetals with Weyl fermions arise in solid-state systems where the Fermi energy is tuned to a band-crossing in the Brillouin zone \cite{NielsenNinomiya83, WanEtAl11}. The tetrads arise universally via the coefficients of the linear expansion. In this case, the fermions are also charged, leading to the possibility of the U(1) anomaly with electric fields \cite{NielsenNinomiya83}. In addition to the tetrads, related effective background (axial) fields can be considered, with a similar origin as in the chiral superconductor \cite{Volovik03} -- the (constant) shift of the Weyl node in momentum space that leads to the existence of the protected Fermi arc states \cite{Haldane14, Landsteiner16, GrushinEtAl16}. Here we would like to clarify the related but physically distinct torsional contribution to anomalous transport arising from the tetrads in the presence of elastic strains. In fact, due to the universal coupling of the tetrads to momentum \cite{ParrikarEtAl14, ShapourianEtAl15}, as in gravity, one expects that deformations of the (lattice) geometry lead to effects that probe the Weyl fermions via the background tetrads. This framework correctly takes into account the anomalous physics of the momentum dependent fields; see nevertheless \cite{ZhouEtAl13, SunWan14, Fujimoto16, PikulinEtAl16, GrushinEtAl16, HuangEtAl19, FerreirosEtAl19, Stone2019, HuangBo20}.
We start in a roundabout way, first discussing the low-energy Weyl Hamiltonian and then considering a lattice model for a realistic $T$-breaking material.
\subsection{Bloch-Weyl fermions in crystals}
The low-energy Bloch-Weyl Hamiltonian is of the form \cite{NielsenNinomiya83, WanEtAl11, ArmitageEtAl18}
\begin{align}
h_{\pm}(\vek{k}) &= \pm \frac{\sigma^a}{2} (k_a \mp k_{F,a}) + \textrm{ h.c.} \nonumber\\
&= \pm \frac{\sigma^a}{2} e^{i}_{a}(k_i \mp k_{F,i}) +\textrm{ h.c.} .
\end{align}
where now
\begin{align}
e^i_a = \frac{\partial H_{\rm TB}(\vek{k})}{\partial k^a}\bigg\vert_{\vek{k}_F}
\end{align}
are simply the linear coefficients of the expansion of the underlying (tight-binding) Bloch Hamiltonian $H_{\rm TB}(\vek{k})$ near the Weyl nodes. Before we consider lattice deformations in this model, we remark on the interplay of the tetrads and momentum. The lattice momentum is \cite{ShapourianEtAl15}
\begin{align}
\hat{p}_a = -\frac{i}{2a} \sum_{\vek{x}} \left( c_{\vek{x}}^\dagger c_{\vek{x}+\unitvec{a}}- c_{\vek{x}+\unitvec{a}}^\dagger c_{\vek{x}} \right) = \frac{1}{a}\sum_{\vek{k}} \sin (k_a a)\, c^{\dagger}_{\vek{k}}c_{\vek{k}} .
\end{align}
Under non-trivial background fields, the Weyl system itself is anomalous under the lattice translation symmetry, $T_{3} = T_{\unitvec{z}}$, corresponding to the conservation of the lattice momentum $\hat{p}_3$,
\begin{align}
T_{\unitvec{z}}^{\dagger} c_{\pm \vek{k}_F} T_{\unitvec{z}} = e^{\pm i a k_{F,z}}c_{\pm \vek{k}_F} \label{eq:lattice_rotation}
\end{align}
which corresponds to an anomalous chiral rotation of the low-energy Weyl fermions at the $T$-breaking nodes $\pm \vek{k}_F$. Here $c^{\dagger}_{\vek{k}}$ creates the state corresponding to the lattice periodic Bloch state $\vert v_{\vek{k}}\rangle = \vert v_{\vek{k}+\vek{K}} \rangle$, with wave function
\begin{align}
\psi_{\vek{k}}(\vek{x}) = e^{i \vek{k}\cdot \vek{x}}v_{\vek{k}}(\vek{x}).
\end{align}
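Concretely, writing the lattice operator near the nodes as $c_{\vek{x}} \sim e^{i\vek{k}_F\cdot\vek{x}}\psi_{+}(\vek{x}) + e^{-i\vek{k}_F\cdot\vek{x}}\psi_{-}(\vek{x})$ in terms of slowly varying Weyl fields $\psi_{\pm}$, the translation \eqref{eq:lattice_rotation} acts as $\psi_{\pm} \to e^{\pm i k_{F,z}a}\psi_{\pm}$, i.e. as the chiral rotation $\psi \to e^{i k_{F,z}a\gamma^5}\psi$ on the low-energy Dirac spinor.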
In the presence of elastic deformations corresponding to torsion, i.e. phonons, the anomalous chiral symmetry corresponding to translations is manifested as the non-conservation of (lattice) momenta between the Weyl fermions and the background phonons \cite{Nissinen2019, Burkov20}, as found in superfluid $^3$He-A for the $p+ip$-wave paired Fermi liquid \cite{Volovik03}. See also \cite{CortijoEtAl15, FerreirosEtAl19, NissinenVolovikPRR19, Copetti20}.
\subsection{Elastic deformations}
Now consider general lattice deformations. The original unstrained lattice momenta entering the Weyl Hamiltonian are denoted $k_a$, and the deformed lattice momenta are $k_i = e^{\ a}_i k_a$ in the laboratory coordinate system, where $e^{\ a}_{i} \neq \delta^a_i$ to first order in the strains. These couple as expected in the continuum model, as long as we take the lattice model properly into account, as we now recall following \cite{ShapourianEtAl15}; see also \cite{FerreirosEtAl19}. We have the continuum linear strain tensor,
\begin{align}
e^{\ a}_i = \delta^a_i + w^{\ a}_i &= \delta^a_{i}+\partial_i u^a \nonumber\\
e_{\ a}^i = \delta^i_a - w^i_{\ a} &= \delta_a^{i}-\partial_j u^b \delta_{ab} \delta^{ij} \label{eq:continuum}
\end{align}
where $u^a/a \ll 1$, in terms of the lattice constant $a$. This means that $k_{F,a}$ is held fixed, whereas $k_{F,i}$, with $\delta k_{F,i} = w_i^{\ a} k_{F,a}$, is deformed (in the laboratory coordinates). On the lattice this becomes
\begin{align}
k_a \to k_a -w_{\ a}^i \frac{\sin k_i a}{a} \approx e_{\ a}^i k_i, \nonumber \\
k_i \to k_i + w^{\ a}_i \frac{\sin k_a a}{a} \approx e_{i}^{\ a}k_a . \label{eq:lattice}
\end{align}
where $w_{\ a}^i = \partial_j u^{b} \delta_{ab}\delta^{ij}$ is defined above; in the last approximation, the linear approximation for the strain as well as $k_i a \ll 1$, close to the $\Gamma$-point, are used. In addition, we assume that we work at low frequencies corresponding to acoustic phonons, below the Debye energy \cite{ShapourianEtAl15}.
\subsection{Lattice model}
In general, a model for a $T$-breaking Weyl semimetal consists of layered 2D Wilson fermions tuned to a zero energy crossing in three dimensions \cite{Volovik03, SukhachovEtAl18}. For a model of this kind pertaining to a real material, Ref. \cite{PikulinEtAl16} considered a time-reversal invariant $k\cdot p$ model close to the $\Gamma$-point, where the Weyl nodes are at finite momenta, corresponding to four momenta in the Brillouin zone, the minimum for a $P$-breaking system. While the $k\cdot p$ model is realistic, it is more convenient to work with an explicit lattice regularization that produces the same results. In terms of a tight-binding model, they considered
\begin{align}
H_{\rm lat}(\vek{k}) = \epsilon(\vek{k}) + \left(\begin{matrix} h_{\rm lat}(\vek{k}) \\ & -h_{\rm lat}(\vek{k}) \end{matrix}\right), \label{eq:H_latt}
\end{align}
where we focus on the time-reversal odd block $h_{\rm lat}(\vek{k})$ of the $T$-invariant model \cite{Volovik03, PikulinEtAl16, SukhachovEtAl18},
\begin{align}
h_{\rm lat}(\vek{k}) = t_z(M - \sum_{i=x,y,z} c_{i} \cos k_i a) \sigma^3 \\
+ (t_x \sin k_xa ) \sigma^1 + (t_y \sin k_ya) \sigma^2 . \nonumber
\end{align}
For $-1<\frac{M-c_x-c_y}{c_z}<1$ the model $h_{\rm lat}(\vek{k})$ has Weyl points at
\begin{align}
\pm a\vek{k}_F = (0,0,\pm \arccos \frac{M-c_x-c_y}{c_z}),
\end{align}
otherwise it is gapped. The dimensionful tetrads are
\begin{align}
e^i_a(\pm \vek{k}_{F}) = a(t_x, t_y, \pm t_zc_z \sin a k_{F,z})\delta^i_a.
\end{align}
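As a cross-check of the node positions and tetrads, one can expand the Bloch vector of $h_{\rm lat}(\vek{k})$ numerically; the following minimal sketch (our illustration only, with hypothetical parameter values chosen so that $-1<(M-c_x-c_y)/c_z<1$) does this by finite differences:
\begin{verbatim}
# Minimal sketch: Weyl nodes and tetrads of the lattice model.
# Hypothetical parameters; units with lattice constant a = 1.
import numpy as np

a, tz, tperp, cz, cperp, M = 1.0, 1.0, 0.8, 1.0, 1.0, 2.5

def bloch_d(k):
    """h_lat(k) = d(k) . sigma."""
    return np.array([
        tperp * np.sin(k[0] * a),
        tperp * np.sin(k[1] * a),
        tz * (M - cperp * (np.cos(k[0] * a) + np.cos(k[1] * a))
                - cz * np.cos(k[2] * a)),
    ])

kF = np.arccos((M - 2 * cperp) / cz) / a   # nodes at (0, 0, +/- kF)
node = np.array([0.0, 0.0, kF])
print("gap at node:", np.linalg.norm(bloch_d(node)))   # ~ 0

# tetrad e^i_a = d d_a / d k_i at the node (central differences)
eps, e = 1e-6, np.zeros((3, 3))
for i in range(3):
    dk = np.zeros(3); dk[i] = eps
    e[i] = (bloch_d(node + dk) - bloch_d(node - dk)) / (2 * eps)
print(e)  # approx diag(a*tperp, a*tperp, a*tz*cz*sin(a*kF))
\end{verbatim}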
Inversion symmetry $P$ acts as $h_{\rm lat}(\vek{k}) \to \sigma^z h_{\rm lat}(-\vek{k}) \sigma^z$. For simplicity we set $c_z=1$, $c_{x,y} = c_{\perp}$, $t_{x,y} = t_{\perp}$ and assume uniaxial symmetry along $\unitvec{z}$ in the following. We expect \eqref{eq:lattice} to hold for the Weyl semimetal model Eq. \eqref{eq:H_latt}, originating from the $k\cdot p$ model close to the $\Gamma$-point.
For this tetrad we can moreover ignore the difference between lattice and coordinate indices, with $u_{ij} = \frac{1}{2}(\partial_i u_j + \partial_j u_i) + O(u^2)$ the symmetric lattice strain. The strain induces the deformation considered in Refs. \cite{CortijoEtAl15} and \cite{PikulinEtAl16, GrushinEtAl16},
\begin{align}
\delta h_{\rm lat}(\vek{k}) =& - t_z \beta_{\rm el}u_{zz} \sigma^3 \cos ak_z \nonumber\\
&+ t_{\perp}\beta_{\rm el}(u_{xz} \sigma^1+u_{yz} \sigma^2) \sin ak_z
\end{align}
which gives
\begin{align}
\delta e^i_a = a t_z \beta_{\rm el} u_{ii} \delta_a^i \sin (k_Fa) + at_{\perp} \beta_{\rm el} \sum_{i' \neq i} u_{ii'}\delta^{i'}_a \cos (k_F a) \quad (\textrm{no sum over } i)
\end{align}
where $\beta_{\rm el}$ is the Gr\"uneisen parameter. Restricting to a uniaxial strain along the axis of the Weyl node orientation, with the approximation $ak_F\ll 1$,
\begin{align}
e_a^z \to at_z (1+\beta_{\rm el}u_{zz})\delta_{a3} + a t_{\perp} \sum_{j=x,y} \beta_{\rm el}u_{zj}\delta_{a}^j, \nonumber \\
\delta e_3^z = at_z \beta_{\rm el} u_{zz}, \quad \delta e^z_1 = at_\perp \beta_{\rm el} u_{zx}, \quad \delta e^z_2 = a t_{\perp} \beta_{\rm el} u_{zy}.
\end{align}
This has the (dimensionless) inverse tetrad, up to the neglected terms $O(u^2)$ in strains,
\begin{align}
e^1_i &= \unitvec{x}, \quad e^2_i = \unitvec{y}, \nonumber\\
e^3_i &= \unitvec{z}-\beta_{\rm el}\left(\left(\tfrac{t_z}{t_\perp}\right)u_{zx},\left(\tfrac{t_z}{t_\perp}\right) u_{zy},u_{zz}\right) .
\end{align}
This is what we expect based on the corresponding universal continuum limit \eqref{eq:continuum} and the lattice substitution \eqref{eq:lattice} coupling to the geometry, apart from the (non-universal) couplings $\beta_{\rm el}$ and $\left(\tfrac{t_z}{t_\perp}\right)$ between the phonons and the electrons of the lattice model \cite{ShapourianEtAl15}.
Now, in the presence of a non-homogeneous strain $e^3_z$ depending on coordinates and time, torsion $T^3_{\mu\nu}$ and spectral flow arise. The Landau level arguments of Secs. \ref{sec:torsional_LLs} and \ref{sec:chiral} apply for a torsional magnetic field from $u_{zx,zy}(x,y)$ (in the ``symmetric gauge'') and an adiabatic electric field from $u_{zz}(t)$, as in \cite{PikulinEtAl16, GrushinEtAl16}.
\subsection{Torsional density of states in anomalous transport}
Armed with the geometric background fields corresponding to a torsional magnetic field, we can consider the anomaly resulting from the chiral rotation \eqref{eq:lattice_rotation}. The linear Weyl model is valid up to the approximation
\begin{align}
t_z(M - \sum_{i=x,y,z} c_{i} \cos k_i a) &\approx \frac{t_za^2}{2} \bigg[c_{\perp}(k_x^2+k_y^2) +(k_z \mp k_F)^2\bigg] \nonumber\\
&\approx t_z a\, e_3^i(k_i-k_{F,i}) = (t_za \sin k_Fa)\,q_{z},
\end{align}
which is simply restricted by the neglected terms in the remainder of the expansion. Apart from the trivial $q_z \ll k_F \ll 1/a$, we also need
\begin{align}
c_{\perp}(2 - \cos q_x a - \cos q_ya)
\approx& \frac{c_{\perp} a^2}{2} (q_x^2 + q_y^2) = \frac{c_{\perp} a^2}{2}q_\perp^2\\
\ll& \frac{t_x}{t_z}a q_x + \frac{t_y}{t_z}a q_y = \frac{t_{\perp}}{t_z}a q_{\perp}
\end{align}
leading to the constraint $q_{\perp} \ll \frac{2 t_\perp }{c_\perp a t_z}$, meaning
\begin{align}
E_{\rm Weyl} \ll \frac{t_\perp^2}{c_{\perp} t_z},
\end{align}
for the perpendicular direction. We work in the regime where $-1<M-2c_{\perp}<1$ and $\cos k_Fa = M-2c_{\perp} \approx 1$. For the effects of any torsional anomaly from magnetic strain, we can simply evaluate the chiral densities at the nodes,
\begin{align}
n_{\pm}(\Lambda) = ej^0_{\pm} = \int_{\pm k_F(1-\frac{\Lambda^2}{2})}^{\pm k_F(1+\frac{\Lambda^2}{2})} dk^3 N_{\rm LL}(k_z) \nonumber \\
=\mp \frac{k_F^2 \Lambda^2}{4\pi^2}\beta_{\rm el}\left(\tfrac{t_z}{t_\perp}\right)T_B e^3_z .
\end{align}
It is interesting to recall that for the chiral superfluid, while strictly it must be that $\Lambda^2 \ll 1$ since $q_{z} \ll k_F$, we found that the cutoff was parametrically high, ``$\frac{1}{2} \ll 1$'', in terms of the validity of the Weyl description. There, however, due to the orthonormal triad, the perpendicular direction also couples to the transport, with the cutoff of Eq. \eqref{eq:Weyl_momenta}, which in real $^3$He-A is actually $\sim 10^{-6} p_F$.
For the semimetal, the case $q_z \sim \frac{t_{\perp }}{t_{z} \sin k_Fa}q_{\perp} \ll k_F$ arises when assuming that we couple isotropically to the perpendicular directions for general strain field configurations. Plugging in real parameters, we expect that for e.g. Cd$_3$As$_2$, $t_\perp \sim t_z \sin k_Fa$ \cite{PikulinEtAl16}. Another option would be to consider the Newton-Cartan model with quadratic spectrum $M-2c_{\perp}-\cos k_za$ along the Weyl node direction with uniaxial strain only, with the constraint $q_z \ll k_F$. The same model with different parameters also applies to the Dirac semimetal Na$_3$Bi; see \cite{PikulinEtAl16} and references therein.
Independently of whether a torsional electric field $\partial_t e^3_z \neq 0$ or an electric field $E^z$ drives the spectral flow, as in Figs. \ref{fig:B5E} and \ref{fig:B5E5}, this leads to a suppression of the anomalous density by the factor $\Lambda^2$ corresponding to the validity of the linear Weyl approximation, as compared to the Fermi wavevector $k_F$ and the pseudo gauge field in momentum space \cite{PikulinEtAl16, GrushinEtAl16}. We note that this reduction of the anomalous axial density is simply due to the momentum dependent density of states. This, as we have explained, naturally follows from the tetrads and torsion coupling to momenta, and should be contrasted with a U(1) gauge field and a constant density of states, as dictated by the universal minimal coupling and the topology of U(1) gauge fields.
\section{Thermal effects}\label{sec:thermal}
Finally, we briefly recall and discuss thermal contributions to the torsional anomaly. There are two possible effects: i) the small but finite temperature enters the NY anomaly as the scale of thermal fluctuations in momentum space; these are analyzed in \cite{NissinenVolovik2019, NissinenVolovik19b, Stone2019}. ii) There is a related finite thermal gradient in the system, and one computes the thermal response via Luttinger's fictitious gravitational field \cite{Luttinger64}. We note that non-zero timelike torsion for the Luttinger spacetime implies a non-single-valued time coordinate in the fictitious gravitational field \cite{BradlynRead15}. See also \cite{Stone12, GromovAbanov15, Sekine16, RyuEtAl17, ChernodubEtAl18, KobayashiEtAl18}.
Here we focus on the effects of a thermal gradient; the induced currents can be computed by coupling the system to a fictitious spacetime metric, following Luttinger \cite{Luttinger64}. Specifically, we assume a thermal gradient
\begin{align}
\nabla \sigma = -\frac{1}{T}\nabla T
\end{align}
which is equivalent to a weak gravitational potential $g_{00} = 1+2\sigma$ in the system. The perturbation $\delta g_{00}$ couples to the Hamiltonian (energy current) $T^{00}$. In units where the velocity of propagation is $v=1$, the metric is
\begin{align}
ds^2 &= e^{2\sigma}dt^2 - \delta_{ij}dx^i dx^j \\
&\approx (1+2\sigma)dt^2 - \delta_{ij}dx^i dx^j
\end{align}
from which the linear response to the thermal gradient $\sigma$ can be calculated \cite{Luttinger64}. This can be generalized to a metric
\begin{align}
ds^2 &= e^{2\sigma}(dt+e^{-\sigma}N_i dx^i)^2 - \delta_{ij}dx^i dx^j \nonumber \\
&= e^0_{\mu}e^{0}_{\nu} dx^{\mu}dx^{\nu} - \delta_{ij} d x^i dx^j,
\end{align}
now with a small gravimagnetic potential \cite{Volovik03, RyuEtAl17}
\begin{align}
A_{\mu}^{\rm g} = (e^{\sigma},N_i) \approx (1+\sigma, N_i) \equiv e^0_{\mu},
\end{align}
where $N_i$ describes a velocity field in units where $v=1$. The gravitational thermal potential is \cite{Volovik03, RyuEtAl17, KhaidukovZubkov2018}
\begin{align}
-\frac{1}{T}\nabla T = \nabla \sigma - \partial_t N_i, \label{eq:gravimagnetic}
\end{align}
whence
\begin{align}
e^0_{\mu} &= (e^{\sigma}, N_i), \quad e^a_\mu =\delta^{a}_{\mu}, \quad a=1,2,3 \\
e^{\mu}_0 &= (e^{-\sigma},0), \quad e^{\mu}_a = (-e^{-\sigma}N_a,\delta^{i}_{a}), \quad a=1,2,3.
\end{align}
In this case Eq. \eqref{eq:gravimagnetic} becomes
\begin{align}
-\frac{1}{T}\partial_i T = \partial_i \sigma - \partial_t N_i = \partial_{i}e^{0}_{t} - \partial_{t}e^0_{i} = T^{0}_{i t}
\end{align}
where $T^0_{\mu\nu}= \partial_{\mu}e^0_{\nu}-\partial_{\nu}e^0_{\mu}$ is the temporal torsion, assuming zero temporal spin-connection $\omega^0_{\mu b} \equiv 0$. One then expects the possibility of anomalous transport driven by the combination of a thermal gradient and the vorticity $T^0_{ij} = \partial_i N_j -\partial_j N_i$ of the velocity field $N_i(x)$, as in the chiral vortical (and magnetic) effects \cite{KhaidukovZubkov2018, ImakiYamamoto19}.
Now, just as we expect the momentum density at the Weyl node, $(P^{\mu})_{\rm node} = \Pi^{t\mu} = p_F e_3^{i}\delta_{i}^\mu\, e j^{0}_5$ \cite{Nissinen2019}, for Weyl systems at finite $p_{Wa}=p_F\delta_{3a}$, or, since $T^{0\mu} = e\, e^{\mu}_a T^{t a}$,
\begin{align}
e \Pi^{t 3}= \frac{p_F^3 \Lambda^2}{16\pi^2} e^3_\mu e_3^i \delta_i^{\mu} \epsilon^{0\nu\lambda\rho} e^3_{\nu} T^3_{\lambda\rho}
\end{align}
we expect an energy density of the form
\begin{align}
J^{t}_{\epsilon} = eT^{t}_{\ 0} = p_F e j^0_5= \frac{p_F T^2}{12v^2} \epsilon^{tijk} e_{i}^0 T^0_{jk}
\end{align}
where $T^{\mu}_{\ a} \equiv \frac{1}{e}\frac{\delta S}{\delta e^a_\mu}$ is the energy-momentum tensor. The anomaly of this current would be proportional to $T\nabla T$ and is indeed reminiscent of the chiral vortical effect \cite{GromovAbanov15, KhaidukovZubkov2018}. We can also expect mixed terms, in the sense that there should be a corresponding energy current from \emph{both} the momentum density and the thermal current, $\partial_t e^i_3 \neq 0$, at the node
\begin{align}
J^i_{\epsilon} = e T^i_{\ 0} = \frac{p_F T^2}{6v^2} \epsilon^{0ijk} e^3_j T^0_{0k} + \frac{p_F T^2}{12v^2} \epsilon^{0ijk} e_t^0 T^{3}_{jk} .
\end{align}
These ``mixed'' contributions to the currents were identified and discussed in Ref.~\cite{LiangOjanen19b}.
The message we want to convey here is that one can indeed expect anisotropic and ``mixed'' contributions to the torsional anomalies, in the sense that the Lorentz invariant cutoff $\Lambda^2\eta_{ab}$ is replaced by a generalized anisotropic tensor $\Lambda_a \Lambda_b$, in various condensed matter systems depending on the symmetries, perturbations and cutoffs. We leave the detailed discussion of such thermal gravitational contributions for the future; see however \cite{Stone2019, LiangOjanen19b} and the general discussion in \cite{NissinenVolovik2019}.
\section{On the relation of emergent torsion and pseudo gauge fields} \label{sec:comparison}
Here we summarize our findings in relation to earlier literature, where the momentum space field corresponding to the shift of the node is often considered as an axial gauge field \cite{Volovik85, Volovik03, CortijoEtAl15, Fujimoto16, PikulinEtAl16, GrushinEtAl16, SukhachovEtAl17, HuangEtAl19, FerreirosEtAl19, IlanEtAl19}. We note that torsion can be shown to enter as an axial gauge field constructed from the totally antisymmetric torsion $\gamma^5S^{\mu} =\epsilon^{\mu\nu\lambda\rho}T_{\nu\lambda\rho}$ \cite{ChandiaZanelli97, Soo99} coupling to the momentum. This is essentially what we found in Secs. \ref{sec:torsional_LLs} and \ref{sec:chiral} with the momentum space dependent LL density of states. The LL calculation and the anomaly itself should be performed by taking this momentum dependence into account, as we have done here.
How are tetrads with torsion otherwise different from the momentum gauge field? The symmetries corresponding to the tetrads are translations, which for finite node momenta, requisite for condensed matter Weyl fermions, correspond to the anomalous chiral symmetry. There is no local gauge symmetry corresponding to the Berry curvature in momentum space. On the other hand, the geometric formulation is suited for such translation symmetries and reveals the background geometry of the spacetime emerging from the node \cite{Horava05}. The overall geometry can be made consistent with the non-relativistic symmetries away from the Weyl node for a finite momentum range. For the anomalous axial density and anomaly, this leads to a parametric suppression compared to the U(1) anomaly and the UV scale $p_W$. The phenomenological implications of this are significant, even without the theoretical recourse to the emergent geometry.
We also note that Ref. \cite{FerreirosEtAl19} discusses torsion (and the conservation of momentum) in strained semimetals in terms of a model with both the axial gauge field from the node and the tetrad with elastic deformations. While such a ``splitting'' between low-energy and high-energy momenta is in principle allowed, it makes the consideration of the momentum dependent anomalies more involved, with the danger of double counting. The momentum anomaly (without EM gauge fields) should be proportional to $k_W \partial_{\mu}(e j^{\mu}_5)$, as found in \cite{Nissinen2019}.
The original paper \cite{ShapourianEtAl15} for elastic deformations takes an explicitly geometrical viewpoint which nicely connects with the strain-induced tetrad formalism proposed here. In the simplest possible terms, we start with the Weyl (or Dirac) Hamiltonian in flat space with the small deformation $e^i_a = \delta^i_a+\delta e_a^i$,
\begin{align}
H_{+} = \sigma^a(\hat{k}_a - k_{Wa}) &\to \frac{\sigma^a}{2} e^i_a (\hat{k}_i - k_{Wi}) + \textrm{ h.c.} \nonumber\\
&= \frac{\sigma^a}{2} (e_a^i k_i - k_{Wa}) + \textrm{ h.c.} \\
&\approx \frac{\sigma^a}{2} ([\delta_a^i + \delta e^i_a] q_i + k_W\delta e^i_a) + \textrm{ h.c.} \nonumber
\end{align}
where now $k_W \delta e^i_a =-k_W \delta e^a_i$ is the momentum space gauge field in the Hamiltonian with (almost) constant tetrads \cite{Volovik85, BalatskiiEtAl86, ShapourianEtAl15, PikulinEtAl16, GrushinEtAl16, FerreirosEtAl19}. The right-hand side is the Hamiltonian in coordinate (or laboratory) space, which is the one we have experimental access to, and is deformed with respect to the orthogonal frame of $k_a$. We see that the momentum $\hat{k}_i$ couples to $e^{i}_a$, as expected, and the shift is essentially constant in the Hamiltonian, in the sense that $k_{Fa}$ retains its constant, undeformed value irrespective of the deformation. At the same time, the laboratory value changes as $k_{Fi} = e^a_i k_{Fa}$. In the examples we considered, the chiral superfluid and superconductor, we explicitly have $k_{F,i}=p_F e^3_i$, giving $k_{Fa} = p_F\delta^3_a$. Similarly, for the strained semimetal we consider the originally unstrained lattice Fermi wave vector $k_{Fa}(x) \to k'_{Fa}(x+u) \approx k_{Fa}(x) + \partial_i u^a k_{Fa}(x) \equiv e_i^a k_{Fa}$ under strain $x' = x+u$, giving Eq. \eqref{eq:continuum} as expected.
What this means more generally is that $\nabla k_{Fa}=0$ in terms of the connection corresponding to the emergent spacetime, as discussed in Sec. \ref{sec:spacetimes}. In fact this is one of the requirements for the consistent assignment of the low-energy geometry. On the other hand, all the torsional spacetimes we considered are in some sense abelian (or gravitoelectromagnetic), since the relevant fields can be identified as abelian gauge fields in momentum space, amounting to what was called the ``minimal coupling'' trick in \cite{ParrikarEtAl14,ShapourianEtAl15}. In this case, however, the gravitational character is still evident in the momentum dependent charge and density of LLs, as expected for gravitational response coupling to momenta and energy densities, including thermal effects.
\section{Conclusions and outlook}\label{sec:conclusions}
In this paper, we have argued for the emergence of non-zero torsional anomalies in Weyl (and Dirac) systems with simple Landau level arguments. In particular, we were motivated by the possibility of non-zero torsional Nieh-Yan anomalies in condensed matter systems with an explicit cutoff and the lack of relativistic Lorentz symmetries. For the anomaly, the spectral flow in the presence of torsion clearly renders non-zero results for Weyl nodes at finite momentum. Although obtained with specific simple field configurations corresponding to the torsion with Landau level spectra, they are expected to generalize covariantly in terms of the relevant spatial symmetries of the system. We discussed two idealized spacetimes related to these symmetries, the linear Riemann-Cartan and the anisotropic Newton-Cartan spacetime with quadratic dispersion.
We also briefly discussed thermal torsion via Luttinger's fictitious spacetime, since we can expect mixed anomalies already from the inclusion of thermal gradients. This connects to gravitational anomalies and transport in general \cite{NissinenVolovik2019}. The recent results on universal anomaly coefficients in linear-response thermal transport related to gravitational anomalies \cite{Landsteiner11, LoganayagamSurowka12, JensenEtAl13, Landsteiner2014, LucasEtAl2016, StoneKim18} are closely related. From the non-universal torsional anomaly, via e.g. the momentum dependent LL density of states, the expected gravitational anomaly polynomials at finite temperature arise already at the level of linear response, from the universality of IR thermal fluctuations \cite{NissinenVolovik2019}. Moreover, we expect that the emergent tetrads with coordinate dependence arise rather generally in any Weyl system, making sense of evaluating the linear response to these, even in flat space.
We clarified the relation between momentum space pseudo gauge fields and the emergent tetrads. It is important to realize that the spectral or Hamiltonian correspondence between torsion and U(1) magnetic fields, e.g. in a Landau level problem, is not yet enough for the anomalies to match in general. The simple LL spectral flow argument is enough to identify the non-universal cutoff appearing in the NY anomaly term. The message is that low-energy tetrads and geometry couple to the momentum in a universal way, even in lattice models with some caveats \cite{ShapourianEtAl15, CortijoEtAl15}, due to the non-universal coupling of the lattice phonons and fermions as compared to the pure continuum. The UV scales appearing in the termination of anomalous chiral transport from such emergent fields, related to the Fermi-point momentum $p_W$ and the regime of validity of the effective Weyl/Dirac description, are naturally understood from the geometric perspective. In the presence of both independent U(1) fields and momentum space tetrads we should also expect many mixed terms, as studied e.g. in \cite{KubotaEtAl01, ParrikarEtAl14}. The mixed torsional anomalies should also be carefully reconsidered with regard to finite node momentum, where we again expect differences to relativistic fermions. On this note, our results for the anomaly at finite momentum are in contrast to \cite{HuangBo20}, where a model with torsion is compared to a relativistic model at $p=0$ with pseudo gauge fields, without consideration of the node momentum coupling to the torsion or the cutoff of the quasirelativistic dispersion.
More formally, what we did amounts to applying the $K$-theory theorem of Horava \cite{Horava05} to the geometry of specific Weyl nodes in three dimensions, by keeping track of the UV symmetries and scales in the problem for the precise form of the emergent geometry and fields coupling to the quasiparticles. The topology only guarantees the effectively Dirac-like spectrum, with everything else depending on the microscopics.
Many interesting avenues remain in the geometric description of topological condensed matter systems with gapless fermions, including nodal line systems \cite{NissinenVolovik2018, Schnyder20}. It would be extremely interesting to study the gravitational anomalies in Weyl and Dirac systems from the global symmetry perspective with many Weyl nodes, taking into account the relevant space group symmetries \cite{CortijoEtAl15, Manes12, JuricicEtAl12, SlagerEtAl13, RaoBradlyn20}. More generally, the appearance of low-energy quasirelativistic fermions with exotic geometric backgrounds within feasible experimental reach is expected to give more insight also into the physics of relativistic gravitational anomalies with torsion \cite{ChandiaZanelli97}, although the symmetries and status of the background fields are dramatically different.
\emph{Acknowledgements. ---} We thank Z.-M. Huang for correspondence on his work, T. Ojanen and P.O. Sukhachov for discussions. Finally we especially thank G.E. Volovik for discussions, support and collaborations on related subjects. This work has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement no. 694248).
\section{Data Specifications Table}
\begin{table}[htb]
\centering
\footnotesize
\label{DataSpecificationTable}
\begin{tabular}{|l|p{10cm}|}
\hline
\textbf{ Subject }& Management of Technology and Innovation. \\\hline
\textbf{ Specific subject area }& A focus area maturity model for API management. \\\hline
\textbf{ Type of data }& Text, literature references, and tables. \\\hline
\textbf{ How data were acquired }& Systematic literature review and expert interviews. \\\hline
\textbf{ Data format }& Raw, analyzed, and evaluated. \\\hline
\textbf{ Parameters for data collection }& The collected practices had to fit strict requirements in terms of having to be executable, implementable, and easily understandable by practitioners that are involved with API management within their organization. \\\hline
\textbf{ Description of data collection }& The initial data was collected through an SLR \cite{mathijssen2020identification}. Initially, the data was grouped according to topical similarity. Practices were categorized, analyzed and verified through discussion sessions with all involved researchers, inter-rater agreement and information gathered from grey literature. Capabilities and practices were then evaluated through 11 expert interviews. For information on the selection of the practitioners, we refer to the related research article \textit{(to be published)}. Practices that at least two practitioners found relevant and useful became part of the collection. Additionally, six discussion sessions among the researchers were conducted, during which all suggested changes (i.e. removal, addition, and relocation of practices and capabilities) were discussed, interpreted, and processed. The resulting practices and capabilities were then evaluated with 3 experts who were previously interviewed.
Finally, five case studies were conducted to evaluate different software products.
\\\hline
\textbf{ Data source location }& All included source literature can be reviewed in the associated research article~\cite{mathijssen2020identification}. \\\hline
\textbf{ Related research article }& Mathijssen, M., Overeem, M., \& Jansen, S. (2020). Identification of Practices and Capabilities in API Management: A Systematic Literature Review. arXiv preprint arXiv:2006.10481.\\\hline
\end{tabular}
\end{table}
\onecolumn
\section{Introduction}
\label{sec:introduction}
This data set describes the API Management Focus Area Maturity Model (API-m-FAMM).
The model supports organizations that expose their API(s) to third-party developers in performing their API management activities in a structured manner.
Using the API-m-FAMM, organizations may assess, evaluate, and improve upon the degree of maturity of their business processes regarding API management.
We define API Management as an activity that enables organizations to design, publish and deploy their APIs for (external) developers to consume. API Management encompasses capabilities such as controlling API lifecycles, access and authentication to APIs, monitoring, throttling and analyzing API usage, as well as providing security and documentation.
\begin{itemize}
\item The data may be used by API management researchers for evaluation, validation and extension of the model.
\item The data can be used by focus area maturity researchers to establish the vocabulary used in the field.
\item The data can be used by researchers as a basis for future research work in the domains of API management, versioning and evolution.
\item The data is reusable by consultants and practitioners to assess whether they have implemented a practice fully.
\end{itemize}
The research approach is explained in Section~\ref{sec:design}.
Section~\ref{sec:apimfamm} describes the final API-m-FAMM in full detail.
The different intermediate versions are described in Sections~\ref{sec:version01}, \ref{sec:version02}, \ref{sec:version03}, \ref{sec:version04}, \ref{sec:version05}, and \ref{sec:version10}.
\section{Experimental Design, Materials, and Methods}
\label{sec:design}
The Focus Area Maturity Model is constructed using the design methodology of \cite{van2010design} and \cite{de2005understanding}.
The development of the FAMM is done in five phases: \emph{Scope}, \emph{Design}, \emph{Populate}, \emph{Test}, and \emph{Deploy}.
These phases are executed through a SLR, expert interviews, case studies, and numerous discussions among the authors.
Between the execution of every method, the authors discussed the state of the model until consensus was reached on its contents and structure.
This was done using online \textit{Card Sorting}~\citep{nielsen1995}, with \textit{Google Drawings} as a tool.
Figure~\ref{fig:research-steps} shows which methods were used in each phase, by linking them to the different intermediate versions of the API-m-FAMM.
The intermediate versions including a changelog are described in Sections~\ref{sec:version01}, \ref{sec:version02}, \ref{sec:version03}, \ref{sec:version04}, \ref{sec:version05}, and \ref{sec:version10}.
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=1.0cm 12.5cm 2.1cm 0.8cm, width=\textwidth]{Figures/ResearchApproach.pdf}
\caption{The steps that were executed in constructing the API-m-FAMM and its various intermediate versions.}
\label{fig:research-steps}
\end{figure*}
\subsection{Scope, Design, Populate Phases}
The initial data was acquired through the SLR as described in \cite{mathijssen2020identification}.
Based on this SLR, a primary source was chosen~\cite{de2017api}.
Using this source as a starting point, the scope of the API-m-FAMM was determined and the initial model was constructed (\textbf{version 0.1}, Section~\ref{sec:version01}).
Subsequently, the SLR was used to populate the model, which resulted in a FAMM consisting of 114 practices and 39 capabilities that are categorized into 6 focus areas (\textbf{version 0.2}, Section~\ref{sec:version02}).
These practices and capabilities were then analyzed and verified through four validation sessions with all involved researchers, inter-rater agreement and information gathered from grey literature, such as online blog posts, websites, commercial API management platform documentation and third-party tooling (\textbf{version 0.3}, Section~\ref{sec:version03}).
\subsection{Test Phase}
The API-m-FAMM underwent two evaluation cycles.
First, 11 semi-structured interviews with experts were conducted.
During these interviews, experts were asked whether they agree with the inclusion of practices, capabilities, and focus areas as part of the API-m-FAMM, as well as whether they could suggest the addition of any new practices or capabilities.
Additionally, practices were ranked by these experts in terms of their perceived maturity in order to determine their respective maturity levels.
As a result of these interviews, many suggestions were made to either move practices to a different capability, remove them entirely, rename them, or newly add practices.
These suggestions were then analyzed, processed, and discussed through 6 discussion sessions with all involved researchers.
As a result, the model was quite substantially modified, with the existing body of practices and capabilities being narrowed down to 87 practices and capabilities, as well as numerous focus areas, capabilities, and practices being renamed.
Additionally, all practices were assigned to individual maturity levels within their respective capabilities (\textbf{version 0.4}, Section~\ref{sec:version04}).
The second evaluation cycle consisted of three unstructured interviews with experts originating from the sample of experts that were interviewed during the first evaluation cycle.
During these interviews, the changes made as a result of the previous evaluation cycle, as well as the newly introduced maturity assignments were presented and discussed.
Additionally, experts were asked to evaluate the model again with regards to the same criteria used in the first cycle.
The API-m-FAMM was not significantly changed after this second cycle (\textbf{version 0.5}, Section~\ref{sec:version05}).
\subsection{Deploy Phase}
Finally the API-m-FAMM was used to evaluate five different software products.
The evaluation was done by using a \emph{do-it-yourself} kit, which is available on \url{https://www.movereem.nl/api-m-famm.html}.
These evaluations led to some minor changes (\textbf{version 1.0}, Section~\ref{sec:version10}).
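To illustrate how such a self-assessment may be scored, the sketch below assumes the usual focus area maturity model convention that a capability's achieved maturity is the highest level for which all practices up to and including that level are implemented; the code and the selection of practices are our illustration and not part of the published kit.
\begin{verbatim}
# Illustrative scoring sketch (not part of the published kit).
# Convention assumed: a capability's achieved maturity is the highest
# level L such that every practice with level <= L is implemented.
def capability_maturity(practice_levels, implemented):
    achieved = 0
    for level in sorted(set(practice_levels.values())):
        at_level = [p for p, l in practice_levels.items() if l == level]
        if all(p in implemented for p in at_level):
            achieved = level
        else:
            break
    return achieved

# Version Management (1.1); the third code field is the maturity level.
levels = {"1.1.2": 2, "1.1.5": 5, "1.1.6": 6, "1.1.7": 7}
print(capability_maturity(levels, {"1.1.2", "1.1.5"}))  # -> 5
\end{verbatim}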
\section{API-m-FAMM}
\label{sec:apimfamm}
The API-m-FAMM and the practices and capabilities it consists of are divided into six focus areas. The focus areas are not equal in size, with the smallest focus area consisting of 2 capabilities and 11 practices, while the largest is composed of 5 capabilities and 18 practices. This is caused by the fact that the topic of API management is broad and not evenly distributed across its domains. For example, the \textit{Community} and \textit{Lifecycle Management} focus areas that are described below contain many practices, while \textit{Observability} is a domain consisting of a small but relevant number of practices and capabilities.
We have defined capabilities as the ability to achieve a goal related to API management through the execution of two or more interrelated practices. Combined, these practices and capabilities form the focus areas, which describe the functional domains that the topic of API management is composed of. A practice is defined as an action that has the express goal to improve, encourage, and manage the usage of APIs. Furthermore, the practice has to be executable, implementable and verifiable by an employee of the organization.
Each individual practice is assigned to a maturity level within its respective capability. As mentioned earlier, these maturity levels were determined by having experts rank the practices according to their perceived maturity within their respective capabilities. Additionally, they were asked whether they could identify any dependencies with regard to the implementation of other practices. Practices cannot depend on practices belonging to another capability that have a higher maturity level. For example, practice 1.1.6 is dependent on the implementation of practices 1.3.3 and 4.2.3, resulting in a higher overall maturity level being assigned to this practice. The API-m-FAMM in its entirety, including the maturity level that each practice has been assigned to, is depicted visually in Figure~\ref{fig:api-m-famm}.\\
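This dependency constraint is easy to check mechanically. The following minimal sketch (our illustration; the dependency dictionary encodes only the example just given) verifies that a practice's maturity level is never below that of the practices it depends on:
\begin{verbatim}
# The third field of a practice code is its maturity level.
def level(code):                  # "1.1.6" -> 6
    return int(code.split(".")[2])

dependencies = {"1.1.6": ["1.3.3", "4.2.3"]}  # example from the text

for practice, deps in dependencies.items():
    assert all(level(practice) >= level(d) for d in deps), practice
\end{verbatim}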
Section~\ref{subsec:areas} describes and defines the focus areas and capabilities. Section~\ref{subsec:practices} details the practices. Practices are described by using the following elements:
\begin{itemize}
\item \textbf{Practice code -} The practice code is made up of three numbers. The first number concerns the focus area, the second number the capability, and the third number the maturity level it has been assigned to.
\item \textbf{Practice -} The name of the practice, as it is mentioned in the API-m-FAMM.
\item \textbf{Focus area -} The focus area is mentioned to indicate the domain in which this practice is relevant.
\item \textbf{Description -} A paragraph of text is provided to
describe the practice in detail. The main reason for providing a lengthy description is internal validity: in future evaluations by third parties, they should be able to perform the evaluations independently.
\item \textbf{When implemented -} Provides a series of necessary conditions before this practice can be marked as implemented. Again, to strengthen internal validity of the API-m-FAMM.
\item \textbf{Literature -} Several references are included to articles that mention the practice. The literature can be found in the SLR~\cite{mathijssen2020identification}. References may also consist of online blog posts, websites, commercial API management platform documentation and third-party tooling.
\end{itemize}
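For readers who wish to work with the data programmatically, a minimal record type mirroring the elements above could look as follows (a sketch; the field names are ours and not part of the published dataset):
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class Practice:
    code: str          # e.g. "1.1.6": focus area, capability, maturity
    name: str
    focus_area: str
    description: str
    when_implemented: List[str] = field(default_factory=list)
    literature: List[str] = field(default_factory=list)
\end{verbatim}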
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=0.5cm 0.5cm 0.5cm 0.5cm, width=\textwidth]{Figures/API-m-FAMMv1.0.pdf}
\caption{The API-m-FAMM model, showing all six focus areas, the capabilities, and the practices regarding API management. The columns correspond with the maturity level of the practice. }
\label{fig:api-m-famm}
\end{figure*}
\newpage
\subsection{Focus Areas \& Capabilities}
\label{subsec:areas}
\begin{enumerate}
\item \textbf{Lifecycle Management}: Generally speaking, an API undergoes several stages over the course of its lifetime: creation, publication, realization, maintenance and retirement \citedata{medjaoui2018continuous}. In order to control and guide the API through these stages, the organization must be able to perform a variety of activities. In order to maintain the API, the organization must decide on a versioning strategy, notification channels and methods in case of updates, as well as decouple their API from their application. In doing so, the organization is able to manage and maintain the versions the API goes through as it evolves over time.\\
\begin{enumerate}
\item [1.1] \textit{Version Management}: APIs evolve over time with newer business requirements. In order to cope with this, the organization should have a versioning strategy in place, such as managing multiple versions of an API to support existing consumers, or avoiding breaking changes as part of an evolutionary strategy. Additionally, the organization should be able to deprecate and retire older versions of their API smoothly. With proper notice and a migration period, deprecated APIs should be retired and removed so as to avoid any maintenance overheads \citedata{de2017api}. In order to guide this process, the organization may also have a deprecation protocol in place.
\item [1.2] \textit{Decoupling API \& Application}: When an organization creates an API to expose its data and services, it needs to ensure that the API interface is intuitive enough for developers to easily use \citedata{de2017api}. However, the interface for the API will most likely be different from that of the back-end services that it exposes. Therefore, the organization should be able to transform the API interface to a form that the back end can understand.
\item [1.3] \textit{Update Notification}: Changes made to an API may adversely affect its consumers. Hence, consumers must be notified of any planned updates of the API \citedata{de2017api}. The organization should have the ability to inform developers using the API of any changes by distributing change logs, using a communication channel such as email or the developer portal, or preemptively through the use of warning headers or a versioning roadmap.\\
\end{enumerate}
\item \textbf{Security}: APIs provide access to valuable and protected data and assets \citedata{de2017api}. Therefore, security for APIs is necessary to protect the underlying assets from unauthenticated and unauthorized access. Due to the programmatic nature of APIs and their accessibility over the public cloud, they are also prone to various kinds of attacks. Hence, the organization should undertake various measures to prevent this from happening. For example, one of many available authentication and authorization protocols should be implemented, prevention for attacks such as DoS or SQL script injection attacks should be in place and sensitive data should be encrypted or masked.\\
\begin{enumerate}
\item [2.1] \textit{Authentication}: Authentication is the process of uniquely determining and validating the identity of a client \citedata{de2017api}. In order to achieve this, the organization may implement an authentication mechanism such as API keys or protocols such as WSS or OpenID Connect, or the Single Sign-on method.
\item [2.2] \textit{Authorization}: Authorization controls the level of access that is provided to an app making an API call and controls which API resources and methods that can invoke \citedata{de2017api}. The organization may implement authorization through access control or an industry-standardized authorization protocol such as OAuth 2.0.
\item [2.3] \textit{Threat Detection \& Protection}: The likelihood of bad actors making attacks using malicious content is high, in addition to common threats such as DoS attacks. Content-based attacks can be in the form of malformed XML or JSON, malicious scripts, or SQL within the payload \citedata{de2017api}. Therefore, the organization should be able to detect malformed request formats or malicious content within the payload and then protect against such attacks.
\item [2.4] \textit{Encryption}: Oftentimes, message payloads sent in API calls contain sensitive information that can be the target for man-in-the-middle attacks \citedata{de2017api}. Therefore, the organization should secure all communication between the client app and the API service by using techniques such as TLS encryption by default. Furthermore, it is desirable for the organization to prevent exposure of sensitive data by utilizing methods such as masking or hashing.\\
\end{enumerate}
\item \textbf{Performance}: APIs are no longer exclusively seen as mechanisms for integration but have become mainstream for the delivery of data and services to end users through various digital channels \citedata{de2017api}. This increases the demand on APIs to perform well under loads. The overall performance of a client app is dependent on the performance of the underlying APIs powering the app. Hence, the importance of performance for APIs increases greatly. In order to ensure performance and stability of their APIs, organizations must be able to perform various activities. For example, enabling consumers to implement caching improves an API's performance through reduced latency and network traffic. Additionally, using rate limiting and throttling mechanisms to manage traffic and using load balancing to route traffic more effectively also improves the API's performance.\\
\begin{enumerate}
\item [3.1] \textit{Resource Management}: In order to improve the performance of their API(s), it is important for an organization to effectively manage the available resources. This may be accomplished through the use of mechanisms such as load balancing, scaling, or by having failover policies in place.
\item [3.2] \textit{Traffic Management}: Another aspect of improving API performance is effectively managing incoming traffic. In order to do so, the organization may choose to implement mechanisms such as caching, rate limiting or throttling, or to prioritize traffic based on customer characteristics; a minimal rate-limiting sketch is given directly after this list.\\
\end{enumerate}
\item \textbf{Observability}: As an organization, it is necessary to have insight into the API program to make the right investments and decisions during its maintenance. Through various monitoring techniques, the organization is able to collect metrics which can shed light on the API's health, performance and resource usage. In turn, these metrics may be aggregated and analyzed to improve the decision making process on how to enhance the business value by either changing the API or by enriching it \citedata{de2017api}. Additionally, by being able to log API access, consumption and performance, input may be gathered for analysis, business value or monetization reports. These may be used to strengthen communication with consumers and stakeholders or check for any potential service-level agreement violations.\\
\begin{enumerate}
\item [4.1] \textit{Monitoring}: As an organization, it is important to be able to collect and monitor metrics and variables concerning the exposed API. For example, information regarding the health and performance of the API, as well as resources used by the API should be monitored so that it may be used as input for activities such as generating analysis reports and broadcasting the API's operational status.
\item [4.2] \textit{Logging}: In monitoring their API(s), it is helpful for the organization to be able to perform logging of consumer behavior and activities. This may include logging of API access, usage and reviewing historical information.
\item [4.3] \textit{Analytics}: As an organization, it is important to be able to analyze the metrics and variables that are collected through monitoring. For example, information regarding the health and performance of the API may be utilized to decide which features should be added to the API. Additionally, it is desirable for the organization to be able to extract custom variables from within the message payload for advanced analytics reporting.\\
\end{enumerate}
\item \textbf{Community}: As an organization exposing APIs for external consumers and developers to consume, it is often desirable to foster, engage and support the community that exists around the API. For example, this entails offering developers the ability to register for the API and offering them access to test environments, code samples and documentation. Additionally, the organization may support developers in their usage of the API by offering them support through a variety of communication channels and allowing them to communicate with the organization or among one another through a community forum or developer portal. Furthermore, it is desirable for developers to be able to freely browse through the API offering, review operational status updates regarding the API, create support tickets in the event of an error and to share knowledge, views and opinions with other developers.\\
\begin{enumerate}
\item [5.1] \textit{Developer Onboarding}: To start consuming APIs, developers must first register with the organization that is providing them. The sign up process should be simple and easy, possibly by supporting developers with resources such as (automatically generated) SDKs and testing tools such as an API console or sandbox environment.
\item [5.2] \textit{Support}: In order to strengthen the community around the API, the organization should support developers who are consuming it. This may be accomplished by establishing an appropriate communication channel, adequately managing issues and handling errors, should they present themselves.
\item [5.3] \textit{Documentation}: API documentation can help speed up the adoption, understanding and effectiveness of APIs \citedata{de2017api}. Hence, the organization must provide consumers of their API(s) with reference documentation. Additionally, they may be supplied with start-up documentation, code samples and FAQs to further accelerate understanding of the API.
\item [5.4] \textit{Community Management}: Oftentimes, app developers wish to know the views of other developers in the community. They may want to collaborate and share their API usage learnings and experiences with one another \citedata{de2017api}. In order to facilitate these wishes, the organization may choose to provide developers with a community forum or developer portal.
\item [5.5] \textit{Portfolio Management}: As an API providing organization, a platform to publicize and document APIs is needed. Hence, a discoverable catalog of APIs through which potential consumers are able to browse may be provided.\\
\end{enumerate}
\item \textbf{Commercial}: Organizations have been consuming third-party APIs to simplify and expand business partnerships. APIs provide faster integration and an improved partner/customer experience, enabling organizations to grow rapidly \citedata{de2017api}. Oftentimes, exposing and consuming APIs has a commercial aspect tied to it. For API consumers and providers, this is often embodied by legal business contracts for the use of the APIs which they are bound to. These business contracts called service-level agreements govern the service levels and other aspects of API delivery and consumption. Another commercial aspect of API management is that of monetization. Considering APIs provide value to the consuming party, organizations often opt to monetize the services and APIs and build a business model for them \citedata{de2017api}. Utilizing the right monetization model for APIs enables organizations to reap the benefits of their investment in their APIs.\\
\begin{enumerate}
\item [6.1] \textit{Service-Level Agreements}: A service-level agreement (SLA) defines the API’s non-functional requirements, serving as a contract between the organization and consumers of their API. As such, the organization should ensure that the consumer of their API agrees with the SLA's contents. These may include matters such as terms and conditions for API usage, consumption quotas, uptime guarantees and maintenance or downtime information.
\item [6.2] \textit{Monetization Strategy}: APIs securely expose digital assets and services that are of value to consumers. Hence, the organization may wish to adopt a monetization strategy to enable monetization of the exposed services and APIs by constructing a business model around them. This may be accomplished through a monetization model which can be based on consumer characteristics such as their type of subscription, access tier or the amount of resources used.
\item [6.3] \textit{Account Management}: It is desirable to effectively manage accounts in order to foster a qualitative relationship with customers, stakeholders and the organization's management. This may be achieved by reporting on the API's business value internally through the use of business value reports, as well as externally by providing consumers of the API with subscription reports and training them in using the API as efficiently as possible. \\
\end{enumerate}
\end{enumerate}
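As a concrete illustration of the \textit{Traffic Management} capability (3.2) referenced above, the sketch below implements request rate limiting with a token bucket. This is our illustration of one common mechanism; the API-m-FAMM itself does not prescribe a particular algorithm.
\begin{verbatim}
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch)."""
    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # route the request to the back end
        return False      # reject (limiting) or delay (throttling)

bucket = TokenBucket(rate=5, capacity=10)  # 5 requests/s, bursts of 10
if not bucket.allow():
    print("429 Too Many Requests")
\end{verbatim}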
\subsection{Practices}
\label{subsec:practices}
\newarray\MyData
\readarray{MyData}
{
1.1.2 &
Implement Evolutionary API Strategy &
Version Management &
Lifecycle Management &
The organization utilizes an evolutionary strategy to continuously version their API over time. Using this strategy, the organization evolves a single API by avoiding the introduction of breaking changes. Optionally, this may be accomplished by adhering to the GraphQL specification \citedata{graphqlVersioning}. &
$\bullet$ The organization maintains one version of their API. \newline
$\bullet$ The organization utilizes an evolutionary API versioning strategy.
& \citedata{ploesserVersioning, icappsVersioning} &
&
6&
1.1.5 &
Implement Multiple API Versioning Strategy &
Version Management &
Lifecycle Management &
The organization has a versioning strategy in place which entails the process of versioning from one API to a newer version. In order to do so, the organization must be able to maintain multiple versions of (one of) their API(s) for a period of time. Possible strategies include URI/URL Versioning (possibly in combination with adherence to the Semantic Versioning specification), Query Parameter versioning, (Custom) Header versioning, Accept Header versioning or Content Negotiation. &
$\bullet$ The organization utilizes one of the following versioning strategies: URI/URL Versioning, Query Parameter versioning, (Custom) Header versioning, Accept Header versioning or Content Negotiation.
& \citedata{de2017api, redhatVersioning, anjiVersioning, rapidVersioning} &
&
6&
1.1.6 &
Implement API Deprecation Protocol &
Version Management &
Lifecycle Management &
The organization has a protocol in place that details what steps should be taken when deprecating one of their APIs. This includes determining the number of developers currently consuming the API through the use of monitoring, and then setting a threshold that details the number of developers that should have migrated to the new version of the API before commencing with deprecation of the old version. Furthermore, developers, including their contact information, should be identified so that they may be notified of the deprecation through their preferred communication channel. This notification should be accompanied by a migration period and deprecation date, so that consumers have a clear target to migrate their apps over to the new API version. Additionally, referrals to documentation and the new endpoint should be included. Furthermore, the protocol should detail what course of action should be taken to roll back to a previously deployed version of an API in the event of an incorrect deployment of the API. &
$\bullet$ The organization has implemented the 'Distribute Versioning Notification Through Channel(s)' (1.3.3) and 'Log Activity' (4.2.3) practices. \newline
$\bullet$ The organization has a deprecation protocol in place.
& \citedata{peterLifecycle} &
&
6&
1.1.7 &
Check Backwards Compatibility &
Version Management &
Lifecycle Management &
The organization has an approach in place with which it is able to detect breaking changes when versioning their API(s). Approaches include using a unit test suite, plugging an automated contract test suite into the CI/CD pipeline or by using the \emph{swagger-spec-compatibility} library to detect differences between two Swagger / OpenAPI specifications \citedata{swaggerComp}. &
$\bullet$ The organization has implemented the 'Implement Evolutionary API Strategy' (1.1.2) practice. \newline
$\bullet$ The organization has a backwards compatibility checking approach in place.
& \citedata{bhojwaniCheck} &
&
6&
1.2.1 &
Decouple API \& Software Versioning &
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the version of their API(s) from its software implementation. The API version should never be tied to the software version of the back-end data/service. A new API version should be created only if there is a change in the contract of the API that impacts the consumer. &
$\bullet$ The organization has decoupled the version of their API(s) from its software implementation.
& \citedata{de2017api} &
&
6&
1.2.4 &
Decouple Internal \& External Data Model &
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the data models that are used internally and externally from one another. Doing so is considered to be beneficial, since an application might use a normalized relational data model internally. While this data model is less suitable to expose through a public API, this separation of concerns allows the organization to evolve the relational data model at a different speed than the API.
&
$\bullet$ The organization has decoupled the data models that are used internally and externally from one another.
& None. &
&
6&
1.2.5 &
Decouple Internal \& External Data Format
&
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the data formats that are used internally and externally from one another. Doing so is considered to be beneficial, since an application might use a data format such as XML internally, while using a data format such as JSON for the API(s). This separation of concerns grants the organization greater flexibility in designing and developing their APIs.
&
$\bullet$ The organization has decoupled the data formats that are used internally and externally from one another.
& None. &
&
6&
1.2.6 &
Decouple Internal \& External Transport Protocol &
Decoupling API \& Application &
Lifecycle Management &
The organization has decoupled the transport protocols that are used internally and externally from one another. An application might internally use a protocol such as SOAP or JDBC; these protocols are less commonly used in modern APIs, or are less suitable for public APIs, and the organization can decide to use a different protocol for its API(s). This separation of concerns grants the organization greater flexibility in designing and developing their APIs.
&
$\bullet$ The organization has decoupled the transport protocols that are used internally and externally from one another.
& None. &
&
6&
1.3.2 &
Distribute Changelogs &
Update Notification &
Lifecycle Management &
The organization uses (automated) email services to distribute changelogs describing the versioning of their API(s) to consumers. Ideally, the organization offers consumers the ability to opt-in or opt-out of this service. &
$\bullet$ The organization uses (automated) email services to distribute changelogs describing the versioning of their API(s) to consumers. & \citedata{sandovalChange} &
&
6&
1.3.3 &
Distribute Versioning Notification Through Channel(s) &
Update Notification &
Lifecycle Management &
The organization has the ability to distribute versioning notifications among consumers of their API(s) through established communication channels. Possible channels include email, social media, and announcements within the developer portal or reference documentation. Ideally, the organization offers consumers of their API(s) the option to select the communication channel they prefer receiving versioning notifications through.
&
$\bullet$ The organization has implemented the 'Establish Communication Channel' (5.2.1) and 'Distribute Changelogs' (1.3.2) practices. \newline
$\bullet$ The organization has the ability to distribute versioning notifications among consumers of their API(s) through established communication channels.
& \citedata{de2017api, sandovalChange} &
&
6&
1.3.5 &
Extend API with Versioning Information &
Update Notification &
Lifecycle Management &
The organization has the ability to extend their API specification to incorporate warning headers into responses at run-time. By doing so, consumers of the API are notified of its impending deprecation, and possibly requested to change their implementation. &
$\bullet$ The organization has the ability to introduce warning headers.
& \citedata{de2017api} &
&
6&
1.3.9 &
Announce Versioning Roadmap &
Update Notification &
Lifecycle Management &
The organization has announced a roadmap that details the planned dates on which the current (old) version of their API will be versioned to a new version, in order to notify consumers ahead of time. This may be done through email, social media, announcements within the developer portal or reference documentation.&
$\bullet$ The organization has implemented the 'Distribute Versioning Notification Through Channel(s)' (1.3.3) practice. \newline
$\bullet$ The organization has announced a versioning roadmap.
& \citedata{de2017api} &
&
6&
2.1.1 &
Implement Basic Authentication &
Authentication &
Security &
The organization has the ability to implement basic authentication in order to authenticate consumers of their API(s). This may be accomplished through the use of HTTP Basic Authentication, with which the consumer is required to provide a username and password to authenticate, or by issuing API keys to consumers of the API. An app is identified by its name and a unique UUID known as the API key, often serving as an identity for the app making a call to the API. &
$\bullet$ The organization has implemented HTTP Basic Authentication, or is able to issue API keys.
& \citedata{biehl2015api, de2017api, Zhao_2018, sandoval2018_2} &
&
6&
2.1.4 &
Implement Authentication Protocol &
Authentication &
Security &
The organization has implemented an authentication protocol or method in order to authenticate consumers of their API(s). In order to apply security for SOAP APIs, the usage of a WS-Security (WSS) protocol \citedata{wikipediaWS} may be opted for. This protocol specifies how integrity and confidentiality can be enforced on messages and allows the communication of various security token formats, such as Security Assertion Markup Language (SAML), X.509 and User ID/Password credentials. Consumers of REST APIs may be authenticated by using methods and protocols such as Client Certificate authentication, SAML authentication, or OpenID Connect \citedata{openIDConnect}. OpenID Connect 1.0 is an authentication protocol that builds on top of OAuth 2.0 specs to add an identity layer. It extends the authorization framework provided by OAuth 2.0 to implement authentication.&
$\bullet$ The organization has implemented a WSS authentication protocol, or methods and protocols such as Client Certificate authentication, SAML authentication, or OpenID Connect.
& \citedata{de2017api, oracleWS, wikipediaWS} &
&
6&
2.1.7 &
Implement Single Sign-On &
Authentication &
Security &
The organization has implemented Single Sign-on (SSO), which is an authentication method that enables users to securely authenticate with multiple applications and websites by using one set of credentials. The user is then signed in to other applications automatically, regardless of the platform, technology, or domain the user is using.
&
$\bullet$ The organization has implemented the 'Implement Authentication Protocol' (2.1.4) practice. \newline
$\bullet$ The organization has implemented the Single Sign-on (SSO) authentication method.
& \citedata{de2017api, Onelogin, SSO} &
&
6&
2.2.2 &
Implement Access Control &
Authorization &
Security &
The organization has implemented an access control method in order to identify and authorize potential consumers of their API(s). In order to accomplish this, the Role-based Access Control (RBAC) method may be used, with which permissions may be assigned to users based on their role within the organization. Alternatively, the Attribute-based Access Control (ABAC) method may be used, with which permissions are granted based on an identity's attributes. Optionally, RBAC and ABAC policies may be expressed by using the eXtensible Access Control Markup Language (XACML).
&
$\bullet$ The organization has implemented the Role-based Access Control (RBAC) or Attribute-based Access Control (ABAC) method.
& \citedata{de2017api, hofman2014technical, thielens2013apis, WikiXACML} &
&
6&
2.2.4 &
Implement Token Management &
Authorization &
Security &
The organization provides consumers of their API(s) with the ability to perform (access) token and API key management. This is an activity that involves measures to manage (i.e. review, store, create and delete) the tokens and API keys that are required to invoke back-end APIs. &
$\bullet$ The organization allows consumers to manage their tokens and API keys.
& \citedata{de2017api, hofman2014technical} &
&
6&
2.2.6 &
Implement Standardized Authorization Protocol &
Authorization &
Security &
The organization has implemented an industry-standardized authorization protocol, such as the OAuth 2.0 Authorization protocol. OAuth is used as a mechanism to provide authorization to a third-party application for access to an end user resource on their behalf. OAuth helps with granting authorization without the need to share user credentials. &
$\bullet$ The organization has an industry-standardized authorization protocol.
& \citedata{de2017api,gadge2018microservice,gamez2015towards,hohenstein2018architectural,matsumoto2017fujitsu,patni2017pro,thielens2013apis,hofman2014technical,Xu_2019,Zhao_2018} &
&
6&
2.2.7 &
Implement Authorization Scopes &
Authorization &
Security &
The organization has implemented an authorization scopes mechanism, such as the OAuth 2.0 Scopes mechanism \citedata{OAuthScopes}, to limit an application's access to their users' accounts. An application can request one or more scopes, whereafter this information is presented to the user in a consent screen. Then, the access token that was issued to the application will be limited to the scopes granted. &
$\bullet$ The organization has an authorization scopes mechanism in place.
& None. &
&
6&
2.3.1 &
Implement Allow \& Deny IP Address Lists &
Threat Detection \& Protection &
Security &
The organization has the ability to impose allow and deny list policies. Through these policies, specific IPs can either be excluded from requests, or separate quotas can be given to internal users by throttling access depending on their IP address or address range.
&
$\bullet$ The organization has the ability to impose allow and deny list policies.
& \citedata{gadge2018microservice, gamez2015towards, hohenstein2018architectural} &
&
6&
2.3.2 &
Implement Injection Threat Protection Policies &
Threat Detection \& Protection &
Security &
The organization has implemented injection threat protection security policies. Injection threats are common forms of attacks, in which attackers try to inject malicious code that, if executed on the server, can divulge sensitive information. These attacks may take the form of XML and JSON bombs or SQL and script injection.&
$\bullet$ The organization has injection threat policies in place against XML or JSON bombs or SQL or script injection.
& \citedata{de2017api, preibisch2018api, OWASPInjection} &
&
6&
2.3.5 &
Implement DoS Protection &
Threat Detection \& Protection &
Security &
The organization has protection against DoS attacks in place. Hackers may try to bring down back-end systems by pumping unexpectedly high traffic through the APIs. Denial-of-service (DoS) attacks are very common on APIs. Hence, the organization should be able to detect and stop such attacks. Identification of a DoS attack is done through Spike Arrest. &
$\bullet$ The organization has protection against DoS attacks in place.
& \citedata{de2017api, gadge2018microservice, gamez2015towards} &
&
6&
2.3.7 &
Implement Security Breach Protocol &
Threat Detection \& Protection &
Security &
The organization has a security breach protocol in place, which details what steps should be taken in the event where a security breach occurs. This protocol may include activities such as notifying stakeholders and consumers of the API, identifying the source of the breach by scanning activity logs, containing the breach by stopping the data leakage, and consulting third-party IT security and legal advice providers.
&
$\bullet$ The organization has a security breach protocol in place.
& \citedata{Reynold2020, Soliya2020} &
&
6&
2.3.9 &
Conduct Security Review &
Threat Detection \& Protection &
Security &
The organization has the ability to conduct security reviews that potential consumers of their API(s) must pass before being allowed to integrate the organization's API(s) into their application. This typically involves testing the degree to which customer data is protected and encrypted, and identifying security vulnerabilities that may be exploited, such as threats related to script injections and non-secure authentication and access control protocols.
&
$\bullet$ The organization has the ability to conduct security reviews.
& \citedata{Salesforce2020} &
&
6&
2.3.10 &
Implement Zero Trust Network Access (ZTNA) &
Threat Detection \& Protection &
Security &
The organization has implemented a Zero Trust Network Access (ZTNA) security architecture, where only traffic from authenticated users, devices, and applications is granted access to other users, devices, and applications within an organization. ZTNA may be regarded as a fine-grained approach to network access control (NAC), identity access management (IAM) and privilege access management (PAM), offering a replacement for VPN architectures. Optionally, a ZTNA may be implemented through third-party providers such as Akamai, Cloudflare, or Cisco.
&
$\bullet$ The organization has implemented a Zero Trust Network Access (ZTNA) security architecture.
& \citedata{ZTNAwiki2020} &
&
6&
2.4.1 &
Implement Transport Layer Encryption &
Encryption &
Security &
The organization has implemented current and up-to-date encryption protocols such as Transport Layer Security (TLS). It is always desirable to have TLS compliant endpoints to safeguard against man-in-the-middle attacks, and bi-directional encryption of message data to protect against tampering. &
$\bullet$ The organization has implemented a current and up-to-date transport layer encryption protocol.
& \citedata{de2017api, familiar2015iot, gadge2018microservice, hofman2014technical, preibisch2018api} &
&
6&
2.4.3 &
Implement Certificate Management &
Encryption &
Security &
The organization has the ability to manage its TLS certificates. This involves monitoring and managing the certificates' acquisition and deployment, tracking renewal, usage, and expiration of SSL/TLS certificates. &
$\bullet$ The organization has the ability to manage its TLS certificates.
& \citedata{de2017api,hohenstein2018architectural,sine2015api,thielens2013apis,gadge2018microservice} &
&
6&
3.1.2 &
Implement Load Balancing &
Resource Management &
Performance &
The organization has implemented load balancing to distribute API traffic to the back-end services. Various load balancing algorithms may be supported. Based on the selected algorithm, the requests must be routed to the appropriate resource that is hosting the API. Load balancing also improves the overall performance of the API. &
$\bullet$ The organization has implemented load balancing.
& \citedata{biehl2015api,ciavotta2017microservice,de2017api,gadge2018microservice,gamez2015towards,montesi2016circuit,nakamura2017fujitsu,Xu_2019,Zhao_2018} &
&
6&
3.1.5 &
Implement Scaling &
Resource Management &
Performance &
The organization has the ability to scale the amount of available resources up or down depending on traffic and API usage in a reactive manner. This may be done either manually or automatically, through the use of a load balancer. &
$\bullet$ The organization has implemented the 'Implement Load Balancing' (3.1.2) practice. \newline
$\bullet$ The organization has the ability to scale the amount of available resources up or down.
& \citedata{akbulut2019software,jacobson2011apis,gadge2018microservice,hofman2014technical} &
&
6&
3.1.6 &
Implement Failover Policies &
Resource Management &
Performance &
The organization has the ability to mitigate outages through the implementation of failover policies. This may be done by automatically deploying a service to a standby data center if the primary system fails, or is shut down for servicing. By being able to perform a failover, the particular service is guaranteed to be operational at one of the data centers. This is an extremely important function for critical systems that require always-on accessibility. &
$\bullet$ The organization has the ability to mitigate outages through the implementation of failover policies.
& \citedata{Barracuda2020} &
&
6&
3.1.10 &
Implement Predictive Scaling &
Resource Management &
Performance &
The organization has the ability to scale the amount of available resources up or down depending on traffic and API usage in a proactive manner. This may be done automatically, through the use of a load balancer, based on insights gained from predictive analytics. &
$\bullet$ The organization has implemented the 'Implement Load Balancing' (3.1.2) and 'Enable Predictive Analytics' (4.3.9) practices. \newline
$\bullet$ The organization has implemented predictive scaling.
& None. &
&
6&
3.2.1 &
Set Timeout Policies &
Traffic Management &
Performance &
The organization is able to set timeout policies by customizing the amount of time that is allowed to pass before a connection times out and is closed. Using timeout policies, the organization is able to ensure that the API always responds within a given amount of time, even if a long-running process hangs. This is important in high-availability systems where response performance is crucial, so that errors can be dealt with cleanly. &
$\bullet$ The organization is able to set timeout policies on their API(s).
& \citedata{tykTimeout} &
&
6&
3.2.2 &
Implement Request Caching &
Traffic Management &
Performance &
The organization utilizes caching as a mechanism to optimize performance. As consumers of the API make requests on the same URI, the cached response can be used to respond instead of forwarding those requests to the back-end server. Thus, caching can help to improve an API's performance through reduced latency and network traffic. &
$\bullet$ The organization utilizes caching as a mechanism to optimize performance.
& \citedata{biehl2015api,de2017api,gadge2018microservice,gamez2015towards,indrasiri2018developing,patni2017pro,preibisch2018api,vsnuderl2018rate,vijayakumar2018practical,hofman2014technical,Zhao_2018} &
&
6&
3.2.3 &
Perform Request Rate Limiting &
Traffic Management &
Performance &
The organization has a mechanism in place with which limits may be imposed on the number of requests or faulty calls that API consumers are allowed to make. Requests made within the specified limit are routed successfully to the target system. Those beyond the limit are rejected. &
$\bullet$ The organization has a rate limiting mechanism in place for their API(s).
& \citedata{de2017api,gamez2015towards,jacobson2011apis,lourencco2019framework,raivio2011towards,jayathilaka2015eager,vsnuderl2018rate,hofman2014technical,gadge2018microservice} &
&
6&
3.2.4 &
Perform Request Rate Throttling &
Traffic Management &
Performance &
The organization has a mechanism in place with which API requests may be throttled down, without the connection being closed. This can help to improve the overall performance and reduce impacts during peak hours. It helps to ensure that the API infrastructure is not slowed down by high volumes of requests from a certain group of customers or apps. &
$\bullet$ The organization has a rate throttling mechanism in place for their API(s).
& \citedata{de2017api,fremantle2015web,familiar2015iot,gadge2018microservice,hohenstein2018architectural,indrasiri2018developing,jacobson2011apis,thielens2013apis,weir2015oracle} &
&
6&
3.2.5 &
Manage Quota &
Traffic Management
&
Performance &
The organization has policies in place regarding the number of API calls that an app is allowed to make to the back end over a given time interval. Calls exceeding the quota limit may be throttled or halted. The quota allowed for an app depends on the business policy and monetization model of the API. A common purpose for a quota is to divide developers into categories, each of which has a different quota and thus a different relationship with the API. &
$\bullet$ The organization has implemented the 'Perform Request Rate Limiting' (3.2.3) practice or 'Perform Request Rate Throttling' (3.2.4) practice.\newline
$\bullet$ The organization has quota policies for their API(s) in place.
& \citedata{de2017api} &
&
6&
3.2.6 &
Apply Data Volume Limits &
Traffic Management &
Performance &
The organization has a mechanism in place with which the amount of data that consumers of their API(s) are allowed to consume in one call may be limited. This can help to improve the overall performance and reduce impacts during peak hours. It helps to ensure that the API infrastructure is not slowed down by calls that transport unnecessarily large volumes of data. &
$\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline
$\bullet$ The organization has a data volume limiting mechanism in place.
& \citedata{DropboxDatalimiting} &
&
6&
3.2.9 &
Prioritize Traffic &
Traffic Management &
Performance &
The organization is able to give a higher priority in terms of processing API calls, based on certain customer characteristics and/or classes. This priority may be based on their subscription, customer relationships, or agreements made in the SLA. &
$\bullet$ The organization is able to prioritize traffic based on customer characteristics and/or classes.
&\citedata{de2017api} &
&
6&
4.1.1 &
Monitor API Health &
Monitoring &
Observability &
The organization is able to perform health monitoring on its API(s), possibly through a management platform, an external monitoring tool/dashboard, functional testing, or custom scripts and plugins. This should return basic information such as the operational status of the API, indicating its ability to connect to dependent services. &
$\bullet$ The organization is able to perform health monitoring on its API(s).
& \citedata{averdunkHealth, gadge2018microservice} &
&
6&
4.1.3 &
Monitor API Performance &
Monitoring &
Observability &
The organization is able to perform performance monitoring on its API(s), possibly through a management platform, an external monitoring tool/dashboard, functional testing, or custom scripts and plugins. Doing so should provide performance statistics that track the latency within the platform and the latency for back-end calls. This helps the organization in finding the source of any performance issues reported on any API. &
$\bullet$ The organization is able to perform performance monitoring on its API(s).
& \citedata{de2017api, Xu_2019} &
&
6&
4.1.5 &
Monitor Resource Usage &
Monitoring &
Observability &
The organization is able to perform resource monitoring on its API(s), possibly through a management platform, an external monitoring tool/dashboard, functional testing, or custom scripts and plugins. Doing so should provide insights into the amount of resources that are consumed as a result of calls made to the API(s). This may be done by measuring hardware metrics such as CPU, disk, memory, and network usage, or by using an indirect approximation of the amount of resources that are consumed by calls. &
$\bullet$ The organization is able to perform resource monitoring on its API(s).
& \citedata{KubernetesResources} &
&
6&
4.2.1 &
Log Errors &
Logging &
Observability &
The organization has the ability to internally log errors that are generated as a result of consumption of their APIs. Error logs should typically contain fields that capture information such as the date and time the error occurred, the error code, and the client IP address and port number.
&
$\bullet$ The organization has the ability to internally log errors.
& \citedata{andrey_kolychev_konstantin_zaytsev_2019_3256462, de2017api, medjaoui2018continuous} &
&
6&
4.2.2 &
Log Access Attempts &
Logging &
Observability &
The organization has the ability to generate access logs, in which HTTP requests/responses are logged, to monitor the activities related to an API's usage. Access logs offer insight into who has accessed the API, by including information such as the consumer's IP address. &
$\bullet$ The organization is able to perform access logging.
& \citedata{wso2Access} &
&
6&
4.2.3 &
Log Activity &
Logging &
Observability &
The organization has the ability to perform basic logging of API activity, such as access, consumption, performance, and any exceptions. In doing so, it may be determined what initiated various actions to allow for troubleshooting any errors that occur. &
$\bullet$ The organization is able to perform activity logging.
& \citedata{de2017api, fremantle2015web, gadge2018microservice} &
&
6&
4.2.5 &
Audit User Activity &
Logging &
Observability &
The organization is able to perform user auditing. Doing so enables the organization to review historical information regarding API activity, to analyze who accesses an API, when it is accessed, how it is used, and how many calls are made from the various consumers of the API. &
$\bullet$ The organization is able to perform user auditing.
& \citedata{de2017api, gadge2018microservice} &
&
6&
4.3.2 &
Report Errors &
Analytics &
Observability &
The organization has the ability to report any errors to consumers that may occur during usage of their API(s). Error reports typically include information such as the error code and text describing why the error has occurred. &
$\bullet$ The organization has implemented the 'Log Errors' (4.2.1) practice.\newline
$\bullet$ The organization is able to report any errors to consumers.
& \citedata{andrey_kolychev_konstantin_zaytsev_2019_3256462, de2017api, medjaoui2018continuous} &
&
6&
4.3.3 &
Broadcast API Status &
Analytics &
Observability &
The organization broadcasts the status of its API(s) to consumers by providing them with operational information on the API in the form of an external status page, possibly on the developer portal or a website. The function of this status page is to let consumers know what is going on with the API at a technical level at any point in time. &
$\bullet$ The organization has implemented the 'Monitor API Health' (4.1.1) practice.\newline
$\bullet$ The organization broadcasts the operational status of its API(s) to consumers.
& \citedata{sandoval2018} &
&
6&
4.3.6 &
Generate Custom Analysis Reports &
Analytics &
Observability &
The organization is able to generate custom analysis reports on metrics of choice, possibly through an API management platform or monitoring tool. &
$\bullet$ The organization is able to generate custom analysis reports.
& \citedata{de2017api} &
&
6&
4.3.7 &
Set Alerts &
Analytics &
Observability &
The organization has the ability to set and configure alerts that should trigger in case of certain events or thresholds being exceeded. Such events or thresholds may include resource limits being exceeded, or occurrence of outages. Ideally, the organization is able to configure what persons should be alerted about the event, and through what communication channel they should be contacted. &
$\bullet$ The organization has implemented the 'Monitor API Health' (4.1.1), 'Monitor API Performance' (4.1.3), and 'Monitor Resource Usage' (4.1.5) practices.\newline
$\bullet$ The organization has the ability to set and configure alerts.
& \citedata{UptrendsAlerting} &
&
6&
4.3.9 &
Enable Predictive Analytics &
Analytics &
Observability &
The organization has the ability to perform predictive analytics, through techniques such as pattern recognition, data mining, predictive modelling, or machine learning, by analyzing current and historical facts to make predictions about future or otherwise unknown events. &
$\bullet$ The organization has implemented the 'Monitor API Performance' (4.1.3) and 'Monitor Resource Usage' (4.1.5) practices.\newline
$\bullet$ The organization has the ability to perform predictive analytics.
& None. &
&
6&
5.1.1 &
Facilitate Developer Registration &
Developer Onboarding &
Community &
The organization has a mechanism in place with which API consumers are able to register for the API so that they can obtain access credentials. Consumers can then select an API and register their apps to use it. &
$\bullet$ The organization has a mechanism in place with which API consumers are able to register for their API(s). &
\citedata{de2017api} &
&
6&
5.1.4 &
Provide SDK Support &
Developer Onboarding &
Community &
The organization offers API consumers the option to either download client-side SDKs for the API, or generate the SDK themselves from standard API definition formats such as OpenAPI (formerly known as Swagger). These functionalities are usually offered through the developer portal, where app developers often look for device-specific libraries to interact with the services exposed by the API. &
$\bullet$ The organization offers API consumers the option to download or generate client-side SDKs for their API(s).
&
\citedata{de2017api} &
&
6&
5.1.5 &
Implement Interactive API Console &
Developer Onboarding &
Community &
The organization provides API consumers with an interactive console. Using this console, developers are able to test the behavior of an API. &
$\bullet$ The organization provides API consumers with an interactive console. &
\citedata{biehl2015api} &
&
6&
5.1.8 &
Provide Sandbox Environment Support &
Developer Onboarding &
Community &
The organization provides API consumers with an environment that they can use to mimic the characteristics of the production environment and create simulated responses from all APIs the application relies on. &
$\bullet$ The organization provides API consumers with a sandbox environment.
&
\citedata{buidesign, jacobson2011apis, Mueller:2020, patni2017pro} &
&
6&
5.2.1 &
Establish Communication Channel &
Support &
Community &
The organization has established a communication channel between the API provider and consumer with which support may be provided to the consumer. Possible communication media include email, phone, form, web, community forum, blogs or the developer portal.&
$\bullet$ The organization has established one of the following communication channels with consumers of their API(s): email/phone/form/web/ community forum/blog/developer portal. &
\citedata{de2017api, jacobson2011apis} &
&
6 &
5.2.4 &
Manage Support Issues &
Support &
Community &
The organization is able to manage any support issues with their API(s). API consumers must be able to report any issues, bugs or shortcomings related to the API. They should be able to raise support tickets and seek help regarding API usage. Additionally, the API provider must be able to track and prioritize support tickets. &
$\bullet$ The organization is able to manage any support issues with their API(s).
& \citedata{de2017api, jacobson2011apis} &
&
6&
5.2.6 &
Dedicate Developer Support Team &
Support &
Community &
The organization employs a dedicated developer support team that offers support to consumers of their API(s). This team should be well-trained and possess knowledge that enables them to assist consumers with any problems or difficulties they may experience during the usage or implementation of the API. &
$\bullet$ The organization has implemented the 'Establish Communication Channel' (5.2.1) practice. \newline
$\bullet$ The organization employs a dedicated developer team that offers support to consumers of their API(s).
& None. &
&
6&
5.3.1 &
Use Standard for Reference Documentation &
Documentation &
Community &
The organization provides consumers of their API(s) with basic reference documentation on their website, developer portal or an external, third-party documentation platform. This documentation should document every API call, every parameter, and every result so that consumers are informed on the API's functionality. Additionally, it must be specified using a documentation framework such as Swagger, RAML, API Blueprint, WADL, Mashery ioDocs, Doxygen, ASP.NET API Explorer, Apigee Console To-Go, Enunciate, Miredot, Dexy, Docco or TurnAPI. &
$\bullet$ The organization provides consumers of their API(s) with basic reference documentation.\newline
$\bullet$ The organization utilizes one of the following (or comparable) documentation tools to specify its API documentation: Swagger (OpenAPI), RAML, API Blueprint, WADL, Mashery ioDocs, Doxygen, ASP.NET API Explorer, Apigee Console To-Go, Enunciate, Miredot, Dexy, Docco or TurnAPI.
& \citedata{de2017api, jacobson2011apis, medjaoui2018continuous} &
&
6&
5.3.3 &
Provide Start-up Documentation \& Code Samples &
Documentation &
Community &
The organization provides consumers of their API(s) with start-up documentation on their website, developer portal, or an external, third-party documentation platform. This type of documentation explains key concepts by summarizing the reference documentation, accelerating understanding as a result. Optionally, a list of Frequently Asked Questions and code samples that may be readily used in apps to invoke the API may be included.
&
$\bullet$ The organization has implemented the 'Use Standard for Reference Documentation' (5.3.1) practice. \newline
$\bullet$ The organization provides consumers of their API(s) with start-up documentation.
& \citedata{de2017api, jacobson2011apis} &
&
6&
5.3.5 &
Create Video Tutorials &
Documentation &
Community &
The organization is able to create video tutorials in order to provide consumers with visual information that details how to use the API and integrate it into their applications.
&
$\bullet$ The organization is able to create video tutorials.
& None. &
&
6&
5.4.1 &
Maintain Social Media Presence &
Community Engagement &
Community &
The organization is able to maintain their social media presence on platforms such as Facebook or Twitter. This may involve activities such as reporting on the API's status, announcing news and updates, responding to questions, or reacting to feedback.
&
$\bullet$ The organization is able to maintain their social media presence on platforms such as Facebook or Twitter.
& None. &
&
6&
5.4.3 &
Provide Community Forum &
Community Engagement &
Community &
The organization provides (potential) consumers of their API(s) with a community forum, possibly through a website or API management platform. This forum may assist in building and interconnecting a developer community, by providing them with a central hub they can use to communicate with one another and the organization. Additionally, it may serve as a repository with guides on API usage, documentation and support. &
$\bullet$ The organization provides API consumers with a community forum.
& \citedata{de2017api} &
&
6&
5.4.4 &
Provide Developer Portal &
Community Engagement &
Community &
The organization provides (potential) consumers of their API(s) with a developer portal. A developer portal provides the platform for an API provider to communicate with the developer community. Additionally, it typically offers functionality such as user registration and login, user management, documentation, API key management, a test console, and dashboards. &
$\bullet$ The organization has implemented a developer portal.
& \citedata{de2017api, fremantle2015web, medjaoui2018continuous, sine2015api} &
&
6&
5.4.7 &
Organize Events &
Community Engagement &
Community &
The organization is actively involved in organizing or participating in events that are aimed towards engaging and motivating the developer community to incorporate their API(s) into their applications. This may include events such as hackathons, conferences, or workshops. &
$\bullet$ The organization is actively involved in organizing or participating in developer community events.
& None. &
&
6&
5.4.9 &
Dedicate Evangelist &
Community Engagement &
Community &
The organization employs a dedicated API evangelist. This individual is responsible for evangelizing the API by gathering consumer feedback, and promoting the organization's API(s) by creating samples, demos, training materials and performing other support activities aimed towards maximizing the developer experience. &
$\bullet$ The organization employs a dedicated API evangelist.
& None. &
&
6&
5.5.1 &
Enable API Discovery &
Portfolio Management &
Community &
The organization provides potential consumers of their API(s) with a mechanism to obtain information, such as documentation and metadata, about their API(s). This mechanism may take the shape of an external website, hub or repository that consumers can freely browse through. &
$\bullet$ The organization has a mechanism in place with which their API(s) may be discovered.
& \citedata{biehl2015api, hofman2014technical} &
&
6&
5.5.4 &
Provide API Catalog &
Portfolio Management &
Community &
The organization provides API consumers with an API catalog. This is a searchable catalog of APIs, sometimes also referred to as an API registry. API consumers should be able to search the catalog based on various metadata and tags. The catalog should document the API functionality, its interface, start-up documentation, terms and conditions, reference documentation, and so forth.&
$\bullet$ The organization has implemented the 'Enable API Discovery' (5.5.1) practice. \newline
$\bullet$ The organization provides API consumers with a searchable API catalog.
& \citedata{de2017api, lourencco2019framework, vijayakumar2018practical, hofman2014technical, medjaoui2018continuous} &
&
6&
5.5.5 &
Bundle APIs &
Portfolio Management &
Community &
The organization is able to combine two or more APIs into a bundle. This is a collection of API products that is presented to developers as a group, and typically associated with one or more rate plans for monetization. &
$\bullet$ The organization is able to combine two or more APIs into a bundle.
& \citedata{apigeebundling} &
&
6&
6.1.1 &
Publish Informal SLA &
Service-Level Agreements
&
Commercial &
The organization has the ability to publish and agree upon an informal, bare-bones SLA with consumers of their API(s). This type of SLA is minimalistic and loose in terms of the nature and amount of agreements it contains, as well as the consequences attached to these agreements should they be violated. This type of SLA is satisfactory for organizations that provide non-critical services and that have close relationships with their consumers and partners. &
$\bullet$ The organization has the ability to publish and agree upon an informal SLA with consumers.
& None. &
&
6&
6.1.3 &
Provide SLA &
Service-Level Agreements
&
Commercial &
The organization has the ability to provide and agree upon a formal, elaborate SLA with consumers of their API(s). This type of SLA is extensive and strict in terms of the nature and amount of agreements it contains, as well as the consequences attached to these agreements should they be violated. Typically, agreements regarding the guaranteed uptime of the API on a monthly or yearly basis are included in this type of SLA, along with guaranteed response times in the event of incidents, as well as policies regarding privacy, security, and possibly rate and data quotas. Additionally, when providing a formal SLA, the organization should have a plan in place that details what course of action should be taken in the event that agreements fail to be upheld.
&
$\bullet$ The organization has the ability to provide and agree upon a formal SLA with consumers.
& \citedata{de2017api} &
&
6&
6.1.6 &
Proactively Monitor SLAs &
Service-Level Agreements
&
Commercial &
The organization is able to proactively monitor metrics that are relevant in checking whether the agreements made with API consumers are adhered to. Such metrics may include availability, performance and functional correctness. &
$\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline
$\bullet$ The organization is able to perform SLA monitoring.
& \citedata{moizSLA} &
&
6&
6.1.7 &
Customize Personalized SLA &
Service-Level Agreements
&
Commercial &
The organization has the ability to provide consumers of their API(s) with personalized SLAs. This type of SLA is suitable for intensive consumers that utilize services offered by the API in such a way that requires customized agreements as compared to those that are offered as part of the organization's standard SLA. For example, some consumers may require minimal latency and response times for their calls, want to make large amounts of calls, or demand API uptime approaching 100\%. Additionally, a personalized SLA may be required due to the consumer being located in a different geographic location than other consumers, requiring customized agreements with regards to privacy laws and regulations. &
$\bullet$ The organization has implemented the 'Provide SLA' (6.1.3) practice.\newline
$\bullet$ The organization has the ability to provide consumers of their API(s) with personalized SLAs.
& \citedata{manualSLA} &
&
6&
6.2.6 &
Adopt Subscription-based Monetization Model &
Monetization Strategy &
Commercial &
The organization has adopted a monetization model that is based on a subscription basis. With this model, API consumers pay a flat monthly fee and are allowed to make a certain number of API calls per month. &
$\bullet$ The organization has implemented the 'Implement Subscription Management System' (6.3.2) and 'Manage Quota' (3.2.5) practices. \newline
$\bullet$ The organization has adopted a monetization model that is based on a subscription basis.
& \citedata{budzynskiMonetization} &
&
6&
6.2.8 &
Adopt Tier-Based Monetization Model &
Monetization Strategy &
Commercial &
The organization has adopted a monetization model that is based on tiered access. Typically, each tier has its own set of services and allowances for access to API resources, with increasing prices for higher tiers. &
$\bullet$ The organization has implemented the 'Prioritize Traffic' (3.2.9) and 'Manage Quota' (3.2.5) practices. \newline
$\bullet$ The organization utilizes a monetization model that is based on tiered access.
& \citedata{redhatMonetization, budzynskiMonetization} &
&
6&
6.2.9 &
Adopt Freemium Monetization Model &
Monetization Strategy &
Commercial &
The organization has adopted a monetization model that is based on freemium functionalities and access. This involves providing consumers with a limited part of the services and functionalities the API offers as a whole. Consumers that wish to utilize all services and functionalities are required to have an active, paid subscription to the API.
&
$\bullet$ The organization utilizes a monetization model that is based on freemium functionalities and access.
& \citedata{redhatMonetization, budzynskiMonetization} &
&
6&
6.2.10 &
Adopt Metering-Based Monetization Model &
Monetization Strategy &
Commercial &
The organization utilizes a monetization model that is based on metering. With this model, API consumers pay for the amount of resources they use. This may be measured in terms of bandwidth, storage or amount of calls made. &
$\bullet$ The organization has implemented the 'Monitor Resource Usage' (4.1.5) practice.\newline
$\bullet$ The organization utilizes a monetization model that is based on metering.
& \citedata{redhatMonetization, budzynskiMonetization} &
&
6&
6.3.2 &
Implement Subscription Management System &
Account Management &
Commercial &
The organization has a system in place with which it is able to manage existing subscriptions of consumers to their API(s). A subscription management system provides support for billing on a recurring basis, as well as insight into active subscriptions.
&
$\bullet$ The organization has implemented a subscription management system.
& \citedata{fremantle2015web, preibisch2018api, raivio2011towards} &
&
6&
6.3.7 &
Report on API Program Business Value &
Account Management &
Commercial &
The organization is able to generate business value reports associated with their API(s). Business value reports gauge the monetary value associated with the API program. Monetization reports of API usage provide information on the revenue generated from the API. Value-based reports should also be able to measure customer engagement, which can be measured by the number of unique users, the number of developers registered, the number of active developers, the number of apps built using the APIs, the number of active apps, and many other items. Optionally, these metrics may be visualized in the form of dashboards, so that they may easily be shared and presented to relevant internal stakeholders to communicate the API program's business value. &
$\bullet$ The organization has implemented the 'Generate Custom Analysis Reports' (4.3.6) practice. \newline
$\bullet$ The organization is able to generate business value reports associated with their API(s).
& \citedata{de2017api}&
&
6&
6.3.8 &
Provide Subscription Report to Customer &
Account Management &
Commercial &
The organization is able to generate subscription reports for consumers of their API(s). These reports contain metrics gathered through internal monitoring and analytics. Such metrics may include the number of calls made, performance, and the status of remaining quota allowances. &
$\bullet$ The organization has implemented the 'Generate Custom Analysis Reports' (4.3.6) and 'Implement Subscription Management System' (6.3.2) practices. \newline
$\bullet$ The organization is able to generate subscription reports for consumers of their API(s).
& \citedata{de2017api}&
&
6&
6.3.9 &
Proactively Suggest Optimizations to Customers &
Account Management &
Commercial &
The organization has the ability to train and help customers in using their API(s) as well and as efficiently as possible. This may be in the best interest of both parties, as optimizing inefficient calls may positively impact traffic load on the API infrastructure. &
$\bullet$ The organization has implemented the 'Monitor API Performance' (4.1.3) and 'Monitor Resource Usage' (4.1.5) practices. \newline
$\bullet$ The organization proactively suggests optimizations to consumers of their API(s).
& \citedata{buidesign, de2017api}&
&
6&
}
\dataheight=9 % each record in the data block above consists of 9 fields
% \returnData(<record>,<field>) retrieves a single field of a record
\def\returnData(#1){\expandafter\checkMyData(#1)\cachedata}
\newcounter{deTeller}      % index of the current record ("the counter")
\newcounter{volgendeStart} % first record of the table being built ("next start")
\newcounter{volgendeStop}  % record at which the current capability ends ("next stop")
\setcounter{deTeller}{1}
\setcounter{volgendeStart}{\value{deTeller}}
\newcounter{tempCount}
\newcounter{groteLoop}     % outer loop over the capabilities ("big loop")
\newcounter{loop}
\newcounter{loopPlusEen}   % loop plus one
\newcounter{loopMinEen}    % loop minus one
\newcounter{stopTeller}    % running total of practices across capabilities
\newcounter{oldStopTeller}
\newcommand{\cellWidth}{15.5cm} % width of the description/condition cells
% Render one table per capability (20 in total). The increments added to
% \stopTeller below encode the number of practices in each successive
% capability; the repeated \ifnum blocks advance \volgendeStop to the index
% at which the capability containing the current record (\deTeller) ends.
\forloop{groteLoop}{1}{\value{groteLoop}<21}{
\setcounter{oldStopTeller}{0}
\setcounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{6}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{2}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{7}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{5}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{5}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{3}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{oldStopTeller}{\value{stopTeller}}
\addtocounter{stopTeller}{4}
\ifnum\value{deTeller} > \value{oldStopTeller} \setcounter{volgendeStop}{\value{stopTeller}} \fi
\setcounter{loopPlusEen}{\value{loop}}
\setcounter{loopMinEen}{\value{loop}}
\addtocounter{loopPlusEen}{1}
\addtocounter{loopMinEen}{-1}
\begin{table}[ht!]
\footnotesize
\begin{tabular}{|p{.1cm}|p{.1cm}|ll|ll|}
\hline
\multirow{15}{*}{\rotatebox[origin=c]{90}{\returnData(\value{deTeller},4)}} &
\multirow{15}{*}{\rotatebox[origin=c]{90}{\returnData(\value{deTeller},3)}} &
\forloop{loop}{\value{volgendeStart}}{\value{loop}<\value{volgendeStop}}{
\textbf{Practice Code}: & \returnData(\value{deTeller},1) & \textbf{Practice Name}: & \returnData(\value{deTeller},2) \\\cline{3-6}
&&\multicolumn{4}{p{\cellWidth}|}{\textbf{\textit{Description: }}\returnData(\value{deTeller},5)}\\\cline{3-6}
&&\multicolumn{4}{p{\cellWidth}|}{\textbf{\textit{Implemented when:}} \newline \returnData(\value{deTeller},6)}\\\cline{3-6}
&&\multicolumn{4}{p{\cellWidth}|}{\textbf{\textit{Literature: }}\returnData(\value{deTeller},7)}\\\cline{3-6}
&&\multicolumn{4}{p{\cellWidth}|}{}\\\cline{3-6}
&&
\addtocounter{deTeller}{1}
}
\setcounter{volgendeStart}{\value{deTeller}}
\textbf{Practice Code}: & \returnData(\value{deTeller},1) & \textbf{Practice Name}: & \returnData(\value{deTeller},2) \\\cline{3-6}
&&\multicolumn{4}{p{\cellWidth}|}{\textbf{\textit{Description: }}\returnData(\value{deTeller},5)}\\\cline{3-6}
&&\multicolumn{4}{p{\cellWidth}|}{\textbf{\textit{Implemented when:}} \newline \returnData(\value{deTeller},6)}\\\cline{3-6}
&&\multicolumn{4}{p{\cellWidth}|}{\textbf{\textit{Literature: }}\returnData(\value{deTeller},7)}\\\hline
\end{tabular}
\end{table}
\addtocounter{deTeller}{1}
}
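As an illustration of two of the practices catalogued above, namely \textit{Perform Request Rate Limiting} (3.2.3) and \textit{Perform Request Rate Throttling} (3.2.4), the following minimal Python sketch contrasts rejecting surplus calls with delaying them, using a token bucket. The sketch is illustrative only: the class and parameter values are hypothetical and do not stem from the model or its sources, and in practice such mechanisms are typically enforced by a gateway rather than in application code.
\begin{verbatim}
import time


class TokenBucket:
    """Allow `rate` calls per second, with bursts of up to `capacity` calls."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now

    def try_acquire(self):
        """Rate limiting (3.2.3): reject the call when no token is left."""
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # a gateway would answer HTTP 429 Too Many Requests

    def acquire(self):
        """Rate throttling (3.2.4): delay the call instead of rejecting it."""
        while not self.try_acquire():
            time.sleep(1.0 / self.rate)


bucket = TokenBucket(rate=5.0, capacity=10)
accepted = sum(bucket.try_acquire() for _ in range(25))
print("accepted", accepted, "of 25 burst calls")  # the rest are rejected
\end{verbatim}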
\newpage
\section{Version 0.1}
\label{sec:version01}
This version was populated using the primary source~\cite{de2017api}.
It consisted of four focus areas.
Further details are omitted because of the intermediate state of the model.
\begin{table}[h]
\centering
\begin{tabular}{l|c}
Focus Area & Number of capabilities\\
\hline
\textbf{Developer Enablement} & 4 \\
\textbf{Security and Communication} & 5 \\
\textbf{Lifecycle} & 2 \\
\textbf{Auditing and Analysis} & 3 \\
\end{tabular}
\caption{API-m-FAMM version 0.1}
\label{tab:version01}
\end{table}
\section{Version 0.2}
\label{sec:version02}
This version was populated using the SLR~\cite{mathijssen2020identification}.
The relocation of practices and capabilities was primarily driven by the decision to split the \textit{security and communication} focus area up into two separate focus areas: \textit{security} and \textit{communication}.
This decision was made because security was found to be a substantial and integral topic of API management in itself.
Moreover, it was decided that the communication focus area, which was later renamed to \textit{performance}, comprises capabilities such as \textit{service routing} that are unrelated to security.
Furthermore, the decision was made to split the \textit{auditing and analytics} focus area up into technical management, which was later renamed to \textit{monitoring}, and business-side, which was later renamed to \textit{commercial}.
This was done due to the difference in nature between capabilities such as \textit{monetization} and \textit{analytics}, which were originally grouped together.
This difference was further compounded by the decision to split the traffic management capability into two separate capabilities, with one capturing the business-level aspect of this capability and the other encompassing operational aspects.
The former capability was then moved to the new commercial focus area along with the monetization capability, while the latter was moved to the performance focus area.
\begin{table}[h]
\centering
\begin{tabular}{l|c}
Focus Area & Number of capabilities\\
\hline
\textbf{Community Engagement} & 4 \\
\textbf{Security} & 2 \\
\textbf{Communication} & 2 \\
\textbf{Lifecycle} & 5 \\
\textbf{Technical Management} & 4 \\
\textbf{Business Side} & 3 \\
\end{tabular}
\caption{API-m-FAMM version 0.2}
\label{tab:version02}
\end{table}
\section{Version 0.3}
\label{sec:version03}
More information was needed to determine whether practices and capabilities were suited to be included in the model with regards to their scope and relevance.
In order to resolve this, the collection of practices and capabilities was verified using information gathered from grey literature, such as online blog posts, websites, commercial API management platform documentation, and third-party tooling.
Doing so resulted in the following changes made with regards to the contents of the API-m-FAMM:
\begin{itemize}
\item \textit{Removal} of several practices that were found to be irrelevant, redundant, or too granular. For example, \textit{filtering spam calls}, which was originally uncovered as part of the SLR, was found to be redundant as this practice is already covered by practices such as \textit{DoS protection} and \textit{rate limiting}. Consequently, such practices were removed.
\item \textit{Addition} of several practices that were newly identified. For example, \textit{predictive analytics} was found to be a practice that is offered by multiple commercial API management platform providers. Similarly, \textit{including change logs} was found to be a practice that is recommended by practitioners as a best practice when updating APIs. Consequently, such practices were added to the API-m-FAMM.
\item \textit{Merging} of several practices that were found to be irrelevant, redundant, or too granular. For example, practices that were originally uncovered through the SLR, such as \textit{email-based support}, \textit{phone-based support}, and \textit{form-based support} were found to be redundant, as no significant difference with regards to their maturity may be discerned among these practices. Consequently, these practices were merged into one practice: \textit{establish communication channel}.
\item \textit{Splitting} of practices that were found to be composed of multiple practices thought to warrant separate, individual practices. For example, the \textit{black or whitelist IP addresses} practice was split up into the \textit{blacklist IP addresses} and \textit{whitelist IP addresses} practices, because these were found to be relevant practices on their own.
\item \textit{Relocation} of practices to different capabilities than those they were originally assigned to. For example, the \textit{OAuth 2.0 authorization} practice was moved from the \textit{authentication} capability to the newly introduced \textit{authorization} capability, as OAuth is considered to be an authorization protocol.
\item \textit{Renaming} of several practices, as well as updating descriptions and formulation of practice descriptions that were previously missing or incomplete. For example, the \textit{provide code samples} practice was renamed to \textit{provide FAQ with code samples} because it was found that these two practices often go hand in hand. Additionally, this practice's description was updated.
\item \textit{Identification} of dependencies among practices, either among practices within the same capabilities or among practices across different capabilities or focus areas. Some dependencies were found to be relatively straightforward, such as the \textit{multiple API versioning strategy} practice depending on the implementation of the \textit{maintain multiple APIs} practice. However, dependencies between practices belonging to different capabilities such as \textit{quota management} depending on \textit{rate limiting} or \textit{rate throttling} were also identified.
\item \textit{Arrangement} of practices based on their interrelated maturity with regards to the other practices in the capability they are assigned to. At this point in time, this was performed on a mostly subjective and empirical basis, and thus should be regarded as a first attempt to discern practices with regards to their relative maturity.
\item \textit{Formulation} of implementation conditions corresponding to each practice, which are aimed at providing practitioners with an overview of the necessary conditions that must be met before a practice may be marked as implemented.
\end{itemize}
The number of practices and capabilities that were added, removed, merged, split, relocated, or renamed as a result of the supplemental material validation process and the aforementioned discussion session is shown in Table~\ref{tab:ResultsSupplemental} below.
However, it should be noted that some practices that were added as a result of the online verification process were later removed as a result of the discussion session.
As such, the numbers corresponding to the \textit{added} and \textit{removed} operations presented in Table~\ref{tab:ResultsSupplemental} are slightly inflated.
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\textbf{Component} & \textbf{Added} & \textbf{Removed} & \textbf{Merged} & \textbf{Split} & \textbf{Relocated} & \textbf{Renamed}\\
\hline
Practice & 17 & 27 & 39 & 4 & 12 & 93 \\
Capability & 1 & 1 & 1 & 0 & 1 & 2 \\
\end{tabular}
\caption{Number of practices and capabilities added, removed, merged, split, relocated or renamed as a result of the supplemental material validation process and the discussion session.}
\label{tab:ResultsSupplemental}
\end{table}
At this stage of the design process, the model is grounded in literature, and is verified and supplemented by using grey literature.
As a result of these activities, the initial body of 114 practices and 39 capabilities that was extracted as a result of the SLR was refined and narrowed down to 87 practices and 23 capabilities, which are divided among six focus areas.
The contents of this version of the API-m-FAMM can be found in \emph{version 2} of this published source document on arXiv~\cite{mathijssen2021source}.
The general structure of the API-m-FAMM version 0.3 is presented in Figure~\ref{fig:api-m-famm03}. As shown, each individual practice is assigned to a maturity level within its respective capability. Additionally, it should be noted that practices cannot depend on practices belonging to another capability that have a higher maturity level. For example, practice 1.4.4 is dependent on the implementation of practice 1.2.3, resulting in a higher maturity level being assigned to the former of these practices.
Figure~\ref{fig:api-m-famm03} also shows that at this stage, 17 practices were added in addition to those extracted through the SLR. Furthermore, 14 new practices were introduced as a result of merging 39 former practices, as shown in Table~\ref{tab:ResultsSupplemental}. Moreover, descriptions based on grey literature were formulated for 18 practices for which adequate descriptions could not be identified in academic literature. Lastly, 6 practices are accompanied by descriptions that were formulated by the researchers themselves, based on empirical knowledge. Even though suitable descriptions could not be identified for these practices in academic literature or grey literature, they were included in this version of the API-m-FAMM because they were hypothesized to be relevant for practitioners. Among other things, this hypothesis is tested through expert interviews, which are part of the next phase in constructing the API-m-FAMM.
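The dependency constraint described above can also be checked mechanically. The following minimal Python sketch verifies, for a hypothetical assignment of maturity levels, that no practice depends on a prerequisite with a higher maturity level. The level values are illustrative stand-ins rather than the actual assignments of the API-m-FAMM; only the two dependency pairs stem from the model (1.4.4 depends on 1.2.3, and \textit{Implement Scaling} (3.1.5) depends on \textit{Implement Load Balancing} (3.1.2)).
\begin{verbatim}
# Hypothetical maturity levels; only the dependency pairs reflect the model.
maturity = {"1.2.3": 2, "1.4.4": 5, "3.1.2": 1, "3.1.5": 3}
dependencies = [("1.4.4", "1.2.3"),  # (practice, prerequisite)
                ("3.1.5", "3.1.2")]


def level_violations(maturity, dependencies):
    """Return dependencies whose prerequisite outranks the dependent practice."""
    return [(p, q) for p, q in dependencies if maturity[q] > maturity[p]]


print(level_violations(maturity, dependencies))  # [] -> assignment is consistent
\end{verbatim}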
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=0cm 0cm 0cm 0cm, width=\textwidth]{Figures/API-m-FAMMv0.3.pdf}
\caption{Version 0.3 of the API-m-FAMM and the focus areas, capabilities, and practices it consists of. Additionally, it is shown which capabilities and practices were newly introduced between API-m-FAMM v0.2 and v0.3, as well as for which practices descriptions were formulated based on supplemental material. Please consult the legend on the top left-hand side of the figure for more information regarding the differently shaped and/or colored components.}
\label{fig:api-m-famm03}
\end{figure*}
\section{Version 0.4}
\label{sec:version04}
Eleven expert interviews were conducted.
During these interviews, experts suggested many additions and changes to the API-m-FAMM's structure and contents, and were encouraged to elaborate on their motivation for these suggestions.
By transcribing and processing the recordings of all interviews, the numerous suggestions made by experts to either add, remove, merge, split, relocate, or rename focus areas, capabilities, and practices were compiled.
The frequencies of these suggested changes are shown in Table~\ref{tab:EvaluationChanges} below, grouped by the type of suggested change as well as the type of component they apply to. Additionally, these changes are visually represented in their entirety in Figure~\ref{fig:api-m-famm04a}, along with the number of experts that suggested a specific change. Evidently, the number of practices that were suggested to be added is relatively high. It should be noted that, while a large part of these practices were explicitly mentioned by experts, some were also indirectly extracted from transcripts as a result of comments experts had made. Additionally, no suggestions are rejected at this point; hence, all suggestions made by experts are taken into account and incorporated into Table~\ref{tab:EvaluationChanges} and Figure~\ref{fig:api-m-famm04a}.
\begin{table}[h]
\centering
\begin{tabular}{l|c|c|c|c|c|c}
\textbf{Component} & \textbf{Added} & \textbf{Removed} & \textbf{Merged} & \textbf{Split} & \textbf{Relocated} & \textbf{Renamed}\\
\hline
\textbf{Practice} & 50 & 5 & 3 & 3 & 9 & 3 \\
\textbf{Capability} & 7 & 0 & 0 & 2 & 2 & 2 \\
\textbf{Focus Area} & 1 & 0 & 0 & 0 & 0 & 3\\
\end{tabular}
\caption{Number of practices, capabilities, and focus areas that were suggested to be added, removed, merged, split, relocated or renamed by experts during interviews.}
\label{tab:EvaluationChanges}
\end{table}
\begin{figure*}
\centering
\includegraphics[page=1, clip, trim=7cm 4cm 9cm 0.5cm, width=0.8\textwidth]{Figures/API-m-FAMMv0.4a.pdf}
\caption{API-m-FAMM version 0.3 plus all suggested changes that were made by experts during interviews. Please consult the legend on the left-hand side of the figure for more information regarding the manner in which the colored outlines should be interpreted. Practices and capabilities that were not directly categorized by the expert during interviews are placed in the 'undecided' box on the top-left hand side.}
\label{fig:api-m-famm04a}
\end{figure*}
After having compiled all suggestions made by experts, extensive discussion sessions are held among all authors to analyze, discuss, and interpret them.
All suggested changes to either a focus area itself, or the capabilities or practices it consists of are then analyzed and interpreted through the help of the transcribed arguments that were provided by experts during the interviews.
As a result, numerous modifications are made to the API-m-FAMM, which are visualized in its entirety in Figure \ref{fig:api-m-famm04b}.
Additionally, some fundamental decisions are made with regards to the scope and contents of the API-m-FAMM.
\begin{itemize}
\item Firstly, it was decided that all practices that are contained in the model should be implementable \textit{without} the usage of an API management platform. This decision was made for several reasons. First of all, it was found that among the organizations at which the consulted experts are employed, only a small portion actively utilizes a third-party platform to manage their API(s). When asked, experts belonging to the category that has not incorporated an API management platform into their organization cited arguments such as wanting to avoid vendor lock-in, high costs, or simply not having a need for many of the functionalities provided by such management platforms. Oftentimes, the latter argument was tied to the organization currently exclusively using internal APIs, thus removing the need for a management platform to manage and expose any partner or public APIs altogether. Considering that these arguments may reasonably be hypothesized to also apply to other organizations wishing to consult the API-m-FAMM to evaluate and improve upon their API management related practices, any practices or capabilities that were found to be directly tied to the usage of an API management platform were removed from the model. For example, this was the case for the \textit{Visual Data Mapping} practice, which is exclusively provided by the \textit{Axway} API management platform\footnote{\url{https://www.axway.com/en/products/api-management}}, as well as the practices corresponding to the newly suggested \textit{Error Handling} capability, which are implementable through the use of the \textit{Apigee} platform\footnote{\url{https://cloud.google.com/apigee/api-management?hl=nl}}.
An additional reason for excluding such capabilities and practices is that they are likely to evolve throughout the coming years, which would in turn require the API-m-FAMM to be updated as well. In order to prevent this, the API-m-FAMM and the practices it comprises should be platform-independent. Lastly, the purpose of the API-m-FAMM is not to guide practitioners in selecting an appropriate commercial API management platform for their organization. Instead, the API-m-FAMM aims to guide organizations in assessing and evaluating their current maturity in terms of those processes that are considered to be best-practices and are at the core of API management, so that they may then develop a strategy towards implementing practices that are currently not implemented and desirable in further maturing the organization in terms of API management.
\item Secondly, many practices were deemed too granular, specific, or irrelevant to be included. Consequently, such practices were either removed or merged into a practice that is composed of these smaller practices. Examples of practices that were found to be too granular include newly suggested practices such as \textit{Event Participation}, \textit{Event Hosting}, and \textit{Organize Hackathons}. Additionally, since determining a difference among these practices in terms of their maturity was found to be unfeasible, they were instead merged into the \textit{Organize Events} practice and included in its description.
\item Thirdly, some practices that describe a specific protocol were renamed to be more generic. For example, the former \textit{OAuth 2.0 Authorization} practice was renamed to \textit{Standardized Authorization Protocol}, with a referral to the OAuth 2.0 protocol being included in its description instead. This was done to ensure that the API-m-FAMM remains functional and applicable in the future, since it is likely that new protocols will be developed and adopted across the industry. These concerns also applied to suggested practices corresponding to individual authentication methods such as client certificate and SAML authentication, which were ultimately merged into the \textit{Implement Authentication Protocol} practice and included in its description. An additional reason for doing so in the case of these authentication methods is that they each have their individual strengths and weaknesses, with one not always necessarily being 'better' or more mature than another. Furthermore, some methods may be more appropriate for some use cases than others.
\item Furthermore, some capabilities and their corresponding practices that were thought to apply to most organizations in general, not necessarily only those involved with API management, were excluded from the model. An example of this is the \textit{Financial Management} capability that was suggested to be added. Considering that practices such as \textit{Automated Billing}, \textit{Third-Party Payment Provider Integration}, and \textit{Revenue Sharing} are best practices that apply to commercially oriented organizations in general, they were removed. This decision was made to ensure that the contents of the API-m-FAMM are exclusively composed of practices that are directly tied to API management.
\item During interviews focused on the \textit{Lifecycle} focus area, experts were asked to elaborate on the manner in which their organization has implemented \textit{Governance}. Based on the answers given, however, it became clear that capturing processes related to governance in the form of practices is not feasible. This may largely be attributed to the observation that such processes seem to be inherent to specific characteristics of the organization, such as its culture, size, usage of a third-party API management platform, as well as the number of APIs that are used or exposed by the organization.
Some practices were suggested for addition, such as \textit{Define Naming Conventions}, \textit{Define Best Practices}, and \textit{Define Integration Patterns}. However, after having discussed these with experts in subsequent interviews, it was decided that these practices are too abstract and insufficiently concrete in comparison with other practices, considering that they may be interpreted in different ways by practitioners due to the varying organizational characteristics mentioned earlier. Hence, the \textit{Governance} capability that was originally part of the \textit{Lifecycle} focus area was removed, along with the \textit{Design-time Governance} and \textit{Run-time Governance} practices it was composed of.
\item A valuable suggestion that was made by experts is the addition of monitoring in terms of the amount of resources that calls to the API consume, such as CPU, disk, memory, and network usage. Considering that this monitoring perspective was previously missing alongside performance and health monitoring, as well as it being suggested by multiple experts independently from one another, the \textit{Resource Monitoring} practice was newly added. Similarly, this resource perspective was also found to be missing among the \textit{Traffic Management} capability, alongside the \textit{Request Limiting} and \textit{Request Throttling} practices. Hence, the \textit{Data Volume Limiting} practice was newly added.
\item Another fundamental change that was made to the API-m-FAMM is the renaming of the former \textit{Monitoring} focus area to \textit{Observability}. This rename was independently suggested by two experts, who argued that observability better describes the focus area, considering that the \textit{Analytics} capability was split into two capabilities: \textit{Monitoring} and \textit{Analytics}. This decision was made because experts were of the opinion that monitoring is concerned with gathering (real-time) metrics related to the API's health, performance, and resource usage, while analytics is concerned with aggregating these metrics so that insights may be formed and subsequent action may be taken based on them. As a result, the monitoring capability was added, and practices related to either monitoring or analytics were moved to the capabilities they are associated with.
\item Moreover, some practices that were originally posed from a passive perspective, were changed with the intention of being conducted in an active manner. For example, the \textit{Include Changelogs} practice was renamed to \textit{Distribute Changelogs}, and its description was changed so that its focus is changed from passive inclusion of changelogs in the reference documentation, to active distribution of changelogs to consumers of the API. Similarly, the \textit{Provide API Status Page} was renamed to \textit{Broadcast API Status}, as well as its description being changed to signify the operational status of the API being broadcasted to consumers in an active manner, as opposed to providing an API status page in a passive fashion. These changes were made due to the fact that when phrased in a passive manner, these practices were deemed to be too irrelevant to be included in the API-m-FAMM, considering that the level of maturity required to implement these practices is too low when compared to other practices. When phrased from an active perspective however, these practices can be considered to be best practices that an organization should strive to implement.
\item Finally, a major fundamental change was made with regards to the \textit{Lifecycle Control} capability. While practices belonging to this capability such as \textit{API Endpoint Creation}, \textit{API Publication}, and \textit{Import Pre-existing API} are considered to be an integral aspect of API management in both literature and industry, the decision was made to exclude these practices from the API-m-FAMM. This choice was made because being able to design, create, publish, and deploy an API is a precondition for implementing all other practices the model consists of. Moreover, during interviews it became clear that it was difficult for experts to rank these practices in terms of their maturity, considering that they are often performed in chronological order.
\end{itemize}
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=7cm 4cm 2cm 0.5cm, width=0.8\textwidth]{Figures/API-m-FAMMv0.4b.pdf}
\caption{API-m-FAMM v0.4, including all suggested changes that were made by experts during interviews, as well as the manner in which they were subsequently interpreted and applied by the researchers. Please consult the legend on the top left-hand side of the figure for more information regarding the manner in which the colored outlines and fills should be interpreted.}
\label{fig:api-m-famm04b}
\end{figure*}
Next, the practices are assigned to individual maturity levels.
This is done by using the results of the maturity ranking exercises during the interviews.
First however, all dependencies between practices are identified, which are depicted in Figure \ref{API-m-FAMM Dependencies}.
In this context, a dependency entails that the practices on which the practice in question depends are required to be implemented before that practice itself may be implemented.
These dependencies may occur: (1) between practices within the same capability; (2) between practices that are assigned to different capabilities within the same focus area; or (3) between practices that are assigned to different capabilities and focus areas.
In total, 34 dependencies were identified by analyzing literature stemming from the SLR and online supplemental material, as well as input received through expert interviews and the discussion sessions that were conducted among the researchers. The number of identified dependencies is shown for each focus area in Table \ref{tab:DependenciesTable}, as well as for each of the three dependency types mentioned.
\begin{table}[h]
\centering
\begin{tabular}{l c c c r}
\hline
\textbf{Focus Area} & \textbf{Within Capability} & \textbf{Within Focus Area} & \textbf{Between Focus Areas} & \textbf{Total} \\
\hline
Community & 3 & 0 & 0 & 3 \\
Security & 2 & 0 & 0 & 2 \\
Lifecycle Management & 3 & 1 & 2 & 6 \\
Observability & 0 & 6 & 0 & 6 \\
Performance & 4 & 0 & 2 & 6 \\
Commercial & 2 & 1 & 8 & 11 \\
\hline
\textbf{Total} & 14 & 8 & 12 & 34
\end{tabular}
\caption{The number of identified dependencies per focus area and per dependency type.}
\label{tab:DependenciesTable}
\end{table}
As an example of a dependency between practices within the same capability, implementation of the \textit{Implement Load Balancing} practice is required before the \textit{Implement Scaling} practice may be implemented.
An example of a dependency between practices that are assigned to different capabilities within the same focus area is the dependency between \textit{Enable Predictive Analytics} and \textit{Performance Monitoring}. The former practice belongs to the \textit{Analytics} capability, while the latter practice belongs to the \textit{Monitoring} capability, but both capabilities are contained within the \textit{Observability} focus area. An example of a dependency between practices that are assigned to different capabilities and focus areas may be observed in the case of the dependency between the \textit{Adopt Metering-based Monetization Model} and \textit{Resource Monitoring} practices. The former practice is assigned to the \textit{Monetization Strategies} capability within the \textit{Commercial} focus area, while the latter practice is assigned to the \textit{Monitoring} capability within the \textit{Observability} focus area.
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=1cm 3cm 8cm 1cm, width=0.7\textwidth]{Figures/API-m-FAMMv0.4dependencies.pdf}
\caption{The API-m-FAMM v0.4 after all changes had been applied, showing all dependencies that were identified between practices. In order to improve legibility, practices are not ranked in terms of their maturity in this figure.}
\label{API-m-FAMM Dependencies}
\end{figure*}
After having identified all dependencies between practices, all 34 practices that have one or more dependencies are juxtaposed in a matrix.
This is done by adhering to the constraint that practices cannot depend on practices that have a higher maturity level.
As a result, the foundation of the API-m-FAMM is formed, with practices ranging from maturity levels 1 to 10.
Using this structure as a base, all other practices are subsequently assigned to individual maturity levels within their respective capabilities.
These assignments are performed by using the results of the maturity ranking exercises that were performed by experts as one of the main sources of input.
By again using the \textit{Logging} capability as an example, the interpretation of such a maturity ranking exercise is visualized in Figure \ref{Maturity_Ranking_Interpretation}.
In this figure, it can be seen that the \textit{Activity Logging}, \textit{Access Logging}, and \textit{User Auditing} practices were ranked by 3 experts in terms of their perceived maturity.
An additional practice, \textit{Application Logging}, was suggested for addition.
However, this practice was removed because the decision was made to exclude applications in terms of abstraction from the API-m-FAMM, which is why it is outlined in red.
Additionally, the decision was made to include and move the \textit{Error Logging} practice to the \textit{Logging} capability.
Hence, this practice is outlined in green, and is included in this ranking exercise by incorporating this practice in the figure, along with the capability it was originally categorized with by the expert.
Furthermore, the \textit{Error Reporting} practice was moved to the \textit{Analytics} capability (as can be seen in Figure \ref{fig:api-m-famm04b}), which is why it is outlined in purple and excluded from this maturity ranking exercise.
Lastly, the remaining 3 practices that were suggested to be added are excluded, along with the \textit{Error Handling} capability as a whole, which is denoted by the red outlines.
\begin{figure}[h]
\centering
\includegraphics[page=1, clip, trim=1cm 0cm 1cm 0cm, width=0.7\textwidth]{Figures/API-m-FAMMv0.4maturityranking.pdf}
\caption{Conceptual overview representing a rough approximation of the way in which the experts' maturity rankings were interpreted and used as a starting point for performing the maturity level assignments.}
\label{Maturity_Ranking_Interpretation}
\end{figure}
Arrows are included that span from the lowest to the highest maturity level at which each practice was ranked by the experts. Dotted lines are attached to each practice and connected to these arrows with a small circle in order to highlight and compare the maturity assignments of the experts with one another. Subsequently, dashed lines are used to indicate a rough estimate of the average of these assignments, which are then mapped onto the maturity levels.
However, it should be noted that Figure \ref{Maturity_Ranking_Interpretation} was made for illustrative purposes, in order to provide the reader with a conceptual idea of the manner in which the maturity assignments were performed.
In practice, the maturity assignment of practices was done in a pragmatic manner, through discussion sessions among the researchers during which the experts' varying maturity rankings and their accompanying motivations and arguments were discussed and interpreted. Based on the outcome of these discussions, decisions were then made to assign practices to individual maturity levels, while taking the experts' opinions and maturity rankings into account.
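To make the dependency constraint concrete, the sketch below shows how such a level assignment could be automated in Python. It is a minimal illustration only: it uses three of the dependencies named in this section (with the dependency directions assumed for illustration), not the full API-m-FAMM data, and the actual assignments were made through the discussion sessions described above rather than by an algorithm.
\begin{verbatim}
from collections import defaultdict

# Three dependencies named in this section, as (prerequisite, dependent)
# pairs; the directions are assumed here for illustration.
dependencies = [
    ("Implement Load Balancing", "Implement Scaling"),
    ("Performance Monitoring", "Enable Predictive Analytics"),
    ("Monitor Resource Usage", "Adopt Metering-based Monetization Model"),
]

practices = {p for pair in dependencies for p in pair}
prereqs = defaultdict(set)
for pre, dep in dependencies:
    prereqs[dep].add(pre)

def assign_levels(practices, prereqs, max_level=10):
    """Assign each practice the lowest maturity level that respects the
    constraint that no practice depends on a higher-level practice."""
    levels = {}
    remaining = set(practices)
    while remaining:
        ready = {p for p in remaining if prereqs[p] <= levels.keys()}
        if not ready:
            raise ValueError("cyclic dependency between practices")
        for p in ready:
            levels[p] = min(max_level,
                            1 + max((levels[q] for q in prereqs[p]), default=0))
        remaining -= ready
    return levels

for practice, level in sorted(assign_levels(practices, prereqs).items(),
                              key=lambda kv: kv[1]):
    print(f"level {level}: {practice}")
\end{verbatim}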
Finally, all practices are renamed to fit a uniform syntactical structure, which starts with a verb, followed by one or more nouns.
For example, \textit{User Auditing} is renamed to \textit{Audit Users}, and \textit{Resource Monitoring} is renamed to \textit{Monitor Resource Usage}.
Furthermore, descriptions of the practices that are included in the API-m-FAMM after all changes had been applied are updated.
When possible, this is done using information and input that was provided by experts during interviews.
Ultimately, these activities produced a second, updated version of the API-m-FAMM, which is shown in Figure \ref{API-m-FAMM_2.4} and consists of 6 focus areas, 20 capabilities, and 81 practices.
These descriptions are available through \emph{version3} of this published source document on arXiv~\cite{mathijssen2021source}.
\begin{figure*}[!h]
\centering
\includegraphics[page=1, clip, trim=0.5cm 0.5cm 0.5cm 0.5cm, width=\textwidth]{Figures/API-m-FAMMv0.4.pdf}
\caption{API-m-FAMM v0.4, which includes the assignment of all practices to their respective maturity levels, which range from level 1 to level 10.}
\label{API-m-FAMM_2.4}
\end{figure*}
\section{Version 0.5}
\label{sec:version05}
After having updated the API-m-FAMM to incorporate all findings from the interviews, a second evaluation cycle was conducted.
This cycle serves as a means for evaluating and verifying whether experts agree with the fundamental decisions that were made, as well as for gathering feedback on the way suggestions made by experts were interpreted and on the maturity levels to which practices had been assigned.
This second evaluation cycle consists of unstructured interviews with three experts originating from the same sample of experts that were interviewed during the first evaluation cycle.
During these interviews, the changes made as a result of the previous evaluation cycle, as well as the newly introduced maturity assignments, were presented and discussed.
Since all experts agreed with the fundamental decisions that were made, no major further adjustments were made to the API-m-FAMM as a result of this evaluation cycle.
\section{Version 1.0}
\label{sec:version10}
The final phase of the API-m-FAMM construction, the \emph{Deploy} phase, was executed through case studies.
These case studies were conducted by evaluating six software products.
Some additional changes were made to practices as a result of the discussion sessions with practitioners after the evaluation.
One practice was removed altogether, and the descriptions of six practices were modified. Specifically, the following changes were made:
\begin{itemize}
\item \textbf{Perform Request Rate Limiting}: this practice was extended to also comprise error limiting. In the case of AFAS Profit, this is implemented by placing consumers on a temporary denylist when they perform an excessive number of faulty calls within a predefined time span.
\item \textbf{Prevent Sensitive Data Exposure}: this practice was removed. During discussions, this practice caused confusion due to the observation that this practice is already captured by the \textit{Implement Transport Layer Encryption} and \textit{Decouple Internal \& External Data Model} practices. Additionally, after further investigation this practice was deemed to be out of scope, considering that the scope of this practice involves app data storage in general, as opposed to API management.
\item \textbf{Implement Predictive Scaling}: the description of this practice was modified. Originally, the description mentioned that this practice may be implemented ``manually or automatically'', which caused confusion because these methods are already captured in the \textit{Implement Scaling} practice. Because predictive scaling is envisioned by practitioners and the researchers to be done automatically, the manual element was removed from the description.
\item \textbf{Monitor Resource Usage}: the description of this practice was expanded. During discussions, it became clear that monitoring resources does not always specifically involve metrics such as CPU and disk usage. Instead, rough approximations may be used to determine resource usage instead, which is why the description was expanded to clarify this.
\end{itemize}
In addition to these changes, a small number of changes were made as a result of practitioners identifying errors such as typos.
The final version of the model can be seen in Figure~\ref{fig:api-m-famm}.
\clearpage
\bibliographystyledata{elsarticle-num}
\bibliographydata{apimanagement}
\clearpage
\bibliographystyle{elsarticle-num}
\section{Introduction}
The scale of Internet-of-Things (IoT) networks is getting increasingly large in recent years. According to Cisco \cite{Cisco}, the number of connected IoT devices is due to reach $75$ billion by $2025$. In the foreseeable future, massive IoT networks with millions of connections can pose great challenges to network monitoring, management, and control \cite{SDIoT1,challenge2007,APNC}.
To enable efficient and dynamic network monitoring, a recent trend in large-scale IoT networks is the fusion of software-defined networking (SDN) \cite{SDNMagazine} and IoT, dubbed software-defined IoT networking (SDIoT)~\cite{SDIoT1,SDIoT2,TCOM,Significant2020}. The essence of SDIoT, as that of SDN, is to disassociate the data plane (forwarding process of network packets) from the control plane (monitoring and configuration). In particular, the control plane is managed by one or more logically centralized controllers that have a global view of the network. SDIoT greatly simplifies the network monitoring process in large-scale IoT networks because all the IoT devices are equipped with programmable interfaces \cite{monitoring}, with which the controller can query/sample the devices for statistics of each flow passing through them. In a nutshell, the controller monitors the SDIoT network by simple means of per-flow sampling \cite{openTM,Opennetmon,DCM,SLAM}.
As an example, let us elaborate on OpenTM \cite{openTM}, a network monitoring system implemented on SDN, to explain how per-flow sampling works in large-scale networks. The goal of OpenTM is to estimate the traffic matrix (TM) of the network, i.e., a traffic map reflecting the volumes of traffic flowing between all the edge devices of the network \cite{TMSigMetrices,TMprimer}. To this end, OpenTM keeps track of the statistics of all active flows in the network and monitors each flow independently. For each active flow, the controller 1) gets the routing information and determines the flow path; 2) periodically samples flow statistics, such as flow byte and packet count, from one of the devices on the flow path; 3) constructs the TM by adding up statistics of flows from the same source to the same destination.
As can be seen, per-flow sampling is well suited for large-scale SDIoT networks in that
1) The controller can directly communicate with the IoT devices thanks to the measurement infrastructure provided by the SDIoT. This allows lightweight sampling operations that yield real-time flow statistics.
2) In lieu of centralized sampling, per-flow sampling scales well with the network size and adapts to the dynamic nature of the IoT-network topology -- IoT devices are often deployed without paying particular attention to the topology they form and the post-deployment topology can change frequently because of the displacement of IoT devices, e.g., industrial IoT (IIoT) networks.
Consider sampling a single flow path. A decision to be made by the controller is which IoT device to sample in each sampling epoch. In traditional SDN, an important criterion to be considered when making the sampling decision is the sampling preference of the controller \cite{openTM,Opennetmon,SLAM,FlexMonitor}.
\textbf{Sampling preference of the controller} -- The controller may have a preference to sample some of the devices on the flow path. In OpenTM, for example, different devices on a flow path can observe different traffic volumes for the flow due to the packet loss. OpenTM aims to capture the amount of traffic that arrives at the destination. Therefore, the controller prefers to sample the last device of the path \cite{openTM} because it is closest to the destination, and the traffic volumes sampled from it are considered as the most accurate.
The sampling preference of the controller also exists in other applications. OpenNetMon \cite{Opennetmon} and SLAM \cite{SLAM} are OpenFlow controller modules developed to measure per-flow latency, packet loss, and throughput. For these intents, the controller prefers to sample the first and the last devices because the difference between the statistics collected from them gives the most accurate measurements. In FlexMonitor \cite{FlexMonitor}, on the other hand, the controller prefers to sample the devices that yield the minimal communication cost with the controller.
In addition to the sampling preference of the controller, another design dimension that merits particular treatment in SDIoT networks is the load balancing among IoT devices.
\textbf{Load balancing among IoT devices} -- Devices consume extra energy to execute the sampling tasks. However, unlike the traditional SDN wherein the network nodes are routers and switches that are connected to power supplies, the network nodes in SDIoT are low-powered IoT devices. Therefore, a fair sampling policy should be able to distribute the sampling tasks evenly to the IoT devices so that the energy consumptions and lifespan of different devices on the flow path are balanced \cite{DCM,Tradeoff2013}.
There is a clear tradeoff between the above two criteria. As far as the sampling preference is concerned, the controller prefers to sample some of the IoT devices more frequently as they yield more accurate flow statistics. On the other hand, in terms of load balancing, it is preferred to sample the IoT devices uniformly so that they carry equal average loads. An outstanding issue in SDIoT networks is how to devise a judicious flow-sampling policy that strikes the best tradeoff between these two criteria.
To fill this gap, this paper formulates the flow sampling problem in SDIoT networks and investigates different sampling policies that balance the controller's sampling preference (i.e., more accurate statistics) and load balancing among IoT devices. In particular, we model the flow sampling problem as a discrete Markov decision process (MDP) \cite{MDPBook,POMDP} with the {\it state} being a measurement of load balance among devices.
The sampling policy of the controller is a mapping from state to an {\it action} (i.e., a chosen device), and different actions yield different sampling accuracies.
In successive time slots, the controller follows its sampling policy and makes a sequence of independent decisions to sample one of the IoT devices on the flow path.
The quality of an action at a state is reflected by a {\it cost} associated with this state-action pair. This cost function is designed to take both sampling accuracy and load balancing among IoT devices into account. The optimal sampling policy is then defined as the policy that minimizes the average cost on an infinite time horizon.
Three classes of sampling policies are explored in this paper as solutions to the MDP: the optimal policy, the state-independent policies, and the index policies.
\textbf{The optimal policy} -- The optimal policy is derived by solving the MDP using stochastic dynamic programming (DP) \cite{MDPBook}. Although optimal, the relative value iteration algorithm for stochastic DP is computationally intensive: its complexity grows exponentially with the increase of the number of IoT devices on the flow path. This limits the scalability of stochastic DP when the sampling problem involves a large number of IoT devices.
\textbf{State-independent policy} -- As the name suggests, state-independent policies make the sampling decision without considering the current states of the IoT devices. We analyze two state-independent policies implemented in OpenTM \cite{openTM}: a uniform sampling policy and a non-uniform sampling policy. For each flow path, the uniform policy instructs the controller to sample the IoT devices uniformly at random. The non-uniform policy, on the other hand, indexes the devices on the flow path so that devices with larger indexes are closer to the destination. In each decision epoch, the non-uniform policy randomly generates two integers and instructs the controller to sample the device indexed by the larger integer. We further generalize these two state-independent policies to a largest-order-statistic policy and a weighted-probability policy that have better performance. In particular, the weighted-probability policy is the optimal stationary state-independent policy.
Overall, state-independent policies have very low complexity, hence are easy to implement in practice. Their performance, however, is suboptimal in general.
\textbf{Index policies} -- To devise low-complexity policies with good performance, we consider a class of index policies to solve the MDP. The Whittle index \cite{Whittle1988} refers to an index policy proposed by Whittle to solve restless multi-armed bandit (RMAB) problems \cite{GittinsBook}. RMAB is a sequential decision problem where, at each time, one or more choices must be made among all available Markovian arms/jobs. The Whittle index associates each arm with an index, and chooses the arm with the largest index at each decision epoch \cite{Whittle1988}. By so doing, the original high-dimensional decision problem is decoupled to multiple one-dimensional problems of computing the individual indexes of the jobs/arms, hence the computational complexity of the Whittle index grows linearly in the number of arms. Thanks to its low complexity and excellent performance, the framework of RMAB and the Whittle index solution has been widely used to solve the problem of route planning for unmanned military aircraft \cite{RMABUAV}, opportunistic communication channel usage \cite{RMABIT,Kadota1}, and sensor management \cite{RMABSensor}, to name a few.
This paper formulates our MDP as an RMAB problem and devises a Whittle index policy to solve the MDP. The Whittle index is derived in closed form. Simulation results show that 1) the Whittle index policy performs as well as the optimal policy derived from stochastic DP when the number of IoT devices on the flow path is small; 2) the Whittle index policy outperforms all the state-independent policies. Compared with the uniform policy and the largest-order-statistic policy, the Whittle index policy reduces the average cost by $66.4\%$. Compared with the weighted-probability policy, the Whittle index policy reduces the average cost by $33.4\%$.
The Whittle index policy has satisfactory average-cost performance and low computation complexity. Yet, as the optimal policy does, it relies on perfect knowledge of the network dynamics for ``planning''. This prior knowledge, however, may not be available to the controller in practice. In view of this, this paper further puts forth a second-order index policy inspired by the form of the Whittle index. The second-order index policy is the most desired policy among all as it requires no prior knowledge of the network dynamics while having all the advantages of the Whittle index. Simulation results show that the performance gap between the second-order index policy and the Whittle index is negligible.
\section{Problem Formulation}\label{sec:II}
\input{SecII.tex}
\section{A Lower Bound and the Optimal Policy}\label{sec:III}
\input{SecIII.tex}
\section{State-independent Policies}\label{sec:IV}
\input{SecIV.tex}
\section{Index Policies}\label{sec:V}
\input{SecV.tex}
\section{Numerical and Simulation Results}\label{sec:VI}
\input{SecVI.tex}
\section{Conclusion}\label{sec:Conclusion}
In software-defined Internet-of-Things networking (SDIoT), the controller samples each active flow to gather network information for traffic engineering and management.
A good sampling policy should sample the IoT devices to meet the controller's sampling preference and balance the query loads on the IoT devices.
In addition, a practical sampling policy should be computation-friendly, and has little reliance on prior knowledge of the network dynamics since they may be unavailable in practice.
The policies that meet these requirements, to our knowledge, are lacking in the literature.
To fill this research gap, this paper investigated the flow sampling problem in large-scale SDIoT networks, and studied the performance of different policies with the above criteria. Our main contributions are as follows:
\begin{enumerate}
\item We formulated the flow sampling problem in SDIoT networks by a Markov decision process (MDP). The optimal policy to this MDP is defined as the policy that makes the best tradeoffs between sampling accuracy and load balancing among IoT devices. We solved the MDP by a relative value iteration algorithm and derived the optimal policy.
\item We analyzed two state-independent policies previously proposed by others and generalized them to a largest-order-statistic policy and a weighted-probability policy. The weighted-probabi\-li\-ty policy was shown to be the optimal stationary state-independent policy. The performance of these policies was derived and validated by simulation results.
\item We transformed the MDP into a restless multi-armed bandit (RMAB) problem that admits a Whittle index policy. The closed-form Whittle index was derived. The Whittle index policy is near-optimal and has better performance than the previously proposed state-independent policies and their generalizations. The Whittle index policy, however, requires prior knowledge of the network dynamics.
\item Inspired by the Whittle index policy, we put forth a second-order index policy. This policy meets all the expectations we have for a practical policy: it is easy to compute, strikes very good tradeoffs between sampling accuracy and load balancing, and does not require any prior knowledge of the network dynamics.
\end{enumerate}
\appendices
\section{A Lower Bound to The Average Cost}\label{sec:AppA}
\input{AppendixA.tex}
\section{}\label{sec:AppB}
\input{AppendixB.tex}
\section{Performance of the weighted-probability sampling policy}\label{sec:AppC}
\input{AppendixC.tex}
\section{Proof of Indexability}\label{sec:AppE}
\input{AppendixE.tex}
\section{}\label{sec:AppF}
\input{AppendixF.tex}
\section{}\label{sec:AppG}
\input{AppendixG.tex}
\section{}\label{sec:AppH}
\input{AppendixH.tex}
\bibliographystyle{IEEEtran}
\subsection{Flow Sampling}
To formulate the flow sampling problem, let us first introduce the definitions of sampling accuracy and the measurement of querying loads.
\begin{defi}[Accuracy]
We denote by $\varphi_i\in[0,1]$ the accuracy of the statistics collected from the $i$-th IoT device. The parameters $\{\varphi_i:i=1,2,...,M\}$ can take any form in general and larger $\varphi_i$ is desired in each sampling operation.
\end{defi}
To get intuitive results (and be able to compare with prior works), sometimes we may set $\varphi_i=\sigma^{M-i}$, where $\sigma\in(0,1]$ is a constant. That is, we consider a homogeneous network where the packet loss rates at the IoT devices are the same and the statistics collected from the devices closer to the destination are more accurate.
To measure the querying loads imposed on each IoT device, we let the controller maintain $M$ counters, each of which is associated with an IoT device.
\begin{defi}[Counters]
The $i$-th counter $n_i$ associated with the $i$-th IoT device records the number of slots since the last slot the $i$-th device was sampled. Over time, $n_i$ evolves in the following way:
\begin{eqnarray}\label{eq:II1}
n_i^{t+1}=\left\{
\begin{array}{lll}
0, &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \textup{if the $i$-th device is sampled in slot $t$;}\\
n_i^t+1, &&\!\!\!\!\!\!\!\! \textup{otherwise,}
\end{array} \right.
\end{eqnarray}
where we use superscript to denote time and subscript to denote the index of the IoT device. The counters are updated at the end of a time slot.
\end{defi}
We emphasize that the evolution of the counters in \eqref{eq:II1} can be triggered by not only the sampling of the controller on path $\overline{AB}$, but also the sampling operation on any other flow path which intersects with path $\overline{AB}$. Take Fig.~\ref{fig:1} for example. There are $M = 3$ IoT devices on path $\overline{AB}$, and there is another flow path $\overline{A'B'}$ that intersects with $\overline{AB}$ at the second device (i.e., the second device is a crosspoint). Suppose the counter array of the three IoT devices on $\overline{AB}$ is updated to $\{2,3,1\}$ at the end of slot $t-1$, and the controller decides to sample the third device of $\overline{AB}$ in slot $t$,
\begin{enumerate}[a)]
\item If the controller also samples path $\overline{A'B'}$ at the crosspoint, the counter array associated with $\overline{AB}$ evolves to $\{3,0,0\}$ because both the second and third devices are sampled by the controller in slot $t$.
\item Otherwise, if the controller does not sample path $\overline{A'B'}$ at the crosspoint, the counter array evolves to $\{3,4,0\}$.
\end{enumerate}
Succinctly speaking, the counter of an IoT device will be reset to $0$ as long as it is sampled during slot $t$, whether it is sampled by flow path $\overline{AB}$ or by any other flow paths.
In large-scale networks, the controller monitors and samples the flow paths independently.
Consider a specific path $\overline{AB}$ with $M$ IoT devices. We model the event that a device is sampled by other flow paths (other than $\overline{AB}$) as a random variable. Specifically, define the event $H^t_i$: the $i$-th device is sampled by flows other than $\overline{AB}$ in slot $t$.
We assume the events $H^t_i$, $\forall i$, follow independent Bernoulli distributions with parameters $p_i$, and are time-invariant (constant over time). That is, the $i$-th device is sampled by flows other than $\overline{AB}$ with probability $p_i$ in a time slot. In this context, the evolution of $n^t_i$ in \eqref{eq:II1} can be rewritten as follows:
\begin{enumerate}[a)]
\item If the $i$-th device is sampled by $\overline{AB}$ in slot $t$.
\begin{eqnarray}\label{eq:II_tran1}
n^{t+1}_i = 0, \textup{w. p. $1$};
\end{eqnarray}
\item If the $i$-th device is not sampled by $\overline{AB}$ in slot $t$.
\begin{eqnarray}\label{eq:II_tran2}
n_i^{t+1}=\left\{
\begin{array}{lll}
0, &&\!\!\!\!\!\!\!\! \textup{w. p. $p_i$,}\\
n_i^t+1, &&\!\!\!\!\!\!\!\! \textup{w. p. $1-p_i$.}
\end{array} \right.
\end{eqnarray}
\end{enumerate}
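As a minimal sketch of these dynamics, the following Python fragment updates the counter array for one slot according to \eqref{eq:II_tran1} and \eqref{eq:II_tran2}; the round-robin policy and parameter values are placeholders for illustration.
\begin{verbatim}
import random

def step_counters(counters, action, p, rng=random):
    """One-slot counter update: the device sampled on this path resets to 0;
    every other device i resets with probability p[i] (sampled by another
    flow path) and otherwise ages by one slot."""
    nxt = []
    for i, n in enumerate(counters):
        if i == action or rng.random() < p[i]:
            nxt.append(0)
        else:
            nxt.append(n + 1)
    return nxt

# Toy run with M = 3 devices and cross-traffic probability p_i = 0.1.
random.seed(1)
p = [0.1, 0.1, 0.1]
counters = [0, 0, 0]
for t in range(5):
    action = t % 3                 # placeholder round-robin policy
    counters = step_counters(counters, action, p)
    print(t, counters)
\end{verbatim}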
The goal of flow sampling is to discover a sampling policy that strikes a good tradeoff between sampling accuracy and load balancing among devices.
An example of such a policy is the non-uniform sampling policy proposed and implemented in \cite{openTM}.
For each flow path, the non-uniform policy randomly generates two integers in each decision epoch and instructs the controller to sample the device indexed by the larger integer.
We will analyze the non-uniform policy in Section~\ref{sec:IV} and further generalize it to a largest-order-statistic policy.
In this section, let us first formulate the flow sampling problem as an MDP and define quantitatively what is a good sampling policy.
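For concreteness, the following Python sketch implements the two randomized rules discussed above as we understand them: the uniform rule and a largest-order-statistic rule that draws $G$ integers and samples the device with the largest one ($G=2$ recovers the non-uniform policy of \cite{openTM}). Device indices are 0-based here.
\begin{verbatim}
import random

def uniform_policy(M, rng=random):
    """Sample one of the M devices uniformly at random."""
    return rng.randrange(M)

def largest_order_statistic_policy(M, G=2, rng=random):
    """Draw G integers uniformly from {0,...,M-1} and sample the device
    with the largest one; G = 2 recovers the non-uniform policy above."""
    return max(rng.randrange(M) for _ in range(G))

random.seed(0)
print([largest_order_statistic_policy(5) for _ in range(10)])
\end{verbatim}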
\begin{rem}
In existing implementations of flow sampling, the sampling and monitoring functions are defined in the network layer \cite{openTM,Opennetmon,SLAM,FlexMonitor}. Therefore, this paper formulates the flow sampling problem for SDIoT networks assuming error-free sampling operations thanks to the error correction code in the PHY layer and the automatic repeat request in the MAC layer.
The system model can be further generalized to a cross-layer design wherein the sampling function is defined in the PHY layer and hence is error-prone.
\end{rem}
\subsection{An MDP Formulation}
The problem of discovering the optimal flow sampling policy can be described as a discrete MDP. Specifically, at the beginning of a slot $t$, the controller observes a state of the counter array $s^t=\{n^t_i:i=1,2,...,M\}$.
Given this observation, the controller chooses an action $a^t$ (i.e., which device to sample) following its sampling policy $\mu$, and executes $a^t$ in slot $t$. The action produces two results: 1) an immediate cost $C(s^t)$ is incurred (defined later), and 2) the system evolves to a new state $s^{t+1}$ in the next slot as per the transition probability defined below.
\begin{eqnarray}\label{eq:III_tranPr}
&&\hspace{-1cm} P(s^{t+1}\left.\right| s^t, a^t=j)= \\
&&\hspace{-1cm} \qquad \prod_{i=1,2,...,M, i\neq j}\left\{p_i\mathbbm{1}_{n^{t+1}_i=0} + (1-p_i)\mathbbm{1}_{n^{t+1}_i=n^{t}_i+1} \right\}, \nonumber
\end{eqnarray}
where $\mathbbm{1}$ is an indicator function, and
\begin{eqnarray*}
&& s^t=\left(n^t_1, n^t_2, ..., n^t_{j-1}, n^t_j, n^t_{j+1}, ..., n^t_M\right), \\
&& s^{t+1}=\left(n^{t+1}_1, n^{t+1}_2, ..., n^{t+1}_{j-1}, n^{t+1}_j=0, n^{t+1}_{j+1}, ..., n^{t+1}_M\right).
\end{eqnarray*}
Eq.~\eqref{eq:III_tranPr} defines the probability that the controller evolves from $s^t$ to $s^{t+1}$ if action $a^t=j$ is executed in slot $t$. Specifically, 1) the $j$-th counter $n^{t+1}_j$ is reset to $0$ deterministically; 2) the $i$-th counter $n^{t+1}_i$, $i\neq j$ is reset to $0$ with probability $p_i$, and evolves to $n^t_i+1$ with probability $1-p_i$. The evolutions of all counters are independent. Thus, $P(s^{t+1}\left.\right| s^t, a^t=j)$ is a product of $M-1$ terms, each of which is $p_i$ or $1-p_i$, depending on the value of $n^{t+1}_i$, $i\neq j$.
The same decision problem is faced by the controller in all the subsequent slots, but with different observations and corresponding actions.
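The product form of \eqref{eq:III_tranPr} is straightforward to evaluate numerically. Below is a minimal Python sketch (with 0-based device indices); the final line reproduces the Fig.~\ref{fig:2} transition from $s^1=(1,1,1,0)$ to $s^2=(2,0,0,1)$ under $a^1=3$, assuming $p_i=0.1$ for all devices.
\begin{verbatim}
def transition_prob(s, s_next, action, p):
    """P(s'|s, a=j): the sampled counter must be 0; every other counter
    independently resets w.p. p[i] or increments w.p. 1 - p[i]."""
    if s_next[action] != 0:
        return 0.0
    prob = 1.0
    for i, (n, n_next) in enumerate(zip(s, s_next)):
        if i == action:
            continue
        if n_next == 0:
            prob *= p[i]
        elif n_next == n + 1:
            prob *= 1.0 - p[i]
        else:
            return 0.0
    return prob

# Transition of Fig. 2, slot 1: s = (1,1,1,0), a = third device (0-based 2).
p = [0.1, 0.1, 0.1, 0.1]
print(transition_prob((1, 1, 1, 0), (2, 0, 0, 1), action=2, p=p))  # 0.081
\end{verbatim}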
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{./F2.pdf}\\
\caption{An example of the state transitions in the MDP associated with the flow sampling problem.}
\label{fig:2}
\end{figure}
An example of the state transitions is given in Fig.~\ref{fig:2}, wherein $M = 4$. As can be seen, the system starts with state $s^0=(0,0,0,0)$.
In the beginning of slot $t = 0$, the controller takes action $a^0=4$, and no event $H^0_i$ happens during slot $0$. Thus, the state transits to $s^1=(1,1,1,0)$ at the end of slot $0$ because only the fourth device is sampled.
In slot $1$, the controller takes action $a^1=3$, and there is an event $H^1_2$, meaning that the second device is a crosspoint and is sampled by another flow during slot $1$.
Thus, the state transits to $s^2=(2,0,0,1)$ at the end of slot $1$ because both the second and third devices are sampled.
In each slot, an immediate cost is incurred as the penalty of being in state $s^t$, as defined below.
\begin{defi}[Immediate cost and average cost]
The immediate cost of being in state $s^t=\{n^t_i:i=1,2,...,M\}$ is defined as
\begin{eqnarray}\label{eq:II_cost}
C(s^t)=\sum_{i=1}^{M}\varphi_i n^t_i.
\end{eqnarray}
A given policy $\mu$ instructs the controller to traverse through a series of states. The average cost incurred by this policy over the infinite-time horizon is defined as
\begin{eqnarray}\label{eq:II_avecost}
J_\mu=\lim_{T\rightarrow\infty}\mathbb{E}_\mu \left[\frac{1}{T}\sum^{T-1}_{t=0} C(s^t) \right].
\end{eqnarray}
\end{defi}
As can be seen, we define the immediate cost as a sum of the counter values $n^t_i$ weighted by the accuracies $\varphi_i$. In so doing, the controller favors sampling 1) the more accurate device if two or more devices have the same counter values, and 2) the device with the larger counter value if two or more devices are equally accurate, since doing so reduces the immediate cost the most.
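The immediate cost \eqref{eq:II_cost} and a Monte-Carlo estimate of the average cost \eqref{eq:II_avecost} can be sketched as follows; the horizon length, seed, and the uniform placeholder policy are illustrative choices only.
\begin{verbatim}
import random

def immediate_cost(counters, phi):
    """C(s) = sum_i phi_i * n_i."""
    return sum(a * n for a, n in zip(phi, counters))

def average_cost(policy, M, phi, p, T=200_000, seed=0):
    """Monte-Carlo estimate of the long-run average cost J_mu under policy."""
    rng = random.Random(seed)
    counters = [0] * M
    total = 0.0
    for _ in range(T):
        total += immediate_cost(counters, phi)
        a = policy(counters)
        counters = [0 if (i == a or rng.random() < p[i]) else n + 1
                    for i, n in enumerate(counters)]
    return total / T

M, sigma = 3, 0.8
phi = [sigma ** (M - 1 - i) for i in range(M)]   # phi_i = sigma^(M-i), 0-based
p = [0.1] * M
print(average_cost(lambda s: random.randrange(M), M, phi, p))
\end{verbatim}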
The optimal policy, denoted by $\mu^*$, is the policy that minimizes the average cost over the infinite-time horizon, giving,
\begin{eqnarray}\label{eq:II_opt_policy}
\mu^*=\arg\min_\mu J_\mu.
\end{eqnarray}
\begin{rem} An alternative way to define the counters $n^t_i$ is the number of times that the $i$-th device has been sampled up until slot $t$. That is, counter $n^t_i$ is increased by $1$ if the $i$-th device is sampled in slot $t$, and frozen otherwise (a setup akin to the standard multi-armed bandit (MAB) problem \cite{weber1992gittins}). However, given this definition, the MDP associated with our flow sampling problem is very tricky to handle because $n^t_i$ grows indefinitely over time. In particular, the states of the MDP do not communicate. Our definition of the counters in \eqref{eq:II1} circumvents this issue and renders the problem solvable.
\end{rem}
\subsection{The decoupled problem}
Faced with an $M$-dimensional MDP, it is inevitable that the computational complexity of the optimal policy increases exponentially with the number of devices $M$. A possible scheme that admits linear-complexity policies is to decouple the $M$-dimensional problem into $M$ one-dimensional problems. Decoupling is the main idea of a series of index policies for solving MAB problems.
When sampling the $M$ IoT devices on a flow path, the state evolution of each device is a controlled Markov process independent of the other devices. Specifically, the evolution of $n_i$ is controlled by the ``sample'' action (i.e., $n_i$ goes to $0$ once sampled, and to $n_i+1$ otherwise), and is independent of how $n_j$, $j\neq i$, evolves.
Let us consider a decoupled problem of sampling only one device. To simplify the notation, we remove the subscript $i$ from all the definitions in Section \ref{sec:II} since there is only one device. The state of the device is then $s=\{n: n\in\mathbb{N}^0\}$, and the action space is $a\in\{0,1\}$, where $0$ and $1$ correspond to ``rest'' and ``sample'', respectively. The state transition probability is given by
\begin{eqnarray*}
\left\{
\begin{array}{lll}
P\left(s^{t+1}=0 \left|\right. s^t=n, a^t=1 \right) = 1, & \\
P\left(s^{t+1}=0 \left|\right. s^t=n, a^t=0 \right) = p, & \\
P\left(s^{t+1}=n+1 \left|\right. s^t=n, a^t=0 \right) = 1-p, &
\end{array}
\right.
\end{eqnarray*}
The immediate cost incurred by being in state $s^t$ and executing $a^t$ is
\begin{eqnarray*}
\left\{
\begin{array}{lll}
C\left(s^t=n, a^t=1 \right) = c+\varphi n, & \\
C\left(s^t=n, a^t=0 \right) = \varphi n, &
\end{array}
\right.
\end{eqnarray*}
where $\varphi$ is the accuracy associated with this device, and $c\geq 0$ is a fixed sampling cost (defined later).
The optimal policy $\overline{\mu}^*$ for the decoupled problem is defined as
\begin{eqnarray}\label{eq:V_opt_policy}
\overline{\mu}^*=\arg\min_{\overline{\mu}} \lim_{T\rightarrow\infty} \mathbb{E} \left[\frac{1}{T}\sum_{t=0}^{T-1}C(s^t,a^t) \right].
\end{eqnarray}
Compared with the original $M$-device sampling problem, the decoupled problem introduces a fixed sampling cost $c$. Without this fixed sampling cost, the controller would keep sampling the device to minimize \eqref{eq:V_opt_policy}. To avoid this, we artificially introduce a fixed cost $c$ for each sampling operation. As per Whittle's argument, we aim to find the sampling cost $c^*$ for which it is equally optimal to ``sample'' and ``rest'' (i.e., the expected costs incurred by ``sample'' and ``rest'' are the same). In doing so, $c^*$, i.e., the Whittle index, acts as a measurement of how much the controller is willing to pay to sample this device.
In the original $M$-device sampling problem, we could compute the corresponding Whittle index for individual devices in each decision epoch, and sample the device with the largest Whittle index.
\subsection{Solving the Decoupled Problem}
The decoupled problem is also a controlled MDP. Given a sampling cost $c$, the optimal solution to the decoupled problem can be obtained by modifying \eqref{eq:III_bellman} as
\begin{eqnarray}\label{eq:V_bellman}
&&\hspace{-0.4cm} g^* + \bm{h}^*[n] = \min \big\{c+\varphi n + \bm{h}^*[0], \nonumber\\
&&\hspace{1cm} \varphi n + p \bm{h}^*[0] + (1-p) \bm{h}^*[n+1] \big\},
\end{eqnarray}
where the two terms inside the minimization operation correspond to the costs incurred by the actions ``sample'' and ``rest'', respectively. Without loss of generality, we choose state $n=0$ as the reference state and set $\bm{h}^*[0]=0$. Thus,
\begin{eqnarray}\label{eq:V_bellman2}
\bm{h}^*[n]= \varphi n + \min \left\{c, (1-p) \bm{h}^*[n+1] \right\} - g^*.
\end{eqnarray}
Eq.~\eqref{eq:V_bellman2} defines the relative value function of each state $n$ under the optimal policy for a given sampling cost $c$.
\begin{prop}[solution to the decoupled problem]\label{thm:5}
The optimal policy $\overline{\mu}^*$ to the decoupled problem is a threshold policy. For a given sampling cost $c$, there exists an integer threshold $\Gamma(c)$ such that 1) if a state $n<\Gamma(c)$, the optimal policy is to ``rest'', and 2) if a state $n\geq\Gamma(c)$, the optimal policy is to ``sample''.
\end{prop}
\begin{NewProof}
See Appendix~\ref{sec:AppE}.
\end{NewProof}
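The threshold structure can be checked numerically. The sketch below runs relative value iteration on \eqref{eq:V_bellman2} for a single device, truncating the counter at a large value $N$; the parameter values are placeholders, and the truncation is a numerical convenience rather than part of the model.
\begin{verbatim}
def decoupled_rvi(phi, p, c, N=200, iters=20_000, tol=1e-10):
    """Relative value iteration for the single-device problem, with the
    counter truncated at N as a numerical convenience."""
    h = [0.0] * (N + 1)
    g = 0.0
    for _ in range(iters):
        new = []
        for n in range(N + 1):
            nxt = min(n + 1, N)
            sample = c + phi * n + h[0]
            rest = phi * n + p * h[0] + (1 - p) * h[nxt]
            new.append(min(sample, rest))
        g_new = new[0]                    # reference state n = 0, h[0] = 0
        new = [v - g_new for v in new]
        done = max(abs(a - b) for a, b in zip(new, h)) < tol
        h, g = new, g_new
        if done:
            break
    # First state where "sample" is no more expensive than "rest".
    threshold = next((n for n in range(N)
                      if c + h[0] <= p * h[0] + (1 - p) * h[n + 1]), N)
    return g, threshold

g, Gamma = decoupled_rvi(phi=1.0, p=0.1, c=5.0)   # placeholder parameters
print(f"average cost g = {g:.4f}, threshold Gamma(c) = {Gamma}")
\end{verbatim}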
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{./F3.pdf}\\
\caption{Under the threshold policy, the decoupled problem is a unichain with a single recurrent class. All the states $n>\Gamma$ are transient states. The circles in the figure are states, while the rectangles are actions.}
\label{fig:3}
\end{figure}
Given the threshold structure of the optimal policy, the decoupled problem is essentially a unichain with a single recurrent class. In equilibrium, the transitions of recurrent states are illustrated in Fig.~\ref{fig:3}. We define the set of states wherein the optimal policy is ``rest'' as the ``passive set'', i.e.,
\begin{eqnarray}\label{eq:V_passiveset}
\mathcal{Q}(c) = \{n: 0\leq n<\Gamma(c), n\in\mathbb{Z} \}.
\end{eqnarray}
\subsection{Whittle index policy}
Whittle index is a good heuristic to solve RMAB problems provided that the problem is indexable. As noted by Whittle \cite{Whittle1988}, a decoupled problem is said to be indexable if the passive set $\mathcal{Q}(c)$ is monotone non-decreasing as the subsidy (in our case, sampling cost) increases. That is, for any real values $c_1<c_2$, the passive set $\mathcal{Q}(c_1)\subseteq\mathcal{Q}(c_2)$. An RMAB problem is indexable if all its arms are indexable.
\begin{lem}[Indexability]\label{thm:6}
The decoupled problem in \eqref{eq:V_opt_policy} as well as the original M-device sampling problem in \eqref{eq:II_opt_policy} are indexable.
\end{lem}
\begin{NewProof}
Letting $n=\Gamma-1$ and $n=\Gamma$ in \eqref{eq:V_F1} and \eqref{eq:V_F4}, respectively, we have
\begin{eqnarray}\label{eq:V_F12}
\bm{h}^*[\Gamma] \leq \frac{c}{1-p} \leq \bm{h}^*[\Gamma+1].
\end{eqnarray}
Given a sampling cost $c$, Eq. \eqref{eq:V_F12} means there exists one and only one $\Gamma(c)$ such that $\frac{c}{1-p}$ falls into the interval $\left[ \bm{h}^*[\Gamma(c)], \bm{h}^*[\Gamma(c)+1] \right]$.
From the proof of Proposition \ref{thm:5}, we know that $\bm{h}^*[n]$ is a strictly increasing function of $n$. Thus, $\Gamma(c)$ is monotone nondecreasing in $c$ (it is a staircase function since $\Gamma$ takes integer values), and the passive set $\mathcal{Q}(c)$ defined in \eqref{eq:V_passiveset} is monotone nondecreasing in $c$.
As a result, the decoupled problem for each device is indexable, hence the original M-device sampling problem in \eqref{eq:II_opt_policy} is also indexable.
\end{NewProof}
Given the indexability condition established in Lemma \ref{thm:6}, the Whittle index policy is captured by Theorem \ref{thm:7} below:
\begin{thm}[Whittle index policy]\label{thm:7}
At the beginning of a slot $t$, the controller computes a Whittle index $c^*(n_i)$ separately for each device as a function of its current state $n_i$, and then samples the device with the greatest index. The Whittle index is given by
\begin{eqnarray}\label{eq:V_F13}
c^*(n_i) = \frac{\varphi_i(1-p_i)}{p^2_i}\left[ (1-p_i)^{n_i+2} + (n_i+2)p_i - 1 \right].
\end{eqnarray}
\end{thm}
\begin{NewProof}
See Appendix \ref{sec:AppF}.
\end{NewProof}
When in state $n_i$, the Whittle index $c^*(n_i)$ measures how attractive the $i$-th device is to the controller.
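Since \eqref{eq:V_F13} is in closed form, the Whittle index policy is trivial to implement. A minimal Python sketch follows; the accuracies and probabilities are example values ($\sigma=0.8$, $M=3$, $p_i=0.1$), and ties are broken towards the smallest device index.
\begin{verbatim}
def whittle_index(n, phi, p):
    """Closed-form Whittle index for a device with accuracy phi, cross-
    traffic probability p, and counter value n."""
    return phi * (1 - p) / p**2 * ((1 - p) ** (n + 2) + (n + 2) * p - 1)

def whittle_policy(counters, phi, p):
    """Sample the device with the largest Whittle index (ties broken
    towards the smallest device index)."""
    return max(range(len(counters)),
               key=lambda i: whittle_index(counters[i], phi[i], p[i]))

phi = [0.64, 0.8, 1.0]     # example: sigma = 0.8, M = 3
p = [0.1, 0.1, 0.1]
print(whittle_policy([2, 3, 1], phi, p))
\end{verbatim}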
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{./F4.pdf}\\
\caption{Comparisons among the optimal policy, the state-independent policies (uniform, largest-order-statistic, and weighted-probability policies), the Whittle index policy, and the second-order index policies in terms of computation complexity ($x$-axis), average-cost performance ($y$-axis), and the requirement of prior knowledge $p_i$ ($z$-axis). The average costs of different policies can be found in Section \ref{sec:VI}. The relative performance of different policies is provided for illustrative purposes only and is not meant to depict the precise performance gaps.}
\label{fig:4}
\end{figure}
\subsection{The second-order index policy}
In Fig. \ref{fig:4}, the optimal policy, the state-independent policies, and the Whittle index policy are evaluated in a three-dimensional coordinate system. The positive direction of the $x$-axis means the policy requires higher computational complexity, the positive direction of the $y$-axis means the policy yields a larger average cost (poorer performance), and the positive direction of the $z$-axis means the policy requires the prior information $p_i$, a parameter determined by the local volatility of each device.
As shown, the Whittle index policy is preferred to the optimal policy and the state-independent policies thanks to its low complexity and decent average-cost performance. Yet, the execution of the Whittle index policy hinges on the accurate estimation of $p_i$, as the optimal policy does. Since the accurate estimates of $p_i$ may not be available to the controller in practice, we put forth a second-order index policy in the following that does not rely on prior information $p_i$, while inheriting all the advantages of the Whittle index.
\begin{defi}[second-order index policy]
At the beginning of a slot $t$, the controller computes a second-order index $I(n_i)$ separately for each device as a function of its current state $n_i$, and then samples the device with the greatest index. For the $i$-th device, the second-order index is given by
\begin{eqnarray}\label{eq:V_2order}
I(n_i) =\lim_{p_i\rightarrow 0} c^*(n_i) = \frac{\varphi_i}{2} (n_i+1) (n_i+2).
\end{eqnarray}
\end{defi}
It is plausible that the second-order index policy performs well when $p_i$, $\forall i$, are small, because the second-order index is inferred from the Whittle index by assuming a device undergoes very light traffic with $p_i\rightarrow 0$. However, one may ask, does this second-order index perform well when some of the IoT devices undergo moderate or heavy traffic with relatively large $p_i$? We answer this question affirmatively by the simulation results in Section \ref{sec:VI}, where it is shown that the second-order index policy performs well for both small and large $p_i$.
Overall, the second-order index policy is the most desired policy among other policies. As shown in Fig.~\ref{fig:4}, it has low computation complexity, no reliance on the prior-information $p_i$, and comparable average-cost performance to the Whittle index policy.
\begin{rem}
When $p_i$ of the $i$-th device is large, an alternative to the second-order index in \eqref{eq:V_2order} is a first-order index
\begin{eqnarray}\label{eq:V_1order}
I(n_i) =\lim_{p_i\rightarrow 1} \frac{c^*(n_i)}{1-p_i} = \varphi_i (n_i+1).
\end{eqnarray}
This gives us the following heuristic index policy.
\textbf{Heuristic index policy} -- Assume the controller has a rough idea of whether $p_i$ is larger or smaller than a threshold probability $\overline{p}$ for each device. At a decision epoch, the controller takes the second-order index in \eqref{eq:V_2order} as the heuristic index for IoT devices whose $p_i<\overline{p}$; and the first-order index in \eqref{eq:V_1order} as the heuristic index for IoT devices whose $p_i\geq \overline{p}$. Then, the controller samples the device with the largest heuristic index.
This heuristic index policy is evaluated at the end of Section \ref{sec:VI}. It is shown that the heuristic index policy only yields minor gains over the second-order policy. Yet, it requires the controller to know a certain amount of prior knowledge of $p_i$, and the threshold probability $\overline{p}$ must be chosen very carefully. Overall, the second-order index is good enough to ensure a minor gap to the Whittle index policy.
\end{rem}
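A minimal sketch of the second-order index \eqref{eq:V_2order}, the first-order index \eqref{eq:V_1order}, and the heuristic selection between them is given below; the state, accuracies, and the rough knowledge of which devices exceed $\overline{p}$ are hypothetical example inputs.
\begin{verbatim}
def second_order_index(n, phi):
    """Second-order index: needs no knowledge of p_i."""
    return 0.5 * phi * (n + 1) * (n + 2)

def first_order_index(n, phi):
    """First-order index, the p_i -> 1 limit of the Whittle index."""
    return phi * (n + 1)

def heuristic_index(n, phi, p_is_large):
    """First-order index if p_i >= p_bar, second-order otherwise; only the
    side of the threshold on which p_i falls needs to be known."""
    return first_order_index(n, phi) if p_is_large \
        else second_order_index(n, phi)

counters = [2, 3, 1]
phi = [0.64, 0.8, 1.0]
large = [False, True, False]   # hypothetical rough knowledge of p_i
print(max(range(3),
          key=lambda i: heuristic_index(counters[i], phi[i], large[i])))
\end{verbatim}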
\subsection{The optimal policy and the lower bound}
As stated in Section \ref{sec:III}, the computational complexity of relative value iteration is prohibitively high. This makes the optimal policy in \eqref{eq:II_opt_policy} very expensive to obtain, especially when the number of IoT devices $M$ is large. In view of this, we first consider a simple case where there are only three devices to evaluate the performance gap between the optimal policy and the lower bound given in Theorem \ref{thm:1}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{./FS1.pdf}\\
\caption{The performance of the optimal policy benchmarked against the lower bound, wherein $M = 3$. The performance of the uniform policy and the Whittle index policy is also plotted in this same figure.}
\label{fig:S1}
\end{figure}
Fig. \ref{fig:S1} presents the average costs achieved by the optimal policy, the uniform policy, and the Whittle index policy benchmarked against the lower bound on a flow path with $M = 3$ IoT devices. In this figure, we fix $\sigma=0.8$, i.e., the accuracies of statistics collected from the three IoT devices are $0.64$, $0.8$, and $1$, respectively. The probability that a device is sampled by flows other than $\overline{AB}$ is set to $p_1=p_2=p_3=p$, and we increase $p$ from $0.025$ to $0.2$. To execute relative value iteration and compute the optimal policy, we set the upper limit $U$ of each counter to $10$ (i.e., a counter no longer grows when it reaches $10$). The size of the state space is then $\left|\mathcal{S}\right|=U^M=1000$, and the decision space is $\left|\mathcal{S}\right|\times\left|\mathcal{A}\right|\times\left|\mathcal{S}\right|=\allowbreak MU^{2M}=\allowbreak 3\times 10^6$.
As can be seen from Fig.~\ref{fig:S1},
\begin{enumerate}
\item The Whittle index policy performs as well as the optimal policy for small $M$ (the two curves coincide with each other). However, the optimality of the Whittle index is unknown in the case of large $M$ due to the unavailability of the optimal policy.
\item The performance gap between the optimal policy and the lower bound is minor when $p$ is small, but gets larger as $p$ increases. This is not surprising because to derive the lower bound, we have assumed in Theorem \ref{thm:1} that the variance of the inter-sampling time of each device is negligible relative to the mean of the inter-sampling time. Thus, the lower bound is supposed to be tighter in the case of larger $M$ and smaller $p$.
\end{enumerate}
Assuming a large number of IoT devices, the following parts evaluate the performance of state-independent policies proposed in \cite{openTM} and our second-order index policy. Keeping in mind that the Whittle index policy can be suboptimal, and the lower bound may not be tight, we will take them as the benchmarks.
\subsection{State-independent policies}
This subsection evaluates the average costs achieved by different state-independent policies and their generalizations, i.e., the uniform sampling policy, the largest-order-statistic policy, and the weighted-probability policy.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{./FS2.pdf}\\
\caption{Numerical and simulation results for the uniform policy, the largest-order-statistic policy (with $G = 2$ and $3$), and the weighted-probability policy, wherein $\sigma=0.8$, $p_1=p_2=\cdots=p_M=p=0.1$. The lower bound and the performance of the Whittle index policy are plotted in the same figure.}
\label{fig:S2}
\end{figure}
The numerical and simulation results of the above three state-independent sampling policies are presented in Fig.~\ref{fig:S2}, where we fix $\sigma=0.8$, and $p_1=p_2=\cdots=p_M=p=0.1$. The analytical results match with the simulation results very well.
As can be seen from Fig.~\ref{fig:S2},
\begin{enumerate}
\item Uniform sampling gives the worst performance. The average cost, as predicted in \eqref{eq:IV_uniform2}, increases monotonically with the increase of $M$. As $M$ goes to infinity, the average cost converges to $\frac{1-p}{(1-\sigma)p}=45$.
\item The performance of the largest-order-statistic policy depends on the value of $G$, i.e., the number of random integers generated each time. For a fixed $G\ll M$, \eqref{eq:IV_order2} indicates that the average cost converges to the same value $\frac{1-p}{(1-\sigma)p}=45$ as the uniform policy.
\item The weighted-probability policy outperforms both the uniform policy and the largest-order-statistic policy. This outcome is expected because we have optimized the sampling probability over all IoT devices to devise the weighted-probability policy. As indicated in \eqref{eq:IV_weighted}, the average cost of the weighted-probability policy is twice that of the lower bound. With the increase of $M$, the average cost converges to around $22.64$.
\item The Whittle index policy outperforms all three state-independent policies. Compared with the uniform policy and the largest-order-statistic policy, the Whittle index policy reduces the average cost by $66.4\%$ when $M$ goes to infinity. Compared with the weighted-probability policy, the Whittle index policy reduces the average cost by $33.4\%$ when $M$ goes to infinity.
\end{enumerate}
\subsection{The Second-order index policy}
The Whittle index policy outperforms the state-independent policies by much, but it requires accurate estimates of $p_i$ to compute the indexes. An alternative to the Whittle index is the second-order index given in \eqref{eq:V_2order}, the computation of which does not require any prior information $p_i$. This subsection verifies the performance of the second-order index policy benchmarked against the Whittle index policy.
We consider an asymmetric network where IoT devices undergo two kinds of sampling-request traffic: 1) all the odd-indexed devices undergo light traffic with small $p_i=\pi_0$; and 2) all the even-indexed devices undergo moderate/heavy traffic with relatively large $p_i=\pi_1$. In the simulation, we fix $\pi_0$ to $0.01$, and vary $\pi_1$. For the Whittle index policy, $\pi_0$ and $\pi_1$ are assumed to be known to the controller such that the Whittle index can be computed. For the second-order index policy, the controller computes the second-order index directly from \eqref{eq:V_2order}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{./FS3.pdf}\\
\caption{Performance comparison between the second-order index policy and the Whittle index policy, wherein $M = 40$ and $\sigma=0.8$.}
\label{fig:S3}
\end{figure}
Fig.~\ref{fig:S3} presents the average costs achieved by the second-order index and the Whittle index policies in the considered network, wherein $M = 40$. As shown, for different $\pi_1$, the performance gaps between the two policies are very small. The second-order index policy is a good substitute for the Whittle index policy given the same low-complexity property and comparable average-cost performance. Better yet, the second-order index policy requires no prior information $p_i$.
Finally, we evaluate the heuristic index policy in the same network. As per the heuristic index policy, the controller has to compute a heuristic index for each device in a decision epoch. To this end, we first set the threshold probability $\overline{p}=0.3$. That is, the heuristic index of the $i$-th device is the second-order index given in \eqref{eq:V_2order} if $p_i<0.3$, and is the first-order index given in \eqref{eq:V_1order} if $p_i\geq 0.3$. The controller then samples the device with the largest heuristic index.
The performance of the heuristic index policy is plotted in Fig.~\ref{fig:S3}. As shown, when $\pi_1<0.3$, the performance of the heuristic index policy is the same as that of the second-order index, because all $p_i$ in the network are smaller than the threshold probability $0.3$. On the other hand, when $\pi_1\geq 0.3$, the indexes of all even-indexed IoT devices are the first-order indexes rather than the second-order indexes. The heuristic index policy is slightly better than the second-order index policy. However, the downsides are that the controller has to know a certain amount of information about $p_i$, and the threshold probability $\overline{p}$ must be chosen very carefully (an ill-chosen $\overline{p}$ easily leads to substantial performance degradations).
\section*{Supplemental Materials}
\section*{S1. Effect of Hubbard U}
We used PBE+U to ``force'' the $f$-electrons to localise. For the main calculations, we set $U=6.7$ eV. In Fig. S\ref{DoS-U} we show the effect of changing $U$ on the density of states, calculated for the AFM-Fddd phase at 90 GPa.
The very sharp peak corresponding to the half-filled
$f$-band lies well below E$_F$, and the main effect of +$U$ is to shift
this peak. The simple treatment means that the peak is not split;
however, the figure shows that it does not broaden, hybridize or
contribute to the valence band.
Consequently, the value of $U$ has no significant effect
on the energy differences between phases or on the phase
transformation sequence. The largest effect is at the Fddd-fcc
transformation, where increasing $U$ from 0 to 8 eV shifts the enthalpy
difference by 10 meV, in favour of fcc, and the predicted phase
transformation pressure by about 10 GPa. Interestingly, the $f$-band remains localised even with $U=0$, so the use of the Hubbard $U$ does not affect the conclusions of this paper. The occupied $f$-states lie below the
$sd$-band, forming a sharp peak in the DoS. The unoccupied $f$-states are well above the Fermi energy, but lie within the $sd$-band, appearing as a distinct but broader peak.
Regardless of the choice of $U$, the $f$-band does move closer to E$_F$
with pressure, and this is essentially unaffected by the crystal structure.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{FdddDOS+U.png}
\caption{ [supplemental] Calculated density of states for the AFM-Fddd phase for various values of $U$ (in eV), as shown.
\label{DoS-U}}
\end{figure}
\section*{S2. Details of the magnetic free energy calculation}
For the magnetic free energy, we calculated the free energy of the Ising model on an fcc lattice with near-neighbour interaction $J$:
\begin{equation} \label{eq:H}
\mathcal{H} = -J \sum\limits_{\left\langle i,j \right\rangle^{\prime}} S_i S_j
\end{equation}
$J$ can be either positive (ferromagnetic) or negative (antiferromagnetic). We used the effective-field approach, which gives an analytic, albeit complicated, expression for the free energy.
The parameter $J$ was fitted to the DFT values for the difference in enthalpy between FM and AFM structures. Consequently, $J$ takes a different value for each crystal structure, and is pressure dependent. Where more than one AFM structure was considered, $J$ was fitted to the average value. We note that the enthalpy difference includes the $P\Delta V$ term which arises from the volume difference between FM and AFM. The negative thermal expansion of Gd arises from the fact that as spins flip thermally in the FM-hcp phase, the reduced exchange interaction allows for compression.
Once $J$ is determined, the magnetic contribution to the ground state (T=0) energy is fixed.
To compare different crystal structures, this is subtracted, so that the enthalpy difference is precisely as given by the DFT.
\[ \Delta H_{\alpha,\beta}(T) = \Delta H^{DFT}_{\alpha,\beta}(0) + \Delta H^{mag}_{\alpha,\beta}(T) - \Delta H^{mag}_{\alpha,\beta}(0) \]
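As a simpler illustration of how such a magnetic free energy enters the enthalpy correction, the sketch below evaluates the standard mean-field free energy of the fcc Ising model ($z=12$), solving $m=\tanh(\beta z J m)$. This is not the effective-field expression actually used here, an antiferromagnetic $J$ would require a two-sublattice treatment, and the coupling value in the example is hypothetical.
\begin{verbatim}
import numpy as np

kB = 8.617e-5   # Boltzmann constant, eV/K

def mean_field_free_energy(J, T, z=12, n_iter=200):
    """Mean-field free energy per spin (eV) of the fcc Ising model,
    for ferromagnetic J > 0; a simpler stand-in for the
    effective-field expression used in the text."""
    beta = 1.0 / (kB * T)
    m = 1.0                      # ordered starting guess
    for _ in range(n_iter):      # fixed-point iteration of m = tanh(.)
        m = np.tanh(beta * z * J * m)
    return 0.5 * z * J * m**2 \
        - np.log(2.0 * np.cosh(beta * z * J * m)) / beta

# hypothetical J, e.g. fitted to a DFT FM-AFM enthalpy difference
print(mean_field_free_energy(J=5e-3, T=300.0))
\end{verbatim}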
\section*{S3. Details of the phonon free energy calculation}
We carried out phonon free energy calculations in the harmonic approximation using CASTEP. Harmonic phonon frequencies are calculated using the as-implemented finite displacement lattice dynamics method \cite{warren1996ab,ackland1997practical,clark2005first}.
At 0 GPa, we compared the stable ferromagnetic hcp phase with the lowest-energy (ferrimagnetic) 9R phase. This comprises a double close-packed layer of up-spin followed by a single layer of down-spin, resulting in a macroscopic moment: this arrangement is neither ferromagnetic nor antiferromagnetic, hence the slightly irregular use of the term ferrimagnetic.
It is the lowest-enthalpy decoration of spins we found, below ferromagnetic order, alternating close-packed layers, and alternating $[11\overline{2}0]$ lines within the close-packed layers (the arrangement which maximises the number of opposite-spin pairs).
Figure S\ref{phon-free} shows the variation in the phonon contribution to the free energy in the harmonic approximation. The main feature to note is that the hcp and 9R phases are extremely close: e.g. at 300 K and 0 GPa, hcp is -83 meV and 9R is -85 meV, so the 2 meV difference is an order of magnitude smaller than the magnetic effects. The phonon densities of states themselves are shown in Figure S\ref{phonDos9R}.
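For reference, the harmonic free energy per atom follows from the phonon DOS $g(E)$ as $F_{\rm ph}(T)=\int g(E)\left[E/2 + k_BT\ln\left(1-e^{-E/k_BT}\right)\right]dE$. The sketch below evaluates this integral on a toy Debye-like DOS; the DOS is a hypothetical stand-in for the CASTEP output, with its cutoff chosen so that the zero-point energy is close to the values quoted in Figure S\ref{phonDos9R}.
\begin{verbatim}
import numpy as np

kB = 8.617e-5    # eV/K

def harmonic_free_energy(E, g, T):
    """Harmonic phonon free energy per atom from a DOS g(E)
    normalised to 3 modes/atom; E in eV, T in K."""
    dE = E[1] - E[0]
    zpe = np.sum(0.5 * E * g) * dE      # zero-point energy
    if T == 0.0:
        return zpe
    thermal = kB * T * np.sum(g * np.log(1.0 - np.exp(-E / (kB * T)))) * dE
    return zpe + thermal

# toy Debye-like DOS (hypothetical stand-in for the CASTEP DOS)
E = np.linspace(1e-5, 0.012, 400)   # energies up to 12 meV
g = 9.0 * E**2 / E[-1]**3           # integrates to 3 modes/atom
print(harmonic_free_energy(E, g, 0.0))     # ~0.0135 eV zero-point
print(harmonic_free_energy(E, g, 300.0))
\end{verbatim}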
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Phonon_Free.pdf}
\caption{ [supplemental] Quasiharmonic contribution to Phonon Free Energy for 9R and hcp
\label{phon-free}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{phonondos_crop.pdf}
\caption{ [supplemental] Phonon densities of states for 9R and hcp at 0GPa and 4GPa (displaced). The zero point energies are 0.01354eV/atom and 0.01450eV/atom for 9R; 0.01365 eV/atom and 0.01470eV/atom for hcp at 0 and 4GPa respectively.
\label{phonDos9R}}
\end{figure}
\onecolumngrid
\subsection*{S4. Example AFM CASTEP structures}
AFM-hcp
\begin{verbatim}
3.2170236 1.8573495 -0.0000000
-0.0000000 3.7146989 -0.0000000
-0.0000000 -0.0000000 5.9696793
Gd 0.0000000000 0.0000000000 0.00000000 spin=7.9
Gd 0.3333333333 0.3333333333 0.5 spin=-7.9
\end{verbatim}
AFM-9R
\begin{verbatim}
8.9699005 -1.8411527 -0.1624865
8.9699005 1.8411527 -0.1624865
17.0834975 0.0000000 6.1091634
symmetry_generate
snap_to_symmetry
Gd 0.0000000 0.0000000 0.0000000 spin=-7.9
Gd 0.222222222 0.222222222 0.111111111 spin=7.9
Gd 0.777777778 0.777777778 0.388888888888 spin=-7.9
Gd 0.0000000 0.0000000 0.5 spin=7.9
Gd 0.222222222 0.222222222 0.611111111111 spin=-7.9
Gd 0.777777778 0.777777778 0.888888888888 spin=7.9
\end{verbatim}
\begin{verbatim}
ANG
3.18482849821009 1.83876158342089 0.00000000000000
3.18482849821009 -1.83876158342089 0.00000000000000
4.24643799693 0.00000000000000 17.906
Gd 0.0000000 0.0000000 0.0000000 spin=7.8
Gd 0.55555555555 0.55555555555 0.16666666666 spin=-7.8
Gd 0.77777777777 0.77777777777 0.33333333333 spin=7.8
Gd 0.0000000 0.0000000 0.5000000 spin=-7.8
Gd 0.55555555555 0.55555555555 0.66666666666 spin=7.8
Gd 0.77777777777 0.77777777777 0.83333333333 spin=-7.8
\end{verbatim}
9R-Ferri
\begin{verbatim}
ANG
2.11879279179908 0.00000000000000 8.86188738887579
-1.05939639589954 1.83492837184018 8.86188738887579
-1.05939639589954 -1.83492837184018 8.86188738887579
Gd -0.000000000000000 -0.000000000000000 -0.000000000000000 SPIN=-7.800
Gd 0.221418839009061 0.221418839009061 0.221418839009061 SPIN= 7.800
Gd 0.778581160990939 0.778581160990939 0.778581160990939 SPIN= 7.800
\end{verbatim}
AFM-fcc
\begin{verbatim}
ANG
3.60447438475254 0.109501252625182E-35 0.00000000000000
0.109501252625182E-35 3.60447438475254 -0.144308898241572E-57
0.00000000000000 -0.204083601064473E-57 5.24509012894360
Gd 0.000000000000000 0.000000000000000 0.000000000000000 SPIN= 7.500
Gd 0.500000000000000 0.500000000000000 0.500000000000000 SPIN=-7.500
\end{verbatim}
\begin{verbatim}
ANG
2.97633087949342 0.903693904420478E-36 -0.331177716568357E-36
0.903693904420478E-36 2.97633087949342 -0.332259333440925E-36
-0.468569060571272E-36 -0.470099393611576E-36 8.6
1 1 3
0 0 0
Gd 0.000000000000000 0.000000000000000 0.000000000000000 SPIN= 7.500
Gd 0.500000000000000 0.500000000000000 0.2500000000000000 SPIN=7.500
Gd 0.000000000000000 0.000000000000000 0.500000000000000 SPIN= -7.500
Gd 0.500000000000000 0.500000000000000 0.7500000000000000 SPIN=-7.500
\end{verbatim}
Fddd
\begin{verbatim}
2.8299946 1.5442522 0.0000000
-0.0000000 3.0885043 0.0000000
0.0000000 0.0000000 10.3135949
symmetry_generate
Gd 0.0000000000 0.0000000000 0.00000000 spin=7.4
Gd 0.5 0.0 0.25 spin=-7.4
Gd 0.0 0.5 0.5 spin=7.4
Gd 0.5 0.5 0.75 spin=-7.4
\end{verbatim}
\end{document}
\section{Introduction}
Reinforcement learning (RL) allows robots to adaptively learn an optimal behavior for specific tasks through trial and error. With the rapid development of deep learning over the last decade, RL variants have been applied to tasks of ever higher dimensionality and complexity. A significant portion of RL algorithms follows the model-free paradigm and trains on samples without emulating a transition model. These algorithms usually require many trials to learn a specific task. As a consequence, most of these applications are first performed in a simulated environment and then transferred to the real world. This transfer is very challenging due to the systematic difference between the simulator and the real environment, commonly known as the reality gap \cite{jakobi1995noise}. Deep Q-Learning is a notable example from the model-free branch of algorithms, and has been commonly used in robotic control and decision-making since \cite{Mnih2015} proposed the framework.
In contrast, the opposing trend among RL algorithms consists of model-based approaches, which devise controllers from a predictive transition model of the environment. Model-based methods are capable of fast learning due to their sample efficiency and, as a result, can be directly applied to real-world robotics experiments, skipping the reality gap. These methods rely on learning a probabilistic or Bayesian transition model, which translates into higher sample efficiency \cite{PILCO2011, DeepPILCO2016, PETS2018, PolSearch2019}. For example, Deep PILCO \cite{DeepPILCO2016} is a typical probabilistic model-based RL algorithm that relies on a Bayesian neural network (BNN) transition model. Although computationally expensive, it improved on the DQL algorithms used in \cite{ContiCtrl2016, ContiCtrlModel2016} by at least an order of magnitude in the number of trials on the cart-pole swing-up benchmark task.
Here, we apply Deep Q-Learning (DQL) and Deep PILCO to simulations and real-world experiments of a robot combat decision-making problem. This problem consists of controlling a robot positioning itself in an arena to shoot at enemies. We compare the two aforementioned algorithms on a Gazebo \cite{koenig2006gazebo} simulation of those robots and also on real-world experiments in a 5m x 8m arena (Figure \ref{fig:experiment}). Our results show that Deep PILCO is superior to DQL in speed of convergence and in the quality of its best policy, in both simulations and experiments. Deep PILCO also required less hyper-parameter tuning than DQL, which makes deployment of the algorithm more effective. More importantly, the real-world implementation of the Deep Bayesian algorithm found the optimal solution in 20 minutes, faster than the real-time deployment of DQL in both the real world and simulation; we discuss these results in Section \ref{Results}. We conclude by pointing to the advantages of probabilistic model-based reinforcement learning over deep reinforcement learning when implemented in a real-world environment.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/experiment1.jpg}
\caption{The arena used for experiments. As the enemies do not change position during iterations, we use two plastic boxes (shown in black in the figure) to emulate their positioning, forcing our robots to use LiDAR sensors to localize them.}
\label{fig:experiment}
\end{figure}
\section{Related Work}
\subsection{RL and Robotics}
Deep reinforcement learning methods have become prominent for robotics control and decision-making problems \cite{SurveyRLRobt2014, SurveyPolicy2019, kormushev2013survey, polydoros2017surveymb}. One of the best-known algorithms is Deep Q-Learning \cite{DQN2016}, with which RL-based agents learned to play Atari 2600 games directly from high-dimensional pixel inputs. Notably, in 2016 \cite{MasterGO2016} used DQL to create an agent capable of outplaying the world's best players in the game of Go. In the robotics field, DDPG \cite{ContiCtrl2016} and NAF \cite{ContiCtrlModel2016} adapted the ideas of DQL to the continuous control domain in simulations. In \cite{SimToReal2016} a similar algorithm is used to bridge the reality gap and enable a real-world robot to learn a manipulation task. \cite{vecerik2017sparse-reward} built upon the DDPG algorithm using demonstrations \cite{hester2017demonstration} for RL on real robots with sparse rewards.
Compared to model-free deep RL, model-based RL algorithms are more promising candidates due to their data efficiency. For low-dimensional problems, Gaussian processes (GPs) provide excellent performance. PILCO \cite{PILCO2011} uses GP models to fit the dynamical model and a gradient-based optimizer to search for an optimal policy that maximizes the expected reward. On the cart-pole benchmark task, PILCO achieves remarkable data efficiency. Building on the key idea of PILCO, Black-DROPS \cite{BlackDROPS2017} further advances the algorithm by replacing the gradient-based optimizer with a parallel, black-box algorithm. It achieves similar results to PILCO while being much faster on multi-core computers. However, it still suffers from the curse of dimensionality because of the limitations of GPs. In \cite{wangshu-robio}, the authors study the influence of different neural network structures on Black-DROPS.
By contrast, Deep PILCO \cite{DeepPILCO2016} extends PILCO's framework to use Bayesian neural network (BNN) \cite{BNN1992} dynamics models with binary dropout \cite{Dropout2016}, allowing it to scale linearly with the number of trials and the observation space dimensionality.
Deep PILCO has already been applied to a number of robotic tasks: \cite{DPcode2018} improved Deep PILCO by using random numbers and gradient clipping, and applied it to learning swimming controllers for a simulated 6-legged autonomous underwater vehicle. In \cite{kahn2017uncertainty}, the authors used Deep PILCO with bootstrap \cite{bootstrap1982jackknife} to teach a quad-rotor and an RC car to navigate an unknown environment while avoiding collisions.
The advantages of Deep PILCO in learning speed have been proven on simulations and single-robot experiments. In this paper, we will further demonstrate its potential in applications within a noisy real-world environment in the context of a multi-agent competitive game.
\subsection{RL and Games}
In recent years, the application of reinforcement learning to multi-agent combat games has become increasingly popular.
An important sub-problem of a multi-agent game is cooperation between robots.
In \cite{Portugal2016-rev}, the authors proposed a probabilistic multi-robot patrolling strategy using Bayesian decision rules and distributed intelligence. To solve the intractable multi-robot patrolling problem with complex interactions, an extended Monte Carlo tree search method \cite{silver2010monte} was proposed in \cite{Zhou2019-rev} to obtain a scalable decentralized online learning algorithm. MRCDRL \cite{Wang2020-rev} solves the same problem by using end-to-end deep reinforcement learning methods with a modified Duel neural network structure \cite{wang2016dueling}.
Although our problem is similar to the patrolling problem, patrolling may not be the optimal strategy for winning this specific game. Our rule-based strategy will be discussed in Section \ref{problem-def}. Moreover, these works only evaluated their algorithms in simulations, which may expose them to sim-to-real transfer problems.
There are also many studies on online multi-player competitive games.
Dota 2 is a modern competitive team game that is played on a map with two teams defending bases in opposite corners.
OpenAI Five \cite{berner2019dota} trained a team of agents with a scaled-up version of PPO \cite{schulman2017ppo} and handcrafted rewards, which defeated a team of professional Dota 2 players. However, they simplified some rules and restricted the players to a subset of heroes.
StarCraft is a real-time strategy game that takes place in a science fiction universe. It combines many sub-challenges such as micromanagement, base economy, and optimization.
Reinforcement learning and imitation learning have been proposed to solve the sub-problems and control bots in the game \cite{vinyals2017starcraft-7, usunier2016episodic-34, shao2018starcraft-42, justesen2017learning-45, Rashid2018qmix}.
However, none of these StarCraft bots could defeat high-level human players, even with the capability to view the entire map at once \cite{farooq2016starcraft-47}.
In 2019, AlphaStar addressed the challenge of StarCraft using a multi-agent reinforcement learning algorithm that uses both human and agent game data to adapt strategies and counter-strategies constantly \cite{vinyals2019starcraft}. AlphaStar was evaluated in a series of full online games against professional human players and was rated above $99.8\%$ of human players.
The main features of these two games are similar to the game that we study.
The results of previous work are also fascinating. Nevertheless, most of them require powerful computing devices and a long training time. For example, each StarCraft agent was trained using 32 third-generation tensor processing units (TPUs) \cite{jouppi2017tpu} over 44 days \cite{vinyals2019starcraft}. OpenAI Five was trained on a distributed training system continually for 10 months \cite{berner2019dota}.
On the other hand, these games themselves are well-developed, so there was no need for those works to consider building a simulation environment. The game discussed in this article, however, takes place in the real world without a complete simulation environment. We must train intelligent RL agents while skipping the time-consuming step of building a simulation environment that reproduces every function of the game.
\section{Problem Definition} \label{problem-def}
The background of the problem is a robot competition called ICRA-DJI RoboMaster AI Challenge.
According to the rules, each team uses the same hardware to build a pair of wheeled robots, displayed in Figure \ref{fig:hardware}. During the 3-minute competition between two teams, these wheeled robots must be fully autonomous, without human pilots. The AI challenge asks teams to build robots that can sense the environment around them, navigate an area, and engage in combat with the opposing team. The robots have a launcher used to fire small 17mm plastic projectiles at the opposing team. More specifically, they need to fire precisely at the armors of the `enemies'. They also need to move around the battlefield, launch projectiles, and try to avoid incoming hits. In the end, the team that has scored the most hits on their opponents is declared the winner.
Much of the vehicle's hardware is supplied by DJI, except for the components marked in Figure \ref{fig:hardware}. The LiDAR and IMU (Inertial Measurement Unit) sensors collect location information for the robot to localize itself and the enemy robots in the arena. The camera captures visual information for an object recognition neural network that detects the enemy robots' armors, which is essential for autonomous firing. Lastly, a Raspberry Pi and an Nvidia Jetson TX2 serve as the computing devices of the robot.
\begin{figure}[hbtp]
\centering
\includegraphics[width=4in]{Figures/Hardware.png}
\caption{Hardware of the adopted robot. The robot is capable of recognizing the enemy through a combination of a LiDAR and a camera, both sensors sampled by a TX2 and a Raspberry Pi.}
\label{fig:hardware}
\end{figure}
Consequently, for victory, we must find solutions for all sub-modules, including self-localization, vision-based enemy detection, path planning, autonomous firing and re-supplying of projectiles, and decision making. In this paper, we focus on the solution to the decision-making problem, for two reasons. First, there are already many mature frameworks that can solve the other sub-problems. Second, an intelligent decision-making system is the most crucial part: it integrates all the sub-modules and is the key to winning the AI challenge. Figure \ref{fig:module} shows the
workflow of the main sub-modules, where decision-making is the core component.
Reinforcement learning has proven its great potential in solving this kind of problem.
A typical approach is training an RL algorithm with one full match as an episode while giving positive rewards only if the team wins at the end of the match \cite{vinyals2019starcraft, berner2019dota}.
However, this approach requires either expensive real-world experiments or a high-fidelity simulated environment. In order not to waste time building simulation environments or waiting for complete matches to finish, we decided to extract the core decision from all the decisions the robot needs to make.
Many decisions can be made directly by analyzing the rules. For instance, the robots should not hesitate to fire projectiles whenever they see an enemy, since our enemy detection and autonomous firing modules are reliable, and the number of projectiles remaining does not positively influence the score.
We suppose that, for the robots, the most critical decision to be made during the whole match is where to go to fire projectiles, based on the current situation in the arena. An important premise is that the robots know the unchanging map, including the distribution of obstacles. Hence, the only remaining factors that influence the decision are the number of enemies in the robot's field of view and their locations.
From a robot's perspective, the number of enemies within sight can take three values: zero, one, or two. We can develop an appropriate reward mechanism to achieve the optimal situation. Some learning strategies give a medium reward when the robot sees one enemy and the highest reward when it sees two enemies \cite{YiZheng2019}.
However, the robot does not actually benefit from the in-between situation: if the robot fires projectiles while seeing only one enemy, it takes the risk of being hit by the other, unseen enemy. Even in a genuine one-versus-one situation, both parties merely consume projectiles together, since the referee system limits the shooting speed. To this end, the optimal target location should always be one from which the robot can see both enemies at the same time.
\begin{figure}[hbtp]
\centering
\includegraphics[width=4in]{Figures/Module.pdf}
\caption{Main modules of the multi-robot competitive problem. We use Deep PILCO and DQL algorithms to train a policy search strategy for the decision making module.}
\label{fig:module}
\end{figure}
In this context, we can train an RL agent to behave as we expect.
To fulfil the Markov property required of RL problems \cite{sutton2018reinforcement}, we formulate the multi-robot competitive problem as a Markov Decision Process (MDP) as follows. The MDP is composed of states, actions, transitions, rewards and a policy, represented by the tuple $<\mathcal{S}, \mathcal{A}, T, R, \pi>$. Note that in this paper we only train an independent policy for a single robot, with the enemies remaining static in one place.
\begin{itemize}
\item State:
$\mathcal{S}$ is the state space, which contains all possible states. Considering that the map of the arena is fully known and the path planning module is independent of the RL algorithm, we project the 3-dimensional coordinate of the robot $(x, y, z)$ onto a 1-dimensional coordinate $(p)$ and thus discretize the space, where $p$ represents the position on the map. This discretization excludes the process of sending low-level control signals to the path planning module. As shown in Figure \ref{fig:state}, the original map is divided into 30 strategic areas in advance. The size of each area depends on how likely the robot is to appear there during a match.
Following this treatment, the state can be denoted by a tuple $(p_{M}, p_{E_{n}}, N_{E})$. $p_{M}$ represents the position of the robot itself. $p_{E_{n}}$ represents the positions of the enemy robots, where $n \in \{1, 2\}$ is the index of the enemy robots. $N_{E}$ represents the number of detected enemy robots. $p_{E_{n}}$ and $N_{E}$ are derived from the LiDAR-based enemy detection function. When the LiDAR detects none or only some of the enemies, $p_{E_{n}}$ is set to the position from the last iteration, or to the initially assumed position if the value has never been updated since the beginning of the episode.
\item Action:
$\mathcal{A}$ is the action space, which consists of all the possible actions the robot can take. In our problem definition, an action $(p_{G})$ is the next goal position for the robot. Since DQL can only handle discrete action spaces, while Deep PILCO can only handle continuous action spaces, we define the action space separately for the two methods. For DQL, $\mathcal{A}$ is a discrete space containing the four nearest neighbours of a position $p$. For Deep PILCO, $\mathcal{A}$ is the continuous domain $[0, 4)$. The continuous output is rounded off to map to one of the four nearest neighbours.
\item Transition:
$T(s'|s, a)$ is the transition distribution over the next state $s'$, given the robot took the action $a$ at the state $s$.
\item Reward:
$R(s)$ is the immediate reward function over the state $s$. In this experiment, the reward is computed solely from the number of visible enemy robots in the state $s$.
\begin{subnumcases}
{R(s)=}
0, & $s[N_{E}] \ne n$ \\
1, & $s[N_{E}] = n$
\end{subnumcases}
where $n$ is the target number of visible enemy robots. In this experiment, discovering one enemy is not necessarily a useful sub-goal of finding both enemies, so the reward is not defined to be proportional to the number of visible enemies. (A minimal sketch of this state and reward encoding is given after this list.)
\item Policy:
$\pi(a|s)$ is a probability distribution of all actions under the state $s$. The action to be taken is given by the policy based on the state.
\end{itemize}
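A minimal sketch of the state tuple and reward function, with illustrative area indices:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class State:
    p_m: int        # robot's own strategic-area index (0..29)
    p_e: tuple      # last known enemy area indices
    n_e: int        # number of currently visible enemies

def reward(s: State, n_target: int = 2) -> float:
    """R(s) = 1 iff the target number of enemies is visible."""
    return 1.0 if s.n_e == n_target else 0.0

print(reward(State(p_m=7, p_e=(21, 25), n_e=2)))   # -> 1.0
\end{verbatim}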
\begin{figure}
\centering
\includegraphics[width=4in]{Figures/state_space.png}
\caption{The original arena on the top is divided into 30 strategic areas to discretize the state space, as shown in the figure below. In the original map, the red and blue squares indicate the special regions in the competition, such as starting zones and bonus zones. These marks can be ignored in our experiments.}
\label{fig:state}
\end{figure}
\section{Materials and Methods}
\subsection{Experimental Design}
We conducted both simulation and real-world experiments. The real-world environment is built one-to-one according to the size of the arena specified by the AI Challenge. The placement of obstacles is also exactly the same as in the AI Challenge 2019 map, as shown in Figure \ref{fig:experiment}. Since the position of the enemy remains unchanged in the experiments, we replaced each enemy with a plastic box matching the size of the robot. The control and message transmission of the robots are handled by the Robot Operating System (ROS) \cite{quigley2009ros}.
To properly tune the many hyper-parameters of DQL and Deep PILCO, we ran simulations in the GAZEBO simulator \cite{koenig2006gazebo} to reduce the expensive cost of running real robots. The simulation environment reproduces the real arena one-to-one in the GAZEBO virtual world. Since the robot model can be fully reproduced in the simulator, the enemy is also represented by a real robot model. In the GAZEBO simulator, we can set the $real\_time\_factor$ parameter to speed up the simulation; the achievable speed-up is limited by the CPU. Both algorithms perform their calculations on a computer with a 12-core Intel i7 CPU. In this hardware environment, the simulation runs up to 2.4 times faster than real time.
We run the experiments in two cases. The first is a 1v1 design, with one robot against one enemy robot. The second is a 1v2 design, with one robot against two enemy robots.
\subsection{Deep Q-Learning}
Q-Learning \cite{watkins1992qlearning} algorithms aim to solve an MDP by learning the Q value function $Q(s, a)$. $Q(s, a)$ is a state-action value function, which gives the expected future return starting from a particular state-action tuple. The basic idea is to estimate the optimal Q value function $Q^*(s, a)$ by using the Bellman equation as an update:
\begin{equation}
Q^*(s,a) = E_{s'}[r+\gamma \max_{a'}Q^*(s',a')|s,a].
\end{equation}
DQL is a variant of the Q-Learning algorithm that uses a deep neural network as a function approximator for the Q value function, trained on samples drawn from an experience replay buffer. Note that DQL is model-free: it solves the RL task directly using samples from the emulator, without explicitly constructing an estimate of the emulator (or transition model) \cite{DQN2016}. Whereas the model-based algorithm PILCO \cite{PILCO2011} updates the policy once per episode, DQL updates the Q-network with samples from the replay buffer at every step.
We implemented the DQL algorithm using the Tianshou library \cite{tianshou}, which calls the pytorch library \cite{paszke2019pytorch} for neural-network computations. As for the model architecture, the input to the Q-network is a state vector. The two hidden layers consist of 128 neurons for simulations and 16 neurons for experiments, activated by the ReLU function \cite{nair2010relu}. The output layer is a fully-connected linear layer over the action values. The policy during training is $\epsilon$-greedy with $\epsilon=0.1$. The learning rate is 0.001, and the discount factor is 0.9. The size of the replay buffer is 20000.
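A sketch of this Q-network and the $\epsilon$-greedy action selection is given below; the state dimensionality and input encoding are assumptions, and the network outputs one Q-value per discrete action (one per neighbouring area).
\begin{verbatim}
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Two hidden ReLU layers (128 units for simulation, 16 for the
    real robot) and a linear output over the 4 discrete actions."""
    def __init__(self, state_dim=4, n_actions=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, s):
        return self.net(s)

q = QNet()
s = torch.tensor([[7.0, 21.0, 25.0, 0.0]])  # (p_M, p_E1, p_E2, N_E), assumed
eps = 0.1                                   # epsilon-greedy, as in the text
if torch.rand(1).item() > eps:
    a = q(s).argmax(dim=1).item()           # greedy action
else:
    a = torch.randint(4, (1,)).item()       # random exploration
print(a)
\end{verbatim}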
\begin{figure}
\centering
\includegraphics[width=4in]{Figures/DP_flowchart.pdf}
\caption{Flow chart of the Deep PILCO algorithm.}
\label{fig:dpflow}
\end{figure}
\subsection{Deep PILCO}
Compared to model-free deep RL algorithms, model-based RL allows higher sample efficiency, which can be further improved with a probabilistic transition model. Deep PILCO is a prominent example which utilizes a Bayesian neural network (BNN) \cite{BNN1992} to estimate the transition model \cite{DeepPILCO2016, DPcode2018}.
As shown in Figure \ref{fig:dpflow}, the algorithm can be summarized as follows. A functional form for the policy $\pi$ is chosen, with randomly initialized parameters $\phi$. Deep PILCO then executes the current policy on the real agents from the current state until the time horizon $T$. The new observations are recorded and appended to the whole dataset, on which a new probabilistic transition model (more precisely, the model parameters of the BNN) is re-trained. Based on this probabilistic transition model, Deep PILCO predicts state distributions from the current initial state distribution $p(X_0)$ to $p(X_T)$. In detail, the state input and output uncertainty are encoded using particle methods. Provided with the multi-state distribution $p(X_0,...,X_T)$, the cumulative expected cost $J(\phi)$ is computed with a user-defined cost function. By minimizing this objective function (using gradient descent), a newly optimized policy $\pi_{\phi}$ is obtained. Note that here we defined the cost function as the complement of the reward: $\mathrm{Cost}(X) = 1 - R(X)$.
We implement Deep PILCO in an episodic way, so that the algorithm updates the policy after every episode based on the episodic reward. The episodic reward is the sum of the iteration rewards. Each episode consists of 10 iterations. During one iteration, the robot moves from its current position to the goal position given by the action, along the planned path. The code is a modified version of an open-source implementation of the Deep PILCO algorithm \cite{DPcode2018}, based on the theano library \cite{al2016theano}.
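Structurally, the loop can be sketched as follows; the three helpers are trivial stubs standing in for policy execution on the robot, BNN dynamics training, and gradient-based optimisation of $J(\phi)$, not the actual implementation.
\begin{verbatim}
import random

def collect_rollout(policy, horizon=10):
    # stub: one 10-iteration episode of (state, action, next state)
    return [(random.random(), policy, random.random())
            for _ in range(horizon)]

def train_bnn_dynamics(data):
    return {"n_samples": len(data)}   # stub for BNN model fitting

def optimise_policy(policy, model, horizon):
    return policy + 0.01              # stub gradient step on J(phi)

def deep_pilco(policy=0.0, episodes=5, horizon=10):
    data = collect_rollout(policy, horizon)   # initial random rollout
    for _ in range(episodes):
        model = train_bnn_dynamics(data)      # re-fit on full dataset
        policy = optimise_policy(policy, model, horizon)
        data += collect_rollout(policy, horizon)  # run on the robot
    return policy

print(deep_pilco())
\end{verbatim}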
\subsubsection{LiDAR-based Enemy Detection Function}
We accomplish the enemy detection task with a 2-D LiDAR sensor by filtering out the known obstacles in the given map. Since we already know the map, we know where the walls and obstacles are. If the center of a detected obstacle lies inside a wall, we filter out this circle \cite{YiZheng2019}; otherwise, it is recognized as an enemy. In the real-world experiments, we replace the two enemies by two black boxes to limit the use of real robots. This replacement makes the judgment of the filtering algorithm somewhat inaccurate, resulting in a slightly noisy and unstable environment for the RL experiments. Nonetheless, our results show that the algorithms can still learn optimal policies under such circumstances. A screenshot taken while the algorithm was running is presented in Figure \ref{fig:rviz}.
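A minimal version of the filtering step is sketched below; the wall geometry here is hypothetical, whereas the real implementation works on the occupancy map of the arena.
\begin{verbatim}
# axis-aligned wall boxes ((x0, y0), (x1, y1)); geometry is hypothetical
WALLS = [((1.0, 1.0), (2.2, 1.3)), ((3.0, 2.5), (3.3, 4.0))]

def inside_wall(c):
    x, y = c
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0), (x1, y1) in WALLS)

def detect_enemies(circle_centres):
    """Keep only detected circles not explained by the known map."""
    return [c for c in circle_centres if not inside_wall(c)]

print(detect_enemies([(1.5, 1.1), (4.2, 0.8)]))   # -> [(4.2, 0.8)]
\end{verbatim}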
\begin{figure}
\centering
\includegraphics[width=4in]{Figures/rviz.png}
\caption{The visualization of the LiDAR-based enemy detection algorithm. The position of the plastic boxes (enemies) are shown in the two green circles. The navigation stack in ROS depicts the contour of the obstacles in yellow, and displays the local costmap in blue, red and purple.}
\label{fig:rviz}
\end{figure}
\section{Results} \label{Results}
\subsection{Simulation Results}
Before the physical experiments, we ran simulations to tune the hyper-parameters properly. We simulate the arena and the robots at a one-to-one ratio within GAZEBO, as shown in Figure \ref{fig:gazebo-screen}. The simulation time is 2.4 times faster than real time. We focus on simulating the combat environment in the 1v2 case. Figure \ref{fig:gazebo} presents the simulation results of DQL and Deep PILCO. Although both converge to an optimal policy, Deep PILCO achieves a higher best reward than DQL, within fewer episodes.
\begin{figure}
\centering
\includegraphics[width=4in]{Figures/gazebo_path.png}
\caption{The simulated arena in GAZEBO and the optimal paths generated by DQL and Deep PILCO. The blue path is learned by DQL. The red path is learned by Deep PILCO.}
\label{fig:gazebo-screen}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=5in]{Figures/cp_gazebo.pdf}
\caption{Results for the 1v2 case task of DQL and Deep PILCO in the GAZEBO simulator (seven replicates). The shaded regions show the 95\% confidence intervals around the mean line in the middle.}
\label{fig:gazebo}
\end{figure}
\subsection{Experimental Results}
\begin{figure}
\centering
\subfigure[1v1 case]{\includegraphics[width=\textwidth]{Figures/Epi_time_1v1.pdf}\label{fig:res-a}}
\hfil
\subfigure[1v2 case]{\includegraphics[width=\textwidth]{Figures/Epi_time_1v2.pdf}\label{fig:res-b}}
\caption{Training rewards versus episodes (solid lines) and versus wall-time (dashed lines) of DQL and Deep PILCO. For DQL versus episode, rolling mean rewards are drawn to reduce noise and show convergence. For both methods, each episode contains ten iterations to execute the policy, with each iteration taking about ten seconds. As for the computation time, Deep PILCO takes approximately 60 seconds per episode, while DQL takes three seconds per episode. Combining execution time and computation time, the total training time is 160 seconds per episode for Deep PILCO, and 103 seconds per episode for DQL.
(a) Results for the 1v1 case. The red curve ends earlier since Deep PILCO converged to the optimal reward within fewer training episodes. (b) Results for the 1v2 case.}
\label{fig:result}
\end{figure}
In the real-world experiments, we implement the 1v1 and 1v2 cases individually. Here, we analyze the results from two perspectives: the number of training episodes and the length of the computation time.
We first compare the episodic rewards of DQL and Deep PILCO. In order to reveal the learning trend, we also plot the rolling mean rewards of six neighboring episodes for DQL. For the 1v1 case, both algorithms learned optimal solutions after training. In Figure \ref{fig:res-a}, we can see that Deep PILCO found the solution within 11 episodes, far fewer than DQL, which took around 90 episodes. Furthermore, the result of Deep PILCO remained near its maximum after the optimum was reached, while the result of DQL was more unsteady.
For the 1v2 case, the results of both algorithms fluctuated more than in the 1v1 case, as shown in Figure \ref{fig:res-b}. While Deep PILCO needed a similar number of episodes in the 1v2 case as in the 1v1 case, DQL failed to converge to an optimal solution after 400 training episodes.
Considering the expensive training cost of real-world experiments, we decided to stop the experiment after 400 episodes. To eliminate the impact of the hyper-parameters, we changed the learning rate parameter of the DQL algorithm and re-ran the experiments, but DQL was still unable to find a stable optimal solution, as we can see in Figure \ref{fig:alpha}. Higher learning rates are likely to lead to a performance breakdown \cite{Hyper2019}.
For this reason, we stopped the experiments earlier for the two higher learning rates.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Figures/alpha_cp_new.pdf}
\caption{DQL training results for three learning rates in the 1v2 case. Empirically, large learning rates hinder convergence in the DQL experiments. With that in mind, we ran more trials with the smallest learning rate, 0.001. Nonetheless, all three experiments failed to achieve a high reward.}
\label{fig:alpha}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in]{Figures/test.pdf}
\caption{Statistic results of real-world Deep PILCO and Deep Q-Learning methods for two test cases. One-episode tests were run for ten times for each case. The histogram shows the average episodic reward value for different metrics, while error bars show the standard deviation.}
\label{fig:test}
\end{figure}
With regard to the computation time, fewer episodes are not necessarily equivalent to a shorter training time, since each episode costs a different wall-time for DQL and Deep PILCO. Consequently, we plot the same results as reward-versus-minute curves to compare the convergence speed of the two algorithms. Figure \ref{fig:result} shows that in both the 1v1 and 1v2 cases, Deep PILCO reached the optimal point much faster than DQL. Deep PILCO required less than one hour to find the optimal solution.
After the training procedure, we evaluate the optimal policies of DQL and Deep PILCO and compare the average episodic rewards. Each testing experiment ran for only one episode. For each case, we executed ten tests and computed the average episodic rewards. The histogram in Figure \ref{fig:test} demonstrates that in both 1v1 and 1v2 cases, Deep PILCO scored higher rewards than DQL.
We further analyze the training efficiency of real-world experiments and simulations together by comparing the minimal total training time each method required to exceed the test reward threshold in the 1v2 case. The results are listed in Table \ref{table}. Deep PILCO in simulation beats all the other cases. Nonetheless, in the real-world experiment it requires nearly the same wall-time, showing the potential of Deep PILCO to bypass the reality gap. While DQL fails to learn a solution that passes the test within acceptable time in the real world, it is striking that even in simulation DQL still requires several times the training time of real-world Deep PILCO.
\begin{table}
\tbl{Table of required training time to pass the test case (test reward $\geq8$) at 1v2 case.}
{\begin{tabular}{lccc}
\toprule
Method & Training time (min) \\
\colrule
Deep PILCO (experiment) & 53.3 \\
\bf{Deep PILCO (simulation)} & \bf{48.0} \\
DQL (experiment) & Fail \\
DQL (simulation) & 139.1 \\
\botrule
\end{tabular}}
\label{table}
\end{table}
Figure \ref{fig:snapshot} displays the snapshots of our experiments with Deep PILCO for the 1v2 case. The four pictures show different phases of the found optimal policy. We can see that our robot started from the initial state, where it saw none of the enemy robots, and finally navigated to an optimal position where it could see two enemy robots at the same time. In the rest of the episode, it stayed at the optimal position in order to achieve the highest episodic reward.
\section{Discussion}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Figures/Snapshot.png}
\caption{Snapshots of the real-world experiment for the 1v2 case. After about 15 training episodes, the robot found the optimal position to see the two enemies simultaneously. During the episode that the reward is the highest, the robot started from the initial position and then navigated to the optimal place at the end of the first iteration. The robot stayed at the optimal place during the rest of the episode to reach a maximal reward.}
\label{fig:snapshot}
\end{figure}
\subsection{Superiority of Deep Bayesian RL over Deep RL}
The results of the simulations and experiments indicate that Deep Bayesian RL surpasses Deep RL in both learning efficiency and learning speed. This corroborates the findings of previous works \cite{PILCO2011, DeepPILCO2016} that explicitly incorporated model uncertainty and temporal correlation into planning and control, enhancing sample efficiency and scalability. Although DQL required a much shorter computation time than Deep PILCO for each iteration, the learning efficiency of the latter compensated for that cost.
According to Moore's law, the number of transistors on a chip doubles with each technology generation. As computers become faster, both types of algorithm will run faster, which will further improve learning performance in the real world. We expect Deep Bayesian RL algorithms to become even more advantageous relative to Deep RL in terms of learning speed, on the basis of predictable advances in computation hardware.
Note that while DQL did not perform well in the 1v2 case with the original hyper-parameters, fine-tuned for other applications in \cite{DQN2016}, Deep PILCO was still capable of learning the best policy without the extra effort of modifying hyper-parameters, which were chosen for the basic cart-pole swing-up experiment in \cite{DPcode2018}. This finding suggests another advantage of Deep PILCO over DQL: it demands less work to fine-tune hyper-parameters every time it is applied to a new kind of task. This property makes Deep PILCO more flexible for various applications.
Also worth mentioning is the superiority of the real-world experiments with Deep PILCO over the simulated results of DQL. Although simulation is widely accepted as a faster medium than the real world, the efficiency of the Deep Bayesian algorithm was sufficient to overcome that handicap: even though the simulation runs 2.4 times faster than wall-time, real-world Deep PILCO still learned an optimal policy 2.6 times faster than simulation-based DQL, as shown in Table \ref{table}.
\subsection{Effect of Random Rollouts in Deep PILCO}
Observing the first few episodes of the Deep PILCO training curves in Figure \ref{fig:result}, we notice that the initial reward in the 1v2 case was already much higher than in the 1v1 case. This implies that the initial random rollouts in the 1v2 case explored more beneficial trajectories, so that the first learned dynamics model was already expressive enough to achieve a high reward. Nonetheless, there is no guarantee that a better initialization leads to faster learning.
Likewise, increasing the number of initial random rollouts did not noticeably enhance performance in our experiments, which is consistent with the finding of \cite{MbMf2018}.
In their work, which combines model predictive control (MPC) and reinforcement learning, they evaluated various design decisions, including the number of initial random trajectories, for their model-based RL algorithm. They found that although a higher amount of initial training data leads to higher initial performance, low-data initialization runs were able to reach a high final performance level as well, due to the reinforcement data aggregation.
\subsection{DQL's Lack of Convergence}
As for DQL's lack of convergence in the 1v2 experiments, the function approximation at the heart of Q-Learning could be a major factor, if not the only one, behind the convergence failures. Recently, \cite{Diagnosis2019} introduced a unit-testing framework to investigate the effect of function approximation on convergence. They found, surprisingly, that function approximation rarely caused divergence in Q-Learning algorithms, but only when the representational capacity of the function approximator was high; that is, the network architecture should not be too small.
\subsection{Future Work}
Future work should be devoted to extending the 1v2 system to 2v1 and even 2v2 systems. This requires us to introduce the mechanism of robot cooperation and take advantage of current multi-agent reinforcement learning algorithms, such as QMIX \cite{Rashid2018qmix}.
We also suggest a promising line of research focusing on improving the computation time of Deep Bayesian algorithms. Indeed, \cite{BlackDROPS2017} already found that the algorithm can be made faster when multiple cores are available by replacing the gradient-based optimization algorithm with a parallel, black-box algorithm that takes model uncertainties into account. We suggest that Deep Bayesian methods may perform even better if Deep PILCO is combined with their work, Black-DROPS, by making use of gradient-free policy optimizers such as CMA-ES.
Another potential line of future work is to replace the variational inference method used in Deep PILCO with $\alpha$-divergence minimization at $\alpha=0.5$. While training the BNN dynamics model, Deep PILCO updates the weight parameters by minimizing a divergence between the exact posterior distribution and an approximation, a procedure called variational Bayes (VB) \cite{VB2008}. Running VB is equivalent to minimizing the $\alpha$-divergence as $\alpha \to 0$.
However, as \cite{PolSearch2019} showed in recent work, minimizing the $\alpha$-divergence with $\alpha = 0.5$ often produces better results than VB, as reflected in better test log-likelihood values.
\section{Conclusions}
We proposed a new application of Deep PILCO to a real-world multi-robot combat game, and compared this Deep Bayesian RL algorithm with the deep-learning-based RL algorithm DQL. Our results show that Deep PILCO significantly outperforms Deep Q-Learning in learning speed and scalability, with real-world learning even faster than DQL's learning in simulation. We conclude that sample-efficient Deep Bayesian learning algorithms have great prospects for competitive games in which the agent aims to defeat opponents in the real world, as opposed to being limited to simulated applications.
\section*{Data Availability}
The datasets used in the present study are available in the article and the references.
\section*{Conflicts of Interest}
The authors declare that there is no conflict of interest regarding the publication of this paper.
\bibliographystyle{tADR}
\section{Introduction}
Wave fronts propagating into an unstable state according to the model of
Fisher and Kolmogorov, Petrovskii, and Piskunov (FKPP)~\cite{fisher,kpp}
are encountered in many fields~\cite{vansaarloos}, in particular biology~\cite{murray}
and ecology~\cite{mendez14}. Phenotype selection through the propagation of the fittest trait \cite{bouin} and
cultural transmission in neolithic transitions \cite{fort2016} are a few examples of applications of FKPP fronts.
The model introduces a partial differential equation with a logistic growth term and a diffusion term.
\\
The effect of non-standard diffusion on the speed of FKPP fronts is currently being investigated~\cite{mancinelli,froemberg,cabre,adnani},
and we recently considered the propagation of a wave front in a concentrated solution in which cross-diffusion cannot be neglected~\cite{pre19}.
Experimental evidence of cross-diffusion has been given in systems involving ions, micelles, surface,
or polymer reactions and its implication in hydrodynamic instabilities has been demonstrated~\cite{vanag1,leaist,vanag2,rossi,budroni1,budroni2}.
In parallel, cross-diffusion is becoming an active field of research in applied mathematics~\cite{desvillettes1,desvillettes2,desvillettes3,juengel,daus,moussa}.
The sensitivity of FKPP fronts to fluctuations was first observed numerically~\cite{breuer94,lemarchand95}.
An interpretation was then proposed in the framework of a deterministic approach introducing a cutoff in the logistic term~\cite{brunet97}.
In mesoscopic or microscopic descriptions of the invasion front of A particles
engaged in the reaction $\rm{A}+\rm{B} \rightarrow 2\rm{A}$,
the discontinuity induced by the rightmost particle in the leading edge of species A profile amounts to a cutoff in the reactive term.
The inverse of the number of particles in the reactive interface gives an estimate of the cutoff~\cite{hansen}.
The study of the effect of fluctuations on FKPP fronts remains topical~\cite{panja2,doering}.
In this paper we perform a stochastic analysis of a reaction-diffusion front of FKPP type in the case of two species A and B with
different diffusion coefficients~\cite{mai}, giving rise to cross-diffusion phenomena in concentrated solutions.
The paper is organized as follows. Section 2 is devoted to a dilute system without cross-diffusion. The effects of the discrete number of particles on the front speed,
the shift between the profiles of the two species and the width of species A profile are deduced from a master equation approach.
In section 3, we derive the expression of the master equation associated with a concentrated system inducing cross-diffusion
and compare the properties of the FKPP wave front in the dilute and the concentrated cases.
Conclusions are given in section 4.
\section{Dilute system}
We consider two chemical species A and B engaged in the reaction
\begin{equation}
\ce{A + B ->[k] 2 A},
\label{reac}
\end{equation}
where $k$ is the rate constant. The diffusion coefficient, $D_A$, of species A may differ from the diffusion coefficient, $D_B$, of species B.
In a deterministic approach, the reaction-diffusion equations are
\begin{eqnarray}
\partial_t A &=& D_A\partial_x^2A + kAB \label{RDA}\\
\partial_t B &=& D_B\partial_x^2B - kAB \label{RDB}
\end{eqnarray}
where the concentrations of species A and B are denoted by $A$ and $B$.
The system admits wave front solutions propagating without deformation at constant speed.
For sufficiently steep initial conditions and in particular step functions $(A(x,t=0)=C_0H(-x)$ and $B(x,t=0)=C_0H(x))$, where
$C_0$ is constant and $H(x)$ is the Heaviside function, the minimum velocity
\begin{eqnarray}
v^*=2\sqrt{kC_0D_A}
\label{vdet}
\end{eqnarray}
is selected~\cite{vansaarloos,murray,brunet97}.
The parameter $C_0=A(x,0)+B(x,0)$ is the sum of the initial concentrations of species A and B.
Discrete variables of space, $i=x/\Delta x$, and time, $s=t/\Delta t$, where $\Delta x$ is the cell length and $\Delta t$ is the time step, are introduced in order to numerically solve Eqs. (\ref{RDA}) and (\ref{RDB}) in a wide range of diffusion coefficients $D_B$. We consider a system of $\ell=2000$ spatial cells.
The initial condition is a step function located in the cell $i_0=\ell/2$
\begin{eqnarray}
A(i,0)&=&C_0H(i_0-i),\\
B(i,0)&=&C_0H(i-i_0),
\end{eqnarray}
where $H(i)$ is the Heaviside function.
In order to simulate a moving frame and to counterbalance the autocatalytic production of species A in a finite system,
the following procedure is applied.
At the time steps $s$ such that $\sum_{i=1}^{\ell} A(i,s) > \sum_{i=1}^{\ell} A(i,0)$,
the first cell is suppressed and a last cell with $A(\ell,s)=0$ and $B(\ell,s)=C_0$ is created.
Hence, the inflection point of the front profile remains close to the initial step of the Heaviside function. A minimal sketch of this integration scheme is given below.
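The following explicit finite-difference sketch implements Eqs.~(\ref{RDA}) and (\ref{RDB}) together with the moving-frame cell shift, on a reduced grid for illustration only; with these parameter values the recovered speed is close to $v^*=20$.
\begin{verbatim}
import numpy as np

k, C0, DA, DB = 10.0, 10.0, 1.0, 4.0
dx = 0.008
dt = 0.2 * dx**2 / max(DA, DB)     # explicit stability margin
L = 400                            # reduced grid (production: 2000)
A = np.where(np.arange(L) < L // 2, C0, 0.0)
B = C0 - A
A0_total = A.sum()

def lap(u):                        # 1-D Laplacian, zero-flux ends
    v = np.empty_like(u)
    v[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    v[0], v[-1] = u[1] - u[0], u[-2] - u[-1]
    return v / dx**2

n_steps, shifts = 200000, 0
for s in range(n_steps):
    r = k * A * B
    A += dt * (DA * lap(A) + r)
    B += dt * (DB * lap(B) - r)
    while A.sum() > A0_total:      # moving-frame cell shift
        A = np.append(A[1:], 0.0)
        B = np.append(B[1:], C0)
        shifts += 1

print("front speed ~", shifts * dx / (n_steps * dt))
\end{verbatim}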
In small systems with typically hundreds of particles per spatial cell, the deterministic description may fail and a stochastic approach is required.
We consider the chemical master equation associated with Eq.~(\ref{reac})~\cite{nicolis,gardiner}. The master equation is divided into two parts
\begin{eqnarray}
\label{med}
\dfrac{\partial P(\phi)}{\partial t}=\left.\dfrac{\partial P(\phi)}{\partial t}\right|_{\rm reac}+\left.\dfrac{\partial P(\phi)}{\partial t}\right|_{\rm diff}
\end{eqnarray}
where the first part corresponds to the reactive terms
\begin{eqnarray}
\label{medr}
\left.\dfrac{\partial P(\phi)}{\partial t}\right|_{\rm reac}=&\sum_i\dfrac{k}{\Omega N_0}\bigg[(N_A(i)-1)(N_B(i)+1)P(\{N_A(i)-1,N_B(i)+1\}) \nonumber \\
& -N_A(i)N_B(i)P(\phi)\bigg]
\end{eqnarray}
and the second part corresponds to the diffusion terms
\begin{eqnarray}
\label{medd}
\left.\dfrac{\partial P(\phi)}{\partial t}\right|_{\rm diff}=&\sum_i\bigg[\dfrac{D_A}{\Delta x^2}(N_A(i)+1)\big[P(\{N_A(i-1)-1,N_A(i)+1\}) \nonumber \\
&+P(\{N_A(i)+1,N_A(i+1)-1\})\big] \nonumber \\
&+\dfrac{D_B}{\Delta x^2}(N_B(i)+1)\big[P(\{N_B(i-1)-1,N_B(i)+1\})\nonumber \\
&+P(\{N_B(i)+1,N_B(i+1)-1\})\big] \nonumber\\
&-\dfrac{2}{\Delta x^2}\big(D_AN_A(i)+D_BN_B(i)\big)P(\phi)\bigg]
\end{eqnarray}
where $\phi=\{N_A(i),N_B(i)\}$ denotes the state of the system, $\Omega$, the typical size of the system, $N_0=\Omega C_0$, the initial total number of particles in a cell, and
$N_A(i)=\Omega A(i)$ and $N_B(i)=\Omega B(i)$ are the numbers of particles A and B in cell $i$.
We consider parameter values leading to the macroscopic values used in the deterministic approach.
The initial condition is given by $(N_A(i)=N_0,N_B(i)=0)$ for $1 \leq i < \ell/2$
and $(N_A(i)=0,N_B(i)=N_0)$ for $\ell/2 \leq i \leq \ell$ with $N_0=100$, $\Omega=10$ $(C_0=10)$.
The kinetic Monte Carlo algorithm developed by Gillespie is used to directly simulate the reaction and diffusion processes and numerically solve the master equation~\cite{gillespie}.
The procedure used in the deterministic approach to evaluate the front speed is straightforwardly extended to the fluctuating system.
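A minimal Gillespie sketch of these processes is given below. The events are the reaction $\rm{A}+\rm{B} \rightarrow 2\rm{A}$ in a cell and single-particle hops to a neighbouring cell; the per-pair reaction propensity, the tiny lattice, and the short run are illustrative only, the production runs using the propensities of Eqs.~(\ref{medr}) and (\ref{medd}) on $\ell=2000$ cells.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

L, N0, kr, dx, DA, DB = 50, 100, 0.01, 0.008, 1.0, 1.0
NA = np.where(np.arange(L) < L // 2, N0, 0).astype(int)
NB = (N0 - NA).astype(int)
t, t_end = 0.0, 1e-4
while t < t_end:
    prop = np.concatenate([kr * NA * NB,            # reaction per cell
                           2.0 * DA / dx**2 * NA,   # A hops (2 directions)
                           2.0 * DB / dx**2 * NB])  # B hops
    a_tot = prop.sum()
    t += rng.exponential(1.0 / a_tot)               # waiting time
    e = rng.choice(3 * L, p=prop / a_tot)           # next event
    c, kind = e % L, e // L
    if kind == 0:                                   # reaction in cell c
        NA[c] += 1; NB[c] -= 1
    else:                                           # hop; reflecting walls
        d = min(max(c + rng.choice([-1, 1]), 0), L - 1)
        if kind == 1: NA[c] -= 1; NA[d] += 1
        else:         NB[c] -= 1; NB[d] += 1

print(NA.sum(), NB.sum())   # total particle number is conserved
\end{verbatim}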
\begin{figure}
\centering
\includegraphics[height=6cm]{fig1.eps}
\caption{Dilute system. Wave front speeds $v_{d,{\rm ME}}$, $v_{d,{\rm cut}}$, $v_{B_\varepsilon}$, and $v_d=v^*$ versus ratio of diffusion coefficients $D_B/D_A$ in the dilute case.
The values of $v_{d,{\rm ME}}$ (red circles)
are deduced from the direct simulation of the master equation (Eqs. (\ref{med}-\ref{medd})) for $k=10$, $\Omega=10$, $N_0=100$, $D_A=1$, $\ell=2000$, and $\Delta x=0.008$.
The values of $v_{d,{\rm cut}}$ (black open triangles) are deduced from the numerical integration of the deterministic equations (Eqs.~(\ref{RDAc}) and (\ref{RDBc})) in the presence of a cutoff $\varepsilon=10^{-4}$ for $k=10$, $C_0=10$, $D_A=1$, $\ell=2000$, $\Delta x=0.008$, and $\Delta t= 6.4\times 10^{-6}$.
The values of $v_{B_\varepsilon}$ (green crosses) are deduced from Eq.~(\ref{vbeps})
in which the value $B_\varepsilon$ has been deduced from the numerical integration of Eqs.~(\ref{RDAc}) and (\ref{RDBc}).
The horizontal line gives the minimum velocity $v_d=v^*$ (Eq. (\ref{vdet})) of an FKPP front in the absence of a cutoff.
}
\label{figv}
\end{figure}
\subsection{Front speed}
For sufficiently small spatial lengths $\Delta x$ and time steps $\Delta t$,
the numerical solution of the deterministic equations given in Eqs. (\ref{RDA}) and (\ref{RDB}) leads to the same propagation speed $v_d$,
where the index $d$ stands for dilute, in the entire range of $D_B/D_A$ values~\cite{pre19}.
The number of cells created during $10^7$ time steps once a stationary propagation is reached is used to evaluate the front speed.
For the chosen parameter values, we find a propagation speed obeying $v_d=v^*=20$
with an accuracy of $0.4\%$: no appreciable deviation from the unperturbed deterministic prediction given in Eq. (\ref{vdet}) is observed.
In particular, the front speed $v_d$ does not depend on the diffusion coefficient $D_B$.
The front speed deduced from the direct simulation of Eqs. (\ref{med}-\ref{medd}) is denoted $v_{d,{\rm ME}}$ where the index $d$ stands for dilute and the index ${\rm ME}$ for master equation.
As shown in Fig.~\ref{figv}, the velocity $v_{d,{\rm ME}}$ is smaller than the deterministic prediction $v^*$
given in Eq.~(\ref{vdet}).
As long as $D_B$ remains smaller than or equal to $D_A$, the velocity $v_{d,{\rm ME}}$ is constant.
The main result of the master equation approach is that the front speed drops
as $D_B$ increases above $D_A$. Typically, for $D_B/D_A=16$, the velocity $v_{d,{\rm ME}}$
is reduced by $22\%$ with respect to $v_d=v^*$. Due to computational costs, larger $D_B/D_A$ values were not investigated.
In the case of identical diffusion coefficients for the two species, the decrease of the front speed observed in a stochastic description is interpreted in the framework of the cutoff approach introduced by Brunet and Derrida~\cite{brunet97}. For $D_A=D_B$, the dynamics of the system is described by a single equation. When a cutoff $\varepsilon$ is introduced in the reactive term according to
\begin{equation}
\partial_tA=\partial_x^2A + kA(C_0-A)H(A-\varepsilon),
\end{equation}
the velocity is given by
\begin{eqnarray}
v_\varepsilon=v^*\left(1-\dfrac{\pi^2}{2(\ln \varepsilon)^2}\right)
\label{veps}
\end{eqnarray}
In a particle description, the cutoff is interpreted as the inverse of the total number of particles in the reactive interface~\cite{hansen}:
\begin{equation}
\label{cutoff}
\varepsilon=\dfrac{\Delta x}{N_0W^*}
\end{equation}
where the width of the interface is roughly evaluated as~\cite{murray,pre19}
\begin{equation}
W^*=8\sqrt{\dfrac{D_A}{kC_0}}
\end{equation}
For the chosen parameter values, the cutoff equals $\varepsilon=10^{-4}$ leading to the corrected speed $v_\varepsilon=18.84$.
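These numbers follow directly from Eqs. (\ref{cutoff}) and (\ref{veps}); assuming $v^*=2\sqrt{kC_0D_A}$ for the minimum front speed of Eq. (\ref{vdet}), a short Python check for the parameter values of this section reads:
\begin{verbatim}
import numpy as np

k, C0, DA, N0, dx = 10.0, 10.0, 1.0, 100, 0.008
W_star = 8 * np.sqrt(DA / (k * C0))      # interface width, W* = 0.8
eps = dx / (N0 * W_star)                 # cutoff: eps = 1e-4
v_star = 2 * np.sqrt(k * C0 * DA)        # minimum FKPP speed, v* = 20
v_eps = v_star * (1 - np.pi**2 / (2 * np.log(eps)**2))  # 18.84
print(eps, v_eps)
\end{verbatim}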
According to Fig.~\ref{figv}, the velocity $v_{d,{\rm ME}}$ deduced from the master equation for $D_A=D_B$ agrees with the velocity
$v_\varepsilon$ deduced from the cutoff approach. The results are unchanged for $D_B<D_A$, and Eq.~(\ref{veps})
correctly predicts the velocity in a fluctuating system. For $D_B>D_A$, Eq.~(\ref{veps}) is no longer valid.
Nevertheless, the relevance of the cutoff approach can be checked by numerically integrating the following two equations
\begin{eqnarray}
\partial_t A &= D_A\partial_x^2A + kABH(A-\varepsilon)\label{RDAc}\\
\partial_t B &= D_B\partial_x^2B - kABH(A-\varepsilon)\label{RDBc}
\end{eqnarray}
The values of the front speed $v_{d,{\rm cut}}$ deduced from the numerical integration
of Eqs. (\ref{RDAc}) and (\ref{RDBc}) are given in Fig.~\ref{figv} and satisfactorily agree with the results $v_{d,{\rm ME}}$ of the master equation, including for large $D_B/D_A$ values.
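For completeness, a single explicit Euler step of this integration can be sketched in a few lines of Python; the no-flux boundary treatment is our assumption, and stability requires a small time step such as the value $\Delta t=6.4\times10^{-6}$ quoted in the caption of Fig.~\ref{figv}.
\begin{verbatim}
import numpy as np

def step_cutoff(A, B, k, DA, DB, dx, dt, eps):
    # One explicit Euler step of Eqs. (RDAc)-(RDBc) with the
    # Heaviside cutoff H(A - eps) on the reactive term.
    lapA = (np.roll(A, 1) + np.roll(A, -1) - 2 * A) / dx**2
    lapB = (np.roll(B, 1) + np.roll(B, -1) - 2 * B) / dx**2
    lapA[0] = (A[1] - A[0]) / dx**2;  lapA[-1] = (A[-2] - A[-1]) / dx**2
    lapB[0] = (B[1] - B[0]) / dx**2;  lapB[-1] = (B[-2] - B[-1]) / dx**2
    R = k * A * B * (A >= eps)        # cutoff suppresses the reaction
    A += dt * (DA * lapA + R)
    B += dt * (DB * lapB - R)
\end{verbatim}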
\begin{figure}
\centering
\subfigure{\includegraphics[height=6cm]{fig2a.eps}}
\subfigure{\includegraphics[height=6cm]{fig2b.eps}}
\caption{Dilute system. (a) Numbers $N_A$ of particles A (red dashed line) and $N_B$ of particles B (black solid line)
versus spatial coordinate $x$ deduced from the direct simulation of
the master equation (Eqs. (\ref{med}-\ref{medd})) using the Gillespie method. The snapshot is given at time $t=9$ for $k=10$, $\Omega=10$, $N_0=100$, $D_A=1$, $D_B=16$, $\ell=2000$, and $\Delta x=0.008$.
The vertical dashed line indicates the rightmost cell occupied by A particles. (b) Concentrations $A$ of species A (red dashed line) and $B$ of
species B (black solid line) versus spatial
coordinate $x$ deduced from numerical integration of the deterministic equations (Eqs.~(\ref{RDAc}) and (\ref{RDBc})) in the presence of a cutoff $\varepsilon=10^{-4}$. The snapshot is given at time $t=640$ for otherwise the same parameters as in the master
equation approach. The vertical dashed line indicates the abscissa $x_\varepsilon$ for which the scaled A concentration $A(x_\varepsilon)/C_0$ reaches the cutoff value.
The horizontal line indicates the value $B_\varepsilon$ of B concentration at the abscissa $x_\varepsilon$.}
\label{prof}
\end{figure}
According to Fig.~\ref{prof}a, the A profile is steeper than the B profile for $D_B>D_A$. The mean number of B particles in the leading edge smoothly converges to $N_0$. On average, the rightmost A particle sees a number of B particles smaller than $N_0$.
The significant decrease of the front velocity $v_{d,{\rm cut}}$ for $D_B>D_A$ is qualitatively interpreted by the apparent number $N_\varepsilon$ of B particles seen by the rightmost A particle in the leading edge.
The linear analysis of Eqs.~(\ref{RDAc}) and (\ref{RDBc}) according to the cutoff approach~\cite{brunet97} leads to Eq.~(\ref{veps})
which does not account for the behavior at large $D_B$. A nonlinear analysis would be necessary. Neither the perturbative approach
that we developed in the case of the deterministic description~\cite{murray,pre19}, nor the Hamilton-Jacobi
technique~\cite{fedotov1999,mirrahimi}, nor the evaluation of the variance $\langle AB\rangle$ from a Langevin approach~\cite{carlo}
allowed us to obtain an analytical estimate of the front speed.
Instead, we suggest the following empirical expression of the velocity of an FKPP front for two species with different diffusion coefficients
\begin{equation}
v_{B_\varepsilon}=2\sqrt{kB_\varepsilon D_A}\left(1-\dfrac{\pi^2}{2(\ln\varepsilon)^2}\right)
\label{vbeps}
\end{equation}
where $B_\varepsilon$ denotes the concentration of B species at the abscissa $x_\varepsilon$ at which the scaled
concentration $A(x_\varepsilon)/C_0$ is equal to the cutoff $\varepsilon$ (see Fig.~\ref{prof}b).
The variation of $B_\varepsilon$ versus $D_B/D_A$ is numerically evaluated using Eqs.~(\ref{RDAc}) and (\ref{RDBc}).
The result is given in Fig.~\ref{beps}.
\begin{figure}
\centering
\includegraphics[height=6cm]{fig3.eps}
\caption{Dilute system. The green crosses give the value $B_\varepsilon$ deduced from the numerical integration of the deterministic equations (Eqs.~(\ref{RDAc}) and (\ref{RDBc})) with a cutoff $\varepsilon=10^{-4}$
versus the ratio of the diffusion coefficients $D_B/D_A$. The horizontal line indicates the concentration $C_0$.
The parameters are given in the caption of Fig.~\ref{figv}.}
\label{beps}
\end{figure}
As shown in Fig.~\ref{figv}, the front speed $v_{B_\varepsilon}$ deduced from Eq. (\ref{vbeps}) slightly underestimates, over the whole range of $D_B/D_A$, the results
$v_{d,{\rm cut}}$ deduced from the numerical integration of the deterministic equations (Eqs.~(\ref{RDAc}) and (\ref{RDBc})) with a cutoff.
\subsection{Profile properties}
We focus on two steady properties of the wave front, the shift between the profiles of species A and B
and the width of species A profile~\cite{pre19}.
For a wave front propagating at speed $v$ and using the coordinate $z=x-vt$ in the moving frame, the shift between the profiles of the two species
is defined as the difference $A(z=0)-B(z=0)$ of concentrations between species A and B at the origin
$z=0$ chosen such that $A(z=0)=C_0/2$.
The shift is denoted by $h_d$, where the index $d$ stands for dilute, when the concentrations are solutions of the deterministic equations
without cutoff given in Eqs. (\ref{RDA}) and (\ref{RDB}).
As shown in Fig.~\ref{hd}, the shift $h_d$ significantly varies with the ratio $D_B/D_A$, in particular when $D_B$ is larger than
$D_A$ \cite{pre19}. The shift vanishes for $D_A=D_B$, is positive
for $D_B<D_A$ and negative for $D_B>D_A$.
The direct simulation of the master equation leads to highly fluctuating profiles. We use the following strategy to compute the shift $h_{d,{\rm ME}}$.
First, starting from the leftmost cell, we scan to the right to determine the label $i_l$ of the first cell in which the number of A particles drops below $N_0/2$, and store
$N_B(i_l,s)$ for a large discrete time $s$ at which the profile has reached a steady shape.
Then, starting from the rightmost cell, labeled $\ell$, we follow a similar procedure and determine the label $i_r$ of the first cell in which the number
of A particles exceeds $N_0/2$, and store $N_B(i_r,s)$ for the same discrete time $s$. The instantaneous value of the shift deduced from the master equation at discrete time $s$ is then given by
$\left(N_0-N_B(i_l,s)-N_B(i_r,s)\right)/(2\Omega)$. The values of the shift $h_{d,{\rm ME}}$ used to draw Fig.~\ref{hd} are obtained after a time average between the times $t=1$ and $t=10$ in arbitrary units, i.e.
between $s=1.5\times 10^5$ and $s=1.5\times 10^6$ in number of time steps.
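A compact Python sketch of this measurement, assuming the instantaneous profiles are stored as NumPy arrays, is given below; the function name and the use of \texttt{argmax} to locate the first crossing are our illustrative choices. The cell labels $i_l$ and $i_r$ are returned because they are reused for the width measurement described later.
\begin{verbatim}
import numpy as np

def instantaneous_shift(NA, NB, N0, Omega):
    half = N0 / 2.0
    # First cell (from the left) where N_A drops below N0/2.
    i_l = int(np.argmax(NA < half))
    # First cell (from the right) where N_A exceeds N0/2.
    i_r = len(NA) - 1 - int(np.argmax(NA[::-1] > half))
    shift = (N0 - NB[i_l] - NB[i_r]) / (2.0 * Omega)
    return shift, i_l, i_r
\end{verbatim}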
The shift $h_{d,{\rm ME}}$ between the profiles of A and B is sensitive to the fluctuations
of the number of particles described by the master equation.
Introducing an appropriate cutoff satisfying Eq. (\ref{cutoff}) in the reactive term of
the deterministic equations given in Eqs. (\ref{RDAc}) and (\ref{RDBc}) leads to values of the shift $h_{d,{\rm cut}}$ in very good agreement with the results
$h_{d,{\rm ME}}$ of the master equation.
\begin{figure}
\centering
\includegraphics[height=6cm]{fig4.eps}
\caption{Dilute system. Scaled shifts $h_{d,{\rm ME}}/C_0$, $h_{d,{\rm cut}}/C_0$, and $h_d/C_0$ between the profiles of species A and B versus ratio of diffusion coefficients $D_B/D_A$.
The values of $h_{d,{\rm ME}}/C_0$ (red circles) are deduced from the master equation (Eqs. (\ref{med}-\ref{medd})).
The values of $h_{d,{\rm cut}}/C_0$ (black open triangles) are deduced from the deterministic equations (Eqs. (\ref{RDAc}) and (\ref{RDBc}))
with a cutoff $\varepsilon=10^{-4}$.
The values of $h_d/C_0$ (blue open squares) are deduced from the deterministic equations (Eqs. (\ref{RDA}) and (\ref{RDB})) without cutoff.
The line gives the results for $D_A=D_B$.
The parameters are given in the caption of Fig. 1.}
\label{hd}
\end{figure}
Considering the deterministic equations, we deduce the width of A profile from the steepness $A'(0)$ in the moving frame at the origin $z=0$ and find
\begin{equation}
W_d=C_0/|A'(0)|
\label{width}
\end{equation}
where $A$ is solution of Eqs. (\ref{RDA}) and (\ref{RDB}) without cutoff.
The same definition is applied to Eqs. (\ref{RDAc}) and (\ref{RDBc}) to obtain the width $W_{d,{\rm cut}}$ in the presence of a cutoff.
The definition has to be adapted to take into account the fluctuations of the profile deduced from the master equation.
Using the cell labels $i_l$ and $i_r$ determined for the shift between the fluctuating A and B profiles solutions of Eqs. (\ref{med}-\ref{medd}),
we define the mean cell label $i_m$ as the nearest integer to the average $(i_l+i_r)/2$.
We use Eq. (\ref{width}) with $|A'(0)| \simeq (N_A(i_m-40)-N_A(i_m+40))/(81\Delta x\Omega)$
to compute the instantaneous width. As in the case of the shift $h_{d,{\rm ME}}$ between the fluctuating profiles of A and B, the values $W_{d,{\rm ME}}$ of the width used to draw Fig.~\ref{Wd}
are obtained after a time average between the times $t=1$ and $t=10$.
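Under the same assumptions as in the sketch of the shift measurement, the instantaneous width can be evaluated as follows; the hypothetical helper takes the cell labels $i_l$ and $i_r$ returned there.
\begin{verbatim}
def instantaneous_width(NA, i_l, i_r, dx, Omega, C0):
    # Mean half-height cell and finite-difference steepness, as in the text.
    i_m = int(round((i_l + i_r) / 2.0))
    slope = (NA[i_m - 40] - NA[i_m + 40]) / (81.0 * dx * Omega)
    return C0 / abs(slope)
\end{verbatim}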
\begin{figure}
\centering
\includegraphics[height=6cm]{fig5.eps}
\caption{Dilute system. Profile widths deduced from different approaches versus ratio of diffusion coefficients $D_B/D_A$.
The values of $W_{d,{\rm ME}}$ (red circles) are deduced from the master equation (Eqs. (\ref{med}-\ref{medd})).
The values of $W_{d,{\rm cut}}$ (black open triangles) are deduced from the numerical integration of the deterministic equations (Eqs. (\ref{RDAc}) and (\ref{RDBc})) with a cutoff $\varepsilon=10^{-4}$.
The values of $W_d$ (blue open squares) are deduced from the numerical integration of the deterministic equations (Eqs. (\ref{RDA}) and (\ref{RDB}))
without cutoff.
The parameters are given in the caption of Fig. 1.}
\label{Wd}
\end{figure}
As shown in Fig.~\ref{Wd}, the width $W_d$ deduced from the deterministic equations without cutoff is smaller (resp. larger) for $D_B<D_A$ (resp. $D_B>D_A$)
than the width evaluated at $W^*$ in the case of identical diffusion coefficients $D_B=D_A$ \cite{pre19}.
The width $W_{d,{\rm ME}}$ deduced from the master equation (Eqs. (\ref{med}-\ref{medd})) and
the width $W_{d,{\rm cut}}$ deduced from the deterministic equations (Eqs. (\ref{RDAc}) and (\ref{RDBc})) with a cutoff obeying Eq. (\ref{cutoff}) agree and are both smaller than the width $W_d$
of the wave front, solution of the deterministic equations without cutoff.
Given the good agreement between the results of the master equation and the deterministic equations with a cutoff,
it is more relevant to describe the effect of the fluctuations on the wave front as an effect of the discretization of the variables than as a pure noise effect.
\begin{figure}
\centering
\includegraphics[height=6cm]{fig6.eps}
\caption{Dilute system. Relative differences between
the front properties deduced from the master equation (Eqs. (\ref{med}-\ref{medd})) and
the analogous properties deduced from the deterministic equations without cutoff (Eqs. (\ref{RDA}) and (\ref{RDB}))
versus $D_B/D_A$. The large red circles give the relative difference $(v_{d,{\rm ME}}-v_d)/v_d$ for the front speed,
the blue circles of intermediate size give the relative difference $(h_{d,{\rm ME}}-h_d)/h_d$ for the shift between A and B profiles, and
the small black circles give the relative difference $(W_{d,{\rm ME}}-W_d)/W_d$ for the width of A profile.
The parameters are given in the caption of Fig. 1.}
\label{figXX}
\end{figure}
Figure \ref{figXX} summarizes the effect of the fluctuations on the three quantities $q=v,h,W$
in the whole range of considered values of the ratio $D_B/D_A$ for the dilute system,
by showing the relative differences $(q_{d,{\rm ME}}-q_d)/q_d$ between the results deduced from the master equation
and from the deterministic equations without cutoff, for the velocity, the shift, and the width.
In the whole range of $D_B/D_A$, the discrete nature of the number of particles in the master equation
induces a small decrease of $5\%$ of the profile width with respect to the deterministic description without cutoff.
A significant increase of $14\%$ of the shift between the A and B profiles is observed in the presence of fluctuations in the entire interval
of ratios of diffusion coefficients.
As for the width, the relative difference of velocity $(v_{d,{\rm ME}}-v_d)/v_d$, with $v_d=v^*$, is negative and takes the same value
of $-5\%$ for $D_B/D_A \leq 1$. However, the relative difference of velocity is not constant for $D_B/D_A > 1$ and reaches $-22\%$ for
$D_B/D_A=16$.
Hence, a significant speed decrease is observed, whereas the shift and the width, which are determined
far behind the leading edge of the front, are not affected when the diffusion coefficient of species B is large with respect to the
diffusion coefficient of species A.
\section{Concentrated system}
In a dilute system, the solvent S is in great excess with respect to the reactive species A and B.
The concentration of the solvent is then supposed to remain homogeneous regardless of the
variation of concentrations $A$ and $B$.
In a concentrated solution, the variation of the concentration of the solvent cannot be ignored.
In the linear domain of irreversible thermodynamics, the diffusion fluxes are linear combinations of the concentration gradients of the different species. The flux $j_X$ of species X=A, B, S depends on the concentration gradients and the diffusion coefficients of all species A, B, and S~\cite{degroot,signon16}. Using the conservation relation $C_{tot}=A+B+S$, where $C_{tot}$ is a constant,
we eliminate the explicit dependence of the fluxes on the concentration $S$ of the solvent and find
\begin{eqnarray}
j_A&=&-\left(1-\dfrac{A}{C_{tot}}\right)D_A\partial_xA+\dfrac{A}{C_{tot}}D_B\partial_xB \label{ja}\\
j_B&=&\dfrac{B}{C_{tot}}D_A\partial_xA-\left(1-\dfrac{B}{C_{tot}}\right)D_B\partial_xB \label{jb}
\end{eqnarray}
According to the expression of the diffusion fluxes in a concentrated system, the reaction-diffusion equations associated with the chemical mechanism given in Eq.~(\ref{reac}) read~\cite{signon16}
\begin{eqnarray}
\partial_t A &=& D_A\partial_x\left[\left(1-\dfrac{A}{C_{tot}}\right)\partial_xA\right]-D_B\partial_x\left(\dfrac{A}{C_{tot}}\partial_xB\right) + kAB \label{RDAconc}\\
\partial_t B &=& D_B\partial_x\left[\left(1-\dfrac{B}{C_{tot}}\right)\partial_xB\right]-D_A\partial_x\left(\dfrac{B}{C_{tot}}\partial_xA\right) - kAB \label{RDBconc}
\end{eqnarray}
The discrete expression of the flux at the interface between cells $i$ and $i+1$ is related to the difference of the transition rates in the master equation
according to
\begin{eqnarray}
j_X(i+\sfrac{1}{2})&=&-\frac{1}{\Delta x}\left(T_{N_X(i+1)}^--T_{N_X(i)}^+\right)
\end{eqnarray}
where $X=A,B$, the transition rate $T_{N_X(i+1)}^-$ is associated with the jump of a particle X to the left from cell $i+1$ to cell $i$, and $T_{N_X(i)}^+$ is associated
with the jump of a particle X to the right from cell $i$ to cell $i+1$.
Using Eqs.~(\ref{ja}) and (\ref{jb}) and replacing $\partial_xX$ by $(N_X(i+1)-N_X(i))/\Omega\Delta x$ for $X=A,B$, we assign well-chosen terms of the flux $j_X(i+\sfrac{1}{2})$ to the transition rates to the left and to the right
\begin{eqnarray}
T_{N_A(i)}^\pm&=&\dfrac{D_A}{\Delta x^2}N_A(i)-\dfrac{N_A\left(i\pm\sfrac{1}{2}\right)}{\Omega C_{tot}\Delta x^2}\left[D_AN_A(i)-D_BN_B(i\pm 1)\right]\label{wa1}\\
T_{N_B(i)}^\pm&=&\dfrac{D_B}{\Delta x^2}N_B(i)-\dfrac{N_B\left(i\pm\sfrac{1}{2}\right)}{\Omega C_{tot}\Delta x^2}\left[D_BN_B(i)-D_AN_A(i\pm 1)\right]\label{wa2}
\end{eqnarray}
to ensure that they are positive or equal to zero for any number of particles.
A standard arithmetic mean for the number $N_X\left(i\pm\sfrac{1}{2}\right)$ of particles $X=A,B$ in the virtual cell $i\pm\sfrac{1}{2}$ cannot be used since it may lead to a non-zero transition rate when the departure cell is empty. Instead, we choose the harmonic mean between the number of particles in cells $i$ and $i\pm 1$:
\begin{eqnarray}
N_X\left(i\pm\sfrac{1}{2}\right)&=&\dfrac{N_X(i)N_X(i\pm 1)}{N_X(i)+N_X(i \pm 1)}
\end{eqnarray}
which ensures that no jump of $X$ from cell $i$ to cell $i\pm 1$ occurs when the number of particles $N_X$ vanishes in cell $i$. We checked different definitions
of the mean obeying the latter condition and found that the results are not significantly affected
when choosing for $N_X\left(i\pm\sfrac{1}{2}\right)$ a modified arithmetic mean which vanishes if $N_X(i)=0$ and equals $(N_X(i)+N_X(i \pm 1))/2$ otherwise,
or a geometric mean $\sqrt{N_X(i)N_X(i\pm 1)}$.\\
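As an illustration of how these rates enter a kinetic Monte Carlo simulation, the following Python sketch evaluates the transition rate of Eq.~(\ref{wa1}) for a jump of an A particle from cell $i$ to cell $i+1$; the function names are ours and the snippet is a sketch rather than the production code.
\begin{verbatim}
def harmonic_mean(n1, n2):
    # Occupation of the virtual half-cell; it vanishes when the
    # departure cell is empty, so no spurious jump can occur.
    return n1 * n2 / (n1 + n2) if (n1 + n2) > 0 else 0.0

def rate_A_right(NA, NB, i, DA, DB, dx, Omega, Ctot):
    # Transition rate T^+ of Eq. (wa1) for an A jump from i to i+1.
    NA_half = harmonic_mean(NA[i], NA[i + 1])
    return (DA / dx**2) * NA[i] \
        - NA_half / (Omega * Ctot * dx**2) * (DA * NA[i] - DB * NB[i + 1])
\end{verbatim}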
It is worth noting that, contrary to the dilute case for which the transition rate associated with the diffusion of particles X only
depends on the number of particles X in the departure cell, the transition rate in the concentrated case also depends on the number of particles A and B in the arrival cell.
In the case of a concentrated system, the diffusion term reads
\begin{eqnarray}
\label{mecd}
\left.\dfrac{\partial P(\phi)}{\partial t}\right|_{\rm diff}=&\sum_i \Big[T_{N_A(i)+1}^-P(\{N_A(i-1)-1,N_A(i)+1\}) \nonumber \\
&+T_{N_A(i)+1}^+P(\{N_A(i)+1,N_A(i+1)-1\}) \nonumber \\
&+T_{N_B(i)+1}^-P(\{N_B(i-1)-1,N_B(i)+1\}) \nonumber \\
&+T_{N_B(i)+1}^+P(\{N_B(i)+1,N_B(i+1)-1\}) \nonumber \\
&-\big(T_{N_A(i)}^-+T_{N_A(i)}^++T_{N_B(i)}^-+T_{N_B(i)}^+\big)P(\phi)\Big]
\end{eqnarray}
The reaction term $\left.\dfrac{\partial P(\phi)}{\partial t}\right|_{\rm reac}$ of the master equation given in Eq.~(\ref{medr}) for the dilute system is unchanged
in the case of a concentrated system.
The kinetic Monte Carlo algorithm and the initial and boundary conditions used for the dilute system are straightforwardly extended to the concentrated system.
\begin{figure}
\centering
\includegraphics[height=6cm]{fig7.eps}
\caption{Concentrated system. Wave front speed $v_{c,{\rm ME}}$ deduced from the master equation (Eqs. (\ref{med}), (\ref{medr}), and (\ref{mecd})) in
a concentrated system (red solid disks) for $C_{tot}=50$ and speed
$v_{d,{\rm ME}}$ deduced from the direct simulation of the master equation (Eqs. (\ref{med}-\ref{medd}))
associated with the dilute system (red circles)
versus ratio of diffusion coefficients $D_B/D_A$. The horizontal solid line gives the minimum velocity $v^*$ (Eq. (\ref{vdet})) of an FKPP front in the absence of a cutoff.
The horizontal dashed line gives the velocity $v_\varepsilon=18.84$ given in Eq. (\ref{veps}) for
a cutoff $\varepsilon=10^{-4}$ and $D_A=D_B$.
The parameters are given in the caption of Fig. 1.}
\label{fig4}
\end{figure}
The front speeds $v_{c,{\rm ME}}$ and $v_{d,{\rm ME}}$ deduced from the master equation in concentrated and dilute cases, respectively, are compared in Fig.~\ref{fig4}. The correction to the wave front speed induced by an increase of the ratio of diffusion coefficients $D_B/D_A$ is smaller for a concentrated system than for a dilute system. Indeed, in the concentrated case, the diffusion of a species depends on the diffusion coefficients of both species. Hence, increasing $D_B$ at constant $D_A$ has a smaller impact on the velocity since the contribution depending on $D_B$ is partly compensated by the unchanged terms depending on $D_A$.
\begin{figure}
\centering
\includegraphics[height=6cm]{fig8.eps}
\caption{Concentrated system. Wave front speeds versus the deviation from the dilution limit $C_0/C_{tot}$. The values of $v_{c,{\rm ME}}$ (red disks) are
deduced from the direct simulation of the master equation (Eqs. (\ref{med}), (\ref{medr}), and (\ref{mecd})) for $k=10$, $\Omega=10$, $N_0=100$, $D_A=1$, $D_B=8$, $\ell=2000$, and $\Delta x=0.008$ ($C_0=N_0/\Omega$).
The horizontal solid line gives the minimum velocity $v^*=20$ (Eq. (\ref{vdet}))
of an FKPP front, solution of the deterministic equations (Eqs. (\ref{RDA}) and (\ref{RDB})) without cutoff.
The horizontal dashed line gives the velocity $v_\varepsilon=18.84$ given in Eq. (\ref{veps})
for a cutoff $\varepsilon=10^{-4}$ and $D_A=D_B$.}
\label{fig5}
\end{figure}
The effect of the departure from the dilution limit on the wave front speed $v_{c,{\rm ME}}$ deduced from the master equation given in
Eqs. (\ref{med}), (\ref{medr}), and (\ref{mecd}) is shown in Fig.~\ref{fig5}.
The dilution limit $v_{d,{\rm ME}}(D_B/D_A=8)=17.20$
is recovered for $C_0/C_{tot} \rightarrow 0$.
As $C_0/C_{tot}$ increases, the solution is more concentrated and the cross-diffusion terms
become more important, so that the system is less sensitive to the difference between the diffusion coefficients $D_A$ and $D_B$:
The wave front speed $v_{c,{\rm ME}}$ increases and tends to the value $v_\varepsilon=18.84$ predicted
by Eq. (\ref{veps}) for the cutoff $\varepsilon=10^{-4}$ and $D_A=D_B$. \\
\begin{figure}
\centering
\includegraphics[height=6cm]{fig9.eps}
\caption{Concentrated system. Scaled shifts $h_{c,{\rm ME}}/C_0$, $h_{c,{\rm cut}}/C_0$, and $h_c/C_0$ between the profiles of species A and B versus ratio of diffusion coefficients $D_B/D_A$.
The values of $h_{c,{\rm ME}}/C_0$ (red disks) are deduced from the master equation (Eqs. (\ref{med}), (\ref{medr}), and (\ref{mecd})).
The values of $h_{c,{\rm cut}}/C_0$ (black solid triangles) are deduced from the deterministic equations (Eqs. (\ref{RDAconc}) and (\ref{RDBconc}))
with a reactive term multiplied by the cutoff $H(A-\varepsilon)$ for $\varepsilon=10^{-4}$.
The values of $h_c/C_0$ (blue solid squares) are deduced from the deterministic equations (Eqs. (\ref{RDAconc}) and (\ref{RDBconc})) without cutoff.
The other parameters are given in the caption of Fig. \ref{fig4}.
The line gives the results for $D_A=D_B$.
}
\label{figXXX}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=6cm]{fig10.eps}
\caption{Relative differences $(W_{c,{\rm ME}}-W_{d,{\rm ME}})/W_{d,{\rm ME}}$, $(W_{c,{\rm cut}}-W_{d,{\rm cut}})/W_{d,{\rm cut}}$, and $(W_c-W_d)/W_d$
between the widths in a concentrated system and a dilute system for different approaches versus $D_B/D_A$.
The values of $W_{c,{\rm ME}}$ and $W_{d,{\rm ME}}$ (red disks) are deduced from the master equation
(Eqs. (\ref{med}), (\ref{medr}), and (\ref{mecd}) and Eqs. (\ref{med}-\ref{medd}), respectively).
The values of $W_{c,{\rm cut}}$ and $W_{d,{\rm cut}}$ (black solid triangles) are deduced from the deterministic equations
(Eqs. (\ref{RDAconc}) and (\ref{RDBconc}) and Eqs. (\ref{RDAc}) and (\ref{RDBc}), respectively)
with a reactive term multiplied by the cutoff $H(A-\varepsilon)$ for $\varepsilon=10^{-4}$.
The values of $W_c$ and $W_d$ (blue solid squares) are deduced from the deterministic equations
(Eqs. (\ref{RDAconc}) and (\ref{RDBconc}) and Eqs. (\ref{RDA}) and (\ref{RDB}), respectively) without cutoff.
}
\label{figXXXX}
\end{figure}
The variation of the shifts $h_{c,{\rm ME}}$, $h_{c,{\rm cut}}$, and $h_c$ between the two profiles with respect to
the ratio of the diffusion coefficients $D_B/D_A$
is shown in Fig. \ref{figXXX} in a concentrated system for the three approaches, the master equation and the deterministic descriptions with and without cutoff.
As revealed by comparing the results given in Figs.~\ref{hd} and \ref{figXXX}, the effect of the departure from the dilution limit on the shift is too small for us to evaluate
the difference $(h_{c,{\rm ME}}-h_{d,{\rm ME}})/h_{d,{\rm ME}}$ with sufficient precision from the fluctuating results deduced from the master equation.
The effects of the departure from the dilution limit on the widths $W_{c,{\rm ME}}$, $W_{c,{\rm cut}}$, and $W_c$ of the profile are given in Fig. \ref{figXXXX} for the three approaches.
The agreement between the results $W_{c,{\rm ME}}$ and $W_{c,{\rm cut}}$ deduced from the master equation (Eqs. (\ref{med}), (\ref{medr}), and (\ref{mecd}))
and the deterministic equations (Eqs. (\ref{RDAconc}) and (\ref{RDBconc})) with a cutoff, respectively,
is satisfactory considering the high level of noise in the evaluation of the width $W_{c,{\rm ME}}$.
According to Fig.~\ref{Wd}, the width in a dilute system is smaller than the width obtained for identical diffusion coefficients
if $D_B<D_A$ and larger if $D_B>D_A$.
The results displayed in Fig. \ref{figXXXX} show that, for each description method, the width in a concentrated system is larger than the width in a dilute system
if $D_B<D_A$ and smaller if $D_B>D_A$. Hence, in the entire range of ratios of diffusion coefficients and for deterministic
as well as stochastic methods, the width in a concentrated system
is closer to the width obtained for identical diffusion coefficients.
As for the front speed, the departure from the dilution limit reduces the effects induced by the difference between the diffusion coefficients.
\section{Conclusion}
We have performed kinetic Monte Carlo simulations of the master equation associated with a chemical system involving two species A and B.
The two species have two different diffusion coefficients, $D_A$ and $D_B$, and are engaged in the autocatalytic reaction $\rm{A}+\rm{B} \rightarrow 2\rm{A}$.
The effects of fluctuations on the FKPP wave front have been studied in the cases of a dilute solution
and a concentrated solution in which cross-diffusion cannot be neglected.
In the case of a dilute system, the linearization of the deterministic equations with a cutoff in the leading edge of the front
leads to a speed shift independent of the diffusion coefficient $D_B$ of the consumed species. The speed shift obtained for two different diffusion coefficients
is the same as in the case $D_A=D_B$.
The main result deduced from the master equation is that
the front speed sensitively depends on the diffusion coefficient $D_B$. For $D_B$ larger than $D_A$,
the front speed decreases as $D_B$ increases and is significantly smaller than the prediction of the linear cutoff theory.
The speed decrease obtained for large values of $D_B/D_A$ is related to the number $N_{B_\varepsilon}$ of B particles
at the position of the most advanced A particle in the leading edge of the front.
When species B diffuses faster than species A, $N_{B_\varepsilon}$ is significantly smaller than the steady-state value $N_0$.\\
We carefully derived the nontrivial expression of the master equation in a concentrated system with cross-diffusion.
The transition rates are deduced from the diffusion fluxes in the linear domain of irreversible thermodynamics.
The transition rates associated with diffusion depend on the number of particles not only in the departure cell but also in the arrival cell.
Qualitatively, the conclusions drawn for a dilute solution and $D_A \neq D_B$ remain valid, but the front properties deduced from the master equation with cross-diffusion
depart less from those obtained for $D_A=D_B$. The dependence of the front properties on $D_B/D_A$ in a concentrated system is softened
with respect to the dilute case. Cross-diffusion mitigates the impact of the difference between the diffusion coefficients.
\section{Acknowledgments}
This publication is part of a project that has received
funding from the European Union’s Horizon 2020 (H2020-EU.1.3.4.) research and innovation program under the Marie
Sklodowska-Curie Actions (MSCA-COFUND ID 711859) and from the Polish Ministry of Science and Higher Education
for the implementation of an international cofinanced project.
\section{\label{intro}Introduction}
Squeezed states are a type of nonclassical light that are characterized by squeezing of the quantum uncertainty in a given quadrature below the level of vacuum noise. They can be used in a variety of contexts, including in applications where quadrature noise is a major concern, such as optical communications \cite{Slavik2010All-opticalSystems} and interferometers \cite{Tan2014EnhancedStates,schnabelSqueezeStates, Aasi2013EnhancedLight}. Squeezed states can also be used as the starting point to create entangled states of light. Weakly-squeezed states can be used as a source of entangled photons, which can be used for quantum teleportation \cite{Qteleportation} and quantum cryptography \cite{Qcrypto}. Single-mode squeezed states can be combined using waveguide couplers to create quadrature-entangled states \cite{Masada2015Continuous-variableChip}. In addition, two-mode quadrature-squeezed states are a source of continuous variable (CV) entanglement, which can also be used for quantum computation \cite{quantumcomputingsqueezelimit} and quantum information \cite{BraunsteinQuantumInformationCV}; such states are important as they are generally more robust to loss than two-photon entangled states \cite{Braunstein1998TeleportationVariables}.
One way to generate squeezed states of light is via spontaneous parametric down conversion (SPDC), where a strong coherent pump field interacts with a material that has a $\chi^{(2)}$ nonlinearity \cite{squeezedstatesviaSPDC}. The conversion efficiency of pump photons into signal and idler pairs can be enhanced by enclosing the nonlinear interaction within a cavity that is resonant with the pump. In this case, if it is a multimode cavity, where a second mode is resonant at the signal and idler frequencies, then it can play a dual role, by ensuring that essentially all generated pairs end up in a single cavity mode.
Ring resonators side-coupled to a waveguide have been shown to enhance spontaneous parametric down conversion efficiency \cite{Yang2007EnhancedResonator}. Thus, they are promising structures for on-chip applications such as entangled photon pair generation for quantum communication \cite{Lu2019Chip-integratedCommunication} and generating squeezed light for discrete and CV entanglement \cite{Vernon2018ScalableSampling, VaidyaBroadbandDevice, Samara2019High-RateMicroresonators, Guo2016ParametricChip}. The schematic diagram of a side-coupled ring resonator is shown in Fig. \ref{fig:ringresonator}. The ring waveguide has a radius chosen such that it has resonant modes at the frequencies of the pump and the squeezed light. The straight waveguide (channel) and ring are in proximity to each other, such that pump and squeezed light can be evanescently coupled in and out of the resonator.
Considerable theoretical work has been done on a Hamiltonian treatment of SPDC and spontaneous four-wave mixing in lossy microring resonators \cite{Yang2007GeneratingResonators, Vernon2015SpontaneousResonators,Vernon2015StronglyResonators,alsingEntangleandSqueezeLossyRing}. The general approach is to solve the Heisenberg equations of motion for the mode operators in the ring and channel. This procedure is applicable to both the weak pumping limit for generating entangled photon pairs and the strong pumping limit for generating quadrature squeezing. For example single-mode quadrature squeezing of -10dB in the channel of a lossy SiN microring resonator was recently shown to be theoretically achievable \cite{Vernon2018ScalableSampling}, using a 100pJ Gaussian input pulse of duration 30ps. Experimentally, about 4dB \cite{VaidyaBroadbandDevice} to 5dB \cite{Dutt2015On-ChipSqueezing} of squeezing has been inferred on-chip with SiN microring resonators. Both the theory and experimental demonstration of quadrature squeezing in lossy microring resonators provides a promising path forward for creating a practical CV entangled states for quantum computing applications.
Recent experimental work has demonstrated that one can tune the squeezing level generated in coupled ring resonators; by increasing the coupling efficiency, Dutt \textit{et al.} \cite{DuttRingSqueezingVscoupling} demonstrated experimentally an increase of the on-chip squeezing level in a SiN resonator from $-0.9$dB to $-3.9$dB.
Although this and other work demonstrate the promise of ring resonators for generating squeezed light, it appears that very little has been done on the \textit{optimization} of the ring resonator system to obtain maximum squeezing.
In this paper, we theoretically study the quadrature squeezing inside a lossy ring resonator pumped by a Gaussian input pulse. We focus on the optimization of the pump pulse duration and ring-channel coupling, in order to achieve the conditions that maximize the squeezing in the presence of scattering loss.
We consider the case of squeezed-state generation via SPDC in a single mode of the ring. To allow us to compare the squeezing achieved for different pump durations, in all that follows, the energy of the input pulse is held constant when the pulse duration is changed. We model the dynamics of the density operator for the state in the ring in the presence of loss using the Lindblad master equation for a cavity with a single lossy mode. It has recently been shown that the general solution to this Lindblad master equation is a single-mode squeezed thermal state \cite{Seifoory2017SqueezedCavities} characterized by a time-dependent squeezing amplitude, squeezing phase, and a thermal photon number. Using this solution, we model the squeezed thermal state in the ring resonator as a function of time, and derive an approximate analytic expression for the maximum squeezing in the presence of loss.
Our theoretical approach is somewhat different from what is commonly done in the literature. The strength of our method is that, because we know that the density operator inside the ring is always a squeezed thermal state, the time-dependent properties of the state in the ring, such as the variance of the quadrature operator and expectation value of the number operator, can be easily determined by simply solving for the time dependence of the thermal photon number and squeezing parameter of the state. Of course, our study is restricted to a single-mode squeezed state in the ring, but this condition is easily satisfied by limiting the bandwidth of the input pulse, and carefully phase-matching the desired pump mode and squeezed light mode in the ring.
Using our exact solution for the time evolution of the state, we derive approximate but accurate analytic expressions for the optimum coupling value and optimum pump pulse duration for a fixed pump energy. We show that they are in excellent agreement with full numerical simulations when the pump and ring configuration is relatively close to the optimal. We find that the optimum pulse duration depends on the loss in the ring and is in the range of $10$ to $60$ times the ring round-trip time. We also show that the optimum coupling is slightly below critical coupling (undercoupling).
The paper is organized as follows. In section \ref{sec:ringtheory} we review the theory of the coupling of a pulsed classical pump field from a channel waveguide into a ring resonator, discuss practical limitations on the pump pulse duration for generation in a single-mode, and determine the exact and approximate expressions for the time-dependent pump field inside the lossy ring. In section \ref{sec:squeezingtheory} we present the theory behind the generation of a squeezed thermal state in a single leaky mode for a pulsed pump. In section \ref{sec:results} we model the system and develop approximate analytic expressions for the optimal pulse duration, coupling constant and quadrature noise for a given ring loss. Finally, in section \ref{sec:conclusions} we present our conclusions.
\section{\label{sec:theory}Theory}
In this section we present the theory behind the generation of squeezed light inside a ring resonator. The system consists of a ring resonator waveguide of radius $R$ side-coupled to a straight waveguide (the channel) (see Fig. \ref{fig:ringresonator}). Both waveguides are made from a material with a nonlinear $\chi^{(2)}$ response. We treat the ring resonator as an optical cavity that generates squeezed light in a single leaky mode. The mode is leaky due both to scattering loss and coupling to the channel. The input field to the system is a classical pump pulse ($E_1(t)$) propagating in the channel. The bandwidth of the input pulse is limited such that it only couples appreciably into a single mode inside the ring, with frequency, $\omega_P$. Once inside the ring, the pump will produce squeezed light in a separate mode with frequency, $\omega_S$, that is half the frequency of the pump, \textit{i.e.} $\omega_S = \omega_P/2$. In section \ref{sec:ringtheory} we study the frequency response of the ring using a transfer matrix approach in the presence of loss, and derive exact and approximate expressions for the time-dependent pump field inside the ring. In section \ref{sec:squeezingtheory} we give the solution to the Lindblad master equation for the quantum state of light generated inside the ring.
\subsection{Time-Dependent Pump Field Inside the Ring Resonator}
\label{sec:ringtheory}
In this section we present the theory to obtain the time dependence of the pump field inside the ring resonator and examine the dependence of the field build-up in the ring on the pump pulse duration, the scattering loss in the ring, and the coupling between the channel and ring waveguides.
The pump field, $E_1(t)$, incident on the ring resonator is taken to be a classical Gaussian pulse of
the form
\begin{equation}
E_1(t) =E_1^{(+)}(t) + E_1^{(-)}(t),\nonumber
\end{equation}
where
\begin{equation}
\label{eq:incidentpulse}
E_1^{(+)}(t)=E_0\sqrt{\frac{T_R}{\tau}} \exp\left(-2\ln\left(2\right)\frac{t^2}{\tau^2}\right)\exp(-i\omega_P t),
\end{equation}
and $E_1^{(-)}(t) =\left[ E_1^{(+)}(t)\right]^*$. Here $\tau$ is the duration of the pulse (FWHM of the intensity), $\omega_P$ is the pulse carrier frequency, $T_R$ is the ring round-trip time (discussed in more detail below), and $E_0$ is the amplitude of the pulse. The factor of $1/\sqrt{\tau}$ is included so that the energy of the pulse is independent of the pulse duration. We do this so that we can study the squeezing level in the ring for many different pumping durations, with a constant amount of energy going into the system. In the following, only the positive frequency part of the input field is needed, because we are using the rotating wave approximation.
In calculating the coupling of the field in and out of the ring, it is easier to work in the frequency domain. We define the Fourier transform of the time-dependent field as
\begin{eqnarray}
\label{eq:FT}
\tilde{E}(\omega) &=& \int_{-\infty}^{\infty}E^{(+)}(t)\exp(i\omega t) dt,
\end{eqnarray}
and the inverse Fourier transform as
\begin{eqnarray}
\label{eq:INVFT}
E^{(+)}(t) &=& \frac{1}{2\pi}\int_{-\infty}^{\infty}\tilde{E}(\omega)\exp(-i\omega t) d\omega.
\end{eqnarray}
The Fourier transform of the input pulse of Eq. \eqref{eq:incidentpulse} is
\begin{eqnarray}
\label{eq:e1w}
\tilde{E}_1(\omega) = \tilde{E}_0 \sqrt{\frac{\tau}{T_R}} \exp\left(-\frac{(\omega-\omega_P)^2\tau^2}{8\ln2}\right),
\end{eqnarray}
where $\tilde{E}_0\equiv E_0 T_R\sqrt{\pi/(2\ln\left(2\right))}$. The bandwidth $\Delta \omega$ (FWHM in frequency) of the input pulse is related to the pulse duration $\tau$ by $\Delta \omega = 4\ln{\left(2\right)} / \tau$.
\begin{figure}[htbp]
\centering
\includegraphics[scale = 0.333]{ringresonator3rdtry.png}
\caption{\small Schematic of the ring resonator coupled to a channel waveguide. The field components incident to the coupling point are $\tilde{E}_1$ in the channel and $\tilde{E}_3$ in the ring. The field components leaving the coupling point are $\tilde{E}_2$ in the channel and $\tilde{E}_4$ in the ring. The cross-coupling coefficient is $\kappa$, and the attenuation in the ring is $a$. }
\label{fig:ringresonator}
\end{figure}
The fields in the ring and channel are assumed to couple at a point, as indicated in Fig. \ref{fig:ringresonator}. The fields incident on the coupling point are $\tilde{E}_1(\omega)$ in the channel and $\tilde{E}_3(\omega)$ in the ring. The fields leaving the coupling point are $\tilde{E}_2(\omega)$ in the channel and $\tilde{E}_4(\omega)$ in the ring. The input and output field components are defined at locations just to the left and right of the coupling point, respectively. The input and output fields are related by a transfer matrix as
\begin{equation}
\label{eq:scatteringmatrix}
\begin{pmatrix}
\tilde{E}_4(\omega) \\ \tilde{E}_2(\omega)
\end{pmatrix}
=
\begin{pmatrix}
\sigma & i\kappa \\ i\kappa & \sigma
\end{pmatrix}
\begin{pmatrix}
\tilde{E}_3(\omega) \\ \tilde{E}_1(\omega)
\end{pmatrix},
\end{equation}
where $\sigma$ and $\kappa$ are real numbers called the self- and cross-coupling coefficients, respectively. This is the form of the transfer matrix that is commonly used \cite{Heebner2004DistributedOptics}. The coupling is assumed to occur at a single point, so the field components that pass through the coupling point and stay in the same waveguide do not acquire a phase. However, the field components that cross over into the other waveguide at the coupling point acquire a phase factor of $i$. This phase is needed in order to conserve power across the coupling point ($i.e.$ the transfer matrix must be unitary).
Additionally, the coupling is assumed to be lossless, so we obtain the relation $|\sigma|^2 + |\kappa|^2 = 1$. The fields $\tilde{E}_4(\omega)$ and $\tilde{E}_3(\omega)$ are related by,
\begin{eqnarray}
\label{eq:e3w}
\tilde{E}_3(\omega) = a \exp(i\Theta)\tilde{E}_4(\omega).
\end{eqnarray}
Here, $a$, is the field attenuation after one circuit of the ring (excluding any coupling to the straight waveguide); this is related to the scattering power-loss coefficient, $\alpha_{\rm sc}$, in the ring by $a = \exp(-\alpha_{\text{sc}}2\pi R/2)$. In what follows, we assume that $a$ is frequency independent, and also that $a$ and $\kappa$ are independent of each other. The single-circuit phase shift $\Theta$ in the ring is given by $\Theta =2\pi Rk$, where $k = 2\pi n_{\rm eff}/\lambda$, with $n_{\rm eff}$ the effective index of refraction for the pump mode in the ring and $\lambda$ the free-space wavelength. The phase shift can also be expressed as,
\begin{equation}
\label{eq:thephase}
\Theta= \omega T_R,
\end{equation}
where $T_R = n_{\rm eff}2\pi R /c$ is the ring round-trip time. For light that is on resonance with a mode in the ring, the phase shift is $\Theta = 2\pi m$, where $m$ is a positive integer (the mode number). Thus, in order to ensure that the pump frequency is on resonance with the ring, it is chosen to be $\omega_P = 2\pi m_P /T_R$, where $m_P$ is the pump mode number. In all that follows, we will scale the time, the pump duration, and the pump pulse amplitude by the round-trip time $T_R$; consequently, all of the results that follow are independent of the ring radius and mode number.
We choose the frequency of the signal and idler photons to both be $\omega_S =\omega_P/2$ (where $S$ stands for ``squeezed light''), such that the mode number for the squeezed light is $m_S = m_P/2$. The coupling coefficients are assumed to be frequency independent. This is a good approximation as long as the pump pulse is in a single mode. We assume that the ring waveguide dimensions have been chosen such that the squeezed light mode has the same $n_{\rm eff}$ as the pump mode (\textit{i.e.}, they are phase matched). This has been demonstrated in an AlN ring resonator \cite{Guo2016ParametricChip} for a waveguide with a height of $1{\rm \mu m}$ and a width of $1.10 {\rm \mu m}$, and in AlGaAs nanowaveguides \cite{Rutkowska2011SecondNanowaveguides}.
Using Eqs. \eqref{eq:scatteringmatrix} and \eqref{eq:e3w}, we find that the field inside the ring is given by
\begin{equation}
\label{eq:fieldinring}
\tilde{E}_3(\omega)= \frac{i\sqrt{1-\sigma^2}\, a \exp(i\omega T_R)}{1-\sigma a \exp(i \omega T_R)}\tilde{E}_1(\omega).
\end{equation}
The ratio of intensity inside the ring to the incident intensity in the channel is defined as the buildup factor,
\begin{equation}
\label{eq:buildupfactor}
\mathcal{B}(\omega) \equiv \left|\frac{\tilde{E}_3(\omega)}{\tilde{E}_1(\omega)}\right|^2 = \frac{(1-\sigma^2)a^2}{1-2\sigma a \cos(\omega T_R) + \sigma^2 a^2}.
\end{equation}
It is maximized for light that is on resonance with the ring, \textit{i.e.} $\cos(\omega T_R) = 1$. Using $\omega=\omega_P$ in Eq. \eqref{eq:buildupfactor} gives the maximum value of the buildup factor,
\begin{equation}
\label{eq:idealbuildupfactor}
\mathcal{B}(\omega_P) = \frac{\left(1-\sigma^2\right)a^2}{\left(1-\sigma a\right)^2}.
\end{equation}
The value of $a$ that maximizes Eq. \eqref{eq:idealbuildupfactor} is $a = \sigma$. This is known as \textit{critical coupling}.
To ensure that the squeezed light will be generated mostly in a single mode with frequency $\omega_S$ we require that the pump pulse almost exclusively couples into a single mode in the ring with frequency $\omega_P$. In Fig. \ref{fig:buildup}(a) we demonstrate that with an incident pulse with duration $\tau = 2 T_R$ (thick line), virtually all of the pulse intensity couples into a single ring resonance (thin red line). In contrast, in Fig. \ref{fig:buildup}(b) we show that by reducing the pulse duration to $\tau = T_R/4$, the broadening of the pulse in frequency causes some of its intensity to couple into adjacent modes. Thus, in all that follows, we restrict ourselves to pulses with duration $\tau \ge T_R$ to ensure the squeezed light is generated almost entirely in a single mode. Although two-mode squeezed light could also be generated in a number of different mode pairs that satisfy energy conservation, we assume that generation in those other modes is suppressed because they are not well phase matched.
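For reference, a few lines of Python reproduce the buildup factor of Eq.~(\ref{eq:buildupfactor}) and its on-resonance value, Eq.~(\ref{eq:idealbuildupfactor}), for the parameters of Fig.~\ref{fig:buildup}; this is a minimal sketch, not the code used to produce the figures.
\begin{verbatim}
import numpy as np

sigma, a = 0.6, 1.0
wTR = np.linspace(-3 * np.pi, 3 * np.pi, 2001)   # omega*T_R near resonance
B = (1 - sigma**2) * a**2 \
    / (1 - 2 * sigma * a * np.cos(wTR) + (sigma * a)**2)
B_res = (1 - sigma**2) * a**2 / (1 - sigma * a)**2  # on resonance, = 4 here
print(B.max(), B_res)
\end{verbatim}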
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{buildupopttauandquarterTRa1sigmapoint6.png}
\caption{\small The intensity buildup (in three ring modes) of the pump pulse with a duration of (a) $\tau = 2 T_R$ and (b) $\tau = T_R/4$. The normalized intensity of the input pulse in the channel is $|\tilde{E}_1|^2/\tilde{E}^2_0$ (thick curve), and the normalized intensity of the pulse in the ring, after its intensity has built up, is $|\tilde{E}_3|^2/\tilde{E}^2_0$ (thin red curve). The buildup factor (dashed line) and intensity are calculated with $\sigma = 0.6$ and $a=1$.}
\label{fig:buildup}
\end{figure}
The intensity decay rate $\Gamma$ of light in the ring cavity is given by,
\begin{equation}
\label{eq:decayrate}
\Gamma \equiv\frac{\alpha_{\text{tot}}2\pi R}{T_R} =\frac{1}{T_R} \left[\ln\left(\frac{1}{\sigma^2}\right) + \ln\left(\frac{1}{a^2}\right)\right],
\end{equation}
where $\alpha_{\rm tot}$ is the total loss coefficient for the ring. It is given by $\alpha_{\rm tot} = \alpha_{\rm {sc}} + \alpha_{\rm {cpl}}$, where $\alpha_{\rm sc}$ is given above, and $\alpha_{\rm cpl}$ is defined by the equation $\sigma = \exp\left(-\alpha_{\rm cpl}2\pi R/2\right)$ \cite{2008OpticalMicroresonators} and is the power-loss coefficient due to light coupling out of the ring into the channel. To obtain strong squeezing in the ring, the intensity decay rate multiplied by the round-trip time must be small, \textit{i.e.}, $\Gamma T_R \ll 1$. If the loss is small enough such that $(1 - \sigma a) \ll 1$ then from Eq. \eqref{eq:decayrate}, we obtain,
\begin{equation}
\label{eq:approxdecayrate}
\Gamma \approx\frac{2(1-\sigma a)}{T_R}.
\end{equation}
The decay rate $\Gamma$ gives an estimate of the width of the peaks in the buildup factor.
The time-dependent pump field, $E_3(t)$, inside the ring just to the left of the coupling point (see Fig. \ref{fig:ringresonator}) is calculated by taking the inverse Fourier transform of Eq. \eqref{eq:fieldinring}, giving:
\begin{eqnarray}
\label{eq:e3time}
E_3\left(\tilde{t}\,\right) &=& \frac{i\kappa aE_0}{\sqrt{\pi}}\exp\left(-i2\pi m_P \tilde{t}\,\right)\sqrt{\frac{\tilde{\tau}}{8\ln2}} \times \nonumber
\\
&\times& \int_{-\infty}^{\infty} d\Omega\frac{\exp\left(-\Omega^2\tilde{\tau}^2/(8\ln2) - i\Omega \tilde{t}\,\right)}{\exp(-i\Omega) - \sigma a},
\end{eqnarray}
where $\Omega \equiv (\omega-\omega_P)T_R$, $\tilde{t} \equiv t/T_R$, and $\tilde{\tau}\equiv \tau/T_R$. The integral is real because we integrate $\Omega$ from $-\infty$ to $\infty$. This is the general expression that we use in our simulations. In the low-loss limit, where $(1-\sigma a)\ll 1$, the integral in Eq. \eqref{eq:e3time} can be evaluated using Voigt functions \cite{analyticringpulse} (see Appendix \ref{ringpulsederiv}), and we obtain the approximate expression
\begin{eqnarray}
\label{e3timeapprox}
|E_3\left(\tilde{t}\,\right)| = \frac{\sqrt{\pi}\kappa a \tilde{\tau}\,{\rm e}^{z\left(\tilde{t}\,\right)^2}{\rm erfc}\left[\,z\left(\tilde{t}\,\right)\right]}{\sqrt{8\ln2}}\big|E_1^{(+)}\left(\tilde{t}\,\right)\big|,
\end{eqnarray}
where
\begin{equation}
\label{ztransformation}
z\left(\tilde{t}\,\right)\equiv \frac{(1-\sigma a)\tilde{\tau}}{\sqrt{8\ln(2)}}-\frac{\sqrt{8\ln(2)}\tilde{t}}{2\tilde{\tau}},
\end{equation}
and ${\rm erfc}\,\left(x\right) = 1 - {\rm erf} \,\left(x\right)$, where ${\rm erf}\,\left(x\right)$ is the error function. In the following sections, we shall use this expression to optimize the incident pump pulse duration to achieve the greatest nonlinear response in the ring. We note parenthetically that Eq. \eqref{e3timeapprox} would also be useful for calculating classical nonlinear processes such as second harmonic generation or parametric downconversion in a ring resonator, using the undepleted pump approximation.
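A convenient way to evaluate Eq.~\eqref{e3timeapprox} numerically is through the scaled complementary error function ${\rm erfcx}(z)={\rm e}^{z^2}{\rm erfc}(z)$, which avoids overflow of the factor ${\rm e}^{z^2}$ at large $z$; a minimal Python sketch, with time and duration in units of $T_R$ and the field in units of $E_0$, is given below.
\begin{verbatim}
import numpy as np
from scipy.special import erfcx   # erfcx(z) = exp(z**2)*erfc(z)

def E3_over_E0(t, tau, sigma, a):
    # Low-loss pump envelope in the ring, Eq. (e3timeapprox).
    kappa = np.sqrt(1.0 - sigma**2)
    c = np.sqrt(8.0 * np.log(2.0))
    z = (1.0 - sigma * a) * tau / c - c * t / (2.0 * tau)
    envelope = np.sqrt(1.0 / tau) * np.exp(-2.0 * np.log(2.0) * t**2 / tau**2)
    return np.sqrt(np.pi) * kappa * a * tau / c * erfcx(z) * envelope
\end{verbatim}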
In this section, we have derived an expression for the time-dependent pump field inside the ring, which we shall use in the following section to calculate the generation of the squeezed state.
\subsection{Quadrature Squeezing Inside a Lossy Ring Cavity }
\label{sec:squeezingtheory}
In this section we present the main theory behind quadrature squeezing inside the ring.
The Hamiltonian for light inside the ring, using the undepleted pump approximation, is given by \cite{quantumopticsGarrison}
\begin{eqnarray}
\hat{H} &=& \hat{H}_0 + \gamma E_3(t) \hat{b}^{\dagger 2} + \gamma^*E^*_3(t){\hat{b}}^2, \label{eq:Hnoloss}
\end{eqnarray}
where the interaction-free part of the Hamiltonian is $\hat{H}_0= \hbar \omega_S \hat{b}^\dagger \hat{b}$, and the last two terms account for the SPDC process. The operator $\hat{b}$ is the annihilation operator for the squeezed light photons in the ring. The nonlinear coupling coefficient between the pump, $E_3(t)$, and squeezed light is $\gamma = \hbar \omega_P \chi^{(2)}_{\text{eff}}/n^2_{\rm eff}$, where $\chi^{(2)}_{\text{eff}}$ is an effective nonlinear susceptibility that depends on the intrinsic nonlinear susceptibility of the ring material and spatial mode profiles in the ring \cite{Seifoory2019CounterpropagatingWaveguides}. Note that we neglect any nonlinear interactions in the channel waveguide, because the pump intensity is much smaller there. The pump field is given in Eq. \eqref{eq:e3time}, where only the positive frequency part is used, as we are using the rotating wave approximation.
The effects of scattering and coupling losses on the dynamics of the generated light in the ring can be modelled using the Lindblad master equation for the density operator $\hat{\rho}$ \cite{openQsystemsBreuer}:
\begin{equation}
\label{eq:lindblad}
\frac{d\hat{\rho}}{dt}=-\frac{i}{\hbar}\left[\hat{H},\hat{\rho}\right] + \Gamma\left( \hat{b}\hat{\rho}\hat{b}^\dagger -\frac{1}{2} \hat{b}^\dagger\hat{b}\hat{\rho}-\frac{1}{2} \hat{\rho}\hat{b}^\dagger\hat{b}\right),
\end{equation}
where $\Gamma$ is the decay rate for the squeezed light generated in the cavity. It is given in Eq. \eqref{eq:decayrate}, where now $\sigma$ and $a$ correspond to the coupling and loss parameters for the squeezed light. For simplicity, we have assumed that the squeezed light and the pump have the same coupling and loss parameters, but it is straightforward to generalize this within our theory. The effects of thermal photon populations are negligible at room temperature for the optical frequencies of interest, and so they are not included.
It was recently shown \cite{Seifoory2017SqueezedCavities} that the exact solution to Eq. \eqref{eq:lindblad} for the Hamiltonian given in Eq. \eqref{eq:Hnoloss} is a \emph{squeezed thermal state}, which can be written as,
\begin{equation}
\label{eq:solution}
\hat{\rho}(t) = \hat{S}(\xi(t))\hat{\rho}_{\text{th}}(\beta(t))\hat{S}^\dagger(\xi(t)),
\end{equation}
where
\begin{equation}
\hat{\rho}_{\text{th}}(\beta(t)) = \left(1-{\rm e}^{-\beta(t)\hbar\omega_P/2}\right){\rm e}^{-\beta(t)\hat{H}_0}
\end{equation}
is the density operator for a thermal state at an effective time-dependent temperature $ T(t) = (k_{\rm B}\beta(t))^{-1}$, where $k_{\rm B}$ is the Boltzmann constant. In what follows, rather than use the effective temperature, we characterize this thermal state by the average thermal photon number, which is given by
\begin{equation}
\label{eq:nth}
n_{\text{th}}(t) = \left( {\rm e}^{\beta(t)\hbar\omega_P/2} - 1 \right)^{-1}.
\end{equation}
The operator $\hat{S}$ is a unitary squeezing operator, given by
\begin{equation}
\label{eq:sqopt}
\hat{S}(\xi(t)) = \exp\left[\frac{1}{2}\left(\xi^*(t)\hat{b}^2 - \xi(t) \hat{b}^{\dagger 2}\right)\right],
\end{equation}
with a complex squeezing parameter $\xi(t) = u(t)\exp(i\phi(t))$. The form of the state given in Eq. \eqref{eq:solution} is only a solution to the Lindblad master equation if the squeezing amplitude $u$, squeezing phase $\phi$, and average thermal photon number $n_{\rm th}$ obey the following three coupled first order differential equations:
\begin{eqnarray}
\label{eq:sqamp}
\frac{1}{\Gamma}\frac{du(t)}{dt}
&=& \frac{g(t)}{2} -\frac{ \cosh u(t) \sinh u(t)}{2n_{\text{th}}(t)+1},
\\
\frac{d\phi(t)}{dt} &=& -\omega_P, \label{eq:sqphase}
\\
\frac{1}{\Gamma}\frac{dn_{\text{th}}(t)}{dt} &=&\sinh^2u(t) - n_{\text{th}}(t) \label{eq:thmnum}.
\end{eqnarray}
Here,
\begin{equation}
\label{eq:pump}
g(t) \equiv\frac{4|\gamma||E_3(t)|}{\hbar \Gamma}
\end{equation}
is a dimensionless function of time that we will refer to as the pumping strength \cite{Seifoory2017SqueezedCavities}; it is the ratio of the pumping rate to the total decay rate of the squeezed light in the cavity. It is constructed such that when $g(t) = 1$, the rate of signal generation in the ring equals the signal loss out of the ring. Using the approximate expression for the field in Eq. \eqref{e3timeapprox}, we can write the pumping strength as,
\begin{eqnarray}
\label{pumpapprox}
g\left(\tilde{t}\,\right)&=&g_0\frac{\kappa a }{ \tilde{\Gamma}}\sqrt{\frac{\tilde{\tau}}{8\ln2}}\exp\left(\frac{-2\ln(2)\tilde{t}^2}{\tilde{\tau}^2}\right)\times \nonumber
\\
&\times& \sqrt{\pi}{\rm e}^{z\left(\tilde{t}\,\right)^2}{\rm erfc}\,\left[z\left(\tilde{t}\,\right)\right],
\end{eqnarray}
where $\tilde{\Gamma} \equiv \Gamma T_R$ and $g_0 \equiv 4|\gamma| E_0 T_R/\hbar$ is a dimensionless parameter. The pumping strength is the function that drives the squeezing processes, and directly affects the amount of squeezing in the ring. A large peak value in the pumping strength will generate substantial quadrature squeezing. In Fig. \ref{fig:e3time}, the pumping strength in the ring is plotted as a function of time for $a = \sigma = 0.99$ (critical coupling) and $g_0= 0.413$. Initially ($t=-\infty$), the pumping strength in the ring is zero. As the input pulse starts to couple into the ring, the pumping strength begins to build up. At $t=0$, the input pulse takes on its peak value at the coupling point in the channel. Some time later the pumping strength reaches its peak value. As can be seen, this time and the maximum value that the pumping strength reaches depend on the duration of the input pulse $\tau$ in the channel. For a short input pulse duration of $\tilde{\tau} = 1$, the pumping strength very quickly builds up to its peak value. The longer the input pulse becomes, the more time it takes for this to occur. For very long input pulses, the peak pumping strength will scale as $1/\sqrt{\tau}$, but the dependence on the pump duration is more complicated for shorter pulses and, as can be seen, the maximum pumping strength is in fact achieved for intermediate pulse durations. We denote the input pulse duration that maximizes the peak pumping strength by $\tau_g$. In Appendix \ref{Doftaug}, we derive the following approximate but accurate expression for $\tau_g$ in the low-loss limit $(1-\sigma a)\ll 1$:
\begin{equation}
\label{taup_text}
\tilde{\tau}_g\approx0.342\frac{\sqrt{8\ln2}}{1-\sigma a}.
\end{equation}
Also in Appendix \ref{Doftaug}, we show that a pulse duration of $\tau_g$ given in Eq. \eqref{taup_text} causes the pumping strength to peak at the time
\begin{equation}
\label{tpeak}
\tilde{t}_{peak} = \frac{1}{2(1-\sigma a)},
\end{equation}
which is $1/\tilde{\Gamma}$, assuming that $(1-\sigma a)\ll 1$.
Before proceeding, we note that we could have used the field $E_4(t)$ rather than $E_3(t)$ and produced similar results. They are related by $E_3(t) = a E_4(t-T_R)$. However, the field $E_3(t)$ is a more conservative representation of the field inside the ring, because it has been reduced by the attenuation loss of one additional round trip relative to $E_4(t)$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{pumpstrengthVsTau.png}
\caption{\small The pumping strength $g\left(\tilde{t}\,\right)$ in the ring for $\sigma=a=0.99$ (critical coupling) generated with a short input pulse ($\tilde{\tau} = 1$) (solid thin line), a pulse of duration $\tau =\tau_g$ that gives the highest peak in $g$ (solid bold line), and a long pulse ($\tilde{\tau} =300$) (dashed line).}
\label{fig:e3time}
\end{figure}
The initial conditions for equations \eqref{eq:sqamp} to \eqref{eq:thmnum} are set at an early time, $t_i$ $(<0)$, when the incident pump pulse amplitude is negligible. The initial state of the system is the vacuum state, which means that $u(t_i) =0$ and $n_{\rm th}(t_i)=0$. We set the initial squeezing phase to $\phi(t_i) = 0$, so that the time-dependent phase is given by $\phi(t) = -\omega_P (t-t_i)$. In numerical calculations, the initial time must be chosen such that $|t_i| \gg \tau$.
The numerical solution of the coupled equations \eqref{eq:sqamp} to \eqref{eq:thmnum} enables us to determine the time-dependent level of quadrature squeezing in the ring. To this end, quadrature operators $\hat{X}$ and $\hat{Y}$ are defined as,
\begin{eqnarray}
\label{quadopx}
\hat{X} &=& \hat{b}^\dagger{\rm e}^{-i\theta(t)} + \hat{b}{\rm e}^{i\theta(t)},
\\
\label{quadopy}
\hat{Y} &=& i\left(\hat{b}^\dagger{\rm e}^{-i\theta(t)} - \hat{b}{\rm e}^{i\theta(t)}\right).
\end{eqnarray}
Here the quadrature phase $\theta(t)$ is defined as $\theta(t) \equiv \omega_S ( t- t_i)$. We include this phase so that the expectation value of the quadrature does not contain fast oscillations in time; this choice cancels the phase $\phi(t)$ of the squeezed state. The noise in the $\hat{X}$ and $\hat{Y}$ quadratures is defined as the square root of the variance, and written as $\Delta X$ and $\Delta Y$. Using Eq. \eqref{eq:solution}, they can be shown to be given by \cite{KimPropertiesStates}
\begin{eqnarray}
\Delta X(t) &=& \sqrt{2n_{\text{th}}(t) + 1}\;{\rm e}^{-u(t)} \label{eq:dx},
\\
\Delta Y(t) &=& \sqrt{2n_{\text{th}}(t) + 1}\;{\rm e}^{u(t)}. \label{eq:dy}
\end{eqnarray}
Multiplying Eqs. \eqref{eq:dx} and \eqref{eq:dy} together gives,
\begin{equation}
\label{nthdxdy}
\Delta X (t) \Delta Y (t) = 2n_{\text{th}}(t)+ 1.
\end{equation}
If $n_{\rm th} = 0$, then $\Delta X \Delta Y = 1$ and a squeezed vacuum state is recovered, with $\Delta X = \exp(-u)$ and $\Delta Y = \exp(u)$. With our choice of quadrature operators, the noise in either quadrature for a vacuum state ($u=0$) is simply $\Delta X = 1$ and $\Delta Y = 1$. Therefore, squeezing below the vacuum noise in the $\hat{X}$ quadrature occurs when $\Delta X < 1$ in Eq. \eqref{eq:dx}. The expectation value of the photon number for the squeezed thermal state can be shown to be given by \cite{KimPropertiesStates}
\begin{equation}
\label{eq:totalphoton}
\left<\hat{n}\right>\equiv\left<\hat{b}^\dagger \hat{b}\right> = n_{\rm th}(t)\cosh\left(2u(t)\right) + \sinh^2\left(u(t)\right).
\end{equation}
When $n_{\rm th} = 0$, the expectation value of the photon number is $\sinh^2(u)$, which is the result obtained for a squeezed vacuum state.
\section{Results and Discussion}
\label{sec:results}
In this section, we present our numerical solutions to the set of equations \eqref{eq:sqamp} to \eqref{eq:thmnum}. We solve them using a fourth-order Runge-Kutta method; the total run time for a given configuration is on the order of a few seconds on a standard PC. We also derive an approximate analytic expression for the minimum quadrature noise in terms of the peak pumping strength, and an expression for the optimum choice of $\sigma$ (or alternatively, $\kappa$) that produces the global minimum in the quadrature noise. In addition, we numerically determine the pulse duration that produces the minimum quadrature noise for a given $\kappa$ and show that it is close to $\tau_g$, as given in Eq. \eqref{taup_text}. We discuss the effects of scattering loss $a$ on the quadrature noise, and the optimum coupling coefficient and pulse duration. Finally, we study the sensitivity of the minimum quadrature noise to a phase offset due to imperfect homodyne detection.
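To make the procedure concrete, the following minimal Python sketch (ours, not the code used to produce the figures) integrates Eqs. \eqref{eq:sqamp} and \eqref{eq:thmnum} with a classic fourth-order Runge-Kutta step; Eq. \eqref{eq:sqphase} decouples and is omitted. The pump profile \texttt{gfun} is a simple Gaussian stand-in for the full expression in Eq. \eqref{pumpapprox}, and all parameter values are merely illustrative.
\begin{verbatim}
import numpy as np

Gamma = 0.04          # dimensionless decay rate, ~ 2(1 - sigma*a)
g_pk, t_pk, tau = 6.7, 25.0, 40.0   # illustrative pump parameters

def gfun(t):
    # Stand-in Gaussian pump-strength profile (placeholder for the
    # full erfc expression in the text).
    return g_pk * np.exp(-4.0*np.log(2.0)*(t - t_pk)**2 / tau**2)

def rhs(t, y):
    # y = (u, n_th): the squeezing-amplitude and thermal-photon
    # equations; the phase equation decouples and is not needed here.
    u, n = y
    du = Gamma*(gfun(t)/2.0 - np.cosh(u)*np.sinh(u)/(2.0*n + 1.0))
    dn = Gamma*(np.sinh(u)**2 - n)
    return np.array([du, dn])

t, dt = -200.0, 0.01          # start well before the pulse
y = np.array([0.0, 0.0])      # vacuum initial state: u = n_th = 0
dX_min = 1.0
while t < 400.0:              # classic RK4 stepping
    k1 = rhs(t, y)
    k2 = rhs(t + dt/2.0, y + dt*k1/2.0)
    k3 = rhs(t + dt/2.0, y + dt*k2/2.0)
    k4 = rhs(t + dt, y + dt*k3)
    y = y + dt*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
    t += dt
    # quadrature noise Delta X = sqrt(2 n_th + 1) exp(-u)
    dX_min = min(dX_min, np.sqrt(2.0*y[1] + 1.0)*np.exp(-y[0]))

print("minimum Delta X:", dX_min)
print("estimate 1/sqrt(1+g_pk):", 1.0/np.sqrt(1.0 + g_pk))
\end{verbatim}
The final line compares the integrated minimum with the analytic estimate in terms of the peak pumping strength that is derived below.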
In the remainder of this paper, we use the following values for our pump and ring parameters. We take the ring material to be AlGaAs with $\chi^{(2)}_{\rm eff} = 100\, {\rm pm}/{\rm V}$ \cite{Yang2007GeneratingResonators}, $n_{\rm eff} = 2.85$, and $\omega_P = 2\pi\times135.73\,{\rm THz}$ ($\lambda_P = 775\, {\rm nm}$). The amplitude of the input pulse $E_0$ can be written in terms of the total pump pulse energy $U$ as,
\begin{equation}
\label{E0}
E_0 = \left(\frac{4\ln2}{\pi}\right)^{1/4}\sqrt{\frac{2U}{\mathcal{A}c\, n_{\rm eff} \epsilon_0 T_R}},
\end{equation}
where $\mathcal{A} = 0.71\, {\mu {\rm m}}^2$ is the cross-sectional area of the ring waveguide, $\epsilon_0$ is the permittivity of free space, and $c$ is the speed of light. The energy of the incident pulse is chosen to be $U = 0.188\,{\rm pJ}$ (independent of the pulse duration). This value of $U$ produces a substantial amount of squeezing, but generally does not lead to significant pump depletion, even for low-loss cavities.
The radius of the ring required to give a resonance at the pump frequency is $R = m_P c/ (\omega_P n_{\rm eff})$, where we have used Eq. \eqref{eq:thephase} with $\omega = \omega_P$ and $\Theta = 2\pi m_P$. We choose the pump mode number to be $m_P=200$, which makes the ring radius approximately $R \approx 25\,{\rm \mu m} $. The ring round-trip time is given by $T_R = 2\pi R n_{\rm eff}/c$, and in this case is $T_R \approx 1.47 \, {\rm ps} $. We present our results in terms of the dimensionless parameters $\tilde{t}\equiv t/T_R$ and $\tilde{\tau}\equiv \tau/T_R$. Once this is done, the only place where $R$ enters our model is in the amplitude of the pumping strength in Eq. \eqref{pumpapprox}. Thus, in order to make our results independent of $R$, we require that the product $E_0 T_R$ be constant. We collect all the dimensional parameters above into the single dimensionless constant $g_0$, which was introduced in Eq. \eqref{pumpapprox}. For the above choice of parameters, $g_0 = 0.413$.
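As a cross-check of these numbers, Eq. \eqref{E0} can be evaluated directly; the short Python sketch below (an illustration, with SI values for $c$ and $\epsilon_0$) gives $E_0 \approx 6.7\times 10^{6}\,{\rm V/m}$ for the parameters above.
\begin{verbatim}
import numpy as np

U     = 0.188e-12      # pump pulse energy [J]
A     = 0.71e-12       # waveguide cross-sectional area [m^2]
n_eff = 2.85           # effective index
T_R   = 1.47e-12       # round-trip time [s]
c     = 2.998e8        # speed of light [m/s]
eps0  = 8.854e-12      # vacuum permittivity [F/m]

E0 = (4.0*np.log(2.0)/np.pi)**0.25 \
     * np.sqrt(2.0*U/(A*c*n_eff*eps0*T_R))
print(E0)              # approximately 6.7e6 V/m
\end{verbatim}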
\subsection{Dynamics of the squeezing process}
We begin by examining the time-dependent quadrature noise $\Delta X$ in the ring in Fig. \ref{fig:dxandg300} for $\sigma = a = 0.99$ (critical coupling), for an input pulse duration of $\tilde{\tau}=300$. Initially the pumping strength is zero and the quadrature noise is equal to the vacuum noise $\Delta X = 1$. As the pumping strength builds up, the quadrature noise is squeezed below the vacuum noise ($\Delta X < 1$). We find that the quadrature noise reaches its minimum at approximately (but not exactly) the time at which the pumping strength is at its peak, that is, at $\tilde{t}_{\rm min}\approx 40$ (indicated by the vertical line). Finally, when the pump pulse couples out of the ring, the quadrature noise returns to the vacuum noise. The time-dependent squeezing amplitude $u$ and thermal photon number $n_{\rm th}$ are shown in Fig. \ref{fig:sqandnth300} for the same parameters. As the squeezing amplitude increases, the quadrature noise is squeezed by the factor $\exp(-u)$. However, the trade-off is that the thermal photon number also increases, which increases the quadrature noise by the factor $\sqrt{2n_{\rm th} +1}$. Thus the minimum quadrature noise does not occur when the squeezing amplitude is maximal, but at an earlier time, closer to when the pumping strength is maximal and the thermal photon number is much less than its peak value.
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{.475\textwidth}
\centering
\includegraphics[width=\textwidth]{dxgdynamicstau300.png}
\caption[]%
\label{fig:dxandg300}
\end{subfigure}
\vfill
\begin{subfigure}[b]{.475\textwidth}
\centering
\includegraphics[width=\textwidth]{sqnthdynamicstau300.png}
\caption[]%
\label{fig:sqandnth300}
\end{subfigure}
\caption{\small (a) The quadrature squeezing $\Delta X$ (thick line) and pumping strength $g$ (thin line) as a function of time, and (b) the squeezing amplitude $u$ (thick line) and thermal photon number $n_{\rm th}$ (thin line) as a function of time for an input pulse duration of $\tilde{\tau}=300$ and coupling constant $\sigma = a =0.99$. The time at which $\Delta X$ is minimum, $\tilde{t}_{\rm min}\approx 40$, is indicated by the vertical line.}
\label{fig:tau300dynamics}
\end{figure}
In Fig. \ref{fig:dxandg1}, we examine a setup similar to the one above, except that our input pump pulse has a much shorter duration of $\tilde{\tau} = 1$. Here, the pumping strength quickly reaches its peak value and does not spend much time building up in the ring. The quadrature noise is not as squeezed as it was with the long pulse. Additionally, with the short pulse, the minimum quadrature noise does not occur at the same time as when the pumping strength is at its peak. In this case the peak pumping strength occurs at approximately $\tilde{t}\approx 2$ and the minimum quadrature noise occurs at approximately $\tilde{t}_{\rm min}\approx 26$. The time-dependent squeezing amplitude and thermal photon number are shown in Fig. \ref{fig:sqandnth1} for the same short pulse. The thermal photon number is significantly smaller now, so the factor $\sqrt{2n_{\rm th} +1}$ is less detrimental to the squeezing. As a result we find that the minimum quadrature noise now occurs closer to the time when the squeezing amplitude is at its peak value.
\begin{figure}[htbp]
\centering
\begin{subfigure}[b]{.475\textwidth}
\centering
\includegraphics[width=\textwidth]{dxgdynamicstau1.png}
\caption[]%
\label{fig:dxandg1}
\end{subfigure}
\vfill
\begin{subfigure}[b]{.475\textwidth}
\centering
\includegraphics[width=\textwidth]{sqnthdynamicstau1.png}
\caption[]%
\label{fig:sqandnth1}
\end{subfigure}
\caption{\small The same plots as in Fig. \ref{fig:tau300dynamics} but for an input pulse duration of $\tilde{\tau}=1$. Note that now the time at which $\Delta X$ is minimum is $\tilde{t}_{\rm min}\approx 26$.}
\label{fig:tau1dynamics}
\end{figure}
Having examined the two extreme cases of a long pulse and short pulse, we now consider the most interesting case for quadrature squeezing. We pump the ring with an input pulse duration $\tau_g$ (given by Eq. \eqref{taup_text}) that gives the greatest peak value of the pumping strength. For $\sigma = a = 0.99$, $\tilde{\tau}_g\approx 40$. In Fig. \ref{fig:dxandg40} the time-dependent quadrature noise is shown for this pulse. When compared to the short and long pulses, we find that this duration produces the greatest quadrature squeezing. The minimum quadrature noise occurs at roughly the same time as the peak value of the pumping strength; using Eq. \eqref{tpeak} the peak pumping strength occurs at $\tilde{t}_{peak}\approx 25 $, while the quadrature noise is a minimum at $\tilde{t}_{\rm min}\approx 29$. The time-dependent squeezing amplitude and thermal photon number for this pulse duration are shown in Fig. \ref{fig:sqandnth40}. The peak squeezing amplitude is reduced by a factor of approximately 2 compared to the long pulse. However, this reduction of the squeezing amplitude is compensated by the thermal photon number being reduced by a factor of roughly $10^5$. This shows that the thermal noise is much more sensitive to the duration of the input pulse than the squeezing amplitude is, and therefore it is better to err on the side of using a relatively shorter pulse in a lossy ring resonator.
\begin{figure}[htbp]
\centering
\begin{subfigure}[]{.475\textwidth}
\centering
\includegraphics[width=\textwidth]{dxgdynamicstau40.png}
\caption[]%
\label{fig:dxandg40}
\end{subfigure}
\begin{subfigure}[]{.475\textwidth}
\centering
\includegraphics[width=\textwidth]{sqnthdynamicstau40.png}
\caption[]%
\label{fig:sqandnth40}
\end{subfigure}
\caption{\small The same plots as in Fig. \ref{fig:tau300dynamics} but for an input pulse duration of $\tilde{\tau}=\tilde{\tau}_g\approx 40$. Note that now the time at which $\Delta X$ is minimum is $\tilde{t}_{\rm min}\approx 29$.}
\label{fig:tau40dynamics}
\end{figure}
\subsubsection{Minimum in the quadrature noise}
We have demonstrated how the minimum in $\Delta X$ depends on the pulse duration $\tau$. Here we derive an analytic expression for the minimum quadrature noise $\Delta X_{\rm min}$. Setting the derivative of $\Delta X (t)$ in Eq. \eqref{eq:dx} equal to zero at the time $t_{\rm min}$ and simplifying gives,
\begin{equation}
\label{dxdiff02}
\frac{d n_{\rm th}\left(t\right)}{dt}\bigg|_{t=t_{\rm min}}-
\left(2 n_{\rm th}\left(t_{\rm min}\right)+1\right)\frac{d u\left(t\right)}{d t}\bigg|_{t=t_{\rm min}} = 0.
\end{equation}
Replacing the derivatives in Eq. \eqref{dxdiff02} with Eq. \eqref{eq:sqamp} and Eq. \eqref{eq:thmnum} and using Eq. \eqref{eq:dx} to simplify gives,
\begin{eqnarray}
\label{dxdiff04}
\Delta X_{\rm min}\left(\tau\right) = \frac{1}{\sqrt{1+g(t_{\rm min},\tau)}},
\end{eqnarray}
where $g(t_{\rm min},\tau)$ is the pumping strength evaluated at the time when the quadrature noise is at its minimum. In general, we evaluate $g(t_{\rm min},\tau)$ numerically in order to calculate the minimum quadrature noise for a given $\sigma$, $a$, and $\tau$. If the input pulse duration is close to or larger than $\tau_g$, then the value of the pumping strength at the time when the quadrature noise is minimum is roughly the same as the peak value of the pumping strength (see Figs. \ref{fig:dxandg300} and \ref{fig:dxandg40}). Thus, we can neglect the difference between $g(t_{\rm min})$ and the peak value of the pumping strength. That is, if the pulse duration is considerably longer than $T_R$, then the pumping strength does not vary appreciably over a time scale of a few round trips of the ring. This approximation improves the longer the pulse. Conversely, this approximation is not valid for the setup in Fig. \ref{fig:dxandg1} for the short pulse, as we discussed earlier. Let $g_{\rm max}(\tau)$ denote the peak pumping strength as a function of $\tau$. Then, since $g_{\rm max}(\tau)\approx g(t_{\rm min},\tau)$ for pulse durations $\tau \gg T_R$, we obtain the following approximate expression for the minimum quadrature noise:
\begin{equation}
\label{dxapprox}
\Delta X_{\rm min}(\tau) \approx \frac{1}{\sqrt{1+g_{\rm max}(\tau)}}, \,\,\,\, (\tau\gtrsim \tau_g).
\end{equation}
Therefore the minimum quadrature noise is expressed in terms of the peak pumping strength, for which we have an expression in Eq. \eqref{pumpapprox}. The advantage of Eq. \eqref{dxapprox} is that it gives the minimum quadrature noise as a function of $\tau$ and $\sigma$, without having to solve the coupled differential equations numerically. Additionally, letting $\tau = \tau_g$ in Eq. \eqref{dxapprox}, and using Eqs. \eqref{taup_text} and \eqref{tpeak}, gives the following result:
\begin{equation}
\label{dxapproxtaup}
\Delta X_{\rm min}(\tau_{g}) \approx \left[1+ 0.653 \, \frac{g_0a }{\tilde{\Gamma}}\sqrt{\frac{1-\sigma^2}{1-\sigma a}}\right]^{-\frac{1}{2}}.
\end{equation}
This is the minimum quadrature noise in the ring for the pulse duration of $\tau_g$, as a function of $\sigma$ and $a$. For a given $\sigma$ and $a$, we will show in the next section that this expression approximately gives the best quadrature squeezing. We will assess the accuracy of the expression given in Eqs. \eqref{dxapprox} and \eqref{dxapproxtaup} below.
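As an illustration, take $\sigma = a = 0.99$ and $g_0 = 0.413$, and take $\tilde{\Gamma} = -2\ln(\sigma a) \approx 2(1-\sigma a) \approx 0.04$ (the value consistent with Eq. \eqref{gammaopt} below). Then the square root in Eq. \eqref{dxapproxtaup} equals one, and the expression gives $\Delta X_{\rm min}(\tau_{g}) \approx (1+6.6)^{-1/2} \approx 0.36$, i.e., roughly $-8.8\,{\rm dB}$ of squeezing, consistent with the numerical results at critical coupling presented below.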
\subsection{Dependence of the minimum quadrature noise on pulse duration and coupling}
The minimum quadrature noise depends on the pulse duration $\tau$, coupling $\sigma$, and scattering loss $a$. Thus far, the numerical results that we have presented have been only for the case of very low scattering loss at critical coupling ($\sigma =a = 0.99$), and for only three pulses. We have shown that, compared to a short and long pulse, $\tau_g$ generates the best quadrature squeezing for a given $\sigma$ and $a$. In this section, we present numerical results for the maximum quadrature squeezing as a function of the coupling constant and pump duration for different scattering loss in the ring. We will show that the choice of critical coupling, although an obvious starting point, is not the optimal choice in order to achieve the global minimum in the quadrature noise for a given $a$. In fact, we find that the global minimum in the quadrature noise is in the undercoupled ($\sigma > a$) regime, and we derive an approximate analytic expression for the optimal coupling.
Our analysis is done by computing the minimum quadrature noise $\Delta X_{\rm min}(\tau,\kappa)$ as a function of pulse duration and coupling for different attenuation constants $a$. Then we numerically determine the optimal choices for the pulse duration and coupling, and finally compare them to approximate analytic expressions that we derive.
In Fig. \ref{fig:minDX3d} we plot the minimum quadrature noise as a function of the coupling coefficient and pulse duration for four different loss parameters $a$. First, in Fig. \ref{fig:a1}, we consider the case where there is no scattering loss ($a=1$). In this case, the minimum quadrature noise decreases as the cross-coupling constant $\kappa$ goes to zero (or $\sigma$ goes to one). Consequently, we find no optimum value of $\kappa$ that gives a global minimum in the quadrature noise. This is expected, because as $\kappa$ goes to zero, the buildup factor continues to increase without bound. In the figure, there are two hatched areas. The darker hatching (around $\kappa = 0.1$) is where the expectation value of the number of generated photons is at least $1\%$ of the average number of pump photons ($\left<n_{\rm pump}\right>\sim 2\times 10^6$); there, our undepleted pump approximation is becoming less accurate. The second, lighter hatching (where $\kappa < 0.1$) indicates where our simulations break down, because the decay rate $\tilde{\Gamma}$ goes to zero as $\kappa$ goes to zero. The blue dotted line in the figure indicates the computed pulse duration that gives the best quadrature squeezing for a given $\kappa$. The red curve is the input pulse duration $\tau_g(\kappa)$ given by Eq. \eqref{taup_text}. The fact that $\tau_g$ agrees well with the computed optimal pulse duration means that the minimum in the quadrature noise occurs approximately where the peak pumping strength is the greatest. For pulses shorter or longer than $\tau_g$, the peak pumping strength in the ring is too small, and we see that the squeezing gets worse.
We now consider how introducing scattering loss into the ring affects the squeezing. When there is loss, the buildup factor has a peak value at critical coupling $\kappa = \sqrt{1-a^2}$ (or $\sigma = a$). In Fig. \ref{fig:a99} the scattering loss is $a=0.99$. Consequently, there is substantial squeezing at the peak in the buildup factor at critical coupling (indicated by the vertical line), and the squeezing gets worse away from the peak, as $\kappa$ goes to zero (undercoupling) or one (overcoupling). We observe excessive photon generation, at least as much as $1\%$ of the average number of photons in the pump pulse (hatched area), for pulses longer than $\tau_g$ near critical coupling. The optimum squeezing point (indicated by the red circle) is at a $\kappa$ that is lower than critical coupling in the undercoupled regime, where the buildup of pump intensity is less, but the cavity decay rate is smaller. This shows that in order to achieve the largest squeezing it is preferable to have a lower cavity decay rate than that obtained at critical coupling.
In Figs. \ref{fig:a98} and \ref{fig:a95} we increase the attenuation loss in the ring to $a=0.98$ and $a=0.95$, respectively. As the scattering loss in the ring is increased, critical coupling shifts to higher $\kappa$ and so does the optimum point (indicated by a red circle); however, it still remains in the undercoupled regime. In addition, the optimum point shifts to shorter pulses, which is expected, because the longer the pulse is, the more thermal photons are generated. Our approximate expression $\tau_g (\kappa)$ is still in quite good agreement with the numerical results, but is not as accurate as when the loss was very low ($a=0.99$). This is because it is an approximate expression that is valid only when $(1-\sigma a) \ll 1$ (see Appendix \ref{Doftaug}). Interestingly, it still fits quite well at the optimum coupling point, with a difference of less than $2.3 T_R$ or a relative error of $18\%$ when $a=0.95$. Using the approximate value for the pulse duration in this case only leads to a $1\%$ increase in the quadrature noise relative to the optimal value.
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{dxmina1.png}
\caption[]%
\label{fig:a1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{dxmina99.png}
\caption[]%
\label{fig:a99}
\end{subfigure}
\vskip\baselineskip
\centering
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{dxmina98.png}
\caption[]%
\label{fig:a98}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{dxmina95.png}
\caption[]%
\label{fig:a95}
\end{subfigure}
\caption[]
{\small The minimum quadrature noise $\Delta X_{\rm min}$ as a function of the input pulse duration $\tilde{\tau}$ and cross-coupling constant $\kappa$ for an attenuation constant of (a) $a=1$, (b) $a=0.99$, (c) $a=0.98$, and (d) $a=0.95$. The blue dots indicate the computed pulse duration needed to minimize $\Delta X_{\rm min}$ for a given $\kappa$. The solid red line is the pulse duration $\tilde{\tau}_g(\kappa)$ as a function of $\kappa$ given by Eq. \eqref{taup_text}. The red circles in (b)-(d) mark the point at which the quadrature noise is at a global minimum for the given value of $a$. The vertical black line in (b)-(d) indicates critical coupling ($\sigma =a$, \textit{i.e.}, $\kappa = \sqrt{1-a^2}$). The light hatched area in (a) marks the parameter space where our simulation does not converge. The dark hatched areas in (a) and (b) indicate regions where the number of generated photons is in excess of $1\%$ of the number of photons in the incident pump.}
\label{fig:minDX3d}
\end{figure*}
An approximate expression for the optimum coupling value $\sigma_{\rm opt}$ (or $\kappa_{\rm opt}$) is given by minimizing $\Delta X_{\rm min} (\tau_g)$ in Eq. \eqref{dxapproxtaup} with respect to $\sigma$ for a fixed $a$. Doing this we obtain,
\begin{equation}
\label{eq:sigmaopt}
\sigma_{\rm opt}(a) \approx \frac{-1+\sqrt{3 a^2 +1}}{a}.
\end{equation}
This is a good approximation as long as $(1-\sigma a)\ll 1$ and $\tau\gtrsim \tau_g$. In Fig. \ref{fig:optkappa} we compare the $\sigma_{\rm opt}$ given by Eq. \eqref{eq:sigmaopt} (curve) to the numerically-computed value (circles). We find the analytic result fits well for $a \ge 0.9$. Note that as the scattering loss increases, the difference between critical coupling (dashed line) and $\sigma_{\rm opt}$ increases. Thus, for lossy systems the optimum coupling value $\sigma_{\rm opt}$ shifts closer to one (undercoupling) as compared to critical coupling. This compensates for the decrease in $a$ and makes the decay rate smaller. We note that the difference between the quadrature noise at critical coupling and optimum coupling is generally small; for $a=0.95$, the quadrature noise is reduced by only $\sim 0.3$ dB, and for $a=0.99$ by only $\sim 0.2$ dB (see Figs. \ref{fig:a95} and \ref{fig:a99}). However, it is useful to know that one should err on the side of undercoupling if possible.
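As a concrete check, setting $a = 0.99$ in Eq. \eqref{eq:sigmaopt} gives $\sigma_{\rm opt} \approx (-1+\sqrt{3.94})/0.99 \approx 0.995$, the optimal coupling found numerically and quoted in the Conclusion.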
\begin{figure}[h!]
\centering
\includegraphics[scale=0.95]{avsk.png}
\caption{\small The computed optimum self-coupling constant (circles) and the approximate optimum coupling constant given by Eq. \eqref{eq:sigmaopt} (solid line), as a function of attenuation loss $a$. The dashed line indicates critical coupling.}
\label{fig:optkappa}
\end{figure}
\subsection{Comparing the analytic expression for the minimum quadrature noise to the numerical results }
Generating the 3D plots in Fig. \ref{fig:minDX3d} is a relatively time-consuming process. To solve Eqs. \eqref{eq:sqamp} - \eqref{eq:thmnum} for each $\tau$ and $\kappa$, at each time step we have to do the integral in Eq. \eqref{eq:e3time} to obtain the pumping strength. To greatly speed up this process we can instead use the approximate expression for $\Delta X_{\rm min} (\tau)$ given by Eq. \eqref{dxapprox}, which gives the minimum quadrature noise as a function of the peak pumping strength, $g_{\rm max}(\tau)$. The value of $g_{\rm max}(\tau)$ can then be determined using the analytic expression for $g\left(\tilde{t}\,\right)$ given in Eq. \eqref{pumpapprox}. The relative error between the approximate expression for the minimum quadrature noise in Eq. \eqref{dxapprox} and the numerical result is defined as,
\begin{equation}
\label{error}
{\rm Error} \equiv \Bigg|\,1 - \sqrt{\frac{1+g(t_{\rm min})}{1+g_{\rm max}}}\,\Bigg|,
\end{equation}
so that when $g_{\rm max} = g(t_{\rm min})$ the error is zero. In Figs. \ref{fig:diff99} and \ref{fig:diff95} we plot the relative error as a function of $\tau$ and $\kappa$ for (a) $a=0.99$ and (b) $a=0.95$, respectively. As expected, the relative error approaches zero for long pulses. For $a=0.99$, at the optimum point (indicated by a red circle in Fig. \ref{fig:diff99}), the relative error is approximately $0.02\%$. This reinforces our assumption that $g_{\rm max}\approx g(t_{\rm min})$ when $\tau \gtrsim \tau_g$ and $a\approx1$. The relative error increases when the scattering loss increases. However, for $a=0.95$, the relative error is still only $\approx 1\%$, indicating that the approximation can still be used confidently when $a \ge 0.95$.
\begin{figure}[htbp]
\centering
\begin{subfigure}[htbp]{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{diffdxmina99.png}
\caption[]%
\label{fig:diff99}
\end{subfigure}
~
\begin{subfigure}[htbp]{.49\textwidth}
\centering
\includegraphics[width=\textwidth]{diffdxmina95.png}
\caption[]%
\label{fig:diff95}
\end{subfigure}
\caption{\small The absolute value of the relative error (see Eq. \eqref{error}) between the approximate expression for the minimum quadrature noise and the numerically computed result, as a function of the coupling coefficient and pulse duration for (a) $a=0.99$ and (b) $a=0.95$. The red circles in (a) and (b) mark the point at which the quadrature noise is at a global minimum for the given value of $a$.}
\label{fig:compareDX}
\end{figure}
Letting $\sigma = \sigma_{\rm opt}$ in Eq. \eqref{dxapproxtaup}, we obtain the following approximate expression for the global minimum in the quadrature noise $ \Delta X_{\rm opt}\equiv \Delta X_{\rm min}(\tau_{\rm opt},\sigma_{\rm opt})$ as a function of the loss parameter $a$:
\begin{equation}
\label{dxminopt}
\Delta X_{\rm opt} \approx \left[1+\frac{0.653g_0 }{\tilde{\Gamma}(\sigma_{\rm opt})}\sqrt{\frac{a^2-\left(1-\sqrt{3a^2+1}\right)^2}{2-\sqrt{3 a^2+1}}}\right]^{-\frac{1}{2}},
\end{equation}
where the cavity decay rate at the optimum coupling is given by,
\begin{equation}
\label{gammaopt}
\tilde{\Gamma}(\sigma_{\rm opt}) = -2\ln\left(-1+\sqrt{3 a^2 +1}\right).
\end{equation}
The optimum pulse duration is approximately given by $\tau_{\rm opt} \equiv \tau_g(\sigma_{\rm opt})$,
\begin{equation}
\label{tauopt}
\tilde{\tau}_{\rm opt}(a) \approx 0.342\frac{\sqrt{8\ln2}}{2-\sqrt{3a^2+1}}.
\end{equation}
The expression in Eq. \eqref{dxminopt} can be used to determine the approximate optimum squeezing level in the ring as a function of $a$.
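For instance, Eq. \eqref{tauopt} gives $\tilde{\tau}_{\rm opt} \approx 54$ for $a = 0.99$ and $\tilde{\tau}_{\rm opt} \approx 11$ for $a = 0.95$, in reasonable agreement with the computed optima of $56$ and $13$ round trips quoted in the Conclusion; the discrepancy at $a=0.95$ reflects the $\approx 18\%$ error in $\tau_g$ noted above.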
In Fig. \ref{fig:dxminopt} (a) we compare the computed optimum squeezing level (in dB) (circles) to the value obtained with the expression in Eq. \eqref{dxminopt} (curve). As can be seen, the agreement is excellent, with a maximum relative error of $3\% $ (that is, an absolute difference of $0.06\,{\rm dB}$) when $a=0.9$. The globally-optimal squeezing level (for the range of $a$ considered) is approximately $-9.15\,{\rm dB}$ for $a = 0.99$ and $\sigma = 0.995$. In Fig. \ref{fig:dxminopt} (b), we also show the computed anti-squeezing level (\textit{i.e.}, $\Delta Y$) (circles), when the squeezing is optimal. We see that for the global optimum in the squeezing, the anti-squeezing level is approximately $44\,{\rm dB}$. Such a high level of anti-squeezing might be of concern if there is some jitter in the homodyne detection, such that one is not measuring the light at the time when it is maximally squeezed. In the same figure, we show that by cutting the pulse duration in half (\textit{i.e.}, $\tau_{\rm opt}/2$ (stars)), the anti-squeezing level is reduced to approximately $26\,{\rm dB}$, while the squeezing level is only modestly affected (see the stars in Fig. \ref{fig:dxminopt} (a)): a change of less than $3\%$, or $\sim 0.3$ dB for $a=0.99$. This result is useful for applications aiming at fault-tolerant quantum computing in noisy environments \cite{quantumcomputingsqueezelimit, 15dbsqueeze, Knill2005QuantumDevices}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.95]{dxminoptanddyminopt.png}
\caption{\small (a) The computed squeezing and (b) anti-squeezing level versus scattering loss $a$, for the optimum coupling constant $\kappa_{\rm opt}$ when $\tau = \tau_{\rm opt}$ (circles) and when $\tau = \tau_{\rm opt}/2$ (stars). The curve in (a) is our analytic expression for the squeezing, given by Eq. \eqref{dxminopt}.}
\label{fig:dxminopt}
\end{figure}
\subsection{Sensitivity of the minimum quadrature noise to a phase offset}
Thus far we have assumed that the measurement of $\Delta X$ is perfect; that is, the phase of the local oscillator in a homodyne measurement is exactly matched to the phase of the squeezed light signal. We now allow for a small phase offset, $\delta \theta$, between the phase of the signal and local oscillator, and study the effect it has on the measured quadrature noise. Letting $\theta(t) = -\phi(t)/2+\delta \theta$ in the original definition for the quadrature operator in Eq. \eqref{quadopx}, the quadrature variance now is,
\begin{eqnarray}
\label{generalquadnoise}
\left(\Delta X_{\delta \theta} \right)^2 &=& \left(2n_{\rm th}(t)+1\right)\times \nonumber
\\
&\times&\left[\cosh 2u(t) - \cos\left(2\delta\theta\right) \sinh2u(t)\right].
\end{eqnarray}
We interpret $\delta \theta$ as the angular deviation from the $\hat{X}$ quadrature in phase space. If $\delta \theta = 0$, the squeezing $\Delta X$ is measured; if $\delta \theta = \pi/2$, the anti-squeezing $\Delta Y$ is measured. In Fig. \ref{fig:dxmintheta}, we plot the minimum quadrature noise that is measured if the phase offset is $\delta\theta = 5\, {\rm mrad}$ and the attenuation loss in the ring is $a=0.99$. We chose this value of the phase offset because it is close to what was found in a recent experiment \cite{phasejitter}. The hatched region shows where the measured quadrature noise is greater than the vacuum noise ($\Delta X > 1$). We find that, at the optimum point previously found for an offset of zero (indicated by the red circle), the quadrature noise has increased to $\Delta X\approx 0.8$. One can correct for the increase in noise caused by the phase offset by reducing the pulse duration to approximately $\tilde{\tau}_{\rm opt}/2\approx 26$. Doing so reduces the measured noise to approximately $\Delta X \approx 0.37$, which is close to the optimum level for an offset of zero ($\Delta X \approx 0.35$). Note that the new optimal point (when there is a phase offset) occurs for essentially the same coupling constant; only the pulse duration needs to be adjusted. Note also that there are a number of combinations of $\tau$ and $\kappa$ that achieve a squeezing level of $\Delta X <0.4$ at which one could work. The results are most sensitive to a phase offset when the scattering loss is small ($a$ close to 1). For $a\le0.98$, a phase offset of $5\,{\rm mrad}$ did not significantly perturb the minimum squeezing level at $\tau_{\rm opt}$ and $\kappa_{\rm opt}$.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{dxmintheta5mrada99.png}
\caption{\small The minimum quadrature noise $\Delta X _{\delta \theta} (t_{\rm min},\tau,\kappa)$ for a phase deviation of $\delta \theta = 5\, {\rm mrad}$ as a function of the coupling constant and the pulse duration. The blue star indicates the optimal operating point, while the red circle gives the optimum point found when $\delta \theta = 0$. The hatched area indicates where the noise is greater than the vacuum noise ($\Delta X_{\delta \theta}>1$).}
\label{fig:dxmintheta}
\end{figure}
\section{Conclusion}
\label{sec:conclusions}
In this work we have studied the time-dependent squeezing process in a lossy microring resonator pumped by a Gaussian pulse. We derived approximate analytic expressions for the optimum pulse duration (Eq. \eqref{taup_text}) and optimum ring-channel coupling constant (Eq. \eqref{eq:sigmaopt}) for a fixed pump energy. Using these optimal parameters, we derived an analytic expression for the maximum squeezing level achievable in a ring with a given loss $a$ (Eq. \eqref{dxminopt}). We found that for the chosen pump energy of $0.188\,$pJ and a scattering loss of $a=0.99$, the optimal coupling constant and pulse duration are $\sigma_{\rm opt} = 0.995$ and $\tau_{\rm opt} = 56 T_R$, while for a scattering loss of $a=0.95$ we find optimal values of $\sigma_{\rm opt} = 0.974$ and $\tau_{\rm opt} = 13T_R$. Under these optimal conditions, we demonstrated a maximum squeezing level of $-9.15\,{\rm dB}$ and $-3.67\,{\rm dB}$ for $a=0.99$ and $a=0.95$, respectively. Furthermore, we demonstrated that by reducing the pulse duration at optimal coupling, the anti-squeezing level can be drastically reduced, while the squeezing level is only modestly affected. Moreover, we showed how one can reduce the impact of homodyning phase noise on the squeezing simply by reducing the pump pulse duration from the nominally optimal value. We believe that the analytic expressions that we have developed for this system will help researchers looking to optimize the design of ring resonator systems for the generation of squeezed light.
\section*{Acknowledgements}
This work was supported by Queen’s University and the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would also like to thank Hossein Seifoory for many fruitful discussions.
\section*{Introduction}
Let $C$ be a smooth projective curve of genus $g \geq 2$, let $r$ and $d$ be integers with $r > 0,$ and let $L$ be a line bundle of degree $d$ on $C.$ Throughout the paper we write $h := {\rm gcd}(r,d).$
The moduli space $M :={\rm SU}_{C}(r,L)$ parametrizes semistable rank-$r$ vector bundles on $C$ with determinant $L$ up to $S$-equivalence. It is well-known that $M$ is a normal and locally factorial projective variety of dimension $(r^{2}-1)(g-1)$ whose singularities are at worst rational and Gorenstein. Moreover, there is an ample divisor $\Theta$ on $M$ such that ${\rm Pic}(M) \cong {\mathbb{Z}}\Theta$ and $K_{M} = -2h\Theta$; in particular $M$ is Fano of index $2h$. One can show that $M$ is smooth precisely when either $h=1$ or $g=r=2$ and $d$ is even; in all other cases, the singular locus is the (nonempty) locus of equivalence classes of strictly semistable bundles. We refer to \cite{DN} for further details on the geometry of $M$.
Rational curves have long been a useful tool in the study of varieties in general, and of Fano varieties in particular (e.g. \cite{K,Hu1}).
The main topic of this paper is the structure of the Hilbert scheme ${\rm Mor}_{k}(\mathbb{P}^{1},M)$ parametrizing rational curves $f : \mathbb{P}^1 \to M$ of degree $k \geq 1.$
This subject has a long tradition. Narasimhan-Ramanan \cite{NR2} and Newstead \cite{N}, who addressed the case of $g=r=2$ and $d$ odd, gave beautiful geometric descriptions of the space of lines in $M.$ In \cite{S}, Sun classified curves of minimal degree and determined the minimal degree of a rational curve through a generic point of $M$. Additional results in this direction were obtained in \cite{MS}, where the authors give constructions of rational curves of minimal degree in $M$ and in \cite{Li} where the particular case of genus 3, rank 2 and even degree is described.
The aforementioned results all deal with very particular cases of genus and rank of the vector bundle or specific degree of the rational curve.
Up to now, the only case that has been addressed for arbitrary genus and degree of the rational curve is when $r=2$ and $d$ is odd;
in this case, Castravet \cite{C} classified the irreducible components of the Hilbert scheme of rational curves.
Here, we extend much of Castravet's work to the setting of arbitrary rank and degree.
Our first two main results completely classify the components of ${\rm Mor}_{k}(\mathbb{P}^{1},M)$ which have the expected dimension:
\begin{thm}\label{thmain}
For all $k \geq 1$, there are precisely $h$ components of ${\rm Mor}_{k}(\mathbb{P}^{1},M)$ which are unobstructed (and therefore of the expected dimension).
They correspond to either families of lines in spaces of one-step extensions of vector bundles on $C$ or extensions of a skyscraper sheaf by a vector bundle.
\end{thm}
\begin{thm}\label{thext}
There is an obstructed component of ${\rm Mor}_{k}(\mathbb{P}^{1},M)$ having the expected dimension if and only if $k$ is divisible by $r_1(r-r_1)(g-1)$ for some $r_1$ with $1\le r_1\le r-1$.
There is a unique such component for each $r_1$, and it corresponds to families of rational curves of higher degrees in spaces of extensions of vector bundles on $C$.
\end{thm}
We also determine some obstructed components of ${\rm Mor}_{k}(\mathbb{P}^{1},M)$ that are not of the expected dimension, and we show that their associated rational curves only fill up a proper closed subvariety of $M$,
i.e.~ that the generic point on any of these rational curves is not a generic stable vector bundle of the given rank and determinant.
\begin{thm}\label{thoth}
The components of ${\rm Mor}_{k}(\mathbb{P}^{1},M)$ not listed in Theorems \ref{thmain} and \ref{thext} are obstructed components
corresponding to rational curves of higher degree in spaces of extensions of vector bundles on $C$ or to rational curves in multiple step extensions of vector bundles.
The points of these rational curves correspond to vector bundles on $C$ that fill a proper subvariety of $M$.
\end{thm}
Rational curves of minimal degree and their tangent directions have been used to study the deformation theory of $M$ (e.g.~ \cite{Hu2,HuR}).
There has been a lot of interest in studying uniruled varieties, that is, varieties covered by rational curves.
One of the most interesting invariants of a uniruled variety is the minimum degree of a rational curve through a generic point of the variety.
This invariant was determined for $M$ in \cite{S}; a natural next step is to ask for the minimal degree of a rational curve between two generic points \cite{KMM}.
We answer this question here by looking at rational connectivity and computing the minimum degree of an irreducible rational curve containing two generic points of $M$
(Proposition \ref{prop:rat-conn}).
An important question about rational curves on a Fano variety $X$ comes from Batyrev's conjecture on the growth rate of the number of components of ${\rm Mor}_{k}(\mathbb{P}^{1},X)$
as $k$ increases (e.g. \cite{LT}). We hope that our description of components will add to the reservoir of examples against which to check this conjecture.
In Section \ref{secprel}, we review an equation giving the degree of a curve in $M$ using the natural decomposition of a vector bundle on ${\mathbb P}^1$ as direct sum of line bundles.
We also prove a criterion that allows us to describe all families of rational curves in $M$ in terms of extensions.
In Section \ref{secuncomp} we construct the unobstructed components of the Hilbert scheme and show that these are all the unobstructed components (see Theorem \ref{teornumcomp}).
Our main tool is the stability of a generic extension of generic vector bundles of given rank and degree.
In Section \ref{secadcomp}, we construct the additional components of Theorem \ref{thext}. We also construct additional families of vector bundles and prove Theorem \ref{thoth}.
Finally, in Section \ref{secratc2points}, we consider the degree of rational curves containing two generic points in $M$.
\medskip
\textbf{Acknowledgments:} We would like to thank Ana-Maria Castravet, Brian Lehmann, Sukhendu Mehrotra, Swarnava Mukhopadyay, Peter Newstead, and Hacen Zelaci for valuable discussions and correspondence related to this work. The first author was supported by the Max-Planck-Institut f\"{u}r Mathematik while part of this work was carried out; he would like to thank them for their hospitality and excellent working conditions.
\section{Preliminaries}\label{secprel}
In what follows, we fix a line bundle $L$ of degree $d$ on $C$ and denote ${\rm SU}_{C}(r,L)$ by $M.$
The Zariski tangent space to the moduli space ${U}_{C}(r,d)$ parametrizing semistable bundles of rank $r$ and degree $d$ at a point corresponding to a stable bundle $E$
may be naturally identified with $H^1(C, E^* \otimes E)$.
The trace map ${\rm tr} : E^* \otimes E \to \mc{O}_{C}$ induces a decomposition
\[ E^* \otimes E\cong {\mathcal O}_C\oplus {\mc{H}}om_0(E)\]
where $\mc{O}_{C}$ corresponds to homotheties and ${\mc{H}}om_0(E)$ denotes the traceless endomorphisms of $E$. The derivative of the determinant map ${\rm det}_{r,d} : U_{C}(r,d) \to {\rm Pic}^{d}(C)$ can be identified with the map
\begin{equation*}
H^{1}({\rm tr}) : H^{1}(C, E^* \otimes E) \to H^{1}(C,\mc{O}_{C})
\end{equation*}
induced by the trace map ${\rm tr}: E^* \otimes E \to \mc{O}_{C}.$
The tangent space to $M$ at $E$ can be identified with $H^1(C, {\mc{H}}om_0(E))$, which has dimension $(r^{2}-1)(g-1).$
Consider the determinant map ${\rm det}_{r,d}: U_{C}(r,d) \to {\rm Pic}^{d}(C)$.
The fibers of ${\rm det}_{r,d}$ are all isomorphic to $M,$ as one can go from one fiber to any other by tensoring with a suitable line bundle of degree zero.
Throughout the paper, we will write $h:=(r,d), \overline{r}:=r/h$ and $\overline{d} := d/h.$
If $F$ is a semistable vector bundle of rank $\overline{r}$ and degree $\overline{r}(g-1)-\overline{d},$ then $E \otimes F$ is semistable of slope $g-1$.
For a generic choice of $F$, the locus $\{E \in M : h^{0}(E \otimes F) > 0\}$ is a {\bf proper} subset of $M$ and hence the support of an ample Cartier divisor.
The linear equivalence class of this divisor, which is denoted by $\Theta,$ is independent of $F.$ It is known that ${\rm Pic}(M) \cong \mathbb{Z}{\Theta}$ and that $K_{M} = -2h\Theta.$
\begin{defn}
\label{defdeg}
If $C'$ is a smooth projective curve of genus $g'$ and $f : C' \to M$ is a morphism, the degree of $f$ is $\deg(f) := c_{1}(f^*\Theta)$.
\end{defn}
We are interested in the Hilbert scheme ${\rm Mor}_{k}(C',M)$ parametrizing morphisms from $C'$ to $M$ of degree $k \geq 1$. It is well-known that the Zariski tangent space to the Hilbert scheme at $f$ is $H^0(C',f^{\ast}T_{M})$, and that $f$ is unobstructed if $H^{1}(C',f^{\ast}T_{M})=0.$
\begin{lem}
The expected dimension of a component of ${\rm Mor}_{k}(C',M)$ is its minimum possible dimension
\[ 2hk + (r^{2}-1)(g-1)(1-g')\]
\end{lem}
\begin{proof}
We have from Riemann-Roch that
\begin{equation*}
\chi(C',f^{\ast}T_{M}) = {\rm deg}(f^{\ast}T_{M}) + \text{rank} (f^{\ast}T_{M})(1-g') = {\rm deg}(f^{\ast}T_{M}) + {\rm dim}(M)(1-g')
\end{equation*}
\begin{equation*}
= -f_{\ast}[C'] \cdot K_{M} + (r^{2}-1)(g-1)(1-g') = 2hk + (r^{2}-1)(g-1)(1-g')
\end{equation*}
Thus $2hk + (r^{2}-1)(g-1)(1-g')$ is the minimal dimension of a component of the Hilbert scheme and in fact the expected dimension.
\end{proof}
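In particular, for $C' = \mathbb{P}^{1}$ the expected dimension is $2hk + (r^{2}-1)(g-1) = \dim M + 2hk$, which is the dimension appearing in Proposition \ref{dimfamext} below.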
For the rest of the paper, we focus on the case in which $C'={\mathbb P}^1$.
There is a projective bundle $\mathcal P$ and a vector bundle $\mc{A}_{0}$ on $C\times M$ such that for all $E \in M,$ we have ${\mathcal P}_{|C\times \{ E\}} \cong {\mathbb P}(E)$ and ${\mc{H}}om_0(E) \cong {\mathcal A}_{0}|_{C\times \{ E\}}$.
If $p_{M} : C \times M \to M$ is the projection map, we also have $T_{M} \cong R^{1}p_{M\ast}\mc{A}_{0}.$
A vector bundle $\widetilde{\mc{E}}$ on $C \times M$ such that $\widetilde{\mc{E}}|_{C \times [E]} \cong E$ for all $[E] \in M$
(in particular, $\mc{P} \cong \mathbb{P}(\widetilde{\mc{E}})$) and $\mc{A}_{0} \cong {\mc{H}}om_{0}(\widetilde{\mc{E}})$ exists precisely when $h=1$; in this case, $\widetilde{\mc{E}}$ is a Poincar\'e sheaf on $C \times M.$
The following is proved in \cite{S} Lemma 2.1. We include it here for ease of citation.
\begin{lem}
\label{sun-lem}
For any $f \in {\rm Mor}_{k}(\mathbb{P}^1,M)$ there exists a vector bundle ${\mathcal E}$ on $C\times {\mathbb P}^1$ such that
${\mathcal E}_{|C\times \{ t\}}=f(t)$ for all $t \in \mathbb{P}^1$ and ${\mc{H}}om_0({\mathcal E})=(1_{C} \times f)^*({\mathcal A}_0)$. \hfill \qedsymbol
\end{lem}
This allows us to identify non-constant maps $f:{\mathbb P}^1\to M$ with vector bundles on $C\times {\mathbb P}^1$
that restrict to a semistable vector bundle of rank $r$ and determinant $L$ on every fiber of the projection to ${\mathbb P}^1$, even when $h > 1.$
\begin{lem}
\label{unobs}
The point of ${\rm Mor}_{k}(\mathbb{P}^{1},M)$ corresponding to a morphism $f : \mathbb{P}^{1} \to M$ whose image lies in the smooth locus of $M$ is unobstructed if and only if the restriction of the associated vector bundle $\mc{E}$ on $C\times {\mathbb P}^1$
to the generic fiber $\{ P \}\times {\mathbb P}^1$ is isomorphic to
${\mathcal E}_{|\{ P \}\times {\mathbb P}^1}={\mathcal O}_{{\mathbb P}^1}(\alpha)^{r_1}\oplus {\mathcal O}_{{\mathbb P}^1}(\alpha -1)^{r-r_1}$ for some $\alpha \in \mathbb{Z}$ and $r_1 \in [1,r].$
\end{lem}
\begin{proof}
For every point $P\in C$ the restriction ${\mathcal E}_{|\{ P \}\times {\mathbb P}^1}$ is a vector bundle on ${\mathbb P}^1$,
and therefore a direct sum of line bundles.
Let us write the fiber over the generic $P$ as
\begin{equation}
\label{decP1}
{\mathcal E}_{|\{ P \}\times {\mathbb P}^1}={\mathcal O}_{{\mathbb P}^1}(\alpha_1)^{r_1}\oplus \cdots \oplus {\mathcal O}_{{\mathbb P}^1}(\alpha_l)^{r_l}, \hskip20pt \alpha _1>\dots >\alpha_l
\end{equation}
Recall that the point corresponding to $f$ is unobstructed if and only if $H^{1}(\mathbb{P}^{1},f^{\ast}T_{M})=0$. By Lemma \ref{sun-lem}, we have that
$$H^{1}( {\mathbb P}^1, f^{\ast}T_{M}) \cong H^{1}(\mathbb{P}^{1},{\mc{H}}om_{0}(\mc{E})|_{\{P\} \times \mathbb{P}^{1}})$$
$$\cong H^{1} ({\mathbb P}^1, {\mathcal O}_{{\mathbb P}^1}^{r-1} \oplus \displaystyle\bigoplus_{i \neq j}{\mathcal O}_{{\mathbb P}^1}(\alpha_i-\alpha_j)^{r_ir_j} )
\cong \displaystyle\bigoplus_{i \neq j}H^{1}({\mathbb P}^1,{\mathcal O}_{{\mathbb P}^1}(\alpha_i-\alpha_j))^{r_ir_j}$$
Since this vanishes if and only if $|\alpha_i-\alpha_j|\le1$ for all $i$ and $j,$ the result follows.
\end{proof}
It follows that up to tensoring with the pull back of a line bundle on ${\mathbb P}^1$, we can assume that on an unobstructed component
the restriction to the generic fiber is either trivial or ${\mathcal O}_{{\mathbb P}^1}(1)^{r_1}\oplus {\mathcal O}_{{\mathbb P}^1}^{r-r_1}$ for some $r_1 > 0.$
Corresponding to the decomposition of the generic fiber in equation (\ref{decP1}), we have a relative Harder-Narasimhan filtration for ${\mathcal E}$ with respect to
$p : C \times \mathbb{P}^{1} \to C$:
\[0={\mathcal E}_0\subset {\mathcal E}_1\subset\dots\subset {\mathcal E}_l={\mathcal E}\]
The successive quotients
\begin{equation}\label{eqquot} {\mathcal F}_i={\mathcal E}_i/{\mathcal E}_{i-1}\end{equation}
are each torsion-free with generic splitting type ${\mathcal O}_{{\mathbb P}^1}(\alpha_i)^{r_i}$.
Therefore, ${\mathcal F}'_i :={\mathcal F}_i\otimes p_2^*{\mathcal O}_{{\mathbb P}^1}(-\alpha_i)$ has generically trivial splitting type for each $i.$
A vector bundle on $C\times {\mathbb P}^1$ whose restriction to a general fiber of the projection to $\mathbb{P}^{1}$ is semistable of rank $r$ and degree $d$ gives rise to a rational curve $f : \mathbb{P}^{1} \to M$.
From (2.1), (2.2) in \cite{S}, the degree of the pull back of $\mc{E}$ with respect to the anticanonical bundle can be computed as the discriminant of $\mc{E}$:
$$ \deg(f) = \Delta ( {\mathcal E})=2rc_2( {\mathcal E})-(r-1)c_1( {\mathcal E})^2
=2r\sum_{i=1}^l c_2( {\mathcal F}_i')+2\sum_{i=1}^{l-1}(\text{rank} ( {\mathcal E}_i)d-\deg ( {\mathcal E}_i)r)(\alpha _i-\alpha_{i+1}) $$
As $d=h\bar d$ and $r=h\bar r$ with $h$ the greatest common divisor of $d$ and $r$, the degree $k$ of the rational curve, as defined in Definition \ref{defdeg}, is
\begin{equation}
\label{degree}
k=\bar r\sum_{i=1}^l c_2( {\mathcal F}_i')+\sum_{i=1}^{l-1}(\text{rank} ( {\mathcal E}_i)\bar d-\deg ( {\mathcal E}_i)\bar r)(\alpha _i-\alpha_{i+1})
\end{equation}
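Note that every term on the right-hand side of \eqref{degree} is nonnegative: $c_2({\mathcal F}_i')\ge 0$ by Lemma \ref{pbfC} below, $\alpha_i > \alpha_{i+1}$ by construction, and the semistability of the restriction of ${\mathcal E}$ to a generic fiber $C\times \{t\}$ forces ${\rm rank}({\mathcal E}_i)\bar d-\deg({\mathcal E}_i)\bar r\ge 0$.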
\medskip
\begin{lem}
\label{pbfC}
If a torsion free sheaf ${\mathcal E} $ on a ruled surface has generic trivial splitting type, then $c_2( {\mathcal E})\ge 0$
with equality if and only if ${\mathcal E}$ is the pullback of a locally free sheaf on $C$, i.e.~ $\mathcal{E}$ has trivial splitting type on {\bf each} fiber.
\end{lem}
\begin{proof}
See the proof of \cite{GL} Lemma 1.4 (or \cite{S}, Lemma 2.2) and \cite{H}.
\end{proof}
Recall that given a vector bundle $E$ on $C$, a point $P\in C$, and a surjective morphism $\phi : E \to \mc{O}_{P},$ the associated elementary transformation $E'$ is the kernel of $\phi.$
In particular $E'$ fits into an exact sequence
\begin{equation}
0 \to E' \to E \to \mc{O}_{P} \to 0
\end{equation}
\begin{lem}\label{mustbeeltr}
Denote the successive quotients from equation (\ref{eqquot}) by $\mc{F}_{i}$.
There exist a finite number of elementary transformations
$$0\to {\mathcal F}_{i1}\to {\mathcal F}_i\to {\mathcal O}_{\{ P_{i1}\}\times {\mathbb P}^1}(\beta_1)^{r'_1}\to 0,\
0\to {\mathcal F}_{i2}\to {\mathcal F}_{i1}\to {\mathcal O}_{\{ P_{i2}\}\times {\mathbb P}^1}(\beta_2)^{r'_2} \to 0,\dots $$
$$ \dots , 0\to {\mathcal F}_{ik_i}\to {\mathcal F}_{ik_i-1}\to {\mathcal O}_{\{ P_{ik_i}\}\times {\mathbb P}^1}(\beta_{k_i})^{r'_{k_i}}\to 0$$
with $ {\mathcal F}_{ik_i}$ the pull-back of a vector bundle on $C$.
\end{lem}
\begin{proof}
Recall that $ {\mathcal F}_i'$ has generic trivial splitting type. So, from Lemma \ref{pbfC}, $c_2( {\mathcal F}'_i)\ge 0$
with equality if and only if $ {\mathcal F}'_i$ is the pull back of a locally free sheaf on $C$.
If $c_2( {\mathcal F}'_i)> 0$, then the vector bundle is not the pull-back of a vector bundle on $C$.
Hence, from Lemma \ref{pbfC}, the fiber over a certain $P\in C$ has nontrivial splitting type.
$$( {\mathcal F}'_i)_{|\{ P \}\times {\mathbb P}^1}={\mathcal O}_{{\mathbb P}^1}(\gamma_1)^{r_1}\oplus {\mathcal O}_{{\mathbb P}^1}(\gamma_2)^{r_2}\oplus\dots
\oplus {\mathcal O}_{{\mathbb P}^1}(\gamma_p)^{r_p}, \ \gamma_1<\dots<\gamma_p .$$
Consider the natural map $ {\mathcal F}'_i\to {\mathcal O}_{\{ P\}\times {\mathbb P}^1}(\gamma_1)^{r_1}$ and the corresponding exact sequence
$$0\to {\mathcal F}_{i1}\to {\mathcal F}'_i\to {\mathcal O}_{\{ P\}\times {\mathbb P}^1}(\gamma_1)^{r_1}\to 0$$
As $ {\mathcal F}'_i$ has generically trivial splitting type and the same degree on every fiber, $\sum_{j=1}^p\gamma_jr_j=0$.
Then, the assumption $\ \gamma_1<\dots<\gamma_p $ implies $\gamma_1 <0$.
From the exact sequence defining $ {\mathcal F}_{i1}$,
$$c_2( {\mathcal F}_{i1})=c_2( {\mathcal F}'_i)+r_1\gamma_1< c_2( {\mathcal F}'_i).$$
Take $\beta_1$ in the statement of the lemma to be $\gamma_1$.
As the cokernel of the injective map $ {\mathcal F}_{i1}\to {\mathcal F}'_i$ is concentrated on a fiber, the generic splitting type of $ {\mathcal F}_{i1}$ is still trivial.
If $c_2( {\mathcal F}_{i1})> 0$, we repeat the process.
We obtain a sequence of bundles ${\mathcal F}_{i1}, {\mathcal F}_{i2}, \dots $ with exact sequences as in the statement of the lemma and
$$\dots < c_2( {\mathcal F}_{i3})<c_2( {\mathcal F}_{i2})<c_2( {\mathcal F}_{i1})< c_2( {\mathcal F}'_i).$$
From Lemma \ref{pbfC}, $c_2( {\mathcal F}_{ik})\ge 0$ for all $k$ with equality only if $ {\mathcal F}_{ik}$ is the pull back of a sheaf on $C$.
The process can be continued so long as $c_2( {\mathcal F}_{ik})>0$; as the $c_2( {\mathcal F}_{ik})$ are nonnegative and strictly decreasing, the process must stop.
Hence, there exists a $k_i$ such that $c_2( {\mathcal F}_{ik_i})=0$, and then $ {\mathcal F}_{ik_i}$ is the pull back of a sheaf on $C$.
Hence, ${\mathcal F}'_i$ can be obtained by doing elementary transformations from the pull back of a bundle on $C$.
\end{proof}
\begin{cor}
Given a family of rational curves in $M$, one can find families of vector bundles and divisors on $C$ so that
the rational curves live in spaces of successive extensions and elementary transformations.
\end{cor}
\section{Unobstructed components}\label{secuncomp}
\begin{defn}
If $E$ is a vector bundle of rank $r$ and degree $d$ on $C,$ then for each positive $r'<r$ the \textit{$r'$ Segre invariant} of $E$ is defined as
\begin{equation*}
s_{r'}(E) := \min\Biggl\{\begin{vmatrix} r' & r \\ d' & d \end{vmatrix} : {\exists}~ \textnormal{subbundle }E' \subset E\textnormal{ of rank }r'\textnormal{ and degree }d'\Biggr\}
\end{equation*}
\end{defn}
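For example, when $r=2$ and $r'=1$ this is the classical Segre invariant $s_{1}(E) = d - 2\max\{\deg E' : E' \subset E \textnormal{ a line subbundle}\}$.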
Note that $E$ is stable if and only if $s_{r'}(E) > 0$ for all positive $r' < r.$ If $E$ is a generic stable vector bundle of rank $r$ and degree $d$, we have from Satz 2.2 of \cite{L1} and Th\'{e}or\`{e}me 4.4 of \cite{Hi} that
\begin{equation*}
r'(r-r')(g-1)\le s_{r'}(E)<r'(r-r')(g-1)+r, \hskip20pt \ s_{r'}(E) \equiv r'd \ {\rm mod} \ r
\end{equation*}
One has the following results \cite{RT}:
\begin{prop}
\label{Lange}
When $0 < s \le r'(r-r')(g-1)$ the \textit{Segre locus}
\begin{equation*}
{\rm S}_{(r',s)}(r,d) := \{ E \in U(r,d) : s_{r'}(E) = s\}
\end{equation*}
is nonempty of codimension $r'(r-r')(g-1)-s$ in $U(r,d).$ Moreover, the generic element $E$ of ${\rm S}_{(r',s)}(r,d)$ is an extension of the form
\begin{equation*}
0 \to E' \to E \to {E}'' \to 0
\end{equation*}
where $E', {E}''$ are generic elements of $U(r',d')$ and $U(r-r',d-d'),$ respectively (here $d'=\frac{dr'-s}{r}$). For all such $(r',s)$ we have the inclusion
\begin{equation}
\label{inclLanloc}
{\rm S}_{(r',s)}(r,d) \subset \overline{{\rm S}_{(r',s+r)}(r,d)}, \hskip20pt s<r'(r-r')(g-1);
\end{equation}
\begin{equation*}
{\rm S}_{(r',s)}(r,d)=U(r,d), \hskip20pt \ s\ge r'(r-r')(g-1)
\end{equation*}
\end{prop}
\begin{lem}
\label{lem:stab}
Let $L$ be a fixed line bundle on a curve $C$, and let $E_1, E_2$ be generic stable vector bundles of ranks $r_1, r_2$ and degrees $d_1, d_2$, respectively, with $\frac {d_1}{r_1}< \frac {d_2}{r_2}$ and $\det E_1\otimes \det E_2=L$.
Then, for a generic extension $0\to E_1\to E\to E_2\to 0$, the bundle $E$ is stable.
In fact, the locus of non-stable bundles inside the space of extensions has codimension at least $r_1r_2(g-1)$.
\end{lem}
\begin{proof}
By Proposition \ref{Lange}, the stratification of $M$ by the Segre invariant satisfies
\begin{equation*}
{\rm S}_{(r',s)}(r,d) \subseteq \overline{{\rm S}_{(r',s+r)}(r,d)}
\end{equation*}
when $s<r'(r-r')(g-1).$ As the moduli space of vector bundles of given rank and degree is nonsingular at any stable point, it suffices to prove the result for the smallest values of $s$,
namely $0<s\le r$.
The result is known without fixing the determinant (p.493 of \cite{RT}).
Two moduli spaces of vector bundles with fixed determinant are isomorphic if the degrees are the same or, more generally, congruent modulo $r$.
Therefore, the result is also true with the assumption of fixed determinant.
\end{proof}
\begin{lem}
\label{stabtorext}
Assume that $E$ is a generic vector bundle of rank $r$, degree $d+1$ and Segre invariant $s<r_1(r-r_1)(g-1)$.
Then a generic elementary transformation of $E$ is a vector bundle of rank $r$ and degree $d$ with Segre invariant $s+(r_1-r)$.
Moreover, if $E$ is generic with given invariant $s$, its elementary transformation is generic with given invariant $s+r_1-r$.
\end{lem}
\begin{proof}
A generic vector bundle with Segre invariant $s$ corresponds to a generic extension $0\to E_1\to E\to E_2\to 0$
with $E_1, E_2$ of ranks $r_1, r_2=r-r_1$ and degrees $d_1, d_2=d-d_1$ and $r_2d_1-r_1d_2=s$.
The subbundle $E_1$ is the unique subbundle of $E$ of this rank and degree.
A generic elementary transformation does not preserve $E_1$;
hence, the generic elementary transformation has invariant $s+(r_1-r)$ (e.g.~ \cite{BL}, Lemma 1.5).
Considering the dual vector space, the process can be reversed, hence a generic element in ${\rm S}_{(r_1,s+r_1-r)}(r,d)$ must come from a generic element in ${\rm S}_{(r_1,s)}(r,d)$.
\end{proof}
\begin{prop}
\label{prop:fam-ext}
Let $r \geq 2$ and let $E_1, E_2$ be generic vector bundles of ranks $r_1, r_2=r-r_1$ and degrees $d_1, d_2=d-d_1$ with $\frac {d_1}{r_1}< \frac {d_2}{r_2}$ and $\det E_1\otimes \det E_2=L$.
On $C \times \mathbb{P}^1$ consider a family of extensions of the form
\begin{equation}\label{equnivext}
0 \to p_{1}^{\ast}E_1 \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(1) \to \mc{E} \to p_{1}^{\ast}E_2 \to 0
\end{equation}
Consider a point in the space of extensions over $C \times \mathbb{P}^1$ as a curve in $M$. The degree of this curve is $\bar dr_1-d_1\bar r.$
\end{prop}
\begin{proof}
The stability of the general extension in this family follows from Lemma \ref{lem:stab}.
In fact, as the locus of non-stable extensions has codimension at least two, there are whole rational lines contained in the stable locus.
The degree is computed in equation (\ref{degree}).
\end{proof}
\begin{lem}
\label{lem:dimh0}
Consider an exact sequence of vector bundles
\[ 0 \to E_1 \to E \to E_2 \to 0 \]
If $E_1, E_2$ are semistable and $\mu(E_1)< \mu(E_2)$, then $h^0({\mc{H}}om(E_2, E_1))=0.$
\end{lem}
\begin{proof}
The image of a non-trivial morphism $f: E_2\to E_1$ would be both a quotient of $E_2$ and a subsheaf of $E_1$.
From semistability, $\mu(E_2)\le \mu({\rm Im}\, f)\le \mu (E_1)$, which is incompatible with the assumptions.
\end{proof}
\begin{prop}
\label{dimfamext}
There is a family of maps from rational curves to $M$ as described in equation (\ref{equnivext}) with varying $E_1, E_2$.
The family is parametrized by the Grassmannian of lines of a projective extension space over the space of pairs of bundles of fixed product determinant.
It is unobstructed and has the expected dimension (writing $k$ for the degree as in (\ref{degree}))
$$\text{dim} M+2hk,\ k=\bar dr_1-d_1\bar r.$$
\end{prop}
\begin{proof}
While there are no Poincar\'e bundles on the moduli spaces $U_C(r_i,d_i)$ when $r_i, d_i$ are not coprime,
from \cite{NR} Prop 2.4, there exists an \'etale cover $U_1$ of $U_{C}(r_1,d_1)$
such that there is a universal bundle ${\mathcal E}_1$ on $U_1\times C$.
Similarly, there exists an \'etale cover $U_2$ of $U_{C}(r_2,d_2)$ and a universal bundle ${\mathcal E}_2$ on $U_2\times C$.
In what follows we will abuse notation and identify elements of $U_{1}$ and $U_{2}$ with their images under the associated \'{e}tale covers. We define
$$ {\mathcal U}_{L} := \{(E_{1},E_{2}) \in U_1\times U_2 : \det E_1\otimes \det E_2 \cong { L}\}$$
Since this is a fiber of a surjective map from $U_{1} \times U_{2}$ to ${\rm Pic}^{d}(C)$, we have that
$$\text{dim}({\mathcal U}_{L}) = \text{dim}(U_{1})+\text{dim}(U_{2})-g = r_1^2(g-1)+1+r_2^2(g-1)+1-g $$
We will be using the natural projection maps
$$p_1 : {\mathcal U}_{L} \times C \to {\mathcal U}_{L}, p_2 : {\mathcal U}_{L} \times C\to C, \pi_{i} : {\mathcal U}_{L} \to U_{i} (i=1,2)$$
From Lemma \ref{lem:dimh0} and Grauert's theorem,
\begin{equation*}
R^{1}{p_1}_{\ast}{\mc{H}}om((\pi_{2} \times 1_{C})^{\ast}\mc{E}_{2},(\pi_{1} \times 1_{C})^{\ast}\mc{E}_{1})
\end{equation*}
is a vector bundle on ${\mathcal U}_{L}$ whose fiber over $(E_{1},E_{2})$ is ${\rm Ext}^{1}(E_{2},E_{1}).$
Its rank is
$$\text{dim}\, {\rm Ext}^{1}(E_{2},E_{1}) = h^1(E_{2}^{\vee} \otimes E_{1})= r_1r_2(g-1)+r_1d_2-r_2d_1$$
Consider the projective bundle
\begin{equation}
\label{eqextsp}
\pi : \mathbb{P} := {\mathbb P}(R^{1}{p_1}_{\ast}{\mc{H}}om((\pi_{2} \times 1_{C})^{\ast}{\mathcal E}_2,
(\pi_{1} \times 1_{C})^{\ast}{\mathcal E}_1)) \to \mc{U}_{L}
\end{equation}
There is a canonical extension on $\mathbb{P}\times C $
\begin{equation}\label{eqcanext}
0\to ((\pi_{1} \circ \pi) \times 1_{C})^{\ast} {\mathcal E}_1\otimes {\mathcal O}_{{\mathbb P}}(1)\to \mathcal E\to ((\pi_{2} \circ \pi) \times 1_{C})^{\ast} {\mathcal E}_2\to 0\end{equation}
By Lemma \ref{lem:stab} the general extension is stable outside a locus of codimension at least 2, hence there are lines in $\mathbb{P} $ entirely contained in the stable locus.
The restriction of ${\mathcal E}$ to the fiber over a point of $\mathbb{P}$ gives a vector bundle on $C $.
Therefore, for every line in $\mathbb{P} $ and every morphism from $\mathbb{P} ^1$ to this line, we obtain a map from $\mathbb{P} ^1$ to $M$.
As the line moves in ${\mathbb P}$ and the morphism from $\mathbb{P} ^1$ to this line moves, we obtain a family of maps from ${\mathbb P}^1$ to $M$.
The restriction of the canonical extension in equation (\ref{eqcanext}) to $\mathbb{P} ^1\times C$ where $\mathbb{P} ^1$ is a line in $\mathbb{P} $
shows that for a fixed $P\in C$, the restriction of $ \mathcal E$ to ${\mathbb P} ^1\times \{ P \}$ is of the form
$ {\mathcal O}_{{\mathbb P}^1}(1)^{r_1}\oplus {\mathcal O}_{{\mathbb P}^1}^{r_2}$.
Hence, from Lemma \ref{unobs}, the component we are constructing is unobstructed.
We compute the dimension of the family of lines in $\mathbb{P}$ and add to this the dimension of the linear group of $\mathbb{P}^1 $
$$\text{dim}~Gr(\mathbb{P}^1,{\mathbb{P}}) =\text{dim}~{\mathcal U}_{L}+\text{dim}~{\mathbb{P}({\rm Ext}^{1}(E_{2},E_{1})})+\text{dim}~{\rm Aut}({\mathbb P}^1)= $$
$$ r_1^2(g-1)+1+r_2^2(g-1)+1-g+2[r_1r_2(g-1)+r_1d_2-r_2d_1-2]+3$$
$$=(r^2-1)(g-1)+2[r_1d_2-r_2d_1]=\text{dim}(M)+2hk$$
\end{proof}
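\begin{rem}
Let us record why $\text{dim}\, M+2hk$ is the expected dimension (a standard fact, stated here for the reader's convenience): for a map $f:{\mathbb P}^1\to M$, the expected dimension of the space of maps is
$$\chi(f^{\ast}T_M)=\text{dim}\, M+\deg f^{\ast}(-K_M),$$
and, as the anticanonical class of $M$ is $2h$ times the ample generator of ${\rm Pic}(M)$, a map of degree $k$ gives $\deg f^{\ast}(-K_M)=2hk$.
\end{rem}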
Our next goal is to consider rational curves in $M$ whose general point is an extension of a torsion sheaf by a vector bundle of rank $r$.
Using again the correspondence between rational curves in $M$ and vector bundles on
$C \times \mathbb{P}^1$, we can consider families of extensions of the form
\begin{equation}\label{extskysc}
0 \to p_{1}^{\ast}E' \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(1) \to \mc{E} \to p_{1}^{\ast} {\mathcal O}_D\to 0
\end{equation}
for a divisor $D$ on $C$ of degree $t$.
\begin{prop} \label{torsext}
There is a family of maps from rational curves to $M$ as described in (\ref{extskysc}) with varying $E', D$.
The family is parametrized by an extension space over the space of pairs of vector bundles and divisors of fixed degree with fixed product determinant.
The family is unobstructed and has the expected dimension
$$\text{dim} M+2hk,\ k=\bar r\deg D.$$
\end{prop}
\begin{proof}
Consider the Hilbert scheme ${\mathcal H}_t$ of divisors of degree $t$ on $C$ with universal subscheme ${\mathcal D}$ on ${\mathcal H}_t\times C$.
Let $U$ be a covering of the moduli space of vector bundles of rank $r$ and degree $d-t$ such that a Poincar\'e universal bundle ${\mathcal P}$ exists on $U\times C$.
Define
$$S=\{ (D, E) \in {\mathcal H}_t\times U| \ (\wedge^rE)(D)=L\},\ \pi_1: S\to {\mathcal H}_t, \ \pi_2 :S\to U.$$
On $S\times C\times {\mathbb P}^1$ (with projections $p_i, \ i=1,2,3$), consider the space of extensions
\begin{equation*}
{\mathbb P}={\mathbb P}({\rm Ext}^1(((\pi_1\circ p_{1})\times p_2)^{\ast}{\mathcal O}_{\mathcal D},
((\pi_2\circ p_1)\times p_2)^{\ast}{\mathcal P} \otimes p_3^*{\mathcal O}_{{\mathbb P}^1}(1)))
\end{equation*}
The universal exact sequence takes the form
\begin{equation}
\label{eq:torsion-ext}
0 \to ((\pi_2\circ p_1)\times p_2)^{\ast}{\mathcal P} \otimes p_3^*{\mathcal O}_{{\mathbb P}^1}(1) \to \mc{E} \to
( (\pi_1\circ p_{1})\times p_2)^{\ast}{\mathcal O}_{\mathcal D} \to 0
\end{equation}
From Lemma \ref{stabtorext}, the generic extension restricted to the fiber over a point in $S$ gives rise to a stable vector bundle on $C$.
From (\ref{degree}), the generic point in ${\mathbb P}$ parameterizes rational curves in $M$ of degree $\bar rt.$
From the definition of $S$,
$$\text{dim} S=r^{2}(g-1)+1+t-g=(r^{2}-1)(g-1)+t$$
On the other hand, if $D=\sum_{i=1}^tP_i$ then
\begin{equation*}
{\rm Ext}^1({\mathcal O}_D, E\otimes {\mathcal O}_{{\mathbb P}^1}(1)) \cong \displaystyle\bigoplus _{i=1}^t{\rm Ext}^1({\mathcal O}_{P_i}, E\otimes {\mathcal O}_{{\mathbb P}^1}(1))
\end{equation*}
An extension consists of a collection of $t$ extensions.
Changing any of them by multiplication with a non-zero constant does not change the vector bundle.
Therefore the dimension of the family is
\begin{equation*}
\text{dim} S+\text{dim}{\rm Ext}^1({\mathcal O}_D, E\otimes {\mathcal O}_{{\mathbb P}^1}(1))-t= (r^{2}-1)(g-1)+t+2tr-t
\end{equation*}
\begin{equation*}
=(r^{2}-1)(g-1)+2tr =\text{dim} M+2hk.
\end{equation*}
\end{proof}
Consider now rational curves of higher degree $a$ inside the space of extensions defined in equation (\ref{eqextsp}).
We will see that the images of these rational curves in $M$ fill the whole of $M$ only when
the degree $k$ is divisible by $(g-1)r_1(r-r_1)$ for some $r_1, 1\le r_1\le r-1$ (see Theorem \ref{excomp}).
For those particular degrees, we obtain the generalization of the ``almost nice component'' in \cite{C}.
Consider now the combination of the two constructions
$$0 \to p_{1}^{\ast}E_1 \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(1) \to {\mathcal E}' \to p_{1}^{\ast}E_2 \to 0$$
$$0 \to {\mathcal E}' \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(1) \to {\mathcal E} \to p_{1}^{\ast} {\mathcal O}_D\to 0$$
\begin{lem}\label{mixed}
There is a family of maps from rational curves to $M$ as described above with varying $E_1, E_2, D$.
The family has dimension smaller than expected and therefore it is not a component of the Hilbert scheme of maps from ${\mathbb P}^1$ to $M$.
\end{lem}
\begin{proof} The degree can be computed from equation (\ref{degree}) as
$$hk=r_1d-rd_1+r\deg D$$
The computation of the dimension of the family is similar to the cases above. It is given as
$$\text{dim} U(r_1, d_1)+\text{dim} U(r_2, d_2)+\deg D-g+2h^1(E_2^*\otimes E_1)+r\deg D-\deg D$$
Using that $d=d_1+d_2+\deg D$ and the expression for $k$, the dimension of the family is given as
$$\text{dim} M+2hk-2r_1\deg D< \text{dim} M+2hk$$
\end{proof}
\begin{thm}
\label{teornumcomp}
Given $d, r>0, \ k>0$ integers, let $h$ be the greatest common divisor of $r, d$.
There exist $h$ different families of unobstructed maps from rational curves to $M$ of degree $k$.
\end{thm}
\begin{proof}
Write $d=h\bar d, \ r=h\bar r$. As $\bar d, \bar r$ are relatively prime, there exist unique integers $r_0$ and $d_0$ such that $0 \leq r_0 < \bar{r}$ and $(r_{0},d_{0})$ is a solution of
\begin{equation}
\label{eq:dioph}
\bar{d}x-\bar{r}y = k
\end{equation}
There are exactly $h$ choices for a solution $(r_{1},d_{1})$ of (\ref{eq:dioph}) satisfying $0 \leq r_{1} < r.$
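For instance (a purely illustrative example, not needed for the argument): for $r=4$ and $d=2$ we have $h=2$, $\bar r=2$, $\bar d=1$, and for $k=3$ equation (\ref{eq:dioph}) reads $x-2y=3$; its solutions with $0 \leq r_{1} < 4$ are exactly the $h=2$ pairs $(r_1,d_1)=(1,-1)$ and $(r_1,d_1)=(3,0)$.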
Assume first that $r_1>0$. Defining $d_2 := d-d_1, r_2 := r-r_1$, we have
\begin{equation}
d_2r_1-r_{2}d_{1}=dr_1-rd_{1}=hk > 0
\end{equation}
It follows at once that $\frac{d_1}{r_1} < \frac{d_2}{r_2}$; in particular, the conditions of Proposition \ref{prop:fam-ext} are satisfied. Moreover, from Proposition \ref{dimfamext} there is a family of maps corresponding to points in the space of extensions of the pull back
of a vector bundle of rank $r_2$ and degree $d_2$ by the pull back of a vector bundle of rank $r_1$ and degree $d_1$ tensored with ${\mathcal O}_{{\mathbb P}^1}(1)$.
We know that this family has the right dimension and the generic point is unobstructed.
If $r_1=0$, $k$ is divisible by $\bar r$. We can then use Proposition \ref{torsext} instead of Proposition \ref{prop:fam-ext} for the construction of one of the components.
We need to check that these are the only unobstructed components.
From Lemma \ref{unobs}, an unobstructed component has one or two steps in the Harder-Narasimhan filtration.
From Lemma \ref{mustbeeltr}, the pieces we use to build extensions come from elementary transformations of pull backs of vector bundles on the curve $C$.
From Lemma \ref{mixed}, combining elementary transformations with the two-step extensions does not produce components.
Therefore, there are no other unobstructed components.
\end{proof}
\section{Additional components}\label{secadcomp}
From equations (\ref{decP1}), (\ref {degree}) and Lemma \ref{unobs}, any additional components would come from families in which the restriction of the vector bundle to the generic ${\mathbb P}^1$
is a direct sum of line bundles of at least three different degrees, or of line bundles of two degrees that differ by more than one unit.
We look at the second case first.
\medskip
\begin{lem}
\label{prop:exttw}
Let $r \geq 2$ and assume that $E_1, E_2$ are generic semistable vector bundles of respective ranks $r_1, r_2$ and degrees $d_1, d_2$ with $r_{1}+r_{2}=r,$ $d_{1}+d_{2}=d,$ $\frac {d_1}{r_1}< \frac {d_2}{r_2}$ and $\det E_1\otimes \det E_2 \cong L$.
On $C \times \mathbb{P}^1$ consider an extension of the form
\begin{equation}
\label{eq:exttw}
0 \to p_{1}^{\ast}E_1 \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(a) \to \mc{E} \to p_{1}^{\ast}E_2 \to 0
\end{equation}
\begin{itemize}
\item[(i)]{For general $y \in \mathbb{P}^{1}$ the restriction $\mc{E}|_{C \times y}$ is a stable bundle, and the degree of the associated rational curve in $M$ is $k=a[\bar dr_1-d_1\bar r].$}
\item[(ii)]{If $a > 1,$ the family of rational curves on $M$ obtained from {\rm (\ref{eq:exttw})} by varying $E_1$ and $E_2$ is obstructed and has dimension
$$ \text{dim}{M}+hk+(a-1)r_1r_2(g-1)+[r_1d_2-r_2d_1].$$}
\end{itemize}
\end{lem}
\begin{proof}
The stability of the general extension is proved as in Proposition \ref{prop:fam-ext}, while the degree computation follows from equation (\ref{degree}); this proves (i).
We now turn to (ii), whose proof is as in Proposition \ref{dimfamext}.
Using again ${\mathcal U}_{L}$ for the space of pairs of bundles whose determinants multiply to the given $L$,
consider the projective bundle over ${\mathcal U}_{L}$
$$\mathbb{P}_a:= {\mathbb P}(R^{1}{\pi}_{\ast}{\mc{H}}om(((\pi_{2} \circ p_1)\times p_2)^{\ast}{\mathcal E}_2,
((\pi_{1} \circ p_1)\times p_2)^{\ast}{\mathcal E}_1\otimes p_3^*{\mathcal O}_{{\mathbb P}^1}(a))).$$
There is a canonical extension on $\mathbb{P}_a\times C\times {\mathbb P}^1 $
\[ 0\to ((\pi_{1} \circ p_1)\times p_2)^{\ast} {\mathcal E}_1\otimes p_3^*{\mathcal O}_{{\mathbb P}^1}(a)\to
{\mathcal E}\to( (\pi_{2} \circ p_1)\times p_2)^{\ast} {\mathcal E}_2\to 0\]
The restriction of ${\mathcal E}$ to the fiber over a point of $\mathbb{P}_a$ gives a vector bundle on $C\times {\mathbb P}^1 $ and therefore
a rational curve in $M$. As the point moves in $\mathbb{P}_a$, we obtain a family of rational curves in $M$.
The restriction of ${\mathcal E}$ to the fiber over a point of $\mathbb{P}_a\times C$ is of the form
${\mathcal O}_{{\mathbb P}^1}(a)^{r_1}\oplus {\mathcal O}_{{\mathbb P}^1}^{r_2}$.
Hence, from Lemma \ref{unobs}, if $a>1$, the family we are constructing is obstructed.
By Lemma \ref{lem:stab} the general extension is stable.
From the correspondence between vector bundles on $C\times {\mathbb P}^1$ and rational curves in $M$, $\mathbb{P}_a$ gives rise to a family of such maps.
$$ \text{dim} (\mathbb{P}_a )=\text{dim}(U_{\mc{L}})+\text{dim} ( \mathbb{P}({\rm Ext}^{1}(E_{2},E_{1}(a))))= $$
$$ r_1^2(g-1)+1+r_2^2(g-1)+1-g+(a+1)[r_1r_2(g-1)+r_1d_2-r_2d_1]-1=$$
$$ =(r^2-1)(g-1)+(a-1)r_1r_2(g-1)+ah(r_1\bar d-\bar r d_1) +[r_1d_2-r_2d_1]=$$
$$=\text{dim}(M)+hk+(a-1)r_1r_2(g-1)+[r_1d_2-r_2d_1]$$
\end{proof}
We now consider families of extensions of the form
\begin{equation}\label{extskysctw}
0 \to p_{1}^{\ast}E' \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(a) \to \mc{E} \to p_{1}^{\ast} {\mathcal O}_D\to 0
\end{equation}
for a divisor $D$ on $C$ of degree $t$.
\begin{lem} \label{torsexttw}
There is a family of maps from rational curves to $M$ as described in (\ref{extskysctw}) with varying $E', D$.
The family is parametrized by an extension space over the space of pairs of vector bundles and divisors of fixed degree with fixed product determinant
and has dimension
$$\text{dim} M+hk+r\deg D,\ k=a\bar r\deg D$$
For $a>1$, the family is obstructed and is not a component of the space of maps from ${\mathbb P}^1$ to $M$.
\end{lem}
\begin{proof}
The proof is as in Proposition \ref{torsext}, replacing ${\mathbb P}$ by
$${\mathbb P}_a={\mathbb P}({\rm Ext}^1(((\pi_1\circ p_{1})\times p_2)^{\ast}{\mathcal O}_{\mathcal D},
((\pi_2\circ p_1)\times p_2)^{\ast}{\mathcal P}\otimes p_3^*{\mathcal O}_{{\mathbb P}^1}(a)))$$
Lemma \ref{stabtorext} can still be applied to prove the stability of the generic extension.
From (\ref{degree}), the generic point in ${\mathbb P}_a$ parameterizes rational curves in $M$ of degree $a\bar rt.$
The dimension of the family is
$$\text{dim}\, S+\text{dim}\,{\rm Ext}^1({\mathcal O}_D, E\otimes {\mathcal O}_{{\mathbb P}^1}(a))-t=
(r^{2}-1)(g-1)+t+(a+1)tr-t
=\text{dim}\, M+hk+tr$$
For this family to be a component of the space of rational maps to $M$, it would have to have dimension at least the expected dimension.
This would imply that $tr\ge atr$ which in turn implies $a\le 1$.
\end{proof}
\begin{thm}
\label{excomp}
For $a>1$, if the family described in Lemma \ref{prop:exttw} is an (obstructed) component of the space of maps from ${\mathbb P}^1$ to $M$ of degree $k$ with $hk=a(r_1d-rd_1)$,
then a vector bundle in the image rational curve in $M$ is not generic except when $k$ is divisible by $(g-1)r_1(r-r_1)$ for some $r_1, 1\le r_1\le r-1$.
\end{thm}
\begin{proof}
We first show that the dimension we found in Lemma \ref{prop:exttw} is at least the expected dimension $\text{dim} M +2hk$ if and only if $r_1d-rd_1\le r_1(r-r_1)(g-1)$.
As the expected dimension is the smallest dimension a component of the space of maps can have, this will suffice to prove that the family is not a component of the space of maps
if $r_1d-rd_1> r_1(r-r_1)(g-1)$.
We will only need to deal with the case when that inequality is an equality.
The condition on the dimension can be written as
$$\text{dim}(M)+hk+(a-1)r_1r_2(g-1)+[r_1d_2-r_2d_1] \ge \text{dim}(M)+2hk$$
Using that $hk=a(r_1 d- r d_1)$, this is equivalent to
$$(a-1)r_1(r-r_1)(g-1)+(r_1 d - rd_1)\ge a (r_1 d - rd_1) $$
As we are assuming $a>1$, the inequality is preserved by dividing by $a-1$. We obtain the equivalent inequality
\begin{equation}
\label{eq:almost-nice}
r_1(r-r_1)(g-1)\ge r_1d-rd_1
\end{equation}
When the inequality is strict, this implies (see equation (\ref{inclLanloc})) that the corresponding vector bundle is special.
When the inequality is an equality, we obtain the special case in which $k$ is divisible by $(g-1)r_1(r-r_1)$.
\end{proof}
\begin{rem}
When $r=2$ and $d=1,$ equality in (\ref{eq:almost-nice}) implies that $g$ is even, so that we recover the full statement of Theorem 1.6 in \cite{C}.
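Indeed, with $r_1=1$, equality in (\ref{eq:almost-nice}) reads $g-1=d-2d_1=1-2d_1$, so $d_1=\frac{2-g}{2}$, which is an integer exactly when $g$ is even.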
\end{rem}
\begin{cor}
\label{excompr=2}
If $r=2, a>1$, the family described in Lemma \ref{prop:exttw} is an (obstructed) component of the space of maps from ${\mathbb P}^1$ to $M$ of degree $k=a(d-2d_1)$
if and only if $d-2d_1<g-1$, or equivalently, when the vector bundle in the image rational curve in $M$ is not generic.
\end{cor}
\begin{proof}
The only-if part has already been proved.
It remains to show that, under the given conditions, the family is actually a component of the space of rational curves.
For this, it suffices to show that it is not in the closure of a larger family of such curves.
Assume first that $r$ is general and that $r_1d-rd_1<r_1(r-r_1)(g-1)$ as in Theorem \ref{excomp}.
By construction of the family in (\ref{eq:exttw}), a point in the image rational curve in $M$ is a
vector bundle on $C$ which is an extension of a vector bundle of rank $r_2$ and degree $d_2$ by a vector bundle of rank $r_1$ and degree $d_1$.
From the inclusion in equation (\ref{inclLanloc}), if this family is contained in a larger component, the $d_1$ should decrease.
On the other hand, a family of vector bundles on ${\mathbb P}^1$ of the form ${\mathcal O}_{{\mathbb P}^1}(a)^{r_1}\oplus {\mathcal O}_{{\mathbb P}^1}^{r_2}$
can be deformed by moving some of the degree of ${\mathcal O}_{{\mathbb P}^1}(a)$ to some of the ${\mathcal O}_{{\mathbb P}^1}$.
As we can normalize by tensoring with a line bundle on ${\mathbb P}^1$, this has the effect of decreasing the $a$.
If we assume that the $k$ stays constant, then $ad_1$ is constant and therefore, $d_1$ can be written in terms of $a$.
Writing the dimension in Lemma \ref{prop:exttw} as a function of $a$ alone, we notice that it is an increasing function of $a$.
Hence, the family corresponding to a value of $a$ cannot be in the closure of the family corresponding to a different value.
While for arbitrary $r$ one could deform the family to a family in which the decomposition of the restriction of the bundle to the rational curve has more summands, this cannot happen for rank two.
This concludes the proof in this case.
\end{proof}
Consider now vector bundles on $C\times {\mathbb P}^1$ whose restriction to the generic ${\mathbb P}^1$ is a direct sum of line bundles of at least three different degrees.
Up to tensoring with a line bundle on ${\mathbb P}^1$, we can assume that one of the summands is trivial,
$${\mathcal O}_{{\mathbb P}^1}(a_1+\dots +a_{l-1})^{r_1}\oplus {\mathcal O}_{{\mathbb P}^1}(a_2+\dots +a_{l-1})^{r_2}\oplus \dots
\oplus {\mathcal O}_{{\mathbb P}^1}(a_{l-1})^{r_{l-1}}\oplus {\mathcal O}_{{\mathbb P}^1}^{r_l}$$
We would be considering extensions of the form
\begin{equation}
\label{ext3}
\begin{matrix}
0 \to p_{1}^{\ast}E_1 \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(a_1) \to{ \mathcal E }_2' \to p_{1}^{\ast}E_2 \to 0, &
0 \to {\mathcal E}_2 ' \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(a_2) \to {\mathcal E}_3 ' \to p_{1}^{\ast}E_3 \to 0 \\
\dots \ \ \ ,& \ 0 \to {\mathcal E}_{l-1} ' \otimes p_{2}^{\ast}\mc{O}_{\mathbb{P}^1}(a_{l-1}) \to {\mathcal E} \to p_{1}^{\ast}E_l \to 0
\end{matrix}
\end{equation}
We can obtain a family of such extensions by considering successive spaces of extensions, similarly to the construction in Proposition \ref{dimfamext}:
\begin{lem}
\label{fam2se}
Given positive integers $a_1,\dots, a_{l-1}, r_1,\dots, r_l$, arbitrary integers $d_1,\dots, d_l$ with
$$r_1+\dots+r_l=r, \ d_1+\dots+d_l=d, \ \frac{d_1}{r_1}<\frac{d_2}{r_2}<\dots <\frac{d_l}{r_l}$$
there is a family of maps from rational curves to $M$ whose generic restriction to ${\mathbb P}^1$
is $${\mathcal O}_{{\mathbb P}^1}(a_1+\dots +a_{l-1})^{r_1}\oplus{\mathcal O}_{{\mathbb P}^1}(a_2+\dots +a_{l-1})^{r_2}\oplus \dots
\oplus {\mathcal O}_{{\mathbb P}^1}(a_{l-1})^{r_{l-1}}\oplus {\mathcal O}_{{\mathbb P}^1}^{r_l}$$
The degree is obtained from
$$hk=\sum_{i<j}(r_id_j-r_jd_i)(a_i+a_{i+1}+\dots+a_{j-1})$$
The family is obstructed if $l\ge 3$ and has dimension
$$\text{dim} M+\sum_{i<j}(r_id_j-r_jd_i)(a_i+a_{i+1}+\dots +a_{j-1}+1)+[\sum_{i<j}r_ir_j(a_i+a_{i+1}+\dots +a_{j-1}-1)](g-1)$$
\end{lem}
\begin{proof}
Denote by $U(r_i,d_i)$ a suitable cover of the moduli space of vector bundles of rank $r_i$ and degree $d_i$ such that on $C\times U(r_i,d_i)$ a Poincar\'e bundle ${\mathcal E}_i$ exists.
Consider the space ${\mathcal U}_{L}$ of $l$-tuples of bundles whose determinants multiply to the given $L$.
Denote by $\pi_1, \pi_2,\dots, \pi_l$ the projection of $U(r_1,d_1)\times U(r_2,d_2)\times \dots \times U(r_l,d_l)$ onto $U(r_1,d_1), U(r_2,d_2), \dots ,U(r_l,d_l)$ respectively
as well as the restriction of these projections to ${\mathcal U}_{L}$.
Denote by $p_1, p_2, p_3$ the projection of $ C \times {\mathbb P}^1\times {\mathcal U}_{L}$ onto $ C , {\mathbb P}^1, {\mathcal U}_{L}$ respectively.
Consider the projective bundle over ${\mathcal U}_{L}$
$$\mathbb{P}_1:= {\mathbb P}(R^{1}{p}_{3\ast}{\mc{H}}om(( p_1\times (\pi_{2} \circ p_3))^{\ast}{\mathcal E}_2,
( p_1\times (\pi_{1} \circ p_3))^{\ast}{\mathcal E}_1\otimes p_2^*{\mathcal O}_{{\mathbb P}^1}(a_1)))$$
with canonical extension on $ C\times {\mathbb P}^1\times \mathbb{P}_1 $
\[ 0\to ( p_1\times (\pi_{1} \circ p_3))^{\ast} {\mathcal E}_1\otimes p_2^*{\mathcal O}_{{\mathbb P}^1}(a_1)\to {\mathcal E}_2'\to
( p_1\times (\pi_{2} \circ p_3))^{\ast} {\mathcal E}_2\to 0.\]
We then construct a bundle $\mathbb{P}_{2}$ over $\mathbb{P}_{1}$ by considering extensions of the pull back of ${\mathcal E}_3$ by
${\mathcal E}_2'\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_2)$.
More generally, we construct $\mathbb{P}_{j}$ as a projective bundle over $ {\mathbb P}_{j-1}, j=2,\dots, l-1$ (we omit pull back maps and write $\boxtimes$ instead):
$$\mathbb{P}_j:= {\mathbb P}(R^{1}{p}_{3\ast}{\mc{H}}om( {\mathcal E}_{j+1},
{\mathcal E}_j'\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_j))).$$
with canonical extension on $ C\times {\mathbb P}^1\times \mathbb{P}_j $
\begin{equation}\label{canext} 0\to {\mathcal E}'_{j}\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_j)\to {\mathcal E}_{j+1}'\to
{\mathcal E}_{j+1}\to 0.\end{equation}
We will write ${\mathcal E}={\mathcal E}'_{l}$.
From the correspondence between vector bundles on $C\times {\mathbb P}^1$ and rational curves in $M$, $\mathbb{P}_{l-1}$ gives rise to a family of maps from
${\mathbb P}^1$ to $M$.
Our next goal is to compute the dimension of the family we constructed. Note that
$$\text{dim}({\mathcal U}_{L})=\sum_{i=1}^l( r_i^2(g-1)+1)-g=[r^2-1-2\sum_{i< j}r_ir_j](g-1)+(l-1)=\text{dim}\, M-2(g-1)\sum_{i< j}r_ir_j+l-1$$
$$ \text{dim}\, \mathbb{P}_{l-1} =\text{dim}({\mathcal U}_{L})+\text{dim}\,{\rm fib}({\mathbb P}_1\to {\mathcal U}_{L})+\text{dim}\,{\rm fib}({\mathbb P}_2\to {\mathbb P}_1)+\dots+\text{dim}\,{\rm fib}({\mathbb P}_{l-1}\to {\mathbb P}_{l-2}) $$
The fibers of the projection ${\mathbb P}_1\to {\mathcal U}_{L}$ are
$${\mathbb P}({\rm Ext}^1_{C\times {\mathbb P}^1}(E_2, E_1\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_1)))=
{\mathbb P}(H^1(C, E_2^*\otimes E_1)\otimes H^0({\mathbb P}^1, {\mathcal O}_{{\mathbb P}^1}(a_1))).$$
As $\mu(E_1)<\mu(E_2)$, $h^0(C, E_2^*\otimes E_1)=0$.
Hence, the dimension of the fibers of the projection ${\mathbb P}_1\to {\mathcal U}_{L}$ is
$$[r_1d_2-r_2d_1+r_1r_2(g-1)](a_1+1)-1$$
Similarly, the fibers of the projection ${\mathbb P}_2\to {\mathbb P}_1$ are
${\mathbb P}(Ext^1_{C\times {\mathbb P}^1}(E_3, {\mathcal E}'_2\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_2)))$.
In order to compute the dimension of these fibers, we need to use the tautological sequence defining ${\mathcal E}'_2$
tensored with the pull back of the dual of $E_3$ and $ {\mathcal O}_{{\mathbb P}^1}(a_2)$.
We omit pull back maps and write $\boxtimes$ instead:
\[ 0\to E_1\otimes E_3^*\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_1+a_2)\to
\mathcal E'_2\boxtimes E_3^*\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_2)
\to E_2\otimes E_3^*\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_2)\to 0\]
We obtain that the dimension of these fibers is
$$\text{dim}\, {\mathbb P}({\rm Ext}^1_{C\times {\mathbb P}^1}(E_3, {\mathcal E}'_2\boxtimes {\mathcal O}_{{\mathbb P}^1}(a_2)))=
h^1(C, E_3^*\otimes E_1)h^0({\mathbb P}^1, {\mathcal O}_{{\mathbb P}^1}(a_1+a_2))+
h^1(C, E_3^*\otimes E_2)h^0({\mathbb P}^1, {\mathcal O}_{{\mathbb P}^1}(a_2))-1=$$
$$=[r_1d_3-r_3d_1+r_1r_3(g-1)](a_1+a_2+1)+[r_2d_3-r_3d_2+r_2r_3(g-1)](a_2+1)-1$$
The dimension of the remaining fibers would be computed similarly.
Therefore, the dimension of the family is
$$\text{dim}\, M-2(g-1)\sum_{i< j}r_ir_j+l-1+[r_1d_2-r_2d_1+r_1r_2(g-1)](a_1+1)-1+$$
$$+[r_1d_3-r_3d_1+r_1r_3(g-1)](a_1+a_2+1)+[r_2d_3-r_3d_2+r_2r_3(g-1)](a_2+1)-1+$$
$$+\dots +[r_1d_l-r_ld_1+r_1r_l(g-1)](a_1+\dots +a_{l-1}+1)+\dots +[r_{l-1}d_l-r_ld_{l-1}+r_{l-1}r_l(g-1)](a_{l-1}+1)-1 =$$
$$=\text{dim} M+\sum_{i<j}(r_id_j-r_jd_i)(a_i+a_{i+1}+\dots +a_{j-1}+1)+[\sum_{i<j}r_ir_j(a_i+a_{i+1}+\dots +a_{j-1}-1)](g-1)$$
We can compute the degree using equation (\ref{degree}).
Multiplying by $h$ and using that
$$d=d_1+d_2+\dots +d_l, r=r_1+r_2+\dots +r_l $$
we obtain
$$hk=(r_1 d-d_1r)a_1+((r_1+r_2)d-(d_1+d_2)r)a_2+\dots + ((r_1+\dots +r_{l-1})d-(d_1+\dots+d_{l-1})r)a_{l-1}=$$
$$=(r_1 d_2-r_2d_1)a_1+(r_1 d_3-r_3d_1)(a_1+a_2)+\dots +(r_1 d_l-r_ld_1)(a_1+\dots +a_{l-1})+$$
$$+(r_2d_3-r_3d_2 )a_2+\dots +(r_2 d_l-r_ld_2)(a_2+\dots +a_{l-1}) \dots +(r_{l-1} d_l-r_ld_{l-1})a_{l-1}=$$
$$=\sum_{i<j}(r_id_j-r_jd_i)(a_i+a_{i+1}+\dots+a_{j-1})$$
From Lemma \ref{unobs}, the family we are constructing is obstructed if $l\ge 3$ or some $a_i\ge 2$.
\end{proof}
\begin{thm}
\label{comp2se}
If the family described in Lemma \ref{fam2se} is an (obstructed) component of the space of maps from ${\mathbb P}^1$ to $M$,
then a vector bundle in the image rational curve in $M$ is not generic.
\end{thm}
\begin{proof}
For a family as in Lemma \ref{fam2se} to be a component of the Hilbert scheme of maps of ${\mathbb P}^1$ to $M$,
its dimension needs to be at least as large as the expected dimension $\text{dim} M+2hk$.
This condition is
$$\text{dim} M+\sum_{i<j}(r_id_j-r_jd_i)(a_i+a_{i+1}+\dots +a_{j-1}+1)+[\sum_{i<j}r_ir_j(a_i+a_{i+1}+\dots +a_{j-1}-1)](g-1)\ge$$
$$\text{dim} M+2\sum_{i<j}(r_id_j-r_jd_i)(a_i+a_{i+1}+\dots+a_{j-1})$$
This can be rewritten as
$$\sum_{i<j}(r_id_j-r_jd_i-r_ir_j(g-1))(a_i+a_{i+1}+\dots +a_{j-1}-1)\le 0$$
Using that
$$a_i+a_{i+1}+\dots +a_{j-1}-1=(a_i-1)+(a_{i+1}-1)+\dots +(a_{j-1}-1)+(j-i-1)$$
and taking common factor the $a_i-1$, we obtain
$$(a_1-1)\sum_{i=2}^l (r_1d_i-r_id_1-r_ir_1(g-1))+(a_2-1)\sum_{i=3}^l [r_1d_i-r_id_1-r_ir_1(g-1)+r_2d_i-r_id_2-r_ir_2(g-1)]+\dots$$
$$\dots+(a_j-1)\sum_{i=j+1}^l\sum _{k=1}^j (r_kd_i-r_id_k-r_ir_k(g-1))+\dots +(a_{l-1}-1)\sum _{k=1}^{l-1}(r_kd_l-r_ld_k-r_lr_k(g-1))+$$
$$+\sum_{i<j-1}(j-i-1)(r_id_j-r_jd_i-r_ir_j(g-1))\le 0$$
Regrouping the terms, this gives rise to
\begin{equation}
\label{longineq}
(a_1-1)[r_1\sum_{i=2}^l d_i-\sum_{i=2}^lr_id_1-r_1\sum_{i=2}^lr_i(g-1)]
\end{equation}
\begin{equation*}
+ (a_2-1)[(r_1+r_2)\sum_{i=3}^ld_i-\sum_{i=3}^lr_i(d_1+d_2)-(r_1+r_2)\sum_{i=3}^lr_i(g-1)]+\dots
\end{equation*}
$$\dots+(a_j-1)[ (\sum _{i=1}^j r_i) (\sum_{k=j+1}^ld_k)- (\sum_{k=j+1}^lr_k)( \sum _{i=1}^j d_i)- (\sum_{k=j+1}^lr_k) (\sum _{i=1}^j r_i) (g-1)]+\dots $$
$$ \dots +(a_{l-1}-1)[(\sum _{k=1}^{l-1}r_k)d_l-r_l(\sum _{k=1}^{l-1}d_k)-r_l(\sum _{k=1}^{l-1}r_k)(g-1)]$$
$$+\sum_{i<j-1}(j-i-1)(r_id_j-r_jd_i-r_ir_j(g-1))\le 0$$
\bigskip
For every point $(x, y)\in {\mathbb P}^1\times {\mathbb P}_{l-1}$, we obtain a vector bundle on $C$ by considering the restriction
$E={\mathcal E}_{|C\times \{ x\}\times \{ y\}}$.
We want to show that under the above conditions, $E$ is not generic in $M$ for generic $(x, y)\in {\mathbb P}^1\times {\mathbb P}_{l-1}$.
From exact sequence (\ref{canext}) for $j=l-1$, $E$ is an extension of $E_l$ by $E'_{l-1}={\mathcal E}'_{l-1}|_{C\times \{ x\}\times \{ y\}}$.
The latter is a vector bundle of rank $r_1+\dots+r_{l-1}$ and degree $d_1+\dots +d_{l-1}$.
From Proposition \ref{Lange}, if $E$ is not special, we have
\begin{equation*}
\label{inl-1}
\sum _{k=1}^{l-1}r_kd_l-r_l(\sum _{k=1}^{l-1}d_k)-r_l(\sum _{k=1}^{l-1}r_k)(g-1)\ge 0
\end{equation*}
More generally, write $E'_{j}={\mathcal E}'_{j}|_{C\times \{ x\}\times \{ y\}}$.
Then $E'_{j}$ is a vector bundle of rank $r_1+\dots+r_{j}$ and degree $d_1+\dots +d_{j}$.
Assembling together the injective maps from (\ref{canext}) for $j, j+1, \dots l-1$,
we obtain an inclusion $E'_{j}\to E$ whose cokernel has rank $r_{j+1}+\dots+r_{l}$ and degree $d_{j+1}+\dots +d_{l}$.
From Proposition \ref{Lange}, if $E$ is not special, we have
\begin{equation}
\label{inj}
(\sum _{i=1}^j r_i) (\sum_{k=j+1}^ld_k)- (\sum_{k=j+1}^lr_k)( \sum _{i=1}^j d_i)- (\sum_{k=j+1}^lr_k) (\sum _{i=1}^j r_i) (g-1)\ge 0.
\end{equation}
\begin{claim}
Multiplying equation (\ref{inj}) by $r_1+\dots +r_{j-1}+r_{j+2}+\dots +r_l$ and adding for $j=1,\dots, l-1$, we obtain
\begin{equation}
\label{insum}
\sum_{i<j-1}(j-i-1)[r_id_j-r_jd_i-r_ir_j(g-1)]\ge \frac{g-1}{r}\sum_{1\le m<n<p\le l}r_mr_nr_p>0
\end{equation}
\end{claim}
Note then that inequalities (\ref{inj}) and (\ref{insum}) are incompatible with (\ref{longineq}). This will complete the proof of the Theorem.
\begin{proof} (of the claim) For $i<k$, write $A_{ik}=r_id_k-r_kd_i-r_ir_k(g-1)$.
Note that
\begin{equation}\label{IdentAs} \text{If } m<n<p, \ \ \ r_pA_{mn}+r_mA_{np}=r_nA_{mp}-r_mr_nr_p(g-1)\end{equation}
Multiplying equation (\ref{inj}) by $r_1+\dots +r_{j-1}+r_{j+2}+\dots +r_l$ and adding for $j=1,\dots, l-1$, we obtain
\[ \sum_{j=1}^{l-1} (r_1+\dots +r_{j-1}+r_{j+2}+\dots +r_l)[ \sum _{i=1}^j \sum_{k=j+1}^lA_{ik}]\ge 0. \]
This can be written as
\begin{equation}\label{suin}\sum_{1\le i<k\le l} [(k-i)(r_1+\dots +r_{i-1})+(k-i-1)r_i +(k-i-2)(r_{i+1}+ \cdots + r_{k-1})
\end{equation}
\begin{equation*}
+ \cdots +(k-i-1)r_{k}+(k-i)(r_{k+1}+\dots +r_{l})]A_{ik} \ge 0\end{equation*}
Recall identity (\ref{IdentAs}).
Therefore, for $t>k$, $r_tA_{ik}$ can be combined with $r_iA_{kt}$ to give rise to $r_kA_{it}-r_ir_kr_t(g-1)$.
When the process is carried out for all $i,k$ with $i<k$, once for each $t>k$, one of the terms $r_sA_{ik}$ is used up for each $s<i$
and new terms $r_vA_{ik}$ are gained for $i<v<k$.
Then, inequality (\ref{suin}) becomes
\[ \sum_{1\le i<k\le l} (k-i-1)rA_{ik}-(g-1)\sum_{1\le m<n<p\le l}r_mr_nr_p \ge 0. \]
as claimed.
\end{proof}
\end{proof}
\section{Rational curves through two generic points}\label{secratc2points}
A question of interest in the study of rational curves on Fano varieties is the minimum degree of a rational curve through two generic points.
In \cite{KMM}, Koll\'ar, Miyaoka and Mori showed that the degree is bounded by a quadratic expression in the dimension.
For $M$, this bound can be greatly improved and is in fact linear in the dimension.
\begin{prop}
\label{prop:rat-conn}
Given two generic points of $M$, there is a rational curve containing the two points of degree $(\frac{r^2}2-1)(g-1)$ if $r$ is even and degree $\frac{3r^2-3}2(g-1)$ if $r$ is odd.
\end{prop}
\begin{proof} Given $E_1, E_2\in M$ generic, we want to find $E', E''$ such that we have exact sequences
$$0\to E'\to E_1\to E''\to 0,\ 0\to E'\to E_2\to E''\to 0$$
If these extensions exist, then there is a line in the projective space of extensions containing the two given ones and its image is a rational curve in $M$ containing both $E_1, E_2$.
The dimension of the space of extensions for a fixed $E', E'' $ is $r'd''-r''d'+r'r''(g-1)$.
Moreover, the fibers of the map from the space of extensions to $M$ are well behaved, that is, of the smallest possible dimension (\cite{RT}).
Therefore, we need only $r'd''-r''d'+r'r''(g-1)\ge (r^2-1)(g-1)$.
Equivalently,
$$hk=r'd-rd'\ge (r^2-1-r'(r-r'))(g-1).$$
The smallest value of the right hand side is obtained for $r'$ as close as possible to $\frac r2$ which gives the statement in the proposition.
\end{proof}
\section{Introduction}
The renaissance of neural networks was, and still is, driven by convolutional neural networks \cite{krizhevsky2012imagenet}. These networks have produced significant breakthroughs in many practical problems which were not solvable before. These methods require a large and general dataset (representing the distribution of the real problem) to select kernels which provide the highest accuracy on the training and test sets. In many practical problems only a limited amount of data is available, or only a limited amount of data can be presented to the network in one training step because of memory constraints. Learning scenarios with limited data can be helped by data augmentation or by using special architectures (matching networks\cite{vinyals2016matching}, memory augmented neural networks\cite{santoro2016one}).
Even in case of a sufficiently large dataset, the bias-variance problem \cite{geman2008neural} is one of the most significant challenges of machine learning. We can never be sure that the data we use is general and matches exactly the real-world distribution of possible input images of the problem.
We want to achieve two goals simultaneously with every training process: we would like to achieve the highest possible accuracy on our training, test and evaluation sets, but in practice we also want to create a general method. Because of this, we want to select features which ensure high accuracy on the dataset, but also general features which are not ``too specific'' to our data. Our final aim is always the creation of a general model which works well in practice, which beyond a point is not the same as having the highest accuracy on our selected train and test sets.
In this paper we present a method which can help create a balance between the selection of features providing high accuracy and the choice of general features.
\subsection{Building blocks of convolutional neural networks}
The three main building blocks of Convolutional Neural Networks (CNN) are the following: Convolution, Non-linearity and Pooling.
We have to note that there are many more commonly applied elements and layers, like batch normalization \cite{ioffe2015batch} or dropout \cite{srivastava2014dropout}, which can be used to increase the generalization capabilities and robustness of a system, or other structures like residual networks \cite{he2016deep} or all convolutional networks \cite{springenberg2014striving}, but these three elements are applied in almost every commonly used network.
Common networks are built by creating a chain (in the more general case, an acyclic graph) of these three operations. Not only can the graph of these elements (the structure of the network) alter the complexity of the function which the network can approximate, but these elements may also vary with respect to the applications. This paper focuses on the pooling operation and provides a generalized pooling method which performs better in one-shot learning scenarios than maximum or average pooling.
The first neural networks applied average pooling, where a feature map was down-sampled and each region was substituted by the average value in the region:
\begin{equation}
P_{avg}(I_{i,j})=\frac{1}{N} \sum_{k,l \in R_{i,j}} I_{k,l}
\end{equation}
where $P_{avg}$ is the pooling operator, $I$ is the input feature map, $R$ is a two-dimensional region which is selected for pooling, and $N$ is the number of elements in the region.
The notation uses two-dimensional pooling and feature maps, because the operation is usually applied on images and two-dimensional inputs, but can also be used with one- or higher-dimensional data.
This notation focuses only on the pooled region and does not deal with the stride of the pooling operation, which can be used to set the overlapping area between pooling regions; we consider this a hyper-parameter of the network architecture and not an inherent part of the operator.
Average pooling considers the whole input region: all values are used to create the response of the pooling layer, which is beneficial for the selection of general features. On the other hand, average pooling is a linear operator; it can be considered a special case of a convolution kernel, and two subsequent linear convolutions can be replaced by one larger convolution kernel. Later it was substituted in almost every application by maximum pooling, where the maximum is selected from each region.
\begin{equation}
P_{max}(I_{i,j})= max(R_{i,j})
\end{equation}
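For concreteness, both operators can be written in a few lines of NumPy. The following sketch (our own illustration with hypothetical function and argument names; it is not the code used in the experiments) pools a single two-dimensional feature map:
\begin{verbatim}
import numpy as np

def pool2d(feature_map, size=2, stride=2, mode="max"):
    # Pool a single 2D feature map over size x size regions.
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            region = feature_map[i*stride:i*stride+size,
                                 j*stride:j*stride+size]
            out[i, j] = region.max() if mode == "max" else region.mean()
    return out
\end{verbatim}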
Maximum pooling performs well in practice, adds extra non-linearity to the network and is easy to compute (only an index has to be stored to propagate the error back).
Maximum pooling also has disadvantages. It results in a really sparse update in the network. Only the neuron producing the maximum element in the kernel will be responsible for the update of the variables during backpropagation, and the activations of the other neurons do not matter at all. There is a tendency in certain networks, like variational autoencoders \cite{kingma2013auto}, generative adversarial networks (GANs)\cite{goodfellow2014generative}, or networks used for one shot learning \cite{fei2006one}, to avoid pooling because it grasps only certain elements of the input data, which results in features which are not general enough.
Modified pooling methods have also appeared in segmentation problems, like ROI pooling in \cite{ren2015faster} or \cite{he2017mask}, which helps in the more accurate localization of regions of interest but does not help in the selection of features inside the proposed regions, where usually maximum pooling is used.
Another approach is introduced in \cite{murray2014generalized}, which uses patch similarity over batches to generate a combination of weighted and maximum pooling, but this method requires large batches and a large amount of data and cannot be applied in one-shot learning scenarios. \cite{lee2016generalizing} also proposes the application of heterogeneous pooling methods inside the network (e.g., average pooling in certain layers and max pooling in others), but the selection of pooling methods for each layer is a hyper-parameter of the network and is difficult to optimize in practice.
Here we propose a generalized pooling method which can help keep more activations and thereby detect more general features in every region.
\section{$k$th Maximum-Pooling and Feature Consistency}\label{SortedPooling}
The main concept which leads to the application of $k$th maximum-pooling is that convolutional networks are based on local information and exploit the local relations of the data.
Natural images are locally consistent. If a convolutional kernel gives a high response in a region, one can expect that the kernel will give similar results around this region. Small perturbations cannot change the response of the network abruptly.
That is the original and main concept of pooling \cite{lecun1995convolutional}: to balance small variances caused by translation. But maximum-pooling neglects all other activations and selects only the single element with the highest response. None of the other activations matter, and the network can easily be overtrained, because it uses a single example from the image in the pooled kernel. In case of a large amount of data these differences and noises are averaged out, because the noises are usually different on all images, and the selection of general features comes from this average: those features will be selected which can be found on most of the images. Unfortunately, in one-shot learning scenarios with limited data, these general features are difficult to find and have to be estimated from a smaller sample.
Natural images consist of alternating smaller and larger homogeneous regions. CNNs detect the changes between these regions, but even areas at alternations, edges/gradients between these regions, contain structure. The appearance of similar structures and periodicity is also a characteristic feature of natural images. This is a reason why natural images in most cases can be reproduced fairly well from low-frequency Fourier components and follow a log distribution in the Fourier domain \cite{hou2007saliency}. This is a difference between natural images and general noise. In most cases we want to classify and detect these structural elements on images.
In both smaller and larger regions, the responses of convolutional kernels are usually consistent on trained networks, and the appearance of certain features and patterns is repeated. Based on this, one can easily see that if a convolution kernel detects a general pattern, and is not responding to noise, its activation will probably appear multiple times in a region.
This leads us to the expectation that if one would like to detect a general feature in an area, it should appear more than once in a region. This assumption goes completely against maximum pooling, which selects only the largest response of a kernel in a region and completely neglects all the other responses.
It is also worthwhile to note that the appearance of locally consistent kernel responses can be observed in deeper layers as well. Although neurons in deeper layers have larger receptive fields and the size of the input data is decreased layer-by-layer during the training of a network, one can usually observe patches and regions of activations in deeper layers, instead of the activation of individual neurons. (Of course this can be changed by changing the size and stride of pooling kernels, but in case of common architectures the activation of regions can be observed even in deeper layers.) A simple demonstration of this fact can be obtained by visualizing the activations in deeper layers. An example image depicting feature consistency and activations can be seen in Fig \ref{FeatureConsistency}; online demonstrations of the appearance of these consistent local features can also be seen in the online demos of Andrej Karpathy\footnote{one can examine activations of deeper layers without the installation of deep learning frameworks for the MNIST: http://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html and CIFAR dataset: https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html datasets.}.
The presence of features depends not only on the size and stride of pooling kernels but also on the resolution and object scale on the input image. In case of practical problems and generally used architectures like Alexnet \cite{krizhevsky2012imagenet}, VGG\cite{simonyan2014very} or Resnet50\cite{he2016deep}, even deeper layers contain consistent features, since the input images usually contain objects at various scales and our aim is to detect the objects both at small and larger scales.
\begin{figure}[h!tb]
\centering{
\subfigure{\includegraphics[width=3.0in]{Figs/feature_consistency.png}}\\
\subfigure{\includegraphics[width=3.0in]{Figs/4th_layer.png}}\\
\caption{ Natural features are consistent. We do not want to select a feature which fits a position perfectly and results a high match. Such features could be too specific for our input image and result overfitting. In the first row an image taken from the MNIST dataset can be seen. We want to select a feature for the detection of diagonal gradient in the depicted region, because it is consistent on the image. Other features, like the small individual dot (appearing in the lower right corner of the image) or a corner-point (appearing in the upper left corner of the image) might give high response but are not consistent in the region. It could easily happened that these patterns are first detected by randomly initialized kernels and would be optimized in maximum-pooling, but these errors could be detected if we would require that a kernels has to give more than one high responses in a region. In the second row four activation maps can bee seen in the third layer of a CNN from an MNIST data. As it can be seen regions are activated in this deeper layer as well, instead of the activation of individual neurons.)
}\label{FeatureConsistency}
}
\end{figure}
Instead of selecting local features we introduce $k$th maximum-pooling which selects the $k$th largest response in every region.
$k$th maximum-pooling can significantly increase the generalization power of a network in case of limited amount of data and can be applied in one-shot learning scenarios.
$k$th maximum-pooling can be defined as:
\begin{equation}
P_{sort}(I_{i,j},k)= sort(R_{i,j})[k]
\end{equation}
where $sort$ is a simple sorting algorithm (sorting the elements in descending order) and $k$ is a parameter of the pooling method selecting the $k$th largest element in the region. If $k$ equals one\footnote{assuming indexing starts from one} the method reduces to the original maximum pooling algorithm.
In this way, one can ensure that a feature is selected only if it has multiple large responses in a region. Features appearing at a single location, which might cause overfitting on the data, will be omitted.
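As a minimal sketch (again our own illustrative NumPy code, in the same single-feature-map setting as the example above), the operator only replaces the maximum of each region by the $k$th largest value:
\begin{verbatim}
def kth_max_pool2d(feature_map, k=2, size=3, stride=2):
    # k-th maximum pooling: the k-th largest value of each region.
    # k=1 recovers ordinary maximum pooling.
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            region = feature_map[i*stride:i*stride+size,
                                 j*stride:j*stride+size]
            # Sort all values of the region; index -k is the k-th largest.
            out[i, j] = np.sort(region, axis=None)[-k]
    return out
\end{verbatim}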
\section{Results of $k$th Maximum-Pooling}\label{SimResults}
To present that the algorithm can help in the training of convolutional networks, we executed simulations on a simple and commonly cited dataset, MNIST \cite{lecun1998mnist}, with a three-layered convolutional network (containing 8, 32 and 64 $3\times3$ convolution kernels).
We also have to note that there are many other methods to increase the generalization power of a network, like batch normalization\cite{ioffe2015batch}, dropout\cite{srivastava2014dropout} and SELUs\cite{klambauer2017self}; all the results presented here were obtained with networks where all three of these elements were present.
We have created a simple network containing three convolutional layers (with 8, 32 and 64 features in the layers) followed by a fully connected layer. Each layer contained convolutions with $3\times3$ kernels and a stride of one, and pooling layers with $3\times3$ kernels and a stride of two. We have used maximum pooling ($k=1$) and $k$th maximum-pooling with $k$ equal to $2$, $3$ and $4$.
The results of the network, the error on the independent test set, can be seen in Fig. \ref{MNISTResults}. These results show the error on the independent test set on the MNIST dataset averaged over $50$ test runs.
A summary of the numerical values can be found in Table \ref{tab:results}.
No difference was seen in the train accuracy.
As it can be seen, the network converges much faster compared to maximum pooling, which, since train accuracies were similar, means better generalization, as not the maximum activation but the second, third or fourth highest activation is selected. It is also worthwhile to note that if $k$ is fairly large, the convergence of the network becomes fast, but the final error on the test set (and also on the training set) remains higher. For $k$ values higher than four the performance of the network decreased drastically (below 40\% test accuracy, and because of this the results are not displayed). After a given point the network can learn fairly general features but cannot reach high accuracy, because the features are too general and not specific enough.
\begin{figure}[h!tb]
\centering{
\subfigure{\includegraphics[width=3.5in]{Figs/mnist_error.png}}\\
\caption{The plot shows the error rate on the independent test set on the MNIST dataset. Test errors are displayed by solid lines, and train errors by dashed lines. As it can be seen, with the increase of the $k$ parameter the network converges faster.}\label{MNISTResults}
}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|l||c|c|c|c|}
\hline
& k=1& k=2 & k=3 &k=4 \\
\hline
\makecell{MNIST \\ 1 epoch} & 30.2\% & 26.3\% & 24.6\% & \textbf{22.8\%} \\
\hline
\makecell{MNIST \\ 3 epochs} & 12.4\% & 10.5\% & 9.5\% & \textbf{9.3\%} \\
\hline
\makecell{MNIST \\ 10 epochs} & \textbf{6.1}\% & 6.3\% & 6.3\% & 7.0\% \\
\hline
\end{tabular}
\caption{Results achieved on the MNIST dataset. As it can be seen, the application of $k$th maximum-pooling can increase the convergence speed of the network but results in slightly lower accuracy in case of a large dataset. While only a smaller amount of input data had been shown to the network (1 and 3 epochs), $k$th maximum-pooling performed better.}
\label{tab:results}
\end{table}
\section{Sorted Pooling}
As it could be seen, a balance between the selection of general and specific kernels is important and can result in faster convergence and higher accuracy. To ensure that the network has the ability to create this balance, we introduce sorted pooling. Sorted pooling can be seen as a generalization of the $k$th maximum-pooling introduced in Section \ref{SortedPooling}, where we select not only the $k$th largest element, but the top $K$ elements, and create a weighted summation of them:
\begin{equation}\label{weightedpooling}
P_{weight}(I_{i,j})= \sum_{k=1}^{K} W_k P_{sort}(R_{i,j},k)
\end{equation}
where $W_k$ are the weights of the activations, which can be learned by the network. We have to ensure that $W_k\geq0$ for every $k$ and $\sum W_k = 1$. This can easily be ensured by applying a softmax function on the weights:
\begin{equation}
W_k=\frac{ e^{w^*_k} }{ \sum_{l=1}^{K} e^{w^*_l} }
\end{equation}
where $W_k$ are the normalized weights used in Eq. \ref{weightedpooling} and $w^*_k$ are the original weights learned by the network through gradient descent methods. The initialization of these variables can be done by using the same value for each $W_k$, or by an exponentially decaying distribution using smaller and smaller weights for more and more general features (larger $k$ parameters).
The parameters $w^*_k$ are initialized and learned independently for every pooling operation in each layer and for every feature (the depth of the input data), but the same parameters are applied along the width and height of the input images.
Sorted pooling can also be seen as a general mathematical operation connecting convolution and pooling. In case of $K=1$ the method yields maximum pooling, while if $K$ is the same as the number of elements in the pooling region, the result is similar to a convolution, where the activation is a weighted summation of all the input elements, but the input elements are in decreasing order according to their responses instead of representing local information. If $K$ equals the size of the pooling region and the weights are one over the size of the pooling region, we get average pooling back. From this one can also see how the parameter $K$ creates the balance between a sparse, specific update and a general, regional update of the weights.
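To illustrate how the operator can be realized with learnable weights, the following PyTorch sketch (our own example under the assumption of a standard PyTorch setting; the module name \texttt{SortedPool2d} and its hyper-parameters are chosen for this illustration and do not describe the exact training code) pools each channel with its own softmax-normalized weight vector over the top $K$ responses:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SortedPool2d(nn.Module):
    # Sorted pooling: weighted sum of the top-K responses per window.
    # K=1 reduces to max pooling; K equal to the window size with
    # uniform weights recovers average pooling.
    def __init__(self, channels, K=4, kernel_size=3, stride=2):
        super().__init__()
        self.K, self.kernel_size, self.stride = K, kernel_size, stride
        # One weight vector per channel, shared spatially; zeros give a
        # uniform softmax initialization, as described in the text.
        self.w = nn.Parameter(torch.zeros(channels, K))

    def forward(self, x):                        # x: (N, C, H, W)
        n, c, h, w = x.shape
        # Extract kernel_size x kernel_size patches for every channel.
        patches = F.unfold(x, self.kernel_size, stride=self.stride)
        l = patches.shape[-1]                    # number of windows
        patches = patches.view(n, c, self.kernel_size ** 2, l)
        topk = patches.topk(self.K, dim=2).values  # descending order
        weights = F.softmax(self.w, dim=1)       # W_k >= 0, sum = 1
        out = (weights.view(1, c, self.K, 1) * topk).sum(dim=2)
        out_h = (h - self.kernel_size) // self.stride + 1
        out_w = (w - self.kernel_size) // self.stride + 1
        return out.view(n, c, out_h, out_w)
\end{verbatim}
With $K=1$ the module reduces to maximum pooling, matching the limiting cases discussed above.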
\section{Results with Sorted Pooling}\label{CombinedPooling}
We have trained a modified version of the VGG-16\cite{simonyan2014very} architecture, in which the number of kernels and convolution sizes remained the same but pooling operations were changed to $3\times3$ kernels with a stride of 2, on the CIFAR-10\cite{krizhevsky2014cifar} dataset.
The independent test errors for the MNIST dataset with the architecture described in Section \ref{SimResults} and for the CIFAR-10 dataset with the VGG-16 architecture are plotted and compared for the maximum and sorted pooling methods in Figure \ref{CIFArResults}.
To be fair, we have to note that we had to resize the input images to be compatible with the architecture, but this is a common step in networks used for classification, where small objects in the background of the scene have to be resized to ensure compatibility with the network structure.
\begin{table}[h!]
\centering
\begin{tabular}{|l||c|c|}
\hline
& Max pooling & Sorted pooling (K=4) \\
\hline
\makecell{MNIST \\ 1 epoch} & 30.2\% & \textbf{22.5\%} \\
\hline
\makecell{MNIST \\ 3 epochs} & 12.4\% & \textbf{8.1\%} \\
\hline
\makecell{MNIST \\ 10 epochs} & 8.1\% & \textbf{5.8\%} \\
\hline \hline
\makecell{CIFAR \\ 1 epoch} & 67.3\% & \textbf{49.2\%} \\
\hline
\makecell{CIFAR \\ 3 epochs} & 28.4\% & \textbf{20.8\%} \\
\hline
\makecell{CIFAR \\ 10 epochs} & 18.5\% & \textbf{14.8\%} \\
\hline
\end{tabular}
\caption{Results achieved on different datasets and different architectures. As it can be seen, the sorted pooling algorithm performs slightly better in all cases and decreases training time with all architectures. }
\label{tab:results2}
\end{table}
As it can be seen from the results, the network can indeed learn the weighting of the top $K$ elements in a region. One could expect that the weights would converge to having a value of $1$ at $k=1$ and zero at other indices, meaning that the algorithm converges to max pooling. But this is not the case: other indices also had non-zero weights, although the values were decreasing with the increase of $k$. The distribution of average weights for different $k$ values on the CIFAR-10 dataset can be seen in Fig. \ref{wdist}.
We also have to note that this method does not solve the problem of overfitting completely, and test accuracy can decrease after a number of steps (along with increasing train accuracy), but sorted pooling definitely helps and the results are better than in the case of maximum pooling.
\begin{figure}[h!tb]
\centering{
\subfigure{\includegraphics[width=3.5in]{Figs/wdist.png}}\\
\caption{The evolution of the averages of the weights for different $k$ values on the CIFAR-10 dataset is depicted in this figure. The weights are set to the same value at the beginning, and the network learns during training how to weight the largest and the smaller responses in each kernel. These values are averages over the whole architecture (the weights of each kernel in each layer were averaged), which hides possible differences between certain kernels, but clearly shows that the network uses information from the second and third largest responses as well.
}\label{wdist}
}
\end{figure}
\begin{figure}[h!tb]
\centering{
\subfigure{\includegraphics[width=3.5in]{Figs/cifar_error.png}}\\
\caption{The comparison of maximum and sorted pooling on the MNIST and CIFAR datasets. As it can be seen from the results, sorted pooling results in lower error on both datasets.
}\label{CIFArResults}
}
\end{figure}
\subsection{Results in one-shot learning scenarios}
Feature generalization might be especially important in cases where a large amount of data is not available.
The extreme case of limited data is the one-shot learning scenario, where only one or a few instances of data are available, but there are also other practical problems where data collection is cumbersome or expensive. Because of this we selected an architecture which performs well in one-shot learning scenarios, matching networks\cite{vinyals2016matching}, and investigated how sorted pooling affects the accuracy of such architectures. The test accuracy on the Omniglot dataset \cite{lake2015human}, calculated as one-shot, five-way accuracy, can be seen in Fig. \ref{OneShotResults} and in Table \ref{tab:results}. The architecture, training and test setups were the same as in \cite{vinyals2016matching}, apart from the extension of the pooling kernels to $3\times3$ regions.
\begin{figure}[h!tb]
\centering{
\subfigure{\includegraphics[width=3.5in]{Figs/oneshot_error.png}}\\
\caption{The independent test error on the Omniglot dataset for one-shot learning scenarios with matching-networks. The parameters of the simulation were the same as introduced in \cite{vinyals2016matching}.
}\label{OneShotResults}
}
\end{figure}
As can be seen, increasing the $K$ parameter helped the network to find general features, which resulted in an overall increase of accuracy. Compared to the MNIST and CIFAR scenarios, where a large amount of data could balance out the overfitted kernels, in the case of one-shot learning sorted pooling resulted in a larger increase in overall accuracy. We also note that the test accuracy is higher than that of the original approach \cite{vinyals2016matching}, which achieved $98.1\%$ accuracy, whereas in our case the accuracy was $98.9\%$.
\section{Conclusion}
In this paper we have introduced $k$th maximum-pooling and sorted pooling, which can be considered extensions of the classical maximum pooling method and can also be defined as a connecting link between maximum pooling, average pooling and convolution.
As shown above, this method can help the network to find general features, which is beneficial during training and increases the generalization power of the architecture. Sorted pooling resulted in higher accuracy than maximum pooling on all datasets and with all architectures.
$k$th maximum-pooling and sorted pooling can be especially important in scenarios where only a limited amount of data is available. In these tasks both approaches perform significantly better than traditionally applied pooling methods; in one-shot learning scenarios sorted pooling could further increase the accuracy of state-of-the-art approaches and architectures, such as matching networks.
\bibliographystyle{IEEEtran}
\section{Background and Motivation}
Entanglement concentration (EC) \cite{entconc1} is the process of obtaining maximally entangled pure states given some initial number of copies, $N$, of partially entangled pure states using local quantum operations and classical communications (LQCC) \cite{entmonotones1}. Concentrated entanglement is an important resource for applications \cite{santratelescope,wilde_2017, nielsen_chuang_2010} and EC protocols are of fundamental interest in quantum information theory \cite{wilde_2017, nielsen_chuang_2010}. Various LQCC EC protocols, which work for different numbers of initial states and with varying efficiencies, are known \cite{entconc1,lo-popescu,bose1}. Although LQCC is a natural operational paradigm, where observers Alice and Bob each possess and operate only on part of a quantum system while coordinating their actions through classical communications, more efficient EC protocols can be obtained using entanglement-assisted local quantum operations and classical communications (ELQCC) \cite{catalysis1, catalysis2}. In this process, an ancillary entangled pure state, called the catalyst state, shared by Alice and Bob is utilized as part of an overall LQCC process to enhance its efficiency and the catalyst state is recovered intact at the end. Here, we analytically obtain the maximum probability of success for an EC protocol transforming $N$-copies of a two-qubit pure state to a single copy of a maximally-entangled two-qubit pure state, or Bell state, when provided with entanglement assistance in the form of a two-qubit pure state catalyst.
In the case of a large number of copies, $N\to \infty$, of a two-qubit pure state, $\ket{\alpha}=\sqrt{\alpha}\ket{00}+\sqrt{1-\alpha}\ket{11}$, a fundamental result \cite{entconc1} is that the number, $M$, of Bell states $\ket{\phi}=(\ket{00}+\ket{11})/\sqrt{2}$ obtainable using LQCC achieves the value $M=S_{VN}(\alpha)N$, where $S_{VN}(\alpha)=-\alpha\log_2(\alpha)-(1-\alpha)\log_2(1-\alpha)$ is the von Neumann entropy of the reduced initial state. The result is interpreted to mean that a fraction, $f=M/N$, of the initial states are deterministically transformed to Bell states. A single Bell state, in the limit of an asymptotic number of copies of $\ket{\alpha}$ with $\alpha\neq1$, can always be obtained with certainty. In the other limit, for $N=1$, a Bell state can be obtained only probabilistically via LQCC, with the maximum probability being $P=2(1-\alpha)<1$ \cite{lo-popescu,bose1}, since without loss of generality $\alpha\geq0.5$. However, ELQCC does not increase the success probability of a transformation from a single copy of a two-qubit state to a Bell state. Therefore, in both these limits, i.e. $N=1$ and $N\to\infty$, entanglement assistance does not help: it cannot increase the number of Bell states, $M$, obtained asymptotically, nor can the success probability, $P$, be increased for a single copy of $\ket{\alpha}$. However, in the intermediate regime of $N$, ELQCC can increase the expectation value of entanglement obtained in the form of maximally entangled states (of any dimension) in an EC procedure \cite{catalysis1}.
We show that for finite $N\geq 2$ entanglement assistance increases the success probability of the transformation $\ket{\alpha}^{\otimes N}\to\ket{\phi}$. We analytically find that while all pure and entangled two-qubit states can act as catalysts for this transformation, i.e. increase its success probability, the optimal catalyst must be more entangled than the initial state $\ket{\alpha}$. Remarkably, we find that the entanglement of the optimal catalyst decreases with that of the initial state. Further, we find that the use of an ELQCC procedure for EC is most beneficial for a smaller number of copies, $N$, of the initial state. To close, we comment on ELQCC strategies to obtain multiple copies of Bell states. Obtaining catalysts for entanglement transformations is in general a difficult problem analytically, while numerical searches do not provide much insight into the general properties of catalyst states.
Entanglement assistance via the presence of a catalyst state, $\ket{C}$, can enable an otherwise impossible LQCC entanglement transformation \cite{catalysis1}, i.e.,
\begin{align}
\ket{\psi}&\underset{LQCC}{\not\to}\ket{\phi}\nonumber\\
\ket{\psi}\ket{C}&\underset{LQCC}{\to} \ket{\phi}\ket{C}.
\end{align}
This result is based on Nielsen's theorem \cite{Nielsen1}, which provides a criterion for allowed LQCC transformations from one pure quantum state to another. The criterion states that the transformation from an initial state $\ket{I}$ to a final state $\ket{F}$ is possible with certainty, i.e. $P(I\to F)=1$, iff the sets of the squares of the non-increasingly ordered Schmidt coefficients (OSC), $\bar{\lambda}^I=(\lambda^I_1\geq\lambda^I_2\geq...\geq\lambda^I_d)$ and $\bar{\lambda}^F=(\lambda^F_1\geq\lambda^F_2\geq...\geq\lambda^F_d)$, with respect to the bipartition that defines the local quantum systems, obey the majorization relation,
\begin{align}
\bar{\lambda}^I\preceq \bar{\lambda}^F,
\label{majrel}
\end{align}
which is shorthand to denote that $\sum_{j=1}^k \lambda^I_j\leq \sum_{j=1}^k \lambda^F_j~\forall 1\leq k \leq d$. In case of incommensurate states, i.e. where the OSCs of the initial and final states do not obey Eq.~(\ref{majrel}), Vidal \cite{vidal1} showed that the transformation from $\ket{I}\to\ket{F}$ is possible only probabilistically with the maximum probability given by,
\begin{align}
P(I\to F)=\underset{1\leq l \leq d}{\text{min}}\frac{E_l(\ket{I})}{E_l(\ket{F})},
\label{LQCCprob}
\end{align}
where $E_l(\ket{I}):=1-\sum_{j=0}^{l-1}\lambda^I_j$ and $\lambda_0=0$. For a pair of incommensurate states, Ref. \cite{catalysis1} further showed that entanglement catalysis can increase the efficiency, $P_C(I\to F)>P(I\to F)$, of probabilistic transformations. It is this approach we take to obtain catalysts that can maximize the LQCC entanglement concentration success probability of a finite number of two-qubit pure states.
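Both Nielsen's criterion, Eq.~(\ref{majrel}), and Vidal's probability, Eq.~(\ref{LQCCprob}), are straightforward to evaluate numerically. The following Python sketch is our own illustration (the zero-padding used to equalize the lengths of the Schmidt vectors is a convention we adopt here) and is not part of the original analysis.
\begin{verbatim}
import numpy as np
from itertools import product

def _padded(lam_I, lam_F):
    # sort non-increasingly and zero-pad to a common length
    a, b = np.sort(lam_I)[::-1], np.sort(lam_F)[::-1]
    d = max(len(a), len(b))
    return np.pad(a, (0, d - len(a))), np.pad(b, (0, d - len(b)))

def majorizes(lam_I, lam_F):
    # Nielsen's criterion: deterministic iff lam_I is majorized by lam_F
    a, b = _padded(lam_I, lam_F)
    return bool(np.all(np.cumsum(a) <= np.cumsum(b) + 1e-12))

def p_max(lam_I, lam_F):
    # Vidal's optimal conversion probability: min over l of
    # E_l(I)/E_l(F), where E_l = 1 minus the sum of the (l-1)
    # largest squared Schmidt coefficients
    a, b = _padded(lam_I, lam_F)
    E_I = 1.0 - np.concatenate(([0.0], np.cumsum(a)[:-1]))
    E_F = 1.0 - np.concatenate(([0.0], np.cumsum(b)[:-1]))
    return min(ei / ef for ei, ef in zip(E_I, E_F) if ef > 1e-12)

def osc_copies(alpha, N):
    # squared Schmidt coefficients of N copies of |alpha>
    return [np.prod([alpha if b == 0 else 1 - alpha for b in bits])
            for bits in product(range(2), repeat=N)]

bell = [0.5, 0.5]
print(majorizes(osc_copies(0.9, 2), bell))  # False: incommensurate
print(p_max(osc_copies(0.9, 2), bell))      # min(1, 2(1-0.81)) = 0.38
\end{verbatim}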
For the problem of entanglement concentration of multiple copies of 2-qubit pure states, $\ket{\alpha}=\sqrt{\alpha}\ket{00}+\sqrt{1-\alpha}\ket{11}$, we have the initial and final states of the form,
\begin{align}
\ket{\psi}&=\ket{\alpha}^{\otimes N}\nonumber\\
\ket{\phi}&=(\ket{00}+\ket{11})/\sqrt{2},
\label{stateform}
\end{align}
which will be provided entanglement assistance via the catalyst state $\ket{C}$. We will first consider a fixed number of copies, $N$, in the above. Nielsen's theorem applied to the state pair of the form in Eq. (\ref{stateform}) implies the following,\\
\noindent \textbf{Proposition}: If the states $\ket{\psi}$ and $\ket{\phi}$ are incommensurate, no catalyst can make the transformation deterministic.
\begin{proof} First, note that incompatibility arises iff $\lambda^I_1>\lambda^F_1$ since $\lambda^F_1+\lambda^F_2=1$ and $\sum _{j=1}^{k} \lambda^I_j\leq 1 \forall 1\leq k\leq d$. Thus, the OSCs of the product states $\ket{\psi}\ket{C}$ and $\ket{\phi}\ket{C}$ remain incompatible since their largest Schmidt coefficients follow, $\lambda^I_1 c_1 > \lambda^F_1 c_1$, whereas $\sum_{j=1}^k \lambda^I_j\leq \sum_{j=1}^k \lambda^F_j~\forall ~2\leq k \leq d$. Here, $c_1$ is the square of the largest Schmidt coefficient of $\ket{C}$.
\end{proof}
For a fixed $N\geq1$, we focus on transformations $\ket{\psi}\to\ket{\phi}$ that are not possible with certainty using LQCC. The OSCs of the two states form probability vectors of length $2^N$ and are given by,
\begin{align}
\bar{\lambda}^\psi&=\{\alpha^N\geq\alpha^{N-1}(1-\alpha)\geq\alpha^{N-2}(1-\alpha)^{2}\geq...\geq(1-\alpha)^N\}\nonumber\\
\bar{\lambda}^\phi&=\{0.5\geq0.5\geq0\geq...\geq0\}
\end{align}
where the Schmidt coefficients $\alpha^{N-p}(1-\alpha)^p$ of $\ket{\psi}$ have multiplicities of ${N \choose p}$ and $0.5\leq\alpha\leq1$.
The optimal success probability for such a transformation as given by Eq.~(\ref{LQCCprob}) is,
\begin{align}
P(\psi\to\phi)=\text{min}[1,2(1-\alpha^N)].
\end{align}
For LQCC transformations that are probabilistic the minimum in the R.H.S. above is less than unity. Therefore, we have that $2(1-\alpha^N)<1\implies \alpha>(1/2)^{1/N}$.
For such states we would like to find a catalyst, $\ket{C}=\sqrt{c}\ket{00}+\sqrt{1-c}\ket{11}$, i.e. a pure state on a qubit pair that provides the largest boost to the success probability, $P_C(I\to F)$, of the transformation,
\begin{align}
\ket{I}=\ket{\psi}\ket{C}\underset{LQCC}{\to}\ket{F}=\ket{\phi}\ket{C}.
\end{align}
To obtain $P_C(I\to F)$, first we need to evaluate the terms in the R.H.S of Eq.~(\ref{LQCCprob}). This requires the OSCs of the initial and final states. The OSCs of the final state $\ket{F}$ are,
\begin{align}
\bar{\lambda}^F=\{0.5c,0.5c,0.5(1-c),0.5(1-c),0,...,0\},
\end{align}
with $0.5\leq c\leq 1$ where the zeros following the non-zero entries make the length of $\bar{\lambda}^F$ match the dimension of the initial state $\text{dim}(\ket{I})=2^N\times 2$. Now, we note that the minimization problem in Eq.~(\ref{LQCCprob}) is restricted to the first four values of $l$ since, $E_l(\ket{F})=0\forall l\geq 5$, and thus the ratios, $r_l(\alpha,c):=E_l(\ket{I})/E_l(\ket{F})=\infty$, for $l\geq 5$ do not contribute to the complexity of the minimization in our case. Therefore, only the first four monotones, $E_l$, of the initial and final states are required. These can be obtained if the first 3 entries of the OSCs of the initial and final states are known. For the final state (in the entire domain $c\in(0.5,1)$) we have that,
\begin{align}
E_1(\ket{F})&=1,\nonumber\\
E_2(\ket{F})&=1-c/2,\nonumber\\
E_3(\ket{F})&=1-c,\nonumber\\
E_4(\ket{F})&=(1-c)/2.
\label{Ef}
\end{align}
For the initial state $\ket{I}$, the OSCs can have the following two orderings (of relevance are the first three entries of each) based on the value of $c$ relative to $\alpha$,
\begin{small}
\begin{align}
\bar{\lambda}^{I_1}&=\{c\alpha^N>(1-c)\alpha^{N}> c\alpha^{N-1}(1-\alpha)>...>(1- c)(1-\alpha)^N\}
\end{align}
\end{small}
which holds for $0.5<c\leq \alpha$ whereas for $\alpha<c\leq 1$,
\begin{small}
\begin{align}
\bar{\lambda}^{I_2}&=\{c\alpha^N> c\alpha^{N-1}(1-\alpha)> (1-c)\alpha^{N}>...> (1-c)(1-\alpha)^N\}
\end{align}
\end{small}
where the first three entries of $\bar{\lambda}^{I_1}$ have multiplicities $1,1,N$, while the multiplicities for the ordered entries of $\bar{\lambda}^{I_2}$ are $1,N,1$ respectively. Thus, for the two parts of the domain for $c$ the monotones $E_l(\ket{I})$ of the initial state evaluate to,
\begin{align}
E_1(\ket{I})&=1,~c\in(0.5,1)\nonumber\\
E_2(\ket{I})&=1-c\alpha^N,~c\in(0.5,1)\nonumber\\
E_3(\ket{I})&=\begin{cases}1-\alpha^N,~~0.5<c\leq\alpha\\1-c\alpha^{N-1},~\alpha<c<1\end{cases}\nonumber\\
E_4(\ket{I})&=\begin{cases}1-\alpha^N-c\alpha^{N-1}(1-\alpha),~0.5<c\leq\alpha\\1-c\alpha^{N-1}-c\alpha^{N-1}(1-\alpha),~\alpha<c<1\end{cases}
\label{Ei}
\end{align}
From Eqs.~(\ref{Ef}) and (\ref{Ei}) we have the four ratios of the entanglement monotones as functions of $\alpha, c$ and $N$,
\begin{align}
r_1(\alpha,c,N)&=1,~c\in(0.5,1)\nonumber\\
r_2(\alpha,c,N)&=\frac{1-c\alpha^N}{1-c/2},~c\in(0.5,1)\nonumber\\
r_3(\alpha,c,N)&=\begin{cases}\frac{1-\alpha^N}{1-c},~~0.5<c\leq\alpha\\\frac{1-c\alpha^{N-1}}{1-c},~\alpha<c<1\end{cases}\nonumber\\
r_4(\alpha,c,N)&=\begin{cases}\frac{2(1-\alpha^N-c\alpha^{N-1}(1-\alpha))}{1-c},~0.5<c\leq\alpha\\\frac{2(1-c\alpha^{N-1}-c\alpha^{N-1}(1-\alpha))}{1-c},~\alpha<c<1\end{cases}
\label{R}
\end{align}
\emph{Evaluation of the minimum among the ratios of entanglement monotones:}
First, note that for $N=1$ the minimum of the ratios in the above set of equations is given by $r_4(\alpha,c,1)=2(1-\alpha)$, which equals the LQCC probability without a catalyst, for all values of $0.5<c<1$. Thus, a catalyst cannot help increase the success probability of an LQCC transformation of a single copy of $\ket{\alpha}$ to $\ket{\phi}$. This is consistent with the fact that catalysis is impossible when the initial and final states are both two-qubit states \cite{catalysis1}.
For $N\geq 2$, the minimum of the ratios $r_l(\alpha,c,N)$ for $l=2,3,4$ determines the probability of a successful catalyzed conversion $\ket{\alpha}^{\otimes N}\to \ket{\phi}$ (since $r_1(\alpha,c,N)=1$). For this we use the derivatives and continuity properties of $r_2,r_3,r_4$ to determine the minimum among the three. It turns out that for all values of $\alpha>(1/2)^{1/N}$, the function $r_2(\alpha,c,N)$ decreases with $c$, with its maximum value $r_2^{\text{max}}=(4/3)(1-\alpha^N/2)$ as $c$ approaches $0.5$. On the other hand, the function $r_3(\alpha,c,N)$ increases with $c$ in both parts of its domain. It is continuous across the domain boundary $c=\alpha$ and has a minimum value of $r_3^{\text{min}}=2(1-\alpha^N)$ as $c$ approaches $0.5$. The minimum value of $r_2(\alpha,c,N)$ is given by $r_2^{\text{min}}=2(1-\alpha^N)$ as $c$ approaches $1$, whereas the value of $r_3(\alpha,c,N)$ diverges as $c\to1$. Therefore, for fixed $\alpha,N$ the curves for $r_2(\alpha,c,N)$ and $r_3(\alpha,c,N)$ as functions of $c$ intersect in the domain $c\in(0.5,1)$. Further, note that $r_2^{\text{max}}\geq r_3^{\text{min}}$ for $\alpha\geq(1/2)^{1/N}$. Finally, the minimum of the ratios is never given by the value of the function $r_4(\alpha,c,N)$ in any part of the domain $c\in(0.5,1)$, as shown in the following.
For $c\leq\alpha$, one can show that $r_4(\alpha,c,N)\geq r_3(\alpha,c,N)$ for all $N\geq2$, so that $r_4(\alpha,c,N)$ is not the least of the ratios as follows,
\begin{align}
r_4(\alpha,c,N)&=\frac{2(1-\alpha^N-c\alpha^{N-1}(1-\alpha))}{1-c}\nonumber\\
&=\frac{1-\alpha^N}{1-c}+\frac{1-\alpha^N-2c\alpha^{N-1}(1-\alpha)}{1-c}\nonumber\\
&=r_3(\alpha,c,N)+\frac{p(\alpha,c,N)}{1-c}
\end{align}
Now we note that the function $p(\alpha,c,N)=1-\alpha^N-2c\alpha^{N-1}(1-\alpha)$ is a decreasing function of $c$ since $\alpha,(1-\alpha)\geq0$. So w.r.t. $c$ the function takes its minimum value at $c=\alpha$, given by $p_{\text{min},c}(\alpha)=1+\alpha^N(2\alpha-3)$. This minimum value decreases with $\alpha$ since the sign of the derivative $dp_{\text{min},c}(\alpha)/d\alpha<0$ for $\alpha<(3/2)\frac{N}{N+1}$, which always holds for $N\geq 2$. The minimum value with respect to both arguments is at $c=\alpha$ and $\alpha=1$ and is given by $p_{\text{min},c,\alpha}=0$.
For $\alpha<c<1$, one can show that $r_4(\alpha,c,N)\geq r_3(\alpha,c,N)$ for $N\geq 3$ whereas $r_4(\alpha,c,N)\geq r_2(\alpha,c,N)$ for $N=2$, so that also in this region $r_4(\alpha,c,N)$ is not the least of the ratios as follows. From Eq. (\ref{R}) we have,
\begin{align}
r_4(\alpha,c,N)&=\frac{1-c\alpha^{N-1}}{1-c}+\frac{1-c\alpha^{N-1}-2c\alpha^{N-1}(1-\alpha)}{1-c}\nonumber\\
&=r_3(\alpha,c,N)+\frac{q(\alpha,c,N)}{1-c},
\end{align}
where the function $q(\alpha,c,N)=1-c\alpha^{N-1}-2c\alpha^{N-1}(1-\alpha)=1-c\alpha^{N-1}(3-2\alpha)$ is a decreasing function of $c$. Therefore, the minimum of $q(\alpha,c,N)$ w.r.t. $c$ is at $c=1$ and is given by $q_{\text{min},c}(\alpha)=1+\alpha^{N-1}(2\alpha-3)$. This minimum value decreases with $\alpha$ if the derivative $dq_{\text{min},c}(\alpha)/d\alpha<0$, which requires $\alpha\leq (3/2)(N-1)/N$; this always holds for $N\geq3$. The minimum of $q_{\text{min},c}(\alpha)$ is therefore at $\alpha=1$, given by $q_{\text{min},c,\alpha}=0$ for $N\geq3$. For $N=2$, we have that $(3/2)(N-1)/N=3/4$, so $q_{\text{min},c}(\alpha)<0$ for $(3/4)<\alpha<1$. However, for this range of $\alpha$ and $N=2$, we can show $r_4(\alpha,c,N=2)\geq r_2(\alpha,c,N=2)$ by evaluating their difference,
\begin{align}
&r_4(\alpha,c,N=2)-r_2(\alpha,c,N=2)\nonumber\\
&~~~~~~~~~~~~=\frac{2[1+(\frac{2c^2-4c}{\alpha}+(3c-2c^2))\alpha^2]}{(1-c)(2-c)}\nonumber\\
&~~~~~~~~~~~~=\frac{s(\alpha,c)}{(1-c)(2-c)},
\end{align}
where $s(\alpha,c)=2[1+(\frac{2c^2-4c}{\alpha}+(3c-2c^2))\alpha^2]$. Note that the term $(2c^2-4c)$ decreases with increasing $c$ for all $c<1$, while the term $(3c-2c^2)$ decreases with increasing $c$ for $(3/4)<c<1$. Therefore, the minimum value of $s(\alpha,c)$ in this range is at $c=1$, given by $s_{\text{min},c}(\alpha)=2(1-\alpha)^2$, which is always greater than or equal to zero.
\hfill$\square$
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{pic1.pdf}
\caption{Ratio of entanglement monotones as a function of the catalyst-state Schmidt coefficient, $c$, with fixed $\alpha=0.85$ and $N=2$. Shown in Blue is $r_2(0.85,c,2)$ which monotonically decreases while $r_3(0.85,c,2)$, in Green, monotonically increases with $c$. $r_4(0.85,c,2)$ shown in Red is never the minimum of the three monotones. The value of $c$ at the intersection point of the Blue and Green curves gives the optimal catalyst (vertical dashed line). The horizontal dashed line shows the probability for the LQCC transformation $\ket{\alpha=0.85}^{\otimes 2}\to\ket{\phi}$.}
\label{fig:optimalc1}
\end{figure}
These facts together imply that the maximum probability of a LQCC conversion, $\ket{I}\to\ket{F}$, is obtained where the curves for $r_2(\alpha,c,N)$ and $r_3(\alpha,c,N)$ w.r.t. $c$ intersect for a fixed $\alpha$ and $N$, see figure~(\ref{fig:optimalc1}). The intersection point, $c^{\text{opt}}(\alpha,N)$, is obtained from the solution of one of the quadratic equations, $r_2(\alpha,c,N)=r_3^{c\leq\alpha}(\alpha,c,N)$, or, $r_2(\alpha,c,N)=r_3^{c>\alpha}(\alpha,c,N)$, as given by Eq.~(\ref{R}). We find that the latter has solutions, $c=0$ or $c>1$, which are unacceptable for a physically meaningful catalyst state, whereas the former equation provides an acceptable solution,
\begin{align}
c^{\text{opt}}(\alpha,N)=\frac{1+3\alpha^N-\{(1+3\alpha^N)^2-16\alpha^{2N}\}^{1/2}}{4\alpha^N}.
\label{catalystsolution}
\end{align}
The Schmidt coefficient, $c^{\text{opt}}(\alpha,N)$, identifies a two-qubit catalyst pure state, $\ket{C^{\text{opt}}(\alpha,N)}=\sqrt{c^{\text{opt}}(\alpha,N)}\ket{00}+\sqrt{1-c^{\text{opt}}(\alpha,N)}\ket{11}$, that provides the maximum success probability in an ELQCC procedure to obtain a maximally entangled two-qubit state from $N$-copies of partially entangled pure states. This probability is given by the value of $r_2(\alpha,c^{\text{opt}},N)$ or $r_3(\alpha,c^{\text{opt}},N)$,
\begin{align}
P^{\text{max}}_{C}(I\to F)= \frac{1-\alpha^N}{1-c^{\text{opt}}(\alpha,N)}
\label{probvalue}
\end{align}
Further, since $c^{\text{opt}}(\alpha,N)<\alpha$ the optimal catalyst state is always more entangled than $\ket{\alpha}$. However, even those states, $\ket{C}=\sqrt{c}\ket{00}+\sqrt{1-c}\ket{11}$, with $c\neq c^{\text{opt}}(\alpha,N)$ can act as (non-optimal) catalysts. This is because for such states $\ket{C}$ in the region $c<c^{\text{opt}}(\alpha,N)$ the minimum of the ratios, $r_3(\alpha,c,N)$, is still greater than the LQCC transformation probability of $2(1-\alpha^N)$ as can be seen by evaluating $r_3(\alpha,c,N)$ for $c<\alpha$, see the Green curve in figure~(\ref{fig:optimalc1}). Whereas for those states in the region $c>c^{\text{opt}}(\alpha,N)$ the minimum of the ratios, $r_2(\alpha,c,N)$, is again greater than the LQCC transformation probability of $2(1-\alpha^N)$, see the Blue curve in the same figure.
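The closed form in Eq.~(\ref{catalystsolution}) and the resulting probability in Eq.~(\ref{probvalue}) are easy to check numerically against a direct maximization over $c$ of the minimum of the ratios in Eq.~(\ref{R}). The following Python sketch is our own sanity check (the grid resolution is an arbitrary choice) and reproduces the behaviour shown in figure~(\ref{fig:optimalc1}).
\begin{verbatim}
import numpy as np

def c_opt(alpha, N):
    # closed form for the optimal catalyst coefficient derived above
    aN = alpha**N
    s = 1 + 3*aN
    return (s - np.sqrt(s*s - 16*aN*aN)) / (4*aN)

def p_catalysed(alpha, c, N):
    # min of the ratios r_1,...,r_4 for a catalyst coefficient c
    aN, aN1 = alpha**N, alpha**(N - 1)
    r2 = (1 - c*aN) / (1 - c/2)
    if c <= alpha:
        r3 = (1 - aN) / (1 - c)
        r4 = 2*(1 - aN - c*aN1*(1 - alpha)) / (1 - c)
    else:
        r3 = (1 - c*aN1) / (1 - c)
        r4 = 2*(1 - c*aN1 - c*aN1*(1 - alpha)) / (1 - c)
    return min(1.0, r2, r3, r4)

alpha, N = 0.85, 2
grid = np.linspace(0.501, 0.999, 10001)
best = grid[np.argmax([p_catalysed(alpha, c, N) for c in grid])]
print(c_opt(alpha, N), best)                 # should agree closely
print(p_catalysed(alpha, c_opt(alpha, N), N),
      2*(1 - alpha**N))                      # catalysed vs plain LQCC
\end{verbatim}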
We remark that the transformation $\ket{I}\to\ket{F}$ can be achieved via LOCC operations jointly on the $N$ copies of the initial state and one copy of the catalyst state in a two-step procedure \cite{Nielsen1,vidal1,lo-popescu}, which we briefly outline. In the first step a temporary state $\ket{\Gamma}$ that majorises the initial state, i.e. $\ket{I}\prec\ket{\Gamma}$, is obtained with certainty via a sequence of LOCC operations on corresponding two-dimensional subspaces of Alice's and Bob's systems (of Hilbert space dimension $2^{N+1}$ each). That is, a single LOCC operation involves two levels $\ket{i}_A,\ket{j}_A$ of Alice's systems and the corresponding two levels $\ket{i}_B,\ket{j}_B$ of Bob's systems with $i,j\in[1,2^{N+1}]$. Note that the operations on the states $\{\ket{i}_{A,B}\}_i$ involve the collective manipulation of the $N$ qubits of the shared initial state and the single qubit of the shared catalyst state. The number of such $(\alpha,c)$-dependent two-level operations is upper bounded by $(2^{N+1}-1)$. In the second step, Bob performs a two-outcome generalized measurement on his portion of the shared state $\ket{\Gamma}$. For one of the outcomes, which occurs with probability given by Eq.~(\ref{probvalue}), the post-measurement state obtained is $\ket{F}$; in this case the catalyst state is recovered along with a Bell state. The other outcome signals the failure of the catalytic process, and the post-measurement state may be discarded.
Now, we note from Eqs.~(\ref{R}) and (\ref{catalystsolution}) the following properties,
\begin{enumerate}
\item An optimal two-qubit catalyst state always exists for $N\geq2$-copies of every state $\ket{\alpha}$ with $\alpha\in((1/2)^{1/N},1)$.
\item The optimal catalyst state is always more entangled than $\ket{\alpha}$ since $c^{\text{opt}}(\alpha,N)< \alpha$.
\item \emph{Any} pure and entangled two-qubit state can act as a catalyst, that is, it provides a positive boost to the success probability of the transformation $\ket{\alpha}^{\otimes N}\to\ket{\phi}$, $N\geq2$, in an entanglement assisted procedure.
\item \emph{Optimal} self-catalysis is not possible, that is, $c^{\text{opt}}(\alpha,N)\neq\alpha$ for any $N$ and $\alpha<1$. However, an additional copy of the state $\ket{\alpha}$ can act as a non-optimal catalyst.
\item The optimal catalyst state $\ket{C^{\text{opt}}(\alpha,N)}$ becomes less entangled as the state $\ket{\alpha}$ becomes less entangled ($\alpha\to1$) since the derivative, $dc^{\text{opt}}(\alpha,N)/d\alpha>0$, in the region $\alpha\in(0.5,1)\forall N\geq2$.
\item Catalysis with the optimal state is more beneficial if the initial state is less entangled, that is, the ratio of LQCC success probability with optimal catalysis to that without, $\frac{P_C^{\text{max}}(I\to F)}{P(\psi\to\phi)}$ increases as $\alpha\to 1$, see figure~(\ref{fig:ratio1}).
\item Catalysis with the optimal two-qubit catalyst state is more effective for a smaller number of copies, $N$, of the initial state, see figure (\ref{fig:ratio1}).
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig-ratio1.pdf}
\caption{Ratio of the success probability for the transformation, $\ket{\alpha}^{\otimes N} \to \ket{\phi}$, using optimal catalysis to that without catalysis. The curves from left to right are for different numbers of copies, $N=2,4,8,16,32$.}
\label{fig:ratio1}
\end{figure}
As a consequence of remark 3, we note that for a set of two-qubit pure states, $\mathcal{S}=\{\ket{\alpha_i}\}_i$ (none of which is a maximally entangled state), any two-qubit pure state can act as a common catalyst for all transformations,
\begin{align}
\ket{\alpha_i}^{\otimes N_i}\to\ket{\phi},~N_i\geq2.
\end{align}
For obtaining multiple copies of Bell states higher dimensional catalysts are more efficient \footnote{Numerically one finds higher dimensional catalysts more efficient even to obtain a single Bell state. For example $\ket{C}=\sqrt{.5}\ket{00}+\sqrt{.35}\ket{11}+\sqrt{.15}\ket{22}$ is more efficient for the conversion $(\sqrt{.8}\ket{00}+\sqrt{.2}\ket{11})^{\otimes2}\to\ket{\phi}$.}. For example, the initial state $\ket{\alpha}^{\otimes N}$ (with even $N$) can be transformed to $\ket{\phi}^{\otimes m}$ with a catalyst of the form $\ket{C^{\text{opt}}(\alpha,2)}^{\otimes N/2}$ in a pairwise ELQCC procedure, where the number of obtained Bell states, $m=0,1,2,\ldots,N/2$, is binomially distributed. The probability of obtaining $m$ Bell states is given by $p_m=\binom{N/2}{m}p^m(1-p)^{N/2-m}$ with $p=P^{\text{max}}_{C}(I\to F)$ as in Eq.~(\ref{probvalue}), where $\ket{I}=\ket{\alpha}^{\otimes 2}\ket{C^{\text{opt}}(\alpha,2)}$ and $\ket{F}=\ket{\phi}\ket{C^{\text{opt}}(\alpha,2)}$. The expected entanglement, $\braket{E}=\sum_m m\, p_m=(N/2)P^{\text{max}}_{C}(I\to F)$, in this entanglement concentration procedure, which we will call strategy-1, is linear in the number of copies $N$ of the initial state $\ket{\alpha}$.
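As an illustration, the bookkeeping for strategy-1 can be sketched as follows; the helper below simply combines the closed forms above with the binomial law, and the example parameter values are our own, not taken from the text.
\begin{verbatim}
from math import comb, sqrt

def p_pair(alpha):
    # success probability for 2 copies with the optimal catalyst,
    # using the closed forms for c^opt and P^max derived above
    a2 = alpha**2
    s = 1 + 3*a2
    c = (s - sqrt(s*s - 16*a2*a2)) / (4*a2)
    return (1 - a2) / (1 - c)

def strategy1_distribution(alpha, N):
    # binomial law of the number m of Bell states from N/2 pairs
    n, p = N // 2, p_pair(alpha)
    return [comb(n, m) * p**m * (1 - p)**(n - m) for m in range(n + 1)]

alpha, N = 0.9, 8
p_m = strategy1_distribution(alpha, N)
expected_E = sum(m * pm for m, pm in enumerate(p_m))  # = (N/2) p
print(p_m, expected_E)
\end{verbatim}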
To obtain a target number, $m_*$, of Bell states, however, a different method, strategy-2, may be more beneficial. In such a strategy, the initial $N$ copies of $\ket{\alpha}$ may be grouped into $m_*$ sets, each of cardinality $N_j$, such that $\sum_{j=1}^{j=m_*}N_j=N$. The probability of obtaining $m_*$ Bell states will then be the maximum of the product of probabilities maximized over the sizes of the sets, $p_{m_*}=\text{Max}_{{\{N_j\}}_j}\prod_{j=1}^{j=m_*}P_j$, where $P_j$ is the probability of the transformation $\ket{\alpha}^{\otimes N_j}\to\ket{\phi}$. For sets with $N_j\geq 2$ one can use an ELQCC transformation procedure, so that for such sets $P_j=P^{\text{max}}_{C}(I\to F)$ with $\ket{I}=\ket{\alpha}^{\otimes N_j}\ket{C^{\text{opt}}(\alpha,N_j)}$ and $\ket{F}=\ket{\phi}\ket{C^{\text{opt}}(\alpha,N_j)}$. The different cardinalities, $N_j$, of the sets allow one to maximize the catalysis success probability using the appropriate catalyst $\ket{C^{\text{opt}}(\alpha,N_j)}$ for each set.
The choice of the advantageous strategy depends on the number of copies available $N$, the value of $\alpha$ and the number of copies of the Bell state $m_*$ desired as the output of the catalyzed entanglement concentration procedure. To compare strategies 1 and 2 as described above, consider as an example the case when $N=6$ and $\alpha=0.99$. If $m_*=2$ copies of Bell states are desired as output, then strategy-1 yields a probability of $0.034$, whereas strategy-2, utilizing 2 sets of 3 copies of $\ket{\alpha}$ each, yields a probability of $0.065$. On the other hand, if only a single copy of a Bell state is the desired output, i.e. $m_*=1$, then strategy-1 yields a probability of $0.391$, whereas strategy-2, utilizing 1 set of 6 copies of $\ket{\alpha}$, yields a probability of $0.362$.
It will be interesting to apply the results of catalytic entanglement concentration to increase the efficiency of entanglement distribution protocols in quantum repeaters \cite{munro_repeater}. The latter distribute entanglement over long distances by purifying and connecting entanglement generated over smaller length segments. While the entanglement generated over the segments is typically in the form of mixed states, some models of channel noise, e.g. \cite{kwiat_filtration}, can lead to non-maximally entangled shared pure states between the repeater stations. In such cases, if ELQCC is utilized to extract states with high fidelity to a Bell state in each repeater segment more efficiently than LOCC based repeater protocols then the overall distribution rate of the repeater can benefit significantly. This would require adaptive operations at the repeater nodes since the transformation $\ket{I}\to\ket{F}$ is achieved via $\alpha$-dependent local unitaries and measurements by Alice and Bob. Copies of the initial states may be generated and stored on matter qubits that have an efficient light-matter interface while storing the catalyst state in long-lived quantum memories \cite{Simon2010} at the repeater nodes during the ELQCC process. This may allow the reuse of the catalyst state multiple times as allowed by the transformation success probability. Quantum repeater architectures based on the combination of qubits with excellent communication properties and those with long lifetimes, e.g. \cite{Santra_20192}, can thus be good candidates to exploit catalytic entanglement concentration.
In summary, we analytically obtained a two-qubit catalyst pure state that maximizes the success probability of an entanglement assisted LQCC procedure to convert a given number of copies of a partially entangled pure state to a single copy of a maximally entangled two-qubit state. The supplied entanglement assistance is minimal since the catalyst is an entangled state of Schmidt rank equal to 2. Although a higher-rank catalyst cannot make the transformation deterministic, the maximum transformation success probability with a catalyst of any rank is an open question. In contrast with numerical searches for catalyst states, the analytical derivation of the optimal catalyst state reveals multiple properties of the catalytic process and raises interesting questions about possible applications.
{\it Acknowledgements:-} We thank one anonymous referee for many useful comments and suggestions.
\bibliographystyle{apsrev4-1}
\section{Introduction}
Molecular descriptors \cite{ref1} are mathematical quantities that describe the structure or shape of molecules, helping to predict the activity and properties of molecules in complex experiments. In the last few years, several new molecular structure descriptors have been conceived \cite{ref2,ref3,ref4,ref5}. Molecular descriptors play a significant role in chemistry, pharmacology, etc. Topological indices have a prominent place among molecular structure descriptors. Modeling physical and chemical properties of molecules, designing pharmacologically active compounds and recognizing environmentally hazardous materials are just a few applications of topological indices, see~\cite{ref6}. One of the most important topological indices is the \emph{Szeged index}. The Szeged index of a connected graph $G$, $Sz(G)$, is defined as
$$Sz(G) = \sum_{e = uv\in E(G)} n_u^G(e)n_v^G(e),$$
where $n_u^G(e)$ is the number of vertices in $G$ that are closer to $u$ than to $v$.
Similarly, the \emph{weighted Szeged index} of a connected graph $G$, $wSz(G)$, introduced
by Ili\'{c} and Milosavljevi\'{c} in \cite{ilic}, is defined as
$$wSz(G) = \sum_{e = uv\in E(G)} \left(d_G(u)+d_G(v)\right)n_u^G(e)n_v^G(e),$$
where $d_G(u)$ is the degree of $u$ in $G$.
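For small graphs both indices are easy to compute directly from their definitions. The following Python sketch is our own illustration (a breadth-first search recomputes distances for every edge, which is acceptable for the small trees considered here) and evaluates $wSz(G)$ from an adjacency list.
\begin{verbatim}
from collections import deque

def wsz(adj):
    # weighted Szeged index of a connected graph;
    # adj maps each vertex to the set of its neighbours
    def dists(s):
        d = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    q.append(y)
        return d

    total, seen = 0, set()
    for u in adj:
        du = dists(u)
        for v in adj[u]:
            if (v, u) in seen:
                continue            # each edge counted once
            seen.add((u, v))
            dv = dists(v)
            n_u = sum(1 for w in adj if du[w] < dv[w])
            n_v = sum(1 for w in adj if dv[w] < du[w])
            total += (len(adj[u]) + len(adj[v])) * n_u * n_v
    return total

# path on 4 vertices 0-1-2-3: 3*1*3 + 4*2*2 + 3*3*1 = 34
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(wsz(path4))
\end{verbatim}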
Recently some researchers have studied the weighted
Szeged index of special classes of graphs and graph
operations \cite{jan,ref10,ref11,ref12}. One of the
main related open problems is characterizing the graphs
of fixed order with minimum weighted Szeged index.
Regarding this problem, the following conjecture,
if true, would substantially restrict the structure
of such graphs.
\begin{conjecture}\cite{jan} \label{min-tree}
For $n$-vertex graphs the minimum weighted Szeged
index is attained by a tree.
\end{conjecture}
Unlike for many other topological indices, for example
the ABC-index \cite{ABCJCTB,krag_tree}, it is not easy
to prove whether the graph with minimum weighted Szeged
index on a fixed number of vertices is a tree.
In \cite{jan} some properties of minimum weighted
Szeged index trees are described and the list of
minimum trees on at most $25$ vertices is presented.
We refer the reader to \cite{bondy2008} for basic
graph theory terminology. Throughout this work,
we will use $n$ to denote the number of vertices
of a graph. The rest of the work is organized
as follows. In Section \ref{sec:ending} we
introduce the concept of ending branch, present
some basic results about their structure and
use these results to determine the structure of
ending branches for some orders. We present
computational results on the structure of optimal
trees for the weighted Szeged index, as well as
the structure of optimal ending branches in
Section \ref{sec:computational}. We cover
trees up to 81 vertices; it is not hard to
computationally push this bound, but we only
intend to present the readers with a dataset
that might give a fair idea of what is happening
with these structures, and not giving the most
complete list possible. Finally, we present
conclusions and some conjectures based on our
observations in Section \ref{sec:conc}.
\section{Ending Branches}
\label{sec:ending}
Let $T$ be a tree with the smallest weighted Szeged index on a fixed number of vertices and let $R$ be a vertex of highest degree in $T$. Note that removing any edge from $T$ results in two connected components. We will call the one that does not contain $R$ an \emph{ending branch}. In this section we will study the ending branches that result in small weighted Szeged index trees. By the weighted Szeged index of an ending branch we mean the sum of the weighted Szeged indices of all edges in the ending branch together with the contribution of the half-edge connecting the ending branch to the rest of the tree; if $v$ is the root of an ending branch on $n_v$ vertices, this half-edge contributes $d_v n_v(n-n_v)$, the root's share of the weight of the connecting edge. Note that to calculate the weighted Szeged index of an ending branch, it is sufficient to know the total number of vertices of the graph in addition to the structure of the ending branch. The following proposition clarifies the idea.
\begin{proposition}
Let $n$ be a positive integer and let $T$ be a tree with the
smallest weighted Szeged index on $n$ vertices. If $uv$ is
an edge of $T$ and $v$ is the root of the associated
ending branch, then the weighted Szeged index of the ending
branch does not depend on the degree $d_u$ of $u$.
\end{proposition}
\begin{proof}
Consider two edges $uv$ and $uv'$ of $T$ and
suppose that the associated ending branches with roots $v$
and $v'$ have the same number of vertices.
The part of the difference between the weighted Szeged indices of these two ending branches that could depend on $d_u$ comes only from the edges $uv$ and $uv'$, namely
$$(d_u+d_v)n_u^v n_v^u - (d_u+d_{v'})n_u^{v'} n_{v'}^u,$$
and if the number of vertices in the ending branch is the same then it simplifies to
$$(d_v-d_{v'})n_u^v n_v^u,$$
which does not depend on $d_u$, as claimed.
\end{proof}
Now we can compare the weighted Szeged index of different ending branches. By a {\em minimal ending branch on $k$ vertices} we mean the ending branch that has the smallest weighted Szeged index among all possible ending branches of $k$ vertices.
\begin{corollary}\label{cor:end}
In any minimum weighted Szeged index tree all ending branches are also minimal.
\end{corollary}
This allows us to focus on the best ending branches of a given size in a tree.
\begin{observation}
There is only one possible ending branch of size 1 and one of size 2.
\end{observation}
\begin{figure}[htb]
\centering
\includegraphics{end1-2.pdf}
\caption{Ending branches of size 1 and 2.}
\label{fig:1-2}
\end{figure}
Let $T_i$ be a best ending branch of size $i$. For $T_1$ and $T_2$ we have:
$$wSz(T_1)=n-1,$$
$$wSz(T_2)=(1+2)(n-1) + 2\times 2 \times (n-2) = 7n-11.$$
Figure \ref{fig:3} shows the two possible ending branches of size 3.
\begin{figure}[htb]
\centering
\includegraphics{end3.pdf}
\caption{Possible ending branches of size 3.}
\label{fig:3}
\end{figure}
\begin{proposition}
The 3-ray is a better ending branch than a vertex of degree 3 with 2 leaves attached to it, i.e. in Figure \ref{fig:3}, (a) is smaller than (b).
\end{proposition}
\begin{proof}
Let $n$ be the number of vertices of $T$, then we have:
$$wSz(T_a)=(1+2)(n-1)(1)+(2+2)(n-2)(2) + 2(n-3)(3)=17n-37,$$
$$wSz(T_b)=2(1+3)(n-1)(1)+3(n-3)(3)=17n-35.$$
Therefore for any $n$ we have that $wSz(T_a)<wSz(T_b)$.
\end{proof}
Note that $wSz(T_a)$ and $wSz(T_b)$ can be rephrased as
$$wSz(T_a)=2\times3\times(n-3)+ 2\times 2 \times (n-2) + wSz(T_2),$$
$$wSz(T_b)=3\times3\times(n-3) + 2\times 3\times (n-1) + 2wSz(T_1).$$
This idea can be generalized as follows. Let $v$ be the root of an ending branch and let $x_1, x_2, \ldots, x_k$ be its children. Also let $n_i$ be the size of the ending branch with root $x_i$ (for $i \in \{ 1, \ldots, k \}$). Then the best ending branch with root $v$ has the following weighted Szeged index:
$$wSz(T_v)=d_v\times n_v \times (n-n_v) + \sum_{i=1}^k d_v\times n_i\times(n-n_i) + \sum_{i=1}^k wSz(T_{x_i}),$$
where $n_v$ is the total number of vertices in this ending branch and $d_v=k+1$, as $v$ is joined to its $k$ children and to its parent.
Observe that $\sum_{i=1}^k n_i = n_v-1$, and for every partition of $n_v-1$ into integers we will get a new ending branch. We need to compare all these branches to find the best ending branch on $n_v$ vertices.
\begin{figure}[htb]
\centering
\includegraphics{end4.pdf}
\caption{Possible ending branches of size 4.}
\label{fig:4}
\end{figure}
As an example, to find the best ending branch on 4 vertices we need to find all partitions of 3, namely $3=3$, $3=2+1$ and $3=1+1+1$. See Figure \ref{fig:4}. We have:
$$wSz(T_{3})=2\times 4\times (n-4) + 2\times 3\times (n-3) + (17n-37) = 31n-87,$$
$$wSz(T_{2+1}) = 3\times 4\times (n-4) + 3\times 2\times (n-2) + 3\times (n-1) + (7n-11) + (n-1) = 29n - 75,$$
$$wSz(T_{1+1+1})=4\times4\times(n-4) + 3\times 4\times(n-1) + 3\times (n-1) = 31n-79.$$
Now it is easy to see that $wSz(T_{2+1})<wSz(T_3)<wSz(T_{1+1+1})$ for $n>6$. Therefore, in any minimal weighted Szeged index tree of size more than 6, if an ending branch of size 4 occurs, it is going to be $T_{2+1}$.
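The recursion above is easily automated. The following Python sketch is our own illustration for a fixed total order $n$; tracking the minimizing partition alongside the minimum value recovers the entries of Table~\ref{tab:branch}.
\begin{verbatim}
from functools import lru_cache

def partitions(k, largest=None):
    # all partitions of k into positive, non-increasing parts
    if k == 0:
        yield ()
        return
    if largest is None:
        largest = k
    for first in range(min(k, largest), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

def best_branch(n):
    # minimal ending-branch values for a tree on n vertices
    @lru_cache(maxsize=None)
    def best(nv):
        if nv == 1:
            return n - 1, ()              # a single leaf
        score, arg = float('inf'), None
        for parts in partitions(nv - 1):  # orders of the children
            dv = len(parts) + 1           # children plus parent edge
            val = (dv*nv*(n - nv)
                   + sum(dv*ni*(n - ni) for ni in parts)
                   + sum(best(ni)[0] for ni in parts))
            if val < score:
                score, arg = val, parts
        return score, arg
    return best

best = best_branch(20)
for k in range(2, 6):
    print(k, best(k))   # e.g. size 4 gives children (2, 1)
\end{verbatim}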
\section{Computational Results}
\label{sec:computational}
In this section we will use Corollary \ref{cor:end} to find minimum ending branches of higher order, say $n_v$, when the tree has $n$ vertices. Let us assume that we know all the minimum ending branches of order up to $n_v-1$.
Any ending branch has a root, say $v$, and the sum of the orders of its children is $n_v-1$.
Basically, the orders of the children of $v$ form a partition of $n_v-1$. So we need to go over all possible partitions of $n_v-1$, and since we know the best ending branches of order up to $n_v-1$, we can simply calculate and compare to find the best ending branch of order $n_v$.
This is for a fixed $n$. In general, when $n$ is a variable, each partition of $n_v-1$ into integers gives a linear function of $n$, and to compare linear functions we sometimes need a bound on $n$.
Here we present the result of our computations. Table \ref{tab:branch} shows the best ending branches of size $n_v$: when $n$ is at least the value shown, the root $v$ has $d_v$ children of the listed orders. Using the same approach we can find the minimal weighted Szeged index trees as well. Table \ref{tab:tree} shows the minimum weighted Szeged index trees of order up to 81.
Based on this calculation, and as an example, the minimum weighted Szeged index tree on 67 vertices has a root of degree 4, and the root has three children of order 16 and one child of order 18. A minimum ending branch on 16 vertices (when there are more than 18 vertices in the tree) has three children of order 5. A minimal ending branch of order 5 (when there are more than 6 vertices in the tree) has two children of order 2, and there is only one possible ending branch of order 2, shown in Figure \ref{fig:1-2}. A drawing of the minimum weighted Szeged index tree on 67 vertices is shown in Figure \ref{fig:5}.
\small{
\begin{longtable}{| c | c | c | l |}
\hline
$n_v$ & $n\geq$ & $d_v$ & children \\ [0.5ex]
\hline\hline
2 & 2 & 1 & 1 \\
\hline
3 & 3 & 1 & 2 \\
\hline
4 & 6 & 2 & 1, 2 \\
\hline
5 & 6 & 2 & 2, 2 \\
\hline
6 & 8 & 2 & 2, 3 \\
\hline
7 & 9 & 3 & 2, 2, 2 \\
\hline
8 & 14 & 3 & 2, 2, 3 \\
\hline
9 & 11 & 2 & 3, 5 \\
\hline
10 & 14 & 3 & 3, 3, 3 \\
\hline
10* & 14 & 2 & 4, 5 \\
\hline
11 & 13 & 2 & 5, 5 \\
\hline
12 & 15 & 3 & 3, 3, 5 \\
\hline
13 & 18 & 3 & 3, 4, 5 \\
\hline
14 & 17 & 3 & 3, 5, 5 \\
\hline
15 & 18 & 3 & 4, 5, 5 \\
\hline
16 & 18 & 3 & 5, 5, 5 \\
\hline
17 & 20 & 3 & 5, 5, 6 \\
\hline
18 & 22 & 3 & 5, 5, 7 \\
\hline
19 & 25 & 3 & 5, 6, 7 \\
\hline
20 & 32 & 3 & 5, 7, 7 \\
\hline
21 & 70 & 3 & 6, 7, 7 \\
\hline
22 & 45 & 3 & 7, 7, 7 \\
\hline
23 & 51 & 3 & 7, 7, 8 \\
\hline
24 & 48 & 3 & 7, 7, 9 \\
\hline
25 & 49 & 3 & 7, 7, 10 \\
\hline
26 & 39 & 3 & 7, 7, 11 \\
\hline
27 & 48 & 3 & 7, 8, 11 \\
\hline
28 & 47 & 3 & 7, 9, 11 \\
\hline
29 & 50 & 3 & 7, 10, 11 \\
\hline
30 & 41 & 3 & 7, 11, 11 \\
\hline
31 & 45 & 3 & 8, 11, 11 \\
\hline
32 & 44 & 3 & 9, 11, 11 \\
\hline
33 & 44 & 3 & 10, 11, 11 \\
\hline
34 & 42 & 3 & 11, 11, 11 \\
\hline
35 & 54 & 3 & 11, 11, 12 \\
\hline
36 & 55 & 3 & 11, 11, 13 \\
\hline
37 & 54 & 3 & 11, 11, 14 \\
\hline
38 & 55 & 3 & 11, 11, 15 \\
\hline
39 & 45 & 3 & 11, 11, 16 \\
\hline
40 & 54 & 3 & 11, 12, 16 \\
\hline
41 & 55 & 3 & 11, 13, 16 \\
\hline
42 & 54 & 3 & 11, 14, 16 \\
\hline
43 & 55 & 3 & 11, 15, 16 \\
\hline
44 & 50 & 3 & 11, 16, 16 \\
\hline
45 & 66 & 3 & 12, 16, 16 \\
\hline
46 & 56 & 3 & 13, 16, 16 \\
\hline
47 & 55 & 3 & 14, 16, 16 \\
\hline
48 & 55 & 3 & 15, 16, 16 \\
\hline
49 & 56 & 3 & 16, 16, 16 \\
\hline
50 & 65 & 3 & 16, 16, 17 \\
\hline
51 & 64 & 3 & 16, 16, 18 \\
\hline
52 & 72 & 3 & 16, 16, 19 \\
\hline
53 & 83 & 3 & 16, 16, 20 \\
\hline
54 & 98 & 3 & 16, 16, 21 \\
\hline
55 & 121 & 3 & 16, 16, 22 \\
\hline
56 & 125 & 3 & 16, 17, 22 \\
\hline
57 & 165 & 3 & 16, 18, 22 \\
\hline
58 & 254 & 3 & 16, 19, 22 \\
\hline
59 & 506 & 3 & 16, 20, 22 \\
\hline
60 & 66 & 4 & 11, 16, 16, 16 \\
\hline
61 & 67 & 4 & 12, 16, 16, 16 \\
\hline
62 & 68 & 4 & 13, 16, 16, 16 \\
\hline
63 & 70 & 4 & 14, 16, 16, 16 \\
\hline
64 & 71 & 4 & 15, 16, 16, 16 \\
\hline
65 & 71 & 4 & 16, 16, 16, 16 \\
\hline
66 & 72 & 4 & 16, 16, 16, 17 \\
\hline
67 & 73 & 4 & 16, 16, 16, 18 \\
\hline
68 & 77 & 4 & 16, 16, 16, 19 \\
\hline
69 & 78 & 4 & 16, 16, 16, 20 \\
\hline
70 & 79 & 4 & 16, 16, 16, 21 \\
\hline
71 & 80 & 4 & 16, 16, 16, 22 \\
\hline
72 & 82 & 4 & 16, 16, 17, 22 \\
\hline
73 & 84 & 4 & 16, 16, 18, 22 \\
\hline
74 & 87 & 4 & 16, 16, 19, 22 \\
\hline
75 & 93 & 4 & 16, 16, 20, 22 \\
\hline
76 & 100 & 4 & 16, 16, 21, 22 \\
\hline
77 & 99 & 4 & 16, 16, 22, 22 \\
\hline
78 & 107 & 4 & 16, 17, 22, 22 \\
\hline
79 & 117 & 4 & 16, 18, 22, 22 \\
\hline
80 & 128 & 4 & 16, 19, 22, 22 \\
\hline
\caption{Minimal Ending branches}
\label{tab:branch}
\end{longtable}
}
\begin{longtable}{| c | c | l |}
\hline
$n$ & $d_R$ & children \\ [0.5ex]
\hline\hline
2 & 1 & 1 \\
\hline
3 & 1 & 2 \\
\hline
4 & 2 & 1, 2 \\
\hline
5 & 2 & 2, 2 \\
\hline
6 & 3 & 1, 2, 2 \\
\hline
7 & 3 & 2, 2, 2 \\
\hline
8 & 3 & 2, 2, 3 \\
\hline
9 & 4 & 2, 2, 2, 2 \\
\hline
10 & 3 & 2, 2, 5 \\
\hline
11 & 3 & 2, 3, 5 \\
\hline
12 & 4 & 2, 2, 2, 5 \\
\hline
13 & 3 & 2, 5, 5 \\
\hline
14 & 3 & 3, 5, 5 \\
\hline
15 & 4 & 3, 3, 3, 5 \\
\hline
15* & 3 & 4, 5, 5 \\
\hline
16 & 3 & 5, 5, 5 \\
\hline
17 & 4 & 3, 3, 5, 5 \\
\hline
18 & 4 & 2, 5, 5, 5 \\
\hline
18* & 4 & 3, 4, 5, 5 \\
\hline
19 & 4 & 3, 5, 5, 5 \\
\hline
20 & 4 & 4, 5, 5, 5 \\
\hline
21 & 4 & 5, 5, 5, 5 \\
\hline
22 & 4 & 5, 5, 5, 6 \\
\hline
23 & 4 & 5, 5, 5, 7 \\
\hline
24 & 5 & 3, 5, 5, 5, 5 \\
\hline
25 & 5 & 4, 5, 5, 5, 5 \\
\hline
26 & 5 & 5, 5, 5, 5, 5 \\
\hline
27 & 5 & 5, 5, 5, 5, 6 \\
\hline
28 & 5 & 5, 5, 5, 5, 7 \\
\hline
29 & 5 & 5, 5, 5, 6, 7 \\
\hline
30 & 4 & 3, 5, 5, 16 \\
\hline
31 & 4 & 4, 5, 5, 16 \\
\hline
32 & 4 & 5, 5, 5, 16 \\
\hline
33 & 4 & 5, 5, 6, 16 \\
\hline
34 & 4 & 5, 5, 7, 16 \\
\hline
35 & 4 & 5, 6, 7, 16 \\
\hline
36 & 5 & 5, 5, 5, 5, 15 \\
\hline
37 & 5 & 5, 5, 5, 5, 16 \\
\hline
38 & 5 & 5, 5, 5, 6, 16 \\
\hline
39 & 5 & 5, 5, 5, 7, 16 \\
\hline
40 & 5 & 5, 5, 6, 7, 16 \\
\hline
41 & 5 & 5, 5, 7, 7, 16 \\
\hline
42 & 4 & 7, 7, 11, 16 \\
\hline
43 & 5 & 5, 7, 7, 7, 16 \\
\hline
44 & 4 & 10, 11, 11, 11 \\
\hline
45 & 4 & 11, 11, 11, 11 \\
\hline
46 & 4 & 7, 11, 11, 16 \\
\hline
47 & 4 & 8, 11, 11, 16 \\
\hline
48 & 4 & 9, 11, 11, 16 \\
\hline
49 & 4 & 10, 11, 11, 16 \\
\hline
50 & 4 & 11, 11, 11, 16 \\
\hline
51 & 4 & 7, 11, 16, 16 \\
\hline
52 & 5 & 7, 11, 11, 11, 11 \\
\hline
53 & 4 & 9, 11, 16, 16 \\
\hline
54 & 4 & 10, 11, 16, 16 \\
\hline
55 & 4 & 11, 11, 16, 16 \\
\hline
56 & 5 & 11, 11, 11, 11, 11 \\
\hline
57 & 4 & 11, 13, 16, 16 \\
\hline
58 & 4 & 11, 14, 16, 16 \\
\hline
59 & 4 & 11, 15, 16, 16 \\
\hline
60 & 4 & 11, 16, 16, 16 \\
\hline
61 & 5 & 11, 11, 11, 11, 16 \\
\hline
62 & 4 & 13, 16, 16, 16 \\
\hline
63 & 4 & 14, 16, 16, 16 \\
\hline
64 & 4 & 15, 16, 16, 16 \\
\hline
65 & 4 & 16, 16, 16, 16 \\
\hline
66 & 4 & 16, 16, 16, 17 \\
\hline
67 & 4 & 16, 16, 16, 18 \\
\hline
68 & 5 & 11, 11, 13, 16, 16 \\
\hline
69 & 5 & 11, 11, 14, 16, 16 \\
\hline
70 & 5 & 11, 11, 15, 16, 16 \\
\hline
71 & 5 & 11, 11, 16, 16, 16 \\
\hline
72 & 5 & 11, 12, 16, 16, 16 \\
\hline
73 & 5 & 11, 13, 16, 16, 16 \\
\hline
74 & 5 & 11, 14, 16, 16, 16 \\
\hline
75 & 5 & 11, 15, 16, 16, 16 \\
\hline
76 & 5 & 11, 16, 16, 16, 16 \\
\hline
77 & 5 & 12, 16, 16, 16, 16 \\
\hline
78 & 5 & 13, 16, 16, 16, 16 \\
\hline
79 & 5 & 14, 16, 16, 16, 16 \\
\hline
80 & 5 & 15, 16, 16, 16, 16 \\
\hline
81 & 5 & 16, 16, 16, 16, 16 \\
\hline
\caption{Minimum weighted Szeged index trees.}
\label{tab:tree}
\end{longtable}
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\textwidth]{67_vertices.pdf}
\caption{Best weighted Szeged index on 67 vertices.}
\label{fig:5}
\end{figure}
\subsection{Regular Trees}
By a regular ending branch we mean a minimal (weighted Szeged index) ending branch whose children
are all of the same order. Among the first 80 minimal ending branches, those of orders 1, 2, 3, 5, 7, 10, 11, 16, 22, 34, 49 and 65
are the regular ones. By a main branch we mean an ending branch that is directly connected to the root of the tree. Based on our observations we have the following conjecture.
\begin{conjecture}
In a minimum weighted Szeged index tree, all but at most one of the main ending branches are regular ending branches.
\end{conjecture}
The same definition applies to regular trees; the first regular minimal trees are those of orders
1, 2, 3, 5, 7, 9, 16, 21, 26, 45, 56, 65 and 81.
The simplified degree sequence of a regular ending branch is the degree sequence of the vertices on a path from the root of the ending branch to a leaf. For example, the simplified degree sequence of a minimal ending branch of order 65 is 4, 3, 2, 1.
Our calculation shows that, against our expectation, a minimal ending branch of order 326 is not a regular ending branch with simplified degree sequence 5, 4, 3, 2, 1. In fact, an ending branch on 326 vertices with 3 children of orders 103, 103 and 119 works better than the regular ending branch.
We note that there seem to be no vertices of degree greater than 6 in the optimal trees.
In order to understand this phenomenon, we offer the following thought experiment.
We will calculate an approximation of the weighted Szeged index for complete $k$-ary
trees with a very high number $n$ of vertices.
Note that the root has degree $k$, while the other vertices have degree $k+1$. We will ignore this
complication in order to simplify the expressions, and treat every edge as having degree sum
$2k+2$, so we can just calculate the ordinary Szeged index and multiply it by $2k+2$. The Szeged
index of a complete $k$-ary tree is estimated as follows.
Each edge from the root to one of its $k$ children contributes the product $\frac{n-1}{k} \left( n - \frac{n-1}{k} \right)$ because
the child has $\frac{n-1}{k}$ descendants. In this expression the $n^2$ term dominates, since we assumed that
$n$ is very large; its term is $\frac{k-1}{k^2} n^2$. There are $k$ edges like this, so their overall contribution to
the Szeged index is $\frac{k-1}{k} n^2 + o(n^2)$. Similarly, $k^2$ edges between the children and grandchildren
of the root contribute collectively $k^2 \left( \frac{n-k-1}{k^2} \right) \left( n - \frac{n-k-1}{k^2} \right)$ which equals
$\frac{k^2 - 1}{k^2} n^2 + o(n^2)$. A similar calculation for the remaining levels reveals that the Szeged
index is $n^2 \left( \frac{k-1}{k} + \frac{k^2 - 1}{k^2} + \frac{k^3 - 1}{k^3} + \dots \right) + o(n^2)$. Since there are $\log_k n$
levels, we can say roughly that the Szeged index of a complete $k$-ary tree is about $n^2 \log_k n$, and the
weighted Szeged index is about $n^2 (2k+2) \log_k n$. The expression can be analyzed, but since it only represents a rough approximation we can just look at a large value. When $n=10^6$, the expression
appears to be minimized around $k=4$; we feel this may explain why we don't see vertices of degree more than
6. In any event, a large $k$, say $k=10$, would be disadvantageous. We also see from the calculation that the
contributions of the lower levels of a complete $k$-ary tree are slightly increasing as we go down the tree, and
perhaps this explains why the degrees in our optimal trees are decreasing. In large optimal
trees we have found that the degrees are decreasing just enough to make contributions of all levels roughly the
same (with the exception of the bottom three or four levels).
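The rough expression above can be evaluated directly; the following short loop (our own check, dropping the common $n^2$ factor) shows the minimum near $k=4$ for $n=10^6$.
\begin{verbatim}
import math

# rough weighted Szeged approximation for a complete k-ary tree:
# wSz(k) ~ n^2 (2k+2) log_k n; the common n^2 factor is dropped
n = 10**6
for k in range(2, 11):
    print(k, round((2*k + 2) * math.log(n, k), 1))
# k=4 gives the smallest value (about 99.7), with k=3 close behind
\end{verbatim}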
\section{Conclusions}
\label{sec:conc}
In view of Conjecture \ref{min-tree}, it is a good idea to
understand the structure of minimum weighted Szeged index
trees. Even if it turns out to be false, knowing the
structure of such trees could give us some insight to
understand the structure of minimum weighted Szeged index
graphs in the general case.
In this work we introduced the concept of ending branch,
which we used to analyze the structure of minimum weight
Szeged index trees in a recursive fashion. Our observations
were useful to computationally construct the trees on at most
81 vertices, extending the list of 25 trees given in
\cite{jan}.
Based on our results and experimental observations,
we finalize the section with some conjectures that,
if true, can give insights on the structure of minimum
weighted Szeged index trees. Also, they represent well
determined future lines of work that might be explored.
\begin{conjecture}
In a minimum weighted Szeged index tree, the degree sequence from the root to any leaf is non-increasing.
\end{conjecture}
\begin{conjecture}
Vertices of degree 1 are attached to vertices of degree at most 3.
\end{conjecture}
\begin{conjecture}
There are no vertices of degree greater than 6 in a minimum weighted Szeged index tree.
\end{conjecture}
\section{Introduction}
\label{sec:start}
\subsection{The landscape}\label{sect:landsc}
In this paper, we give a combinatorial description of the structures on
which diagonal groups, including those arising in the O'Nan--Scott Theorem,
act.
This is a rich area, with links not only to finite group theory (as in the
O'Nan--Scott Theorem) but also to designed experiments, and the combinatorics
of Latin squares and their higher-dimensional generalisations. We do not
restrict our study to the finite case.
Partitions lie at the heart of this study. We express the Latin hypercubes
we need in terms of partitions, and our final structure for diagonal groups
can be regarded as a join-semilattice of partitions. Cartesian products of sets
can be described in terms of the partitions induced by the coordinate projection
maps,
and this approach was introduced into the study of primitive permutation groups
by L.~G.~Kov\'acs~\cite{kov:decomp}. He called the collection of these coordinate partitions
a ``system of product imprimitivity''. The concept was further developed
in~\cite{ps:cartesian} where the same object was called a ``Cartesian decomposition''.
In preparation for introducing the join-semilattice of partitions for the diagonal
groups, we view Cartesian decompositions as lattices of partitions of the
underlying set.
Along the way, we also discuss a number of conditions on families of partitions
that have been considered in the literature, especially the statistical
literature.
\subsection{Outline of the paper}
As said above, our aim is to describe the geometry and combinatorics underlying
diagonal groups, in general. In the O'Nan--Scott Theorem, the diagonal groups
$D(T,m)$ depend on a non-abelian simple group $T$ and a positive integer~$m$.
But these groups can be defined for an arbitrary group $T$, finite or infinite,
and we investigate them in full generality.
Our purpose is to describe the structures on which diagonal groups act. This
takes two forms: descriptive, and axiomatic. In the former, we start with a
group $T$ and a positive integer $m$, build the structure on which the group
acts, and study its properties. The axiomatic approach is captured by the
following theorem, to be proved in Section~\ref{sec:diag}. Undefined terms
such as Cartesian lattice, Latin square,
paratopism, and diagonal semilattice will be introduced later, so that when
we get to the point of proving the theorem its statement should be clear.
We mention here that the automorphism group of a Cartesian lattice is,
in the simplest case,
a wreath product of two symmetric groups in its product action, while the
automorphism group of a diagonal semilattice $\dsl Tm$ is the diagonal group $D(T,m)$;
Latin squares, on the other hand, may (and usually do) have only the trivial
group of automorphisms.
\begin{theorem}\label{thm:main}
Let $\Omega$ be a set with $|\Omega|>1$, and $m$ an integer at least $2$. Let $Q_0,\ldots,Q_m$
be $m+1$ partitions of $\Omega$ satisfying the following property: any $m$
of them are the minimal non-trivial partitions in a Cartesian lattice on
$\Omega$.
\begin{enumerate}\itemsep0pt
\item If $m=2$, then the three partitions are the row, column, and letter
partitions of a Latin square on $\Omega$, unique up to paratopism.
\item If $m>2$, then there is a group $T$, unique up to isomorphism,
such that $Q_0,\ldots,Q_m$ are the minimal non-trivial partitions in a diagonal
semilattice $\dsl Tm$ on $\Omega$.
\end{enumerate}
\end{theorem}
The case $m=3$ in Theorem~\ref{thm:main}(b) can be phrased in the language
of Latin cubes and may thus be of independent interest. The proof is in
Theorems~\ref{thm:bingo} and \ref{th:upfront} (see also
Theorem~\ref{thm:regnice}). See Section~\ref{sec:whatis} for the definition
of a regular Latin cube of sort (LC2).
\begin{theorem}
\label{thm:bingo_}
Consider a Latin cube of sort (LC2) on an underlying set~$\Omega$,
with coordinate partitions $P_1$, $P_2$ and $P_3$, and letter partition~$L$.
Then the Latin cube is regular if and only if there is a group~$T$ such that, up to relabelling the letters
and the three sets of coordinates,
$\Omega=T^3$ and $L$ is the coset partition defined
by the diagonal subgroup $\{(t,t,t) \mid t \in T\}$.
Moreover, $T$ is unique up to group isomorphism.
\end{theorem}
Theorem~\ref{thm:main}
has a similar form to the axiomatisation of projective geometry
(see \cite{vy}). We give simple axioms, and show that diagonal structures of smallest
dimension satisfying them are ``wild'' and exist in great profusion, while
higher-dimensional structures can be completely described in terms of an
algebraic object. In our case, the algebraic object is a group, whereas,
for projective geometry, it is a division ring, also called a skew field.
Note that the group emerges naturally from the combinatorial axioms.
In Section~\ref{sec:prelim}, we describe the preliminaries required.
Section~\ref{sec:Cart} revisits Cartesian decompositions, as described
in~\cite{ps:cartesian}, and defines Cartesian lattices.
Section~\ref{sec:LC} specialises to the case that $m=3$. Not only does this
show that this case is very different from $m=2$; it also underpins the
proof by induction of Theorem~\ref{thm:main}, which is given in
Section~\ref{sec:diag}.
In the last two sections, we give further results on diagonal groups. In
Section~\ref{s:pqp}, we determine which diagonal groups are primitive,
and which are quasiprimitive (these two conditions turn out to be equivalent).
In Section~\ref{s:diaggraph}, we define a graph having a given diagonal
group as its automorphism group (except for four small diagonal groups),
examine some of its graph-theoretic properties, and briefly describe the
application of this to synchronization properties of permutation groups
from~\cite{bccsz} (finite primitive diagonal groups with $m\geqslant2$ are
non-synchronizing).
The final section poses a few open problems related to this work.
\subsection{Diagonal groups}\label{sect:diaggroups}
In this section we define the diagonal groups, in two ways: a ``homogeneous''
construction, where all factors are alike but the action is on a coset space;
and an ``inhomogeneous'' version
which gives an alternative way of labelling the elements of the underlying set
which is better for calculation
even though one of the factors has to be treated differently.
Let $T$ be a group with $|T|>1$, and $m$ an integer with $m\geqslant1$. We define the
\emph{pre-diagonal group} $\widehat D(T,m)$ as the semidirect
product of $T^{m+1}$ by $\operatorname{Aut}(T)\times S_{m+1}$, where $\operatorname{Aut}(T)$ (the
automorphism group of $T$) acts in the same way on each factor, and $S_{m+1}$
(the symmetric group of degree $m+1$) permutes the factors.
Let $\delta(T,m+1)$ be the diagonal subgroup $\{(t,t,\ldots,t) \mid t\in T\}$
of $T^{m+1}$,
and $\widehat H=\delta(T,m+1)\rtimes (\operatorname{Aut}(T)\times S_{m+1})$.
We represent $\widehat D(T,m)$ as a permutation group on the set of
right cosets of $\widehat H$. If $T$ is finite, the degree of this
permutation representation is $|T|^m$. In general, the action is not
faithful: the elements of $\delta(T,m+1)$, acting by conjugation,
induce the same automorphisms of $T^{m+1}$ as the inner automorphisms
of $T$ acting simultaneously on all factors through $\operatorname{Aut}(T)$.
In fact, if $m\geqslant 2$ or $T$ is non-abelian, then the kernel of the $\widehat D(T,m)$-action
is
\begin{align}\label{eq:K}
\begin{split}
\widehat K
&=\{(t,\ldots,t)\alpha\in T^{m+1}\rtimes \operatorname{Aut}(T)\mid t\in T\mbox{ and}\\
&\mbox{$\alpha$ is the
inner automorphism induced by $
t^{-1}$}\},
\end{split}
\end{align}
and so $\widehat K\cong T$. Thus, if, in addition, $T$ is finite,
then the order of the permutation group induced by
$\widehat D(T,m)$ is $|\widehat D(T,m)|/|\widehat K|=
|T|^m\,|\operatorname{Aut}(T)|\,(m+1)!$. If $m=1$ and $T$ is abelian, then
the factor $S_2$ induces the inversion automorphism $t\mapsto t^{-1}$ on $T$ and
the permutation group induced by $\widehat D(T,m)$ is the holomorph
$T\rtimes \operatorname{Aut}(T)$.
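For example, if $T=C_3$ and $m=2$, then $|\operatorname{Aut}(T)|=2$, so
$\widehat D(T,2)$ has order $|T|^3\cdot|\operatorname{Aut}(T)|\cdot|S_3|
=27\cdot2\cdot6=324$; the kernel $\widehat K$ has order $3$, and the
permutation group induced on the $|T|^2=9$ cosets has order $324/3=108$.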
We define the \emph{diagonal group} $D(T,m)$ to be the permutation group
induced by $\widehat D(T,m)$ on the set of right cosets of $\widehat H$ as above.
So $D(T,m)\cong \widehat D(T,m)/\widehat K$.
To move to a more explicit representation of $D(T,m)$,
we choose coset representatives
for $\delta(T,m+1)$ in $T^{m+1}$. A convenient choice is to number the direct
factors of
$T^{m+1}$ as $T_0,T_1,\ldots,T_m$, and use representatives of
the form $(1,t_1,\ldots,t_m)$, with $t_i\in T_i$. We will denote this
representative by $[t_1,\ldots,t_m]$, and let $\Omega$ be the set of all
such symbols. Thus, as a set, $\Omega$ is bijective with~$T^m$.
\begin{remark}\label{rem:diaggens}
Now we can describe the action of $\widehat D(T,m)$ on $\Omega$ as follows.
\begin{itemize}\itemsep0pt
\item[(I)] For $1\leqslant i\leqslant m$, the factor $T_i$ acts by right multiplication
on symbols in the $i$th position in elements of $\Omega$.
\item[(II)] For $x\in T_0$, the action is by simultaneous left multiplication
of all coordinates by $x^{-1}$. This is because $x$ maps the coset containing
$(1,t_1,\ldots,t_m)$ to the coset containing $(x,t_1,\ldots,t_m)$, which is
the same as the coset containing $(1,x^{-1}t_1,\ldots,x^{-1}t_m)$.
\item[(III)] Automorphisms of $T$ act simultaneously on all coordinates; but
inner automorphisms are identified with the action of elements in the diagonal
subgroup $\delta(T,m+1)$ (the element $(x,x,\ldots,x)$ maps the coset containing
$(1,t_1,\ldots,t_m)$ to the coset containing $(x,t_1x,\ldots,t_mx)$, which is
the same as the coset containing $(1,x^{-1}t_1x,\ldots,x^{-1}t_mx)$).
\item[(IV)] Elements of $S_m$ (fixing coordinate $0$) act by permuting the
coordinates in elements of $\Omega$.
\item[(V)] Consider the element of $S_{m+1}$ which transposes coordinates $0$ and~$1$.
This maps the coset containing $(1,t_1,t_2,\ldots,t_m)$ to the coset containing
the tuple $(t_1,1,t_2,\ldots,t_m)$, which
also contains
$(1,t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_m)$. So the action of this
transposition is
\[[t_1,t_2,\ldots,t_m]\mapsto[t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_m].\]
Now $S_m$ and this transposition generate $S_{m+1}$.
\end{itemize}
\end{remark}
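To make the action in Remark~\ref{rem:diaggens} concrete, here is a small
Python sketch (ours, purely illustrative and not part of the formal
development) for the cyclic group $T=\mathbb{Z}_n$ written additively, so
that multiplication becomes addition and inversion becomes negation; in this
abelian case left and right multiplication coincide.
\begin{verbatim}
# Generators of types (I), (II) and (V) acting on Omega = T^m,
# for T = Z_n written additively (an illustrative choice).
n, m = 5, 3
mul = lambda a, b: (a + b) % n     # the group operation
inv = lambda a: (-a) % n           # inversion

def gen_I(pt, i, x):
    # type (I): right multiplication by x in coordinate i (1-based)
    pt = list(pt)
    pt[i - 1] = mul(pt[i - 1], x)
    return tuple(pt)

def gen_II(pt, x):
    # type (II): simultaneous left multiplication by the inverse of x
    return tuple(mul(inv(x), t) for t in pt)

def gen_V(pt):
    # type (V): the transposition of coordinates 0 and 1
    t1 = pt[0]
    return tuple([inv(t1)] + [mul(inv(t1), t) for t in pt[1:]])

pt = (2, 3, 4)
assert gen_V(gen_V(pt)) == pt      # this transposition is an involution
\end{verbatim}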
By~\eqref{eq:K}, the kernel $\widehat K$ of the $\widehat D(T,m)$-action
on~$\Omega$ is
contained in the subgroup generated by elements of type (I)--(III).
For example, in the case when $m=1$, the set $\Omega$ is bijective with
$T$; the factor $T_1$ acts by right multiplication, $T_0$ acts by left
multiplication by the inverse, automorphisms act in the natural way, and
transposition of the coordinates acts as inversion.
The following theorem states that the diagonal group $D(T,m)$ can be
viewed as the automorphism group of the corresponding diagonal join-semilattice
$\dsl Tm$ and the diagonal graph $\Gamma_D(T,m)$ defined in
Sections~\ref{sec:diag1} and~\ref{sec:dgds}, respectively. The two parts of
this theorem comprise Theorem~\ref{t:autDTm} and Corollary~\ref{c:sameag}
respectively.
\begin{theorem}
Let $T$ be a non-trivial group, $m\geqslant 2$, let $\dsl Tm$ be the diagonal semilattice and
$\Gamma_D(T,m)$ the diagonal graph. Then the following are valid.
\begin{enumerate}
\item The automorphism group of $\dsl Tm$ is $D(T,m)$.
\item If $(|T|,m)\not\in\{(2,2),(3,2),(4,2),(2,3)\}$, then the automorphism group of $\Gamma_D(T,m)$ is $D(T,m)$.
\end{enumerate}
\end{theorem}
\subsection{History}
The celebrated O'Nan--Scott Theorem describes the socle (the product of the
minimal normal subgroups) of a finite permutation group. Its original form
was different; it was a necessary condition for a finite permutation group
of degree~$n$ to be a maximal subgroup of the symmetric or alternating
group of degree~$n$. Since the maximal intransitive and imprimitive subgroups
are easily described, attention focuses on the primitive maximal subgroups.
The theorem was proved independently by Michael O'Nan and Leonard Scott,
and announced by them at the Santa Cruz conference on finite groups in 1979.
(Although both papers appeared in the preliminary conference proceedings, the
final published version contained only Scott's paper.) However, the roots
of the theorem are much older; a partial result appears in Jordan's
\textit{Trait\'e des Substitutions} \cite{jordan}
in 1870. The extension to arbitrary primitive groups is due to Aschbacher
and Scott~\cite{aschsc} and independently to Kov\'acs~\cite{kov:sd}. Further
information on the history of the theorem is given in
\cite[Chapter 7]{ps:cartesian} and~\cite[Sections~1--4]{kovacs}.
For our point of view, and avoiding various complications, the theorem
can be stated as follows:
\begin{theorem}\label{thm:ons}
Let $G$ be a primitive permutation group on a finite set $\Omega$. Then one
of the following four conditions holds:
\begin{enumerate}
\item $G$ is contained in an affine group $\operatorname{AGL}(d,p)\leqslant\operatorname{Sym}(\Omega)$,
with $d\geqslant1$ and $p$ prime, and so preserves the affine geometry of
dimension $d$ over the field with $p$ elements with point set $\Omega$;
\item $G$ is contained in a wreath product in its product action contained in
$\operatorname{Sym}(\Omega)$, and so preserves a Cartesian decomposition of $\Omega$;
\item $G$ is contained in the diagonal group $D(T,m)\leqslant\operatorname{Sym}(\Omega)$,
with $T$ a non-abelian finite simple group and $m\geqslant1$;
\item $G$ is almost simple (that is, $T\leqslant G\leqslant\operatorname{Aut}(T)$, where $T$
is a non-abelian finite simple group).
\end{enumerate}
\end{theorem}
Note that, in the first three cases of the theorem, the action of the group
is specified; indeed, in the first two cases, we have a geometric or
combinatorial structure which is preserved by the group. (Cartesian
decompositions are described in detail in~\cite{ps:cartesian}.) One of our
aims in this paper is to provide a similar structure preserved by diagonal
groups, although our construction is not restricted to the case where $T$ is
simple, or even finite.
It is clear that the Classification of Finite Simple Groups had a great
effect on the applicability of the O'Nan--Scott Theorem to the study of
finite primitive permutation groups; indeed, the landscape of the subject
and its applications has been completely transformed by CFSG.
In Section~\ref{s:pqp} we characterise primitive and quasiprimitive diagonal
groups as follows.
\begin{theorem}\label{th:primaut}
Suppose that $T$ is a non-trivial group, $m\geqslant 2$, and consider $D(T,m)$
as a permutation group on $\Omega=T^{m}$. Then the following
are equivalent.
\begin{enumerate}
\item $D(T,m)$ is a primitive permutation group;
\item $D(T,m)$ is a quasiprimitive permutation group;
\item $T$ is a characteristically simple group, and, if $T$ is
an elementary abelian $p$-group, then $p\nmid(m+1)$.
\end{enumerate}
\end{theorem}
Diagonal groups and the structures they preserve have occurred in other
places too. Diagonal groups with $m=1$ (which in fact are not covered by
our analysis) feature in the paper ``Counterexamples to a theorem of Cauchy''
by Peter Neumann, Charles Sims and James Wiegold~\cite{nsw}, while
diagonal groups over the group $T=C_2$ are automorphism groups of the
\emph{folded cubes}, a class of distance-transitive graphs, see~\cite[p.~264]{bcn}.
Much less explicit information is available about related questions on infinite symmetric groups.
Some maximal subgroups of infinite symmetric groups have been associated
with structures such as subsets, partitions~\cite{braziletal,macn,macpr},
and Cartesian decompositions~\cite{covmpmek}.
However, it is still not known if infinite symmetric groups have
maximal subgroups that are analogues of the maximal subgroups of simple
diagonal type in finite symmetric or alternating groups. If $T$ is a possibly
infinite simple group, then the diagonal group $D(T,m)$ is primitive and,
by~\cite[Theorem~1.1]{uniform}, it cannot be embedded into a wreath product in
product action. On the other hand, if $\Omega$ is a countable set, then, by
\cite[Theorem~1.1]{macpr}, simple diagonal type groups are properly contained
in maximal subgroups of $\operatorname{Sym}(\Omega)$. (This containment is proper since the
diagonal group itself is not maximal; its product with the finitary symmetric
group properly contains it.)
\section{Preliminaries}
\label{sec:prelim}
\subsection{The lattice of partitions}
\label{sec:part}
A partially ordered set (often abbreviated to \textit{poset}) is a set
equipped with a partial order, which we here write as $\preccurlyeq$.
A finite poset
is often represented by a \emph{Hasse diagram}.
This is a diagram drawn as a graph in the plane. The vertices of the diagram
are the elements of the poset; if $q$ \emph{covers} $p$ (that is, if $p\prec q$
but there is no element $r$ with $p \prec r \prec q$),
there is an edge joining $p$ to~$q$,
with $q$ above $p$ in the plane (that is, with larger $y$-coordinate).
Figure~\ref{f:hasse} represents the divisors of $36$, ordered by divisibility.
\begin{figure}[htbp]
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(20,20)
\multiput(0,10)(5,-5){3}{\circle*{2}}
\multiput(5,15)(5,-5){3}{\circle*{2}}
\multiput(10,20)(5,-5){3}{\circle*{2}}
\multiput(0,10)(5,-5){3}{\line(1,1){10}}
\multiput(0,10)(5,5){3}{\line(1,-1){10}}
\end{picture}
\end{center}
\caption{\label{f:hasse}A Hasse diagram}
\end{figure}
In a partially ordered set with order relation $\preccurlyeq$,
we say that an element $c$ is the \emph{meet}, or \emph{infimum},
of $a$ and $b$ if
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item $c\preccurlyeq a$ and $c\preccurlyeq b$;
\item for all $d$, $d\preccurlyeq a$ and $d\preccurlyeq b$ implies
$d\preccurlyeq c$.
\end{itemize}
The meet of $a$ and $b$, if it exists, is unique; we write it $a\wedge b$.
Dually, $x$ is the \emph{join}, or \emph{supremum} of $a$ and $b$ if
\begin{itemize}
\item $a\preccurlyeq x$ and $b\preccurlyeq x$;
\item for all $y$, if $a\preccurlyeq y$ and $b\preccurlyeq y$,
then $x\preccurlyeq y$.
\end{itemize}
Again the join, if it exists, is unique, and is written $a\vee b$.
The terms ``join'' and ``supremum'' will be used interchangeably.
Likewise, so will the terms ``meet'' and ``infimum''.
In an arbitrary poset, meets and joins may not exist. A poset in which every
pair of elements has a meet and a join is called a \emph{lattice}.
A subset of a lattice which is closed under taking joins is called a
\emph{join-semilattice}.
The poset shown in Figure~\ref{f:hasse} is a lattice. Regarding it, as
described above, as the set of divisors of $36$ ordered by divisibility,
meet and join are greatest common divisor and least common multiple
respectively.
In a lattice, an easy induction shows that suprema and infima of arbitrary
finite sets exist and are unique. In particular, in a finite lattice there is
a unique minimal element and a unique maximal element. (In an infinite lattice,
the existence of least and greatest elements is usually assumed. But all
lattices in this paper will be finite.)
\medskip
The most important example for us is the \emph{partition lattice} on a set
$\Omega$, whose elements are all the partitions of $\Omega$. There are
(at least) three different ways of thinking about partitions. In one
approach, used in \cite{rab:as,pjc:ctta,ps:cartesian},
a partition of
$\Omega$ is a set $P$ of pairwise disjoint subsets of $\Omega$, called \textit{parts}
or \textit{blocks}, whose union is $\Omega$.
For $\omega$ in $\Omega$, we write $P[\omega]$ for the unique part of $P$
which contains~$\omega$.
A second approach uses equivalence relations. The ``Equivalence Relation
Theorem'' \cite[Section 3.8]{pjc:ctta} asserts that, if $R$ is an equivalence
relation on a set~$\Omega$, then the equivalence classes of~$R$ form a partition
of~$\Omega$. Conversely, if $P$~is a partition of~$\Omega$ then there is a
unique equivalence relation~$R$ whose equivalence classes are the parts of~$P$.
We call $R$ the \textit{underlying equivalence relation} of~$P$. We write
$x\equiv_Py$ to mean that $x$ and $y$ lie in the same part of~$P$ (and so are
equivalent in the corresponding relation).
The third approach to partitions, as kernels of functions,
is explained near the end of this subsection.
The ordering on partitions is given by
\begin{quote}
$P\preccurlyeq Q$ if and only if every part of $P$ is contained in a part of $Q$.
\end{quote}
Note that $P\preccurlyeq Q$ if and only if $R_P\subseteq R_Q$, where $R_P$
and $R_Q$ are the equivalence relations corresponding to $P$ and $Q$, and
a relation is regarded as a set of ordered pairs.
For any two partitions $P$ and $Q$, the parts of $P\wedge Q$ are all
\emph{non-empty} intersections of a part of $P$ and a part of $Q$. The join
is a little harder to define. The two elements $\alpha$, $\beta$ in $\Omega$
lie in the same part of $P\vee Q$ if and only if there is a finite sequence
$(\omega_0,\omega_1,\ldots,\omega_m)$ of elements of $\Omega$,
with $\omega_0=\alpha$ and $\omega_m=\beta$, such that $\omega_i$ and
$\omega_{i+1}$ lie in the same part of $P$ if $i$ is even, and
in the same part of $Q$ if $i$ is odd. In other words, there is a walk of finite
length from $\alpha$ to~$\beta$ in which each step remains within a part of
either $P$ or~$Q$.
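Both operations are easy to compute. The following Python sketch (ours, for
illustration only) takes two partitions of a finite set, each given as a
list of blocks, and returns their meet and join exactly as described above.
\begin{verbatim}
from itertools import product

def meet(P, Q):
    # parts of the meet: non-empty intersections of a part of P
    # with a part of Q
    return [p & q for p, q in product(P, Q) if p & q]

def join(P, Q):
    # parts of the join: merge blocks of P and Q that intersect,
    # until no two remaining blocks meet
    blocks = [set(b) for b in P + Q]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return blocks

P = [{1, 2, 3}, {4, 5, 6}]
Q = [{1, 4}, {2, 5}, {3, 6}]
assert sorted(map(sorted, meet(P, Q))) == [[1], [2], [3], [4], [5], [6]]
assert join(P, Q) == [{1, 2, 3, 4, 5, 6}]   # the single part {1,...,6}
\end{verbatim}
In this small example the meet is the partition into singletons and the join
has a single part.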
In the partition lattice on $\Omega$, the unique least element is the partition
(denoted by $E$) with all parts of size~$1$,
and the unique greatest element (denoted by $U$) is
the partition with a single part $\Omega$.
In a sublattice
of this, we shall call an element \textit{minimal} if it is minimal subject
to being different from~$E$.
(Warning: in some of the literature that we cite, this partial order is
written as~$\succcurlyeq$. Correspondingly, the Hasse diagram is the other
way up and the meanings of $\wedge$ and $\vee$ are interchanged.)
For a partition~$P$, we denote by $|P|$ the number of parts of~$P$.
For example, $|P|=1$ if and only if $P=U$. In the infinite case, we interpret
$|P|$ as the cardinality of the set of parts of~$P$.
There is a connection between partitions and functions which will be important
to us. Let $F\colon\Omega\to\mathcal{T}$ be a function, where $\mathcal{T}$
is an auxiliary set. We will assume, without loss of generality,
that $F$ is onto. Associated with~$F$ is a partition of $\Omega$,
sometimes denoted by $\widetilde F$, whose
parts are the inverse images of the elements of $\mathcal{T}$; in other words,
two points of $\Omega$ lie in the same part of~$\widetilde F$ if and only if they
have the same image under~$F$. In areas of algebra such as semigroup theory
and universal algebra, the partition~$\widetilde F$ is referred to as the
\emph{kernel} of $F$.
This point of view is common in experimental design in statistics, where
$\Omega$~is the set of experimental units, $\mathcal{T}$~the set of treatments
being compared, and $F(\omega)$~is the treatment applied to the unit~$\omega$:
see~\cite{rab:design}.
For example, an element $\omega$ in $\Omega$ might be a plot in an agricultural
field, or a single run of an industrial machine, or one person for one month.
The outcomes to be measured are thought of as functions on $\Omega$,
but variables like $F$ which partition~$\Omega$ in ways that may
affect the outcome are called \textit{factors}. If $F$ is a factor, then the
values $F(\omega)$, for $\omega$ in $\Omega$, are called \textit{levels}
of~$F$. In this context,
usually no distinction is made between the function~$F$ and the
partition $\widetilde F$ of $\Omega$ which it defines.
If $F\colon\Omega\to\mathcal{T}$ and $G\colon\Omega\to\mathcal{S}$ are two
functions on $\Omega$, then the partition $\widetilde F\wedge\widetilde G$ is the
kernel of the function $F\times G\colon\Omega\to\mathcal{T}\times\mathcal{S}$,
where $(F\times G)(\omega)=(F(\omega),G(\omega))$. In other words,
$\widetilde{F\times G}=\widetilde{F}\wedge\widetilde{G}$.
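The identity $\widetilde{F\times G}=\widetilde F\wedge\widetilde G$ can be
checked mechanically; the following Python sketch (ours) does so for the
residue maps modulo $2$ and $3$ on a $12$-element set, where the meet
consists of the residue classes modulo $6$.
\begin{verbatim}
from itertools import product

Omega = range(12)
F = lambda x: x % 2        # kernel: the two residue classes mod 2
G = lambda x: x % 3        # kernel: the three residue classes mod 3

def kernel(f):
    # the partition of Omega into inverse images of values of f
    return {frozenset(x for x in Omega if f(x) == v)
            for v in {f(x) for x in Omega}}

FxG = lambda x: (F(x), G(x))
mt = {frozenset(p & q)
      for p, q in product(kernel(F), kernel(G)) if p & q}
assert kernel(FxG) == mt   # both are the classes mod 6
\end{verbatim}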
\begin{defn}
One type of partition which we make use of is the (right) \emph{coset
partition} of a group relative to a subgroup. Let $H$ be a subgroup of a
group~$G$, and let $P_H$ be the partition of $G$ into right cosets of $H$.
\end{defn}
We gather a few basic properties of coset partitions.
\begin{prop}
\label{prop:coset}
\begin{enumerate}
\item
If $H$ is a normal subgroup of $G$, then $P_H$ is the kernel (in the general
sense defined earlier) of the natural homomorphism from $G$ to $G/H$.
\item
$P_H\wedge P_K=P_{H\cap K}$.
\item
$P_H\vee P_K=P_{\langle H,K\rangle}$.
\item
The map $H\mapsto P_H$ is an isomorphism from the lattice of subgroups of~$G$
to a sublattice of the partition lattice on~$G$.
\end{enumerate}
\end{prop}
\begin{proof}
(a) and (b) are clear. For (c), two elements of $G$ lie in the same part of
$P_H\vee P_K$ precisely when one can be reached from the other by finitely
many steps, each within a coset of $H$ or of $K$; each such step is a left
multiplication by an element of $H$ or $K$, so the part containing $g$ is
$\langle H,K\rangle g$. Finally, (d) follows from (b) and
(c) and the fact that the map is injective.
\end{proof}
Subgroup lattices of groups have been extensively investigated: see, for
example, Suzuki~\cite{suzuki:book}.
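Part (b) of Proposition~\ref{prop:coset} is easily illustrated by machine;
the following Python sketch (ours) does so in the additive group
$\mathbb{Z}_{12}$.
\begin{verbatim}
n = 12
G = set(range(n))                # the additive group Z_12

def subgroup(d):
    # for d dividing 12, the subgroup dZ_12 = {0, d, 2d, ...}
    return {x for x in G if x % d == 0}

def coset_partition(H):
    # the partition of G into cosets of H
    return {frozenset((h + g) % n for h in H) for g in G}

H, K = subgroup(2), subgroup(3)
mt = {frozenset(p & q) for p in coset_partition(H)
      for q in coset_partition(K) if p & q}
assert mt == coset_partition(H & K)   # P_H meet P_K = P_{H cap K}
\end{verbatim}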
\subsection{Latin squares}
\label{sec:LS}
A \emph{Latin square} of order~$n$ is usually defined as an $n\times n$
array~$\Lambda$ with entries from an alphabet~$T$ of size~$n$
with the property that each letter in~$T$ occurs once in each row and once
in each column of~$\Lambda$.
The diagonal structures in this paper can be regarded as generalisations, where
the dimension is not restricted to be $2$, and the alphabet is allowed to be
infinite. To ease our way in, we re-formulate the definition as follows. For
this definition we regard $T$ as indexing the rows and columns as well as the
letters. This form of the definition allows the structures to be infinite.
A \emph{Latin square} consists of a pair of sets $\Omega$ and $T$, together
with three functions $F_1,F_2,F_3\colon\Omega\to T$, with the property that, if
$i$ and $j$ are any two of $\{1,2,3\}$, the map
$F_i\times F_j\colon\Omega\to T\times T$ is a bijection.
We recover the original definition by specifying that the $(i,j)$ entry
of~$\Lambda$ is equal to~$k$ if the unique point $\omega$ of $\Omega$ for which
$F_1(\omega)=i$ and $F_2(\omega)=j$ satisfies $F_3(\omega)=k$. Conversely,
given the original definition, if we index rows and columns with $T$, then
$\Omega$ is the set of cells of the array, and $F_1,F_2,F_3$ map a cell to its
row, column, and entry respectively.
In the second version of the definition,
the set~$T$ acts as an index set for rows, columns and
entries of the square. We will need the freedom to change the indices
independently; so we now rephrase the definition in terms of the
three partitions $P_i=\widetilde F_i$ ($i=1,2,3$).
Two partitions $P_1$ and $P_2$ of $\Omega$ form a \emph{grid} if,
for all $p_i\in P_i$ ($i=1,2$), there is a unique point of $\Omega$ lying in
both $p_1$ and $p_2$. In other words, there is a bijection $F$ from
$P_1\times P_2$ to $\Omega$ so that $F(p_1,p_2)$ is the unique point in
$p_1\cap p_2$. This implies that $P_1\wedge P_2=E$ and $P_1\vee P_2=U$, but
the converse is not true.
For example, if $\Omega = \{1,2,3,4,5,6\}$ the partitions
$P_1 =\{\{1,2\},\{3,4\},\{5,6\}\}$ and $P_2=\{\{1,3\}, \{2,5\}, \{4,6\}\}$
have these properties but do not form a grid.
Three partitions $P_1,P_2,P_3$ of $\Omega$ form a \emph{Latin square} if
any two of them form a grid.
This third version of the definition is the one that we shall mostly use
in this paper.
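In this form the definition is easy to test by machine. The following Python
sketch (ours, illustrative only) checks it for the Cayley table of
$\mathbb{Z}_n$, with $\Omega=\mathbb{Z}_n\times\mathbb{Z}_n$ and $F_1$,
$F_2$, $F_3$ the row, column and letter maps.
\begin{verbatim}
n = 4
Omega = [(a, b) for a in range(n) for b in range(n)]
F = {1: lambda a, b: a,                 # row
     2: lambda a, b: b,                 # column
     3: lambda a, b: (a + b) % n}       # letter

def is_bijective_pair(i, j):
    # F_i x F_j must be a bijection from Omega onto T x T
    return len({(F[i](a, b), F[j](a, b))
                for a, b in Omega}) == n * n

assert all(is_bijective_pair(i, j)
           for i in (1, 2, 3) for j in (1, 2, 3) if i != j)
\end{verbatim}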
\begin{prop}
\label{p:order}
If $\{P_1,P_2,P_3\}$ is a Latin square on $\Omega$, then $|P_1|=|P_2|=|P_3|$,
and this cardinality is also the cardinality of any part of any of the three
partitions.
\end{prop}
\begin{proof}
Let $F_{ij}$ be the bijection from $P_i\times P_j$ to $\Omega$, for
$i,j\in\{1,2,3\}$, $i\ne j$.
For any part~$p_1$ of~$P_1$,
there is a bijection $\phi$ between $P_2$ and~$p_1$:
simply put $\phi(p_2) = F_{12}(p_1,p_2) \in p_1$ for each part $p_2$ of~$P_2$.
Similarly there is a bijection~$\psi$ between $P_3$ and $p_1$
defined by $\psi(p_3) = F_{13}(p_1,p_3) \in p_1$ for each part $p_3$ of~$P_3$.
Thus $|P_2|=|P_3|=|p_1|$, and $\psi^{-1}\phi$ is an explicit bijection
from $P_2$ to $P_3$.
Similar bijections are defined by any part~$p_2$ of $P_2$ and any part~$p_3$
of~$P_3$.
The result follows.
\end{proof}
The three partitions are usually called \emph{rows}, \emph{columns} and
\emph{letters}, and denoted by $R,C,L$ respectively. This refers to the
first definition of the Latin square as a square array of letters. Thus,
the Hasse diagram of the three partitions is shown in Figure~\ref{f:ls}.
\begin{figure}[htbp]
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(30,30)
\multiput(15,5)(0,20){2}{\circle*{2}}
\multiput(5,15)(10,0){3}{\circle*{2}}
\multiput(5,15)(10,10){2}{\line(1,-1){10}}
\multiput(5,15)(10,-10){2}{\line(1,1){10}}
\put(15,5){\line(0,1){20}}
\put(10,3){$E$}
\put(10,13){$C$}
\put(10,23){$U$}
\put(0,13){$R$}
\put(27,13){$L$}
\end{picture}
\end{center}
\caption{\label{f:ls}A Latin square}
\end{figure}
The number defined in Proposition~\ref{p:order} is called the \emph{order} of
the Latin square. So, with our
second definition, the order of the Latin square is $|T|$.
\medskip
Note that the number of Latin squares of order $n$ grows faster than the
exponential of $n^2$, and the vast majority of these (for large $n$) are
not Cayley tables of groups. We digress slightly to discuss this.
The number of Latin squares of order $n$ is a rapidly growing function, so
rapid that allowing for paratopism (the natural notion of isomorphism for
Latin squares, regarded as sets of partitions; see before
Theorem~\ref{thm:albert} for the definition) does not affect the leading
asymptotics. There is
an elementary proof based on Hall's Marriage Theorem that the number is at least
\[n!(n-1)!\cdots1!\geqslant(n/c)^{n^2/2}\]
for a constant $c$. The van der Waerden permanent conjecture (proved by
Egory\v{c}ev and Falikman~\cite{e:vdwpc,f:vdwpc}) improves the
lower bound to $(n/c)^{n^2}$. An elementary argument using only Lagrange's
and Cayley's Theorems shows that the number of groups of order $n$ is much
smaller; the upper bound is $n^{n\log n}$. This has been improved to
$n^{(c\log n)^2}$ by Neumann~\cite{pmn:enum}. (His theorem was conditional on a
fact about finite simple groups, which follows from the classification of these
groups.) The elementary arguments referred to, which suffice for our claim,
can be found in \cite[Sections~6.3,~6.5]{pjc:ctta}.
Indeed, much more is true: almost all Latin squares have trivial
autoparatopism groups~\cite{pjc:asymm,mw}, whereas
the autoparatopism group of the Cayley table of a group of order~$n$
is the diagonal group, which has order at least
$6n^2$, as we shall see at the end of Section~\ref{sect:lsautgp}.
\medskip
There is a graph associated with a Latin square, as follows: see
\cite{bose:SRG,pjc:rsrg,phelps}. The
vertex set is $\Omega$; two vertices are adjacent if they lie in the same part
of one of the partitions $P_1,P_2,P_3$. (Note that, if points lie in the same
part of more than one of these partitions, then the points are equal.)
This is the \emph{Latin-square graph} associated with the Latin square.
In the finite case,
if $|T|=n$, then it is a regular graph with $n^2$~vertices, valency
$3(n-1)$, in which two adjacent vertices have $n$~common neighbours and two
non-adjacent vertices have $6$ common neighbours.
Any regular finite graph with the property that the number of common
neighbours of vertices $v$ and $w$ depends only on whether or not $v$ and $w$
are adjacent is called \textit{strongly regular}: see \cite{bose:SRG,pjc:rsrg}.
Its parameters are the number of vertices, the valency, and the numbers of
common neighbours of adjacent and non-adjacent vertices respectively. Indeed,
Latin-square graphs form one of the most prolific classes of strongly regular
graphs: the number of such graphs on a square number of vertices grows faster
than exponentially, in view of Proposition~\ref{p:lsgraphaut} below.
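The parameters quoted above are easily verified by machine in a small case.
The following Python sketch (ours) builds the Latin-square graph of the
Cayley table of $\mathbb{Z}_5$ and checks that it is strongly regular with
parameters $(25,12,5,6)$.
\begin{verbatim}
n = 5
V = [(a, b) for a in range(n) for b in range(n)]

def adj(u, v):
    # adjacent if in the same row, column or letter part
    if u == v:
        return False
    return (u[0] == v[0] or u[1] == v[1]
            or (u[0] + u[1]) % n == (v[0] + v[1]) % n)

common = lambda u, v: sum(adj(u, w) and adj(v, w) for w in V)

assert all(sum(adj(v, w) for w in V) == 3 * (n - 1) for v in V)
assert {common(u, v) for u in V for v in V if adj(u, v)} == {n}
assert {common(u, v) for u in V for v in V
        if u != v and not adj(u, v)} == {6}
\end{verbatim}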
A \emph{clique} is a set of vertices, any two adjacent;
a \textit{maximum clique} is a clique of the largest possible size. Thus a
maximum clique must be maximal (with respect to inclusion), but the converse
is not necessarily true. The following
result is well-known; we sketch a proof.
\begin{prop}
A Latin square of order $n>4$ can be recovered uniquely from its Latin-square
graph, up to the order of the three partitions and permutations of the rows,
columns and letters.
\label{p:lsgraphaut}
\end{prop}
\begin{proof}
If $n>4$, then any clique of size greater than~$4$ is contained in a unique
clique which is a part of one of the three partitions~$P_i$ for
$i=1,2,3$. In particular, the maximum cliques are the parts of the three
partitions.
Two maximum cliques are parts of the same partition if and only if they are
disjoint (since parts of different partitions intersect in a unique point).
So we can recover the three partitions $P_i$ ($i=1,2,3$) uniquely up to order.
\end{proof}
This proof shows why the condition $n>4$ is necessary. Any Latin-square graph
contains cliques of size $3$ consisting of three cells, two in the same row,
two in the same column, and two having the same entry; and there may also be
cliques of size $4$ consisting of the cells of an \emph{intercalate}, a
Latin subsquare of order~$2$.
We examine what happens for $n\leqslant 4$.
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item For $n=2$, the unique Latin square is the Cayley table of the group $C_2$;
its Latin-square graph is the complete graph $K_4$.
\item For $n=3$, the unique Latin square is the Cayley table of $C_3$. The
Latin-square graph is the complete tripartite graph $K_{3,3,3}$: the nine
vertices are partitioned into three parts of size~$3$, and the edges join all
pairs of points in different parts.
\item For $n=4$, there are two Latin squares up to isotopy, the Cayley tables
of the Klein group and the cyclic group. Their Latin-square graphs
are most easily identified by
looking at their complements, which are strongly regular graphs on $16$ points
with parameters $(16,6,2,2)$: that is, all vertices have valency~$6$, and any
two vertices have just two common neighbours. Shrikhande~\cite{shrikhande}
showed that there are exactly two such graphs: the $4\times4$ square lattice
graph, sometimes written as $L_2(4)$, which is the line graph~$L(K_{4,4})$
of the complete bipartite graph $K_{4,4}$; and one further graph now called
the \emph{Shrikhande graph}. See Brouwer~\cite{Brouwer} for a detailed
description of this graph.
\end{itemize}
Latin-square graphs were introduced in two seminal 1963 papers in the
\emph{Pacific Journal of Mathematics}, one by Bose~\cite{bose:SRG} and one
by Bruck~\cite{Bruck:net}.
A special case of Bruck's main result is that a strongly regular graph having
the parameters $(n^2, 3(n-1), n, 6)$ associated with a Latin-square graph of
order~$n$ must actually be a Latin-square graph, provided that $n>23$.
\subsection{Quasigroups}
\label{sesc:quasi}
A \emph{quasigroup} consists of a set $T$ with a binary operation $\circ$ in
which each of the equations $a\circ x=b$ and $y\circ a=b$ has a unique solution
$x$ or $y$ for any given $a,b\in T$. These solutions are denoted by
$a\backslash b$ and $b/a$ respectively.
According to the second of our three equivalent definitions,
a quasigroup $(T,\circ)$ gives rise to a Latin square
$(F_1,F_2,F_3)$ by the rules that $\Omega=T\times T$ and,
for $(a,b)$ in $\Omega$,
$F_1(a,b)=a$, $F_2(a,b)=b$, and $F_3(a,b)=a\circ b$.
Conversely, a Latin square with rows, columns and letters indexed by a
set $T$ induces a quasigroup structure
on $T$ by the rule that, if we use the pair $(F_1,F_2)$ to identify $\Omega$
with $T\times T$, then $F_3$ maps the pair $(a,b)$ to $a\circ b$. (More
formally, $F_1(\omega)\circ F_2(\omega)=F_3(\omega)$ for all $\omega\in\Omega$.)
In terms of partitions, if $a,b\in T$, and the unique point lying in the part
of $P_1$ labelled $a$ and the part of $P_2$ labelled $b$ also lies in the
part of $P_3$ labelled~$c$, then $a\circ b=c$.
In the usual representation of a Latin square as a square array, the Latin
square is the \emph{Cayley table} of the quasigroup.
Any permutation of $T$ induces a quasigroup isomorphism, by simply
re\-labelling the elements. However, the Latin square property is also
preserved if we choose three permutations
$\alpha_1$, $\alpha_2$, $\alpha_3$ of $T$ independently and define new functions
$G_1$, $G_2$, $G_3$ by $G_i(\omega)=(F_i(\omega))\alpha_i$ for $i=1,2,3$.
(Note that we write permutations on the right, but most other functions on
the left.)
Such a triple of maps is called an
\emph{isotopism} of the Latin square or quasigroup.
We can look at this another way. Each map $F_i$ defines a partition
$P_i$ of~$\Omega$, in which two points lie in the same part if their
images under $F_i$ are equal. Permuting elements of the three image sets
independently has no effect on the partitions. So an isotopism class of
quasigroups corresponds to a Latin square (using the partition definition)
with arbitrary labellings of rows, columns and letters by $T$.
A \emph{loop} is a quasigroup with a two-sided identity. Any quasigroup is
isotopic to a loop, as observed by Albert~\cite{albert}: indeed, any element
$e$ of the quasigroup can be chosen to be the identity. (Use the letters in
the row and column of a fixed cell containing $e$ as column, respectively row,
labels.)
A different equivalence on Latin squares is obtained by applying
a permutation to the three functions $F_1,F_2,F_3$. Two Latin squares (or
quasigroups) are said to be \emph{conjugate}~\cite{kd} or \emph{parastrophic}
\cite{shch:quasigroups} if they are related by such a
permutation. For example, the transposition of $F_1$ and $F_2$ corresponds
(under the original definition) to transposition (as matrix) of the Latin
square. Other conjugations are slightly harder to define: for example,
the $(F_1,F_3)$ conjugate is the square in which the $(i,j)$ entry is $k$ if
and only if the $(k,j)$ entry of the original square is $i$.
Combining the operations of isotopism and conjugation gives the relation of
\emph{paratopism}. The paratopisms form the group $\operatorname{Sym}(T)\wr S_3$. Given a
Latin square or quasigroup, its \emph{autoparatopism group} is the group of
all those paratopisms which preserve it, in the sense that they map the set
$\{(x,y,x\circ y):x,y\in T\}$ of triples to itself. This coincides with the
automorphism group of the Latin square (as set of partitions): take $\Omega$
to be the set of triples and let the three partitions correspond to the values
in the three positions. An autoparatopism is called an \emph{autotopism} if it
is an isotopism. See \cite{paratop} for details.
In the case of groups, a conjugation can be attained
by applying a suitable isotopism, and so the following result is a direct
consequence of Albert's well-known theorem~\cite[Theorem~2]{albert}.
\begin{theorem}\label{thm:albert}
If $\Lambda$ and $\Lambda'$ are Latin squares, isotopic to Cayley tables
of groups $G$ and $G'$ respectively, and if some paratopism maps $\Lambda$
to $\Lambda'$, then the groups $G$ and $G'$ are isomorphic.
\end{theorem}
Except for a small number of exceptional cases, the autoparatopism group of
a Latin square coincides with the automorphism group of its Latin-square graph.
\begin{prop}
\label{p:autlsg}
Let $\Lambda$ be a Latin square of order $n>4$. Then the automorphism group
of the Latin-square graph of $\Lambda$ is isomorphic to the autoparatopism
group of~$\Lambda$.
\end{prop}
\begin{proof}
It is clear that autoparatopisms of $\Lambda$ induce automorphisms of its
graph. The converse follows from Proposition~\ref{p:lsgraphaut}.
\end{proof}
A question which will be of great importance to us is the following: How do
we recognise Cayley tables of groups among Latin squares? The answer is given
by the following theorem, proved in \cite{brandt,frolov}. We first need
a definition, which is given in the statement of \cite[Theorem~1.2.1]{DK:book}.
\begin{defn}
\label{def:quad}
A Latin square satisfies the \textit{quadrangle criterion}, if, for all
choices of $i_1$, $i_2$, $j_1$, $j_2$, $i_1'$, $i_2'$, $j_1'$ and $j_2'$,
if the letter in $(i_1,j_1)$ is equal to the letter in $(i_1',j_1')$,
the letter in $(i_1,j_2)$ is equal to the letter in $(i_1',j_2')$,
and the letter in $(i_2,j_1)$ is equal to the letter in $(i_2',j_1')$,
then the letter in $(i_2,j_2)$ is equal to the letter in $(i_2',j_2')$.
\end{defn}
In other words, any pair of rows and any pair of columns determine four
entries of the Latin square; if two such quadruples of entries agree in
three of the four positions, then they agree in the fourth as well.
If $(T,\circ)$ is a quasigroup, it satisfies the quadrangle criterion if and
only if, for any $a_1,a_2,b_1,b_2,a_1',a_2',b_1',b_2'\in T$, if
$a_1\circ b_1=a_1'\circ b_1'$, $a_1\circ b_2=a_1'\circ b_2'$, and
$a_2\circ b_1=a_2'\circ b_1'$, then $a_2\circ b_2=a_2'\circ b_2'$.
\begin{theorem}
\label{thm:frolov}
Let $(T,\circ)$ be a quasigroup. Then $(T,\circ)$ is isotopic to a group if
and only if it satisfies the quadrangle criterion.
\end{theorem}
In \cite{DK:book}, the ``only if'' part of this result is proved in its
Theorem 1.2.1 and the converse is proved in the text following Theorem~1.2.1.
A Latin square which satisfies the quadrangle criterion is called a
\textit{Cayley matrix} in~\cite{DOP:quad}.
If $(T, \circ)$ is isotopic to a group then we may assume that the rows,
columns and letters have been labelled in such a way that $a \circ b= a^{-1}b$
for all $a$, $b$ in~$T$. We shall use this format in the proof of
Theorems~\ref{t:autDT2} and~\ref{thm:bingo}.
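The quadrangle criterion of Definition~\ref{def:quad} is directly
machine-checkable, if slowly (brute force costs $O(n^8)$). The following
Python sketch (ours) verifies it for the Cayley table of $\mathbb{Z}_5$, and
refutes it for an order-$5$ Latin square containing an intercalate: since
the only group of order $5$ is $\mathbb{Z}_5$, whose Cayley table has no
intercalates, that square is not isotopic to a group, in accordance with
Theorem~\ref{thm:frolov}.
\begin{verbatim}
from itertools import product

def quadrangle(L):
    # brute-force test of the quadrangle criterion
    n = len(L)
    for i1, i2, j1, j2, k1, k2, l1, l2 in product(range(n), repeat=8):
        if (L[i1][j1] == L[k1][l1] and L[i1][j2] == L[k1][l2]
                and L[i2][j1] == L[k2][l1]
                and L[i2][j2] != L[k2][l2]):
            return False
    return True

cayley = [[(i + j) % 5 for j in range(5)] for i in range(5)]
assert quadrangle(cayley)

Q5 = [[0, 1, 2, 3, 4],       # an intercalate in its top-left corner
      [1, 0, 3, 4, 2],
      [2, 3, 4, 0, 1],
      [3, 4, 1, 2, 0],
      [4, 2, 0, 1, 3]]
assert not quadrangle(Q5)
\end{verbatim}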
\subsection{Automorphism groups}\label{sect:lsautgp}
Given a Latin square $\Lambda=\{R,C,L\}$ on a set $\Omega$, an
\emph{automorphism} of $\Lambda$ is a permutation of $\Omega$ preserving
the set of three partitions; it is a \emph{strong automorphism} if it
fixes the three partitions individually. (These maps are also called
\emph{autoparatopisms} and \emph{autotopisms}, as noted in the preceding
section.)
We will generalise this definition later, in Definition~\ref{def:weak}.
We denote the groups of automorphisms and strong automorphisms by
$\operatorname{Aut}(\Lambda)$ and $\operatorname{SAut}(\Lambda)$ respectively.
In this section we verify that, if $\Lambda$ is the Cayley table of a group
$T$, then $\operatorname{Aut}(\Lambda)$ is the diagonal group $D(T,2)$ defined in
Section~\ref{sect:diaggroups}.
We begin with a principle which we will use several times.
\begin{prop}
Suppose that the group $G$ acts transitively on a set~$\Omega$.
Let $H$ be a subgroup of $G$, and assume that
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item $H$ is also transitive on $\Omega$;
\item $G_\alpha=H_\alpha$, for some $\alpha\in\Omega$.
\end{itemize}
Then $G=H$.
\label{p:subgp}
\end{prop}
\begin{proof}
The transitivity of $H$ on $\Omega$ means that we can choose a set $X$ of
coset representatives for $G_\alpha$ in $G$ such that $X\subseteq H$. Then
$H=\langle H_\alpha,X\rangle=\langle G_\alpha,X\rangle=G$.
\end{proof}
The next result applies to any Latin square. As noted earlier, given a
Latin square $\Lambda$, there is a loop $Q$ whose Cayley table is $\Lambda$.
\begin{prop}
Let $\Lambda$ be the Cayley table of a loop $Q$ with identity $e$. Then
the subgroup of $\operatorname{SAut}(\Lambda)$ fixing the cell in row~$e$ and
column~$e$ is equal to the automorphism group of $Q$.
\label{p:autlatin}
\end{prop}
\begin{proof}
A strong automorphism of $\Lambda$ is given by an isotopism $(\rho,\sigma,\tau)$
of $Q$, where $\rho$, $\sigma$, and $\tau$ are permutations of rows, columns
and letters, satisfying
\[(ab)\tau=(a\rho)(b\sigma)\]
for all $a,b\in Q$. If this isotopism fixes the element $(e,e)$ of $\Omega$,
then substituting
$a=e$ in the displayed equation shows that $b\tau=b\sigma$ for all $b\in Q$,
and so $\tau=\sigma$. Similarly, substituting $b=e$ shows that $\tau=\rho$.
Now the displayed equation shows that $\tau$ is an automorphism of $Q$.
Conversely, if $\tau$ is an automorphism of $Q$, then $(\tau,\tau,\tau)$ is
a strong automorphism of $\Lambda$ fixing the cell $(e,e)$.
\end{proof}
\begin{theorem}
Let $\Lambda$ be the Cayley table of a group $T$. Then $\operatorname{Aut}(\Lambda)$
is the diagonal group $D(T,2)$.
\label{t:autDT2}
\end{theorem}
\begin{proof}
First, we show that $D(T,2)$ is a subgroup of $\operatorname{Aut}(\Lambda)$.
We take $\Omega=T\times T$ and
represent $\Lambda=\{R,C,L\}$ as follows, using notation introduced in
Section~\ref{sec:part}:
\begin{itemize}
\renewcommand{\itemsep}{0pt}
\item $(x,y)\equiv_R(u,v)$ if and only if $x=u$;
\item $(x,y)\equiv_C(u,v)$ if and only if $y=v$;
\item $(x,y)\equiv_L(u,v)$ if and only if $x^{-1}y=u^{-1}v$.
\end{itemize}
(As an array, we take the $(x,y)$ entry to be $x^{-1}y$. As noted at the end
of Section~\ref{sesc:quasi}, this
is isotopic to the usual representation of the Cayley table.)
Routine verification shows that the generators of $D(T,2)$ given in
Section~\ref{sect:diaggroups} of types (I)--(III) preserve these
relations, while the map $(x,y)\mapsto(y,x)$ interchanges $R$ and $C$
while fixing $L$, and the map $(x,y)\mapsto(x^{-1},x^{-1}y)$ interchanges $C$
and $L$ while fixing $R$. (Here is one case: the element $(a,b,c)$ in $T^3$ maps
$(x,y)$ to $(a^{-1}xb,a^{-1}yc)$. If $x=u$ then $a^{-1}xb=a^{-1}ub$, and
if $x^{-1}y=u^{-1}v$ then $(a^{-1}xb)^{-1}a^{-1}yc=(a^{-1}ub)^{-1}a^{-1}vc$.)
Thus $D(T,2)\leqslant\operatorname{Aut}(\Lambda)$.
Now we apply Proposition~\ref{p:subgp} in two stages.
\begin{itemize}
\item First, take $G=\operatorname{Aut}(\Lambda)$ and $H=D(T,2)$. Then $G$ and $H$ both induce
$S_3$ on the set of three partitions; so it suffices to prove that the
group of strong automorphisms of $\Lambda$ is generated by elements of
types (I)--(III) in $D(T,2)$.
\item Second, take $G$ to be $\operatorname{SAut}(\Lambda)$,
and $H$ the group generated by translations and automorphisms of $T$
(the elements of type (I)--(III) in Remark~\ref{rem:diaggens}). Both $G$
and $H$ act transitively on $\Omega$, so it is enough to show that the
stabilisers of a cell (which we can take to be $(1,1)$) in $G$ and $H$ are
equal. Consideration of elements of types (I)--(III)
shows that $H_{(1,1)}=\operatorname{Aut}(T)$,
while Proposition~\ref{p:autlatin} shows that $G_{(1,1)}=\operatorname{Aut}(T)$.
\end{itemize}
The statement at the end of the second stage completes the proof.
\end{proof}
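The ``routine verification'' in the first paragraph of the proof can also be
delegated to a machine. The following Python sketch (ours) does this for
$T=\mathbb{Z}_n$ written additively, so that the letter of the cell $(x,y)$
is $y-x$; it checks that the two displayed maps permute the partitions $R$,
$C$, $L$ as claimed.
\begin{verbatim}
n = 5
cells = [(x, y) for x in range(n) for y in range(n)]
R = lambda c: c[0]                      # row label
C = lambda c: c[1]                      # column label
L = lambda c: (c[1] - c[0]) % n         # letter label y - x

def maps_partition(f, g, phi):
    # phi carries the partition defined by f to that defined by g
    return all((f(c1) == f(c2)) == (g(phi(c1)) == g(phi(c2)))
               for c1 in cells for c2 in cells)

swapRC = lambda c: (c[1], c[0])
swapCL = lambda c: ((-c[0]) % n, (c[1] - c[0]) % n)

assert maps_partition(R, C, swapRC) and maps_partition(C, R, swapRC)
assert maps_partition(L, L, swapRC)     # L is fixed
assert maps_partition(R, R, swapCL)     # R is fixed
assert maps_partition(C, L, swapCL) and maps_partition(L, C, swapCL)
\end{verbatim}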
It follows from Proposition~\ref{p:lsgraphaut}
that, if $n>4$, the automorphism group of
the Latin-square graph derived from the Cayley table of a group $T$ of order~$n$
is also the diagonal group $D(T,2)$. For $n\leqslant4$, we described the
Latin-square graphs at the end of Section~\ref{sec:LS}. For the groups $C_2$,
$C_3$, and $C_2\times C_2$, the graphs are $K_4$, $K_{3,3,3}$, and
$L(K_{4,4})$ respectively, with automorphism groups $S_4$,
$S_3\wr S_3$, and $S_4\wr S_2$ respectively. However, the automorphism group
of the Shrikhande graph is the group $D(C_4,2)$, with order $192$.
(The order of the automorphism group is $192$, see Brouwer~\cite{Brouwer},
and it contains $D(C_4,2)$, also with order $192$, as a subgroup.)
It also follows from Proposition~\ref{p:lsgraphaut} that,
if $T$ is a group, then the automorphism group of
the Latin-square graph is transitive on the vertex set. Vertex-transitivity
does not, however,
characterise the Latin-square graphs that correspond to groups, as can be
seen by considering the examples in~\cite{wanlesspage}; the smallest
vertex-transitive example not arising from a group has order~$6$.
Finally, we justify the assertion made earlier, that the Cayley table of a
group of order $n$, as a Latin square, has at least $6n^2$ automorphisms. By
Theorem~\ref{t:autDT2}, this automorphism group is the diagonal group
$D(T,2)$; this group has a quotient $S_3$ acting on the three partitions, and
the group of strong automorphisms contains the right multiplications by
elements of $T^2$.
\subsection{More on partitions}\label{sect:moreparts}
Most of the work that we cite in this subsection has been about partitions of
finite sets.
See \cite[Sections 2--4]{rab:BCC} for a recent summary of this material.
\begin{defn}
\label{def:uniform}
A partition~$P$ of a set~$\Omega$ is \emph{uniform} if all its
parts have the same size in the sense that, whenever $\Gamma_1$
and $\Gamma_2$ are parts of $P$, there is a bijection from $\Gamma_1$
onto $\Gamma_2$.
\end{defn}
Many other words are used for this property for finite sets $\Omega$.
Tjur \cite{tjur84,tjur91} calls
such a partition \emph{balanced}. Behrendt \cite{behr} calls them
\emph{homogeneous}, but this conflicts with the use of this word
in \cite{ps:cartesian}. Duquenne \cite{duq} calls them \textit{regular},
as does Aschbacher~\cite{asch_over}, while Preece \cite{DAP:Oz} calls them
\emph{proper}.
Statistical work has made much use of the notion of orthogonality between
pairs of partitions. Here we explain it in the finite case, before
attempting to find a generalisation that works for infinite sets.
When $\Omega$ is finite, let $V$ be the real vector space $\mathbb{R}^\Omega$
with the usual inner product. Subspaces $V_1$ and $V_2$ of $V$ are defined
in \cite{tjur84} to be \textit{geometrically orthogonal} to each other if
$V_1 \cap(V_1 \cap V_2)^\perp$ is orthogonal to $V_2 \cap(V_1\cap V_2)^\perp$.
This is equivalent to saying that the matrices $M_1$ and $M_2$ of orthogonal
projection onto $V_1$ and $V_2$ commute.
If $V_i$ is the set of vectors which are constant on each part of partition
$P_i$ then we say that partition $P_1$ is \textit{orthogonal} to partition $P_2$
if $V_1$ is geometrically orthogonal to $V_2$.
Here are two nice results in the finite case. See, for example,
\cite[Chapter 6]{rab:as}, \cite[Chapter 10]{rab:design} and \cite{tjur84}.
\begin{theorem}
For $i=1$, $2$, let $P_i$ be a partition of the finite set $\Omega$ with
projection matrix $M_i$. If $P_1$ is orthogonal to $P_2$ then the matrix
of orthogonal projection onto the subspace consisting of those
vectors which are constant on each part of the partition $P_1 \vee P_2$ is
$M_1M_2$.
\end{theorem}
\begin{theorem}
\label{thm:addon}
If $P_1$, $P_2$ and $P_3$ are pairwise orthogonal partitions of a finite
set $\Omega$ then $P_1\vee P_2$ is orthogonal to $P_3$.
\end{theorem}
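In small cases these two results can be verified numerically. The following
Python sketch (ours; it assumes the numpy library is available) checks the
first of them for the rows and columns of a $2\times2$ grid, where the join
is $U$ and the product of the two projectors is projection onto the constant
vectors.
\begin{verbatim}
import numpy as np

def projector(parts, n):
    # orthogonal projection onto the vectors constant on each part
    M = np.zeros((n, n))
    for p in parts:
        for i in p:
            for j in p:
                M[i, j] = 1.0 / len(p)
    return M

n = 4                                   # Omega = {0,1,2,3}, a 2x2 grid
rows, cols = [[0, 1], [2, 3]], [[0, 2], [1, 3]]
M1, M2 = projector(rows, n), projector(cols, n)

assert np.allclose(M1 @ M2, M2 @ M1)    # the partitions are orthogonal
# M1 M2 projects onto the constant vectors, the subspace for rows v cols = U:
assert np.allclose(M1 @ M2, np.full((n, n), 1.0 / n))
\end{verbatim}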
Let $\mathcal{S}$ be a set of partitions of $\Omega$ which are pairwise
orthogonal. A consequence of Theorem~\ref{thm:addon} is that, if $P_1$ and
$P_2$ are in $\mathcal{S}$, then $P_1 \vee P_2$ can be added to $\mathcal{S}$
without destroying orthogonality. This is one motivation for the
following definition.
\begin{defn}
\label{def:tjur}
A set of partitions of a finite set $\Omega$ is a \emph{Tjur block structure}
if every pair of its elements is orthogonal, it is closed under taking
suprema, and it contains $E$.
\end{defn}
Thus the set of partitions in a Tjur block structure forms a join-semi\-lattice.
The following definition is more restrictive, but is widely used by
statisticians, based on the work of many people, including
Nelder \cite{JAN:OBS},
Throckmorton \cite{Thr61} and Zyskind \cite{Zy62}.
\begin{defn}
A set of partitions of a finite set $\Omega$ is an \emph{orthogonal
block structure} if it is a Tjur block structure, all of its partitions
are uniform, it is closed under taking infima, and it contains $U$.
\end{defn}
The set of partitions in an orthogonal block structure forms a lattice.
These notions have been used by combinatorialists and group theorists as
well as statisticians. For example, as explained in Section~\ref{sec:LS},
a Latin square can be regarded as an orthogonal block structure with the
partition lattice shown in Figure~\ref{f:ls}.
The following theorem shows how subgroups of a group can give rise to a Tjur
block structure: see \cite[Section 8.6]{rab:as} and
Proposition~\ref{prop:coset}(c).
\begin{theorem}
Given two subgroups $H$, $K$ of a finite group $G$, the partitions
$P_H$ and $P_K$ into right
cosets of $H$ and $K$ are orthogonal if and only if $HK=KH$ (that is, if and
only if $HK$ is a subgroup of $G$). If this happens, then the join of these
two partitions is the partition $P_{HK}$ into right cosets of $HK$.
\end{theorem}
An orthogonal block structure is called a \textit{distributive block structure}
or a \textit{poset block structure} if each of $\wedge$ and $\vee$ is
distributive over the other.
The following definition is taken from \cite{rab:as}.
\begin{defn}
\label{def:weak}
An \textit{automorphism} of a set of
partitions is a permutation of the underlying set that preserves the set of
partitions. Such an automorphism is a \textit{strong automorphism} if it
preserves each of the partitions.
\end{defn}
The group of strong automorphisms of a poset block structure
is a \textit{generalised wreath product} of symmetric groups: see
\cite{GWP,tulliobook}. One of the aims of the present paper is to
describe the automorphism group of the set of partitions defined by a
diagonal semilattice.
In \cite{CSCPWT}, Cheng and Tsai state that the desirable properties of
a collection
of partitions of a finite set are that it is a Tjur block structure,
all the partitions are uniform, and it contains $U$. This sits between Tjur
block structures and orthogonal block structures but does not seem to have been
named.
Of course, this theory needs a notion of inner product. If the set is
infinite we
would have to consider the vector space whose vectors have all but finitely
many entries zero. But if $V_i$ is the set of vectors which are constant on
each part of partition $P_i$ and if each part of $P_i$ is infinite then $V_i$
is the zero subspace. So we need to find a different definition that will
cover the infinite case.
We noted in Section~\ref{sec:part} that each partition is defined by its
underlying equivalence relation. If $R_1$ and $R_2$ are two equivalence
relations on $\Omega$ then their composition $R_1 \circ R_2$ is the relation
defined by
\[
\omega _1 (R_1 \circ R_2) \omega_2\mbox{ if and only if }
\exists \omega_3\in\Omega\mbox{ such that } \omega_1 R_1 \omega_3\mbox{ and }\omega_3 R_2 \omega_2.
\]
\begin{prop}
\label{prop:commeq}
Let $P_1$ and $P_2$ be partitions of $\Omega$ with underlying equivalence
relations $R_1$ and $R_2$ respectively. For each part $\Gamma$ of $P_1$,
denote by $\mathcal{B}_\Gamma$ the set of parts of $P_2$ whose intersection
with $\Gamma$ is not empty.
The following are equivalent.
(Recall that $P[\omega]$ is the part of $P$ containing $\omega$.)
\begin{enumerate}
\item
The equivalence relations $R_1$ and $R_2$ commute with each other
in the sense that
$R_1 \circ R_2 = R_2 \circ R_1$.
\item The relation $R_1 \circ R_2$ is an equivalence relation.
\item For all $\omega_1$ and $\omega_2$ in $\Omega$, the set
$P_1[\omega_1] \cap P_2[\omega_2]$ is non-empty if and only if the set
$P_2[\omega_1]\cap P_1[\omega_2]$ is non-empty.
\item
Modulo the parts of $P_1 \wedge P_2$, the restrictions of $P_1$
and $P_2$ to any part of $P_1 \vee P_2$ form a grid.
In other words, if $\Gamma$ and $\Xi$ are parts of $P_1$ and $P_2$
respectively, both contained in the same part of $P_1\vee P_2$, then
$\Gamma \cap \Xi \ne \emptyset$.
\item For all parts $\Gamma$ and $\Delta$ of $P_1$, the sets
$\mathcal{B}_\Gamma$ and $\mathcal{B}_\Delta$ are either equal or disjoint.
\item If $\Gamma$ is a part of $P_1$ contained in a part $\Theta$
of $P_1\vee P_2$ then $\Theta$ is the union of the parts of $P_2$
in $\mathcal{B}_\Gamma$.
\end{enumerate}
\end{prop}
In part (d), ``modulo the parts of $P_1\wedge P_2$'' means that, if each of
these parts is contracted to a point, the result is a grid as defined earlier.
In the finite case, if $P_1$ is orthogonal to $P_2$ then their underlying
equivalence relations $R_1$ and $R_2$ commute.
We need a concept that is the same as orthogonality in the
finite case (at least, in the Cheng--Tsai case).
\begin{defn}
\label{def:compatible}
Two uniform partitions $P$ and $Q$ of a set $\Omega$ (which may be finite or
infinite) are \emph{compatible} if
\begin{enumerate}
\item their underlying equivalence relations commute, and
\item their infimum $P\wedge Q$ is uniform.
\end{enumerate}
\end{defn}
If the partitions $P$, $Q$ and $R$ of a set $\Omega$ are pairwise
compatible then the equivalence of statements (a) and (f) of
Proposition~\ref{prop:commeq}
shows that
$P\vee Q$ and $R$ satisfy condition~(a) in
the definition of compatibility. Unfortunately, they may not satisfy
condition~(b), as the following example shows,
so the analogue of Theorem~\ref{thm:addon} for compatibility is not true in
general. However, it is true if we restrict attention to join-semilattices
of partitions where all infima are uniform. This is the case for
Cartesian lattices and for semilattices defined
by diagonal structures (whose definitions follow in
Sections~\ref{sec:firstcd} and \ref{sec:diag1} respectively).
It is also true for group semilattices: if $P_H$ and $P_K$ are the
partitions of a group $G$ into right cosets of subgroups $H$ and $K$
respectively, then $P_H\wedge P_K = P_{H \cap K}$,
as remarked in Proposition~\ref{prop:coset}.
\begin{eg}
\label{eg:badeg}
Let $\Omega$ consist of the $12$ cells in the three $2 \times 2$ squares
shown in Figure~\ref{fig:badeg}. Let $P$ be the partition of $\Omega$
into six rows, $Q$ the partition into six columns, and $R$ the partition
into six letters.
\begin{figure}
\[
\begin{array}{c@{\qquad}c@{\qquad}c}
\begin{array}{|c|c|}
\hline
A & B\\
\hline
B & A\\
\hline
\end{array}
&
\begin{array}{|c|c|}
\hline
C & D\\
\hline
E & F\\
\hline
\end{array}
&
\begin{array}{|c|c|}
\hline
C & D\\
\hline
E & F\\
\hline
\end{array}
\end{array}
\]
\caption{Partitions in Example~\ref{eg:badeg}}
\label{fig:badeg}
\end{figure}
Then $P\wedge Q = P\wedge R = Q \wedge R=E$, so each infimum is uniform.
The squares are the parts of the supremum $P\vee Q$.
For each pair of $P$, $Q$ and~$R$, their
underlying equivalence relations commute. However, the parts
of $(P\vee Q)\wedge R$ in the first square have size two, while all of the
others have size one.
\end{eg}
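Example~\ref{eg:badeg} can be checked mechanically. In the Python sketch
below (ours), the twelve cells are numbered $0$--$11$, row by row within
each square, and the failure of uniformity for $(P\vee Q)\wedge R$ appears
as parts of two different sizes.
\begin{verbatim}
from itertools import product

P = [{0,1}, {2,3}, {4,5}, {6,7}, {8,9}, {10,11}]    # rows
Q = [{0,2}, {1,3}, {4,6}, {5,7}, {8,10}, {9,11}]    # columns
R = [{0,3}, {1,2}, {4,8}, {5,9}, {6,10}, {7,11}]    # letters A-F

def meet(X, Y):
    return [x & y for x, y in product(X, Y) if x & y]

# each pairwise infimum is E (all parts are singletons):
assert all(len(p) == 1
           for p in meet(P, Q) + meet(P, R) + meet(Q, R))

PvQ = [{0,1,2,3}, {4,5,6,7}, {8,9,10,11}]           # the three squares
# (P v Q) ^ R has parts of sizes 1 and 2, so it is not uniform:
assert {len(p) for p in meet(PvQ, R)} == {1, 2}
\end{verbatim}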
\section{Cartesian structures}
\label{sec:Cart}
We remarked just before Proposition~\ref{p:order} that three partitions of
$\Omega$ form a Latin square if and only if any two form a grid. The main
theorem of this paper is a generalisation of this fact to higher-dimensional
objects, which can be regarded as Latin hypercubes. Before
we get there, we need to consider the higher-dimensional analogue of grids.
\subsection{Cartesian decompositions and Cartesian lattices}
\label{sec:firstcd}
Cartesian decompositions are defined on \cite[p.~4]{ps:cartesian}. Since we
shall be taking a slightly different approach, we introduce these objects
rather briefly; we show that they are equivalent to those in our approach,
in the sense that each can be constructed from the other in a standard way,
and the automorphism groups of corresponding objects are the same.
\begin{defn}
\label{def:cart}
A \emph{Cartesian decomposition} of a set~$\Omega$, of dimension~$n$, is a
set~$\mathcal{E}$ of $n$ partitions $P_1,\ldots,P_n$ of $\Omega$ such that
$|P_i|\geqslant2$ for all $i$, and for all $p_i\in P_i$ for $i=1,\ldots,n$,
\[|p_1\cap\cdots\cap p_n|=1.\]
A Cartesian decomposition is \emph{trivial} if $n=1$; in this case $P_1$ is
the partition of $\Omega$ into singletons.
\end{defn}
For the rest of this subsection, $P_1,\ldots,P_n$ form a Cartesian decomposition
of $\Omega$.
\begin{prop}\label{prop:CDbij}
There is a well-defined bijection between $\Omega$ and
$P_1\times\cdots\times P_n$, given by
\[\omega\mapsto(p_1,\ldots,p_n),\]
where, for $i=1,\ldots,n$, $p_i$ is the unique part of $P_i$ containing $\omega$.
\end{prop}
For simplicity, we adapt the notation in Section~\ref{sec:part} by
writing $\equiv_i$ for the equivalence relation $\equiv_{P_i}$ underlying
the partition~$P_i$.
For any subset $J$ of the index set $\{1,\ldots,n\}$, define a partition
$P_J$ of $\Omega$ corresponding to the following equivalence relation
$\equiv_{P_J}$ written as $\equiv_J$:
\[\omega_1\equiv_J\omega_2 \Leftrightarrow (\forall i\in J)\
\omega_1\equiv_i\omega_2.\]
In other words, $P_J=\bigwedge_{i\in J}P_i$.
\begin{prop}
\label{p:antiiso}
For all $J,K\subseteq \{1,\ldots,n\}$, we have
\[P_{J\cup K}=P_J\wedge P_K,\quad\hbox{and}\quad P_{J\cap K}=P_J\vee P_K.\]
Moreover, the equivalence relations $\equiv_J$ and $\equiv_K$ commute with
each other.
\end{prop}
It follows from this proposition that the partitions $P_J$, for
$J\subseteq\{1,\ldots,n\}$, form a lattice (a sublattice of the partition
lattice on $\Omega$), which is anti-isomorphic to the Boolean lattice of
subsets of $\{1,\ldots,n\}$ by the map $J\mapsto P_J$. We call this lattice
the \emph{Cartesian lattice} defined by the Cartesian decomposition.
For more details we refer to the book~\cite{ps:cartesian}.
Following \cite{JAN:OBS},
most statisticians would call such a lattice a \textit{completely crossed
orthogonal block structure}: see \cite{rab:DCC}.
It is called a \textit{complete factorial structure} in \cite{RAB:LAA}.
(Warning: a different common meaning of \textit{Cartesian lattice} is
$\mathbb{Z}^n$: for example, see \cite{Rand:CL}.)
The $P_i$ are the maximal non-trivial elements of this lattice. Our approach is
based on considering the dual description, the minimal non-trivial elements of
the lattice; these are the partitions $Q_1,\ldots,Q_n$, where
\[Q_i=P_{\{1,\ldots,n\}\setminus\{i\}}=\bigwedge_{j\ne i}P_j\]
and $Q_1,\ldots,Q_n$ generate the Cartesian lattice by repeatedly forming
joins (see Proposition~\ref{p:antiiso}).
\subsection{Hamming graphs and Cartesian decompositions}
\label{sec:HGCD}
The Hamming graph is so-called because of its use in coding theory. The
vertex set is the set of all $n$-tuples over an alphabet $A$; more briefly,
the vertex set is $A^n$. Elements of $A^n$ will be written as
${a}=(a_1,\ldots,a_n)$. Two vertices $a$ and $b$ are joined if
they agree in all but one coordinate, that is, if
there exists~$i$ such that $a_i\ne b_i$ but $a_j=b_j$ for $j\ne i$.
We denote this graph by $\operatorname{Ham}(n,A)$.
The alphabet $A$ may be finite or infinite, but we restrict the number~$n$
to be finite. There is a more general form, involving alphabets
$A_1,\ldots,A_n$; here the $n$-tuples $a$ are required to satisfy $a_i\in A_i$
for $i=1,\ldots,n$ (that is, the vertex set is $A_1\times\cdots\times A_n$);
the adjacency rule is the same. We will call this a \emph{mixed-alphabet
Hamming graph}, denoted $\operatorname{Ham}(A_1,\ldots,A_n)$.
A Hamming graph is connected, and the graph distance between two vertices
$a$ and $b$ is the number of coordinates where they differ:
\[d({a},{b})=|\{i\mid a_i\ne b_i\}|.\]
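As a quick illustration (a sketch only, assuming the \texttt{networkx}
library, which is not used elsewhere in this paper), one can build
$\operatorname{Ham}(3,A)$ for $A=\{0,1\}$ and confirm that graph distance
agrees with the displayed formula.
\begin{verbatim}
from itertools import product
import networkx as nx          # assumption: networkx is available

A, n = range(2), 3
verts = list(product(A, repeat=n))
G = nx.Graph()
G.add_nodes_from(verts)
for u in verts:
    for v in verts:
        # adjacency: the tuples differ in exactly one coordinate
        if sum(x != y for x, y in zip(u, v)) == 1:
            G.add_edge(u, v)

for u in verts:
    for v in verts:
        if u != v:
            hamming = sum(x != y for x, y in zip(u, v))
            assert nx.shortest_path_length(G, source=u, target=v) == hamming
\end{verbatim}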
\begin{theorem}\label{th:cdham}
\begin{enumerate}
\item Given a Cartesian decomposition of~$\Omega$, a unique mixed-alpha\-bet
Hamming graph can be constructed from it.
\item Given a mixed-alphabet Hamming graph on $\Omega$, a unique Cartesian
decomposition of~$\Omega$ can be constructed from it.
\item The Cartesian decomposition and the Hamming graph referred to above
have the same automorphism group.
\end{enumerate}
\end{theorem}
The constructions from Cartesian decomposition to Hamming graph and back are
specified in the proof below.
\begin{proof}
Note that the trivial Cartesian decomposition of $\Omega$ corresponds to the complete
graph and the automorphism group of both is the symmetric group $\operatorname{Sym}(\Omega)$.
Thus in the rest of the proof we assume that the Cartesian decomposition
in item~(a) is non-trivial and the Hamming graph in item~(b) is constructed with
$n\geqslant 2$.
\begin{enumerate}
\item
Let $\mathcal{E}=\{P_1,\ldots,P_n\}$ be a Cartesian decomposition
of $\Omega$ of dimension~$n$:
each $P_i$ is a partition of $\Omega$. By Proposition~\ref{prop:CDbij},
there is a bijection $\phi$ from $\Omega$ to
$P_1\times\cdots\times P_n$: a point $a$ in $\Omega$ corresponds to
$(p_1,\ldots,p_n)$, where $p_i$ is the part of $P_i$ containing~$a$.
Also, by Proposition~\ref{p:antiiso} and the subsequent discussion,
the minimal partitions in the
Cartesian lattice generated by $P_1,\ldots,P_n$ have the form
\[Q_i=\bigwedge_{j\ne i}P_j\]
for $i=1,\ldots,n$; so distinct points $a$ and $b$ in $\Omega$ lie in the same part
of $Q_i$ if and only if their
images under $\phi$ agree in all coordinates except the $i$th. Hence, if we define
distinct $a$ and $b$ to be adjacent if they are in the same part of $Q_i$ for some
$i$, the resultant graph is isomorphic (by $\phi$) to the mixed-alphabet
Hamming graph on $P_1\times\cdots\times P_n$.
\item
Let $\Gamma$ be a mixed-alphabet Hamming graph on
$A_1\times\cdots\times A_n$. Without loss of generality, $|A_i|>1$ for all $i$
(we can discard any coordinate where this fails). We establish various facts
about $\Gamma$; these facts correspond to the claims on pages 271--276 of~\cite{ps:cartesian}.
Any maximal clique in $\Gamma$ has the form
\[C({a},i)=\{{b}\in A_1\times\cdots\times A_n\mid b_j=a_j\hbox{ for }j\ne i\},\]
for some ${a}\in\Omega$, $i\in\{1,\ldots,n\}$. Clearly all vertices in
$C({a},i)$ are adjacent in~$\Gamma$. If ${b},{c}$ are distinct vertices in
$C({a},i)$, then
$b_i\ne c_i$, so no vertex outside $C({a},i)$ can be joined to both.
Moreover, if any two vertices are joined, they differ in a unique coordinate
$i$, and so there is some $a$ in $\Omega$ such that
they both lie in $C({a},i)$ for that value of~$i$.
Let $C=C({a},i)$ and $C'=C({b},j)$ be two maximal cliques.
Put $\delta = \min\{d({ x},{ y})\mid { x}\in C,{ y}\in C'\}$.
\begin{itemize}
\item
If $i=j$, then there is a bijection $\theta\colon C\to C'$ such
that $d({ v},\theta({ v}))=\delta$ and
$d({v},{ w})=\delta +1$ for ${ v}$ in $C$, ${ w}$ in $C'$ and
${ w}\ne\theta({v})$.
(Here $\theta$ maps a vertex in $C$ to the unique vertex
in $C'$ with the same $i$th coordinate.)
\item If $i\ne j$, then there are unique ${ v}$ in $C$ and ${ w}$ in $C'$ with
$d({ v},{ w})= \delta$;
and distances between vertices in
$C$ and $C'$ are $\delta$, $\delta+1$ and $\delta+2$,
with all values realised. (Here ${ v}$ and ${ w}$ are
the vertices which agree in both the $i$th and $j$th coordinates; if two
vertices agree in just one of these, their distance is $\delta+1$, otherwise it
is $\delta+2$.)
\end{itemize}
See also claims 3--4 on pages 273--274 of~\cite{ps:cartesian}.
It is a consequence of the above that the partition of the maximal cliques into \emph{types}, where
$C({a},i)$ has type $i$, is invariant under graph automorphisms; each type forms a
partition $Q_i$ of $\Omega$.
By Proposition~\ref{p:antiiso} and the discussion following it, the maximal non-trivial partitions in the
sublattice generated by $Q_1,\ldots,Q_n$ form a Cartesian decomposition
of~$\Omega$.
\item This is clear, since no arbitrary choices were made in either construction.
\end{enumerate}
\end{proof}
We can describe this automorphism group precisely. Details will be given
in the case where all alphabets are the same; we deal briefly with the
mixed-alphabet case at the end.
Given a set $\Omega=A^n$, the wreath product $\operatorname{Sym}(A)\wr S_n$ acts on
$\Omega$: the $i$th factor of the base group $\operatorname{Sym}(A)^n$ acts on the entries
in the $i$th coordinate of points of $\Omega$, while $S_n$ permutes the
coordinates. (Here $S_n$ denotes $\operatorname{Sym}(\{1,\ldots,n\})$.)
\begin{cor}
The automorphism group of the Hamming graph $\operatorname{Ham}(n,A)$ is the wreath product
$\operatorname{Sym}(A)\wr S_n$ just described.
\end{cor}
\begin{proof}
By Theorem~\ref{th:cdham}(c), the automorphism group of $\operatorname{Ham}(n,A)$ coincides
with the stabiliser in $\operatorname{Sym}(A^n)$ of the natural Cartesian decomposition $\mathcal{E}$
of the set $A^n$. By~\cite[Lemma~5.1]{ps:cartesian},
the stabiliser of $\mathcal{E}$ in $\operatorname{Sym}(A^n)$ is $\operatorname{Sym}(A)\wr S_n$.
\end{proof}
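
For very small cases the corollary can be confirmed by brute force. The
deliberately naive sketch below (ours) checks that $\operatorname{Ham}(2,A)$
with $|A|=3$ has exactly $(3!)^2\cdot 2!=72$ automorphisms. Since the graph
is finite, a vertex permutation mapping edges to edges necessarily maps the
edge set onto itself, so the test below suffices.
\begin{verbatim}
from itertools import product, permutations

A, n = range(3), 2
verts = list(product(A, repeat=n))
edges = {frozenset((u, v)) for u in verts for v in verts
         if sum(x != y for x, y in zip(u, v)) == 1}

count = 0
for image in permutations(verts):   # all 9! permutations: slow but feasible
    f = dict(zip(verts, image))
    if all(frozenset(map(f.get, e)) in edges for e in edges):
        count += 1

assert count == 72                  # |Sym(A) wr S_2| = (3!)^2 * 2!
\end{verbatim}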
In the mixed-alphabet case, only one change needs to be made. Permutations
of the coordinates must preserve the cardinality of the alphabets associated
with the coordinate: that is, $g\in S_n$ induces an automorphism of the
Hamming graph if and only if $ig=j$ implies $|A_i|=|A_j|$ for all $i,j$.
(This condition is clearly necessary. For sufficiency, if $|A_i|=|A_j|$,
then we may actually identify $A_i$ and $A_j$.)
So if $\{1,\ldots,n\}=I_1\cup\cdots\cup I_r$, where $I_k$ is the non-empty set
of those indices for which the corresponding alphabet has some given cardinality,
then the group $\operatorname{Aut}(\operatorname{Ham}(A_1,\ldots,A_n))$ is the direct product of $r$ groups, each
a wreath product $\operatorname{Sym}(A_{i_k})\wr\operatorname{Sym}(I_k)$, acting in its product action,
where $i_k$ is a member of $I_k$.
Part~(c) of Theorem~\ref{th:cdham} was also proved in~\cite[Theorem~12.3]{ps:cartesian}.
Our proof is a simplified version of the proof presented in~\cite{ps:cartesian}
and is included here as a nice application of the lattice theoretical framework
developed in Section~\ref{sec:prelim}. The automorphism group of the mixed-alphabet Hamming graph can also be determined
using the characterisation of the automorphism groups of Cartesian products of graphs.
The first such characterisations were given by Sabidussi~\cite{Sabidussi} and
Vizing~\cite{Vizing}; see also~\cite[Theorem~6.6]{grhandbook}.
The recent preprint~\cite{MZ} gives a self-contained elementary proof in the case of
finite Hamming graphs.
\section{Latin cubes}
\label{sec:LC}
\subsection{What is a Latin cube?}
\label{sec:whatis}
As pointed out in \cite{dap75oz,dap83enc,dap89jas,DAcube},
there have been many different definitions of
a Latin cube (that is, a three-dimensional generalisation of a Latin square)
and of a Latin hypercube (a higher-dimensional generalisation).
Typically, the underlying set $\Omega$ is a Cartesian product
$\Omega_1 \times \Omega_2 \times\cdots \times \Omega_m$
where $\left|\Omega_1\right| = \left|\Omega_2\right| = \cdots =
\left|\Omega_m\right|$. As for Latin squares in Section~\ref{sec:LS}, we often
seek to relabel the elements of $\Omega_1$, \ldots, $\Omega_m$ so that
$\Omega = T^m$ for some set~$T$. The possible
conditions are concisely summarised in \cite{CRC}. The alphabet is
a set of letters of cardinality $\left|T\right|^a$ with
$1\leqslant a\leqslant m-1$, and the \emph{type} is $b$ with
$1\leqslant b\leqslant m-a$. The definition is that if the values of any $b$
coordinates are fixed then all letters in the given alphabet occur
equally often on the subset of $\Omega$ so defined (which can be regarded
as an $(m-b)$-dimensional array, so that the $|T|^b$ arrays of this form
partition $T^m$; these are parallel lines or planes in a cubical array
according as $b=2$ or $b=1$).
One extreme case has $a=1$ and $b=m-1$.
This definition is certainly in current use
when $m \in \{3,4\}$: for example, see \cite{MWcube,MulWeb}.
The hypercubes in \cite{LMW}
have $a=1$ but allow smaller values of $b$.
The other extreme has $a=m-1$ and $b=1$,
which is what we have here.
(Unfortunately, the meaning of the phrase ``Latin hypercube design'' in
Statistics has completely changed in the last thirty years. For example,
see \cite{tang2009,tang93}.)
Fortunately, it suffices for us to consider Latin cubes, where $m=3$.
Let $P_1$, $P_2$ and $P_3$ be the partitions which give the standard Cartesian
decomposition of the cube $\Omega_1 \times \Omega_2 \times \Omega_3$.
Following~\cite{DAcube}, we call the parts of
$P_1$, $P_2$ and $P_3$ \textit{layers}, and the parts of $P_1\wedge P_2$,
$P_1\wedge P_3$ and $P_2\wedge P_3$ \textit{lines}. Thus a layer is a slice
of the cube parallel to one of the faces.
Two lines $\ell_1$ and
$\ell_2$ are said to be \textit{parallel} if there is some
$\{i,j\}\subset \{1,2,3\}$ with $i\ne j$ such that $\ell_1$ and $\ell_2$
are both parts of $P_i \wedge P_j$.
The definitions in \cite{CRC,DAcube} give us the following three possibilities
for the case that $|\Omega_i|=n$ for $i$ in $\{1,2,3\}$.
\begin{itemize}
\item[(LC0)]
There are $n$ letters, each of which occurs once per line.
\item[(LC1)]
There are $n$ letters, each of which occurs $n$ times per layer.
\item[(LC2)]
There are $n^2$ letters, each of which occurs once per layer.
\end{itemize}
Because of the meaning of \textit{type} given in the first
paragraph of this section, we shall call
these possibilities \textit{sorts} of Latin cube.
Thus Latin cubes of sort (LC0) are a special case of Latin cubes of
sort (LC1), but Latin cubes of sort (LC2) are quite different.
Sort (LC0) is the definition of Latin cube used in
\cite{rab:as,ball,dscube,gupta,MWcube,MulWeb}, among many others in
Combinatorics and Statistics.
Fisher used sort (LC1) in \cite{RAF42}, where he gave constructions using
abelian groups. Kishen called this a Latin cube
\textit{of first order}, and those of sort (LC2) Latin cubes \textit{of
second order}, in \cite{kish42,kish50}.
Two of these sorts have alternative descriptions using the language of this
paper. Let $L$ be the partition into letters. Then a Latin cube has sort
(LC0) if and only if $\{L,P_i,P_j\}$ is a Cartesian decomposition of the cube
whenever $i\ne j$ and $\{i,j\} \subset \{1,2,3\}$.
A Latin cube has sort (LC2) if and only if $\{L,P_i\}$
is a Cartesian decomposition of the cube for $i=1$, $2$, $3$.
The following definition is taken from \cite{DAcube}.
\begin{defn}
\label{def:reg}
A Latin cube of sort (LC2) is \textit{regular} if, whenever $\ell_1$ and
$\ell_2$ are parallel lines in the cube, the set of letters occurring in
$\ell_1$ is either exactly the same as the set of letters occurring
in $\ell_2$ or disjoint from it.
\end{defn}
(Warning: the word \textit{regular} is used by some authors with quite
a different meaning for some Latin cubes of sorts (LC0) and (LC1).)
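
The examples in the next subsection can be checked mechanically. In the
following sketch the helper names \texttt{is\_lc2} and \texttt{is\_regular}
are ours, and \texttt{cube[i][j][k]} is assumed to hold the letter in cell
$(i,j,k)$ of an $n\times n\times n$ array; the functions test the defining
property of sort (LC2) and the regularity property of
Definition~\ref{def:reg} respectively.
\begin{verbatim}
from itertools import product

def is_lc2(cube):
    # sort (LC2): the n^2 letters each occur exactly once in every layer
    n = len(cube)
    for v in range(n):
        for layer in (
            [cube[v][j][k] for j, k in product(range(n), repeat=2)],
            [cube[i][v][k] for i, k in product(range(n), repeat=2)],
            [cube[i][j][v] for i, j in product(range(n), repeat=2)],
        ):
            if len(set(layer)) != n * n:
                return False
    return True

def is_regular(cube):
    # parallel lines must carry equal or disjoint sets of letters
    n = len(cube)
    for lines in (
        [frozenset(cube[i][j][k] for k in range(n))     # P1 ^ P2 lines
         for i, j in product(range(n), repeat=2)],
        [frozenset(cube[i][j][k] for j in range(n))     # P1 ^ P3 lines
         for i, k in product(range(n), repeat=2)],
        [frozenset(cube[i][j][k] for i in range(n))     # P2 ^ P3 lines
         for j, k in product(range(n), repeat=2)],
    ):
        if any(s != t and s & t for s in lines for t in lines):
            return False
    return True
\end{verbatim}
Applied to Example~\ref{eg:sax} below, these tests confirm a cube of
sort (LC2) that is not regular.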
\subsection{Some examples of Latin cubes of sort (LC2)}
In these examples, the cube is coordinatised by functions $f_1$, $f_2$ and
$f_3$ from $\Omega$ to $\Omega_1$, $\Omega_2$ and $\Omega_3$
whose kernels are the partitions $P_1$, $P_2$ and $P_3$.
For example, in Figure~\ref{fig:2}, one part of $P_1$ is $f_1^{-1}(2)$.
A statistician would typically write this as ``$f_1=2$''.
For ease of reading, we adopt the statisticians' notation.
\begin{eg}
\label{eg:2}
When $n=2$, the definition of Latin cube of sort (LC2)
forces the two occurrences of each of the four letters to be in
diagonally opposite cells
of the cube. Thus, up to permutation of the letters, the only possibility
is that shown in Figure~\ref{fig:2}.
\begin{figure}
\[
\begin{array}{c@{\qquad}c}
\begin{array}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=1} &
\multicolumn{1}{c}{f_2=2}\\
\cline{2-3}
f_1=1 & A & B\\
\cline{2-3}
f_1=2 & C & D\\
\cline{2-3}
\end{array}
&
\begin{array}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=1} & \multicolumn{1}{c}{f_2=2}\\
\cline{2-3}
f_1=1 & D & C\\
\cline{2-3}
f_1=2 & B & A\\
\cline{2-3}
\end{array}
\\[10\jot]
\quad f_3=1 &\quad f_3=2
\end{array}
\]
\caption{The unique (up to isomorphism)
Latin cube of sort (LC2) and order~$2$}
\label{fig:2}
\end{figure}
This Latin cube of sort (LC2) is regular.
The set of letters on each line of $P_1\wedge P_2$ is either $\{A,D\}$ or
$\{B,C\}$; the set of letters on each line of $P_1\wedge P_3$ is either
$\{A,B\}$ or $\{C,D\}$; and the set of letters on each line of $P_2\wedge P_3$
is either $\{A,C\}$ or $\{B,D\}$.
\end{eg}
\begin{eg}
\label{eg:nice}
Here $\Omega=T^3$, where $T$~is the additive group of $\mathbb{Z}_3$.
For $i=1$, $2$ and~$3$, the function $f_i$ picks out the $i$th coordinate
of $(t_1,t_2,t_3)$. The column headed~$L$ in Table~\ref{tab:cube2}
shows how the nine letters are allocated to the cells of the cube.
The $P_3$-layer of the cube with $f_3=0$ is as follows.
\[
\begin{array}{c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=0} & \multicolumn{1}{c}{f_2=1}
& \multicolumn{1}{c}{f_2=2}\\
\cline{2-4}
f_1=0 & A & D & G\\
\cline{2-4}
f_1=1 & I & C & F\\
\cline{2-4}
f_1=2 & E & H & B\\
\cline{2-4}
\end{array}
\ .
\]
It has each letter just once.
Similarly, the $P_3$-layer of the cube with $f_3=1$ is
\[
\begin{array}{c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=0} & \multicolumn{1}{c}{f_2=1}
& \multicolumn{1}{c}{f_2=2}\\
\cline{2-4}
f_1=0 & B & E & H\\
\cline{2-4}
f_1=1 & G & A & D\\
\cline{2-4}
f_1=2 & F & I & C\\
\cline{2-4}
\end{array}
\]
and
the $P_3$-layer of the cube with $f_3=2$ is
\[
\begin{array}{c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{f_2=0} & \multicolumn{1}{c}{f_2=1}
& \multicolumn{1}{c}{f_2=2}\\
\cline{2-4}
f_1=0 & C & F & I\\
\cline{2-4}
f_1=1 & H & B & E\\
\cline{2-4}
f_1=2 & D & G & A\\
\cline{2-4}
\end{array}
\ .
\]
Similarly, one can check that every letter occurs just once in the
$2$-dimensional $P_1$-layer defined by any fixed value of $f_1$,
and likewise for~$P_2$.
\begin{table}[htbp]
\[
\begin{array}{cccccccc}
\mbox{partition}& P_1 & P_2 & P_3 & Q & R & S & L\\
\mbox{function}& f_1 & f_2 & f_3 & -f_1+f_2 &-f_3+f_1 & -f_2+f_3\\
\mbox{value} & t_1 & t_2 & t_3 & -t_1+t_2 & -t_3+t_1 & -t_2+t_3 & \\
\hline
& 0 & 0 & 0 & 0 & 0 & 0 & A \\
& 0 & 0 & 1 & 0 & 2 & 1 & B \\
& 0 & 0 & 2 & 0 & 1 & 2 & C \\
& 0 & 1 & 0 & 1 & 0 & 2 &D \\
& 0 & 1 & 1 & 1 & 2 & 0 & E \\
& 0 & 1 & 2 & 1 & 1 & 1 & F \\
& 0 & 2 & 0 & 2 & 0 & 1 & G \\
& 0 & 2 & 1 & 2 & 2 & 2 & H \\
& 0 & 2 & 2 & 2 & 1 & 0 & I \\
& 1 & 0 & 0 & 2 & 1 & 0 & I \\
& 1 & 0 & 1 & 2 & 0 & 1 & G \\
& 1 & 0 & 2 & 2 & 2 & 2 & H \\
& 1 & 1 & 0 & 0 & 1 & 2 & C \\
& 1 & 1 & 1 & 0 & 0 & 0 & A \\
& 1 & 1 & 2 & 0 & 2 & 1 & B \\
& 1 & 2 & 0 & 1 & 1 & 1 & F \\
& 1 & 2 & 1 & 1 & 0 & 2 & D \\
& 1 & 2 & 2 & 1 & 2 & 0 & E \\
& 2 & 0 & 0 & 1 & 2 & 0 & E \\
& 2 & 0 & 1 & 1 & 1 & 1 & F \\
& 2 & 0 & 2 & 1 & 0 & 2 & D \\
& 2 & 1 & 0 & 2 & 2 & 2 & H \\
& 2 & 1 & 1 & 2 & 1 & 0 & I \\
& 2 & 1 & 2 & 2 & 0 & 1 & G \\
& 2 & 2 & 0 & 0 & 2 & 1 & B \\
& 2 & 2 & 1 & 0 & 1 & 2 & C \\
& 2 & 2 & 2 & 0 & 0 & 0 & A \\
\end{array}
\]
\caption{Some functions and partitions on the cells of the cube
in Example~\ref{eg:nice}}
\label{tab:cube2}
\end{table}
In addition to satisfying the property of being a Latin cube of sort (LC2),
this combinatorial structure has three other good properties.
\begin{itemize}
\item
It is regular in the sense of Definition~\ref{def:reg}.
The set of letters in any
$P_1\wedge P_2$-line is $\{A,B,C\}$ or $\{D,E,F\}$ or $\{G,H,I\}$.
For $P_1\wedge P_3$ the letter sets are $\{A,D,G\}$, $\{B,E,H\}$ and
$\{C,F,I\}$; for $P_2\wedge P_3$ they are $\{A,E,I\}$, $\{B,F,G\}$ and
$\{C,D,H\}$.
\item
The supremum of $L$ and $P_1\wedge P_2$ is the partition $Q$ shown in
Table~\ref{tab:cube2}. This is the kernel of the function which maps
$(t_1,t_2,t_3)$ to $-t_1+t_2 = 2t_1+t_2$.
Statisticians normally write this partition
as $P_1^2P_2$. Likewise, the supremum of $L$ and $P_1\wedge P_3$ is $R$,
which statisticians might write as $P_3^2P_1$,
and the supremum of $L$ and $P_2\wedge P_3$ is $S$, written by statisticians
as $P_2^2P_3$. The partitions $P_1$, $P_2$, $P_3$,
$Q$, $R$, $S$, $P_1\wedge P_2$, $P_1\wedge P_3$, $P_2\wedge P_3$ and $L$
are pairwise compatible, in the sense of Definition~\ref{def:compatible}.
Moreover, each of them is a coset partition defined by a subgroup of $T^3$.
\item
In anticipation of the notation used in Section~\ref{sec:dag},
it seems fairly natural to rename $P_1$, $P_2$, $P_3$, $Q$, $R$ and $S$
as $P_{01}$, $P_{02}$, $P_{03}$, $P_{12}$, $P_{13}$ and $P_{23}$, in order.
For each $i$ in $\{0,1,2,3\}$, the three partitions $P_{jk}$ which have
$i$ as one of the subscripts, that is, $i\in \{j,k\}$,
form a Cartesian decomposition of the underlying set.
\end{itemize}
However, the set of ten partitions that we have named is not closed under
infima, so it does not form an orthogonal block structure.
For example, the set does not contain the infimum $P_3\wedge Q$.
This partition has nine parts of size three, one of
which consists of the cells $(0,0,0)$, $(1,1,0)$ and $(2,2,0)$,
as can be seen from Table~\ref{tab:cube2}.
\begin{figure}
\begin{center}
\setlength{\unitlength}{2mm}
\begin{picture}(60,40)
\put(5,15){\line(0,1){10}}
\put(5,15){\line(1,1){10}}
\put(5,15){\line(3,1){30}}
\put(15,15){\line(-1,1){10}}
\put(15,15){\line(1,1){10}}
\put(15,15){\line(3,1){30}}
\put(25,15){\line(-1,1){10}}
\put(25,15){\line(0,1){10}}
\put(25,15){\line(3,1){30}}
\put(45,15){\line(-1,1){10}}
\put(45,15){\line(0,1){10}}
\put(45,15){\line(1,1){10}}
\put(30,5){\line(-1,2){5}}
\put(30,5){\line(-3,2){15}}
\put(30,5){\line(3,2){15}}
\curve(30,5,5,15)
\put(30,35){\line(-1,-2){5}}
\put(30,35){\line(1,-2){5}}
\put(30,35){\line(-3,-2){15}}
\put(30,35){\line(3,-2){15}}
\curve(30,35,5,25)
\curve(30,35,55,25)
\put(5,15){\circle*{1}}
\put(4,15){\makebox(0,0)[r]{$P_1\wedge P_2$}}
\put(15,15){\circle*{1}}
\put(14,15){\makebox(0,0)[r]{$P_1\wedge P_3$}}
\put(25,15){\circle*{1}}
\put(24,15){\makebox(0,0)[r]{$P_2\wedge P_3$}}
\put(45,15){\circle*{1}}
\put(47,15){\makebox(0,0){$L$}}
\put(30,5){\circle*{1}}
\put(30,3){\makebox(0,0){$E$}}
\put(5,25){\circle*{1}}
\put(3,25){\makebox(0,0){$P_1$}}
\put(15,25){\circle*{1}}
\put(13,25){\makebox(0,0){$P_2$}}
\put(25,25){\circle*{1}}
\put(23,25){\makebox(0,0){$P_3$}}
\put(35,25){\circle*{1}}
\put(36,25){\makebox(0,0)[l]{$Q$}}
\put(45,25){\circle*{1}}
\put(46,25){\makebox(0,0)[l]{$R$}}
\put(55,25){\circle*{1}}
\put(56,25){\makebox(0,0)[l]{$S$}}
\put(30,35){\circle*{1}}
\put(30,37){\makebox(0,0){$U$}}
\end{picture}
\end{center}
\caption{Hasse diagram of the join-semilattice formed by the pairwise
compatible partitions in Example~\ref{eg:nice}}
\label{fig:nice}
\end{figure}
Figure~\ref{fig:nice} shows the Hasse diagram of the join-semilattice formed
by these ten named partitions, along with the two trivial partitions $E$
and $U$.
This diagram, along with the knowledge of compatibility, makes it clear that
any three of the minimal partitions $P_1 \wedge P_2$, $P_1 \wedge P_3$,
$P_2\wedge P_3$ and $L$ give the minimal
partitions of the orthogonal block structure defined by
a Cartesian decomposition of dimension three of the underlying set $T^3$.
Note that, although the partition $E$ is the highest point in the diagram
which is below both $P_3$ and $Q$, it is not their infimum, because their
infimum is defined in the lattice of all partitions of this set.
\end{eg}
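
Example~\ref{eg:nice} can also be reproduced computationally. In the sketch
below (ours), letters are indexed by the pair of values
$(-t_1+t_2,\,-t_3+t_1)$ modulo~$3$, which are exactly the columns $Q$ and $R$
of Table~\ref{tab:cube2}; these two values determine the letter.
\begin{verbatim}
from itertools import product

letter = {(t1, t2, t3): ((-t1 + t2) % 3, (-t3 + t1) % 3)
          for t1, t2, t3 in product(range(3), repeat=3)}

# sort (LC2): each of the nine letters occurs once in every layer
for v in range(3):
    for layer in (
        [letter[v, j, k] for j, k in product(range(3), repeat=2)],
        [letter[i, v, k] for i, k in product(range(3), repeat=2)],
        [letter[i, j, v] for i, j in product(range(3), repeat=2)],
    ):
        assert len(set(layer)) == 9

# regularity in the P1 ^ P2 direction (the other two are analogous)
lines = [frozenset(letter[i, j, k] for k in range(3))
         for i, j in product(range(3), repeat=2)]
assert all(s == t or not (s & t) for s in lines for t in lines)
\end{verbatim}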
\begin{figure}
\[
\begin{array}{c@{\qquad}c@{\qquad}c}
\begin{array}{|c|c|c|}
\hline
A & E & F\\
\hline
H & I & D\\
\hline
C & G & B\\
\hline
\end{array}
&
\begin{array}{|c|c|c|}
\hline
D & B & I\\
\hline
E & C & G\\
\hline
F & A & H\\
\hline
\end{array}
&
\begin{array}{|c|c|c|}
\hline
G & H & C\\
\hline
B & F & A\\
\hline
I & D & E\\
\hline
\end{array}
\end{array}
\]
\caption{A Latin cube of sort (LC2) which is not regular}
\label{fig:sax}
\end{figure}
\begin{eg}
\label{eg:sax}
Figure~\ref{fig:sax} shows an example which is not regular. This was originally
given in \cite{saxena}. To save space, the three $P_3$-layers are shown
side by side.
For example, there is one $P_1\wedge P_3$-line whose set of letters is
$\{A,E,F\}$ and another whose set of letters is $\{A,F,H\}$.
These are neither the same nor disjoint.
\end{eg}
If we write the group operation in Example~\ref{eg:nice} multiplicatively,
then the cells
$(t_1,t_2,t_3)$ and $(u_1,u_2,u_3)$ have the same letter if and only if
$t_1^{-1}t_2 = u_1^{-1}u_2$ and $t_1^{-1}t_3 = u_1^{-1}u_3$. This means that
$(u_1,u_2,u_3) = (x,x,x)(t_1,t_2,t_3)$ where $x=u_1t_1^{-1}$, so that
$(t_1,t_2,t_3)$ and $(u_1,u_2,u_3)$ are in the same right coset of the
diagonal subgroup $\delta(T,3)$ introduced in Section~\ref{sect:diaggroups}.
The next theorem shows that this construction can be generalised to any group,
abelian or not, finite or infinite.
\begin{theorem}
\label{th:upfront}
Let $T$ be a non-trivial group. Identify the elements of $T^3$ with the cells of a cube
in the natural way. Let $\delta(T,3)$ be the diagonal subgroup
$\{(t,t,t) \mid t \in T\}$. Then the parts of the right coset partition
$P_{\delta(T,3)}$ form the letters of a regular Latin cube of sort
(LC2).
\end{theorem}
\begin{proof}
Let $H_1$ be the subgroup $\{(1,t_2,t_3) \mid t_2 \in T, \ t_3 \in T\}$
of $T^3$. Define subgroups $H_2$ and $H_3$ similarly. Let $i \in \{1,2,3\}$.
Then $H_i \cap \delta(T,3) = \{1\}$ and $H_i\delta(T,3) = \delta(T,3)H_i = T^3$.
Proposition~\ref{prop:coset} shows that $P_{H_i} \wedge P_{\delta(T,3)} = E$ and
$P_{H_i} \vee P_{\delta(T,3)} = U$. Because $H_i\delta(T,3) = \delta(T,3)H_i$,
Proposition~\ref{prop:commeq} (considering statements (a) and~(d)) shows that $\{P_{H_i}, P_{\delta(T,3)}\}$ is a
Cartesian decomposition of $T^3$ of dimension two. Hence the parts
of $P_{\delta(T,3)}$ form the letters of a Latin cube $\Lambda$ of sort~(LC2).
Put $G_{12} = H_1 \cap H_2$ and
$K_{12} = \{(t_1,t_1,t_3) \mid t_1 \in T,\ t_3 \in T\}$.
Then the parts of $P_{G_{12}}$ are lines of the cube parallel to the $z$-axis.
Also, $G_{12} \cap \delta(T,3)=\{1\}$ and $G_{12}\delta(T,3) = \delta(T,3)G_{12}
= K_{12}$, so Propositions~\ref{prop:coset} and~\ref{prop:commeq} show that
$P_{G_{12}} \wedge P_{\delta(T,3)} = E$, $P_{G_{12}} \vee P_{\delta(T,3)} = P_{K_{12}}$,
and the restrictions of $P_{G_{12}}$ and $P_{\delta(T,3)}$ to any part
of $P_{K_{12}}$ form a grid. Therefore, within each coset of~$K_{12}$,
all lines have the same subset of letters. By the definition of supremum,
no line in any other coset of $K_{12}$ has any letters in common
with these.
Similar arguments apply to lines in each of the other two directions.
Hence $\Lambda$ is regular.
\end{proof}
The converse of this theorem is proved at the end of this section.
The set of partitions in Theorem~\ref{th:upfront} forms a join-semilattice whose
Hasse diagram is the same as the one shown in Figure~\ref{fig:nice}, apart from
the naming of the partitions. We call this a \textit{diagonal semilattice
of dimension three}. The generalisation to arbitrary dimensions is given
in Section~\ref{sec:diag}.
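
Theorem~\ref{th:upfront} is easy to test by machine for a small nonabelian
group. In this sketch (ours), $T=S_3$, with elements stored as tuples of
images and composed by $(st)(i)=s(t(i))$; we verify sort (LC2) together with
regularity in one direction, the other two directions being symmetric.
\begin{verbatim}
from itertools import permutations, product

T = list(permutations(range(3)))                # the six elements of S3
def mul(s, t): return tuple(s[t[i]] for i in range(3))
def inv(s):    return tuple(sorted(range(3), key=lambda i: s[i]))

# (t1,t2,t3) and (u1,u2,u3) lie in the same right coset of the diagonal
# subgroup iff t1^{-1}t2 = u1^{-1}u2 and t1^{-1}t3 = u1^{-1}u3
def letter(t1, t2, t3):
    return (mul(inv(t1), t2), mul(inv(t1), t3))

# sort (LC2): every letter appears exactly once in each layer
for v in T:
    for layer in (
        [letter(v, b, c) for b, c in product(T, repeat=2)],
        [letter(a, v, c) for a, c in product(T, repeat=2)],
        [letter(a, b, v) for a, b in product(T, repeat=2)],
    ):
        assert len(set(layer)) == len(T) ** 2

# regularity for lines parallel to the third axis
lines = [frozenset(letter(a, b, c) for c in T)
         for a, b in product(T, repeat=2)]
assert all(s == t or not (s & t) for s in lines for t in lines)
\end{verbatim}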
\subsection{Results for Latin cubes}
As we hinted in Section~\ref{sec:LS},
the vast majority of Latin squares of order at least $5$
are not isotopic to Cayley tables of groups. For $m\geqslant 3$, the situation
changes dramatically as soon as we impose some more, purely combinatorial,
constraints. We continue to use the notation $\Omega$, $P_1$, $P_2$, $P_3$
and $L$ as in Section~\ref{sec:whatis}.
A Latin cube of sort (LC0) is called an \textit{extended Cayley table} of
the group~$T$ if $\Omega=T^3$ and the letter in cell $(t_1,t_2,t_3)$ is
$t_1t_2t_3$. Theorem~8.21 of \cite{rab:as} shows that, in the finite case,
for a Latin cube of sort (LC0), the set $\{P_1,P_2,P_3,L\}$ is contained in
the set of partitions of an orthogonal block structure if and only if the
cube is isomorphic to the extended Cayley table of an abelian group.
Now we will prove something similar for Latin cubes of sort (LC2), by
specifying a property of the set
\[\{ P_1, P_2, P_3, (P_1\wedge P_2)\vee L,
(P_1\wedge P_3)\vee L, (P_2\wedge P_3)\vee L\}\]
of six partitions. We do not restrict this
to finite sets. Also, because we do not insist on closure under infima,
it turns out that the group does not need to be abelian.
In Lemmas~\ref{lem:lc0} and~\ref{lem:lc3},
the assumption is that we have a Latin cube of sort~(LC2),
and that $\{i,j,k\} = \{1,2,3\}$. Write
\[
L^{ij}= L\vee(P_i\wedge P_j).
\]
To clarify the proofs, we shall use the following refinement of
Definition~\ref{def:reg}. Recall that we refer to the parts of $P_i\wedge P_j$
as $P_i\wedge P_j$-lines.
\begin{defn}
\label{def:refine}
A Latin cube of sort (LC2) is \textit{$\{i,j\}$-regular} if,
whenever $\ell_1$ and $\ell_2$ are distinct $P_i\wedge P_j$-lines,
the set of letters occurring in
$\ell_1$ is either exactly the same as the set of letters occurring
in $\ell_2$ or disjoint from it.
\end{defn}
\begin{lem}
\label{lem:lc0}
The following conditions are equivalent.
\begin{enumerate}
\item
The partition $L$ is compatible with $P_i\wedge P_j$.
\item
The Latin cube is $\{i,j\}$-regular.
\item The restrictions of $P_i\wedge P_j$, $P_k$ and $L$ to any
part of $L^{ij}$ form a Latin square.
\item Every pair of distinct $P_i\wedge P_j$-lines in the same
part of $L^{ij}$ lie in distinct parts of $P_i$.
\item The restrictions of $P_i$, $P_k$ and $L$ to any
part of $L^{ij}$ form a Latin square.
\item The set
$\{P_i,P_k,L^{ij}\}$ is a Cartesian decomposition of $\Omega$ of
dimension three.
\item Each part of $P_i\wedge P_k\wedge L^{ij}$ has size one.
\end{enumerate}
\end{lem}
\begin{proof} We prove this result without loss of generality for
$i=1$, $j=2$, $k=3$.
\begin{itemize}
\item[(a)$\Leftrightarrow$(b)]
By the definition of a Latin cube of sort (LC2),
each part of $P_1\wedge P_2$ has either zero or one cells in common
with each part of~$L$. Therefore ${P_1\wedge P_2 \wedge L}=E$,
which is uniform, so Definition~\ref{def:compatible} shows that
compatibility is the same as commutativity of the equivalence relations
underlying $P_1\wedge P_2$ and~$L$.
Consider Proposition~\ref{prop:commeq} with $P_1\wedge P_2$ and $L$ in place
of $P_1$ and $P_2$. Condition~(a) of Proposition~\ref{prop:commeq}
is the same as condition~(a) here; and condition~(e) of
Proposition~\ref{prop:commeq} is the same as condition~(b) here. Thus
Proposition~\ref{prop:commeq} gives us the result.
\item[(a)$\Rightarrow$(c)]
Let $\Delta$ be a part of $L^{12}$. If $L$ is compatible with $P_1\wedge P_2$
then, because ${P_1\wedge P_2 \wedge L}=E$,
Proposition~\ref{prop:commeq} shows that
the restrictions of $P_1\wedge P_2$ and $L$ to $\Delta$ form a Cartesian
decomposition of $\Delta$. Each part of $P_3$ has precisely one cell in
common with each part of $P_1\wedge P_2$,
because $\{P_1,P_2,P_3\}$ is a Cartesian decomposition of $\Omega$,
and precisely one cell in common with each part of $L$,
because the Latin cube has sort (LC2).
Hence the restrictions of $P_1\wedge P_2$, $P_3$ and $L$ to $\Delta$
form a Latin square. (Note that $P_3$ takes all of its values within $\Delta$,
but neither $P_1\wedge P_2$ nor $L$ does.)
\item[(c)$\Rightarrow$(d)]
Let $\ell_1$ and $\ell_2$ be distinct $P_1\wedge P_2$-lines
that are contained in the same part $\Delta$ of $L^{12}$. Every letter
which occurs in $\Delta$ occurs in both of these lines. If $\ell_1$ and
$\ell_2$ are contained in the same part of $P_1$, then that $P_1$-layer
contains at least two occurrences of some letters, which contradicts the
fact that $L\wedge P_1=E$ for a Latin cube of sort (LC2).
\item[(d)$\Rightarrow$(e)]
Let $\Delta$ be a part of $L^{12}$ and let $\lambda$ be a part of~$L$
inside~$\Delta$. Let $p_1$ and $p_3$ be parts of $P_1$ and $P_3$.
Then $\left| p_1 \cap \lambda \right| = \left| p_3 \cap \lambda \right|=1$
by definition of a Latin cube of sort (LC2). Condition (d) specifies that
$p_1 \cap \Delta$ is a part of $P_1 \wedge P_2$. Therefore
$(p_1 \cap \Delta) \cap p_3$ is a part of ${P_1 \wedge P_2 \wedge P_3}$, so
$ \left |(p_1 \cap \Delta) \cap (p_3 \cap \Delta)\right |=
\left |(p_1 \cap \Delta) \cap p_3\right| =1$.
Thus the restrictions of $P_1$, $P_3$, and $L$ to $\Delta$ form a Latin
square.
\item[(e)$\Rightarrow$(f)]
Let $\Delta$, $p_1$ and $p_3$ be parts of $L^{12}$, $P_1$ and $P_3$
respectively. By the definition of a Latin cube of sort (LC2),
$p_1 \cap \Delta$ and $p_3 \cap \Delta$ are both non-empty. Thus
condition (e) implies that $\left | p_1 \cap p_3 \cap \Delta \right|=1$.
Hence $\{P_1, P_3, L^{12}\}$ is a Cartesian
decomposition of dimension three.
\item[(f)$\Rightarrow$(g)] This follows immediately
from the definition of a Cartesian decomposition (Definition~\ref{def:cart}).
\item[(g)$\Rightarrow$(d)]
If (d) is false then there is a part~$\Delta$ of $L^{12}$ which
contains distinct
$P_1\wedge P_2$-lines $\ell_1$ and $\ell_2$ in the same part~$p_1$ of~$P_1$.
Let $p_3$ be any part of $P_3$. Then, since $\{P_1,P_2,P_3\}$ is a
Cartesian decomposition, $\left |p_3\cap \ell_1\right | =
\left | p_3\cap \ell_2\right | =1$ and so
$\left| p_1\cap p_3 \cap \Delta \right | \geqslant 2$. This contradicts~(g).
\item[(d)$\Rightarrow$(b)]
If (b) is false, there are distinct $P_1\wedge P_2$-lines $\ell_1$
and $\ell_2$
whose sets of letters $\Lambda_1$ and $\Lambda_2$ are neither the same nor
disjoint. Because $\Lambda_1 \cap \Lambda_2 \ne \emptyset$, $\ell_1$
and $\ell_2$ are contained in the same part of $L^{12}$.
Let $\lambda \in \Lambda_2 \setminus \Lambda_1$. By definition of a Latin
cube of sort (LC2),
$\lambda$ occurs on precisely one cell~$\omega$
in the $P_1$-layer which contains $\ell_1$. By assumption, $\omega \notin
\ell_1$. Let $\ell_3$ be the $P_1\wedge P_2$-line containing~$\omega$.
Then $\ell_3$ and $\ell_2$ are in the same part of $L^{12}$, as are
$\ell_1$ and $\ell_2$. Hence $\ell_1$ and $\ell_3$ are in the
same part of $L^{12}$ and the same part of $P_1$. This contradicts~(d).
\end{itemize}
\end{proof}
\begin{lem}
\label{lem:lc3}
The set $\{P_i,L^{ik},L^{ij}\}$ is a Cartesian decomposition of $\Omega$ if
and only if $L$ is compatible with both $P_i\wedge P_j$ and $P_i \wedge P_k$.
\end{lem}
\begin{proof}
If $L$ is not compatible with $P_i\wedge P_j$, then
Lemma~\ref{lem:lc0} shows that there is a part of
${P_i \wedge P_k \wedge L^{ij}}$ of size at least two.
This is contained in a part of $P_i\wedge P_k$. Since $P_i \wedge P_k
\preccurlyeq L^{ik}$, it is also contained in a part of~$L^{ik}$. Hence
$\{P_i, L^{ij}, L^{ik}\}$ is not a Cartesian decomposition of~$\Omega$.
Similarly, if $L$ is not compatible with $P_i\wedge P_k$ then
$\{P_i, L^{ij}, L^{ik}\}$ is not a Cartesian decomposition of~$\Omega$.
For the converse, Lemma~\ref{lem:lc0} shows
that if $L$ is compatible with
$P_i\wedge P_j$ then $\{P_i, P_k, L^{ij}\}$ is a Cartesian decomposition of
$\Omega$. Let $\Delta$ be a part of $L^{ij}$, and let $L^*$ be the
restriction of $L$ to $\Delta$. Lemma~\ref{lem:lc0} shows that
$P_i$, $P_k$ and $L^*$ form a Latin square on~$\Delta$. Thus distinct
letters in~$L^*$ occur only in distinct parts of $P_i \wedge P_k$.
If $L$ is also compatible with $P_i\wedge P_k$, then Lemma~\ref{lem:lc0}
shows that each part of $L^{ik}$ is a union of parts of $P_i\wedge P_k$,
any two of which are in different parts of $P_i$ and different parts of~$P_k$,
and all of which have the same letters.
Hence any two different letters in $L^*$
are in different parts of~$L^{ik}$. Since $\{P_i,P_k,L^{ij}\}$ is a Cartesian
decomposition of~$\Omega$,
every part of $P_i\wedge P_k$ has a non-empty intersection with~$\Delta$, and
so every part of $L^{ik}$ has a non-empty intersection with~$\Delta$.
Since $L\prec L^{ik}$, such an intersection consists of one or more
parts of $L^*$ in $\Delta$. We have already noted that distinct
letters in $L^*$ are in different parts of $L^{ik}$, and so it follows that the
restriction of $L^{ik}$ to $\Delta$ is the same as~$L^*$.
Hence the restrictions of $P_i$, $P_k$ and $L^{ik}$ to $\Delta$ form a Latin
square on $\Delta$, and so the restrictions of $P_i$ and $L^{ik}$ to $\Delta$
give a Cartesian decomposition of~$\Delta$.
This is true for every part $\Delta$ of $L^{ij}$, and so it follows that
$\{P_i, L^{ij}, L^{ik}\}$ is a Cartesian decomposition of~$\Omega$.
\end{proof}
\begin{lem}
\label{lem:lc4}
The set $\{P_i, L^{ij},L^{ik}\}$ is a Cartesian decomposition of $\Omega$
if and only if the set $\{P_i \wedge P_j, P_i\wedge P_k,L\}$
generates a Cartesian lattice under taking suprema.
\end{lem}
\begin{proof}
If $\{P_i \wedge P_j, P_i\wedge P_k,L\}$ generates a Cartesian lattice under
taking suprema then the maximal partitions in the Cartesian lattice are
$(P_i \wedge P_j) \vee (P_i\wedge P_k)$, $(P_i \wedge P_j) \vee L$ and
$(P_i \wedge P_k) \vee L$. They form a Cartesian decomposition, and
are equal to $P_i$, $L^{ij}$ and $L^{ik}$
respectively.
Conversely, suppose that $\{P_i, L^{ij},L^{ik}\}$ is a Cartesian decomposition
of~$\Omega$. The minimal partitions in the corresponding Cartesian lattice
are $P_i \wedge L^{ij}$, $P_i \wedge L^{ik}$ and $L^{ij} \wedge L^{ik}$. Now,
$L \preccurlyeq L^{ij}$ and $L \preccurlyeq L^{ik}$, so
$L \preccurlyeq L^{ij} \wedge L^{ik}$.
Because the Latin cube has sort~(LC2), $\{P_i,L\}$ and
$\{P_i,L^{ij}\wedge L^{ik}\}$ are both Cartesian decompositions of~$\Omega$.
Since
$L \preccurlyeq L^{ij} \wedge L^{ik}$, this forces $L=L^{ij}\wedge L^{ik}$.
The identities of the other two infima are confirmed by a similar argument.
We have $P_i \wedge P_j \preccurlyeq P_i$, and
$P_i \wedge P_j \preccurlyeq L^{ij}$, by definition of~$L^{ij}$. Therefore
$P_i \wedge P_j \preccurlyeq P_i \wedge L^{ij}$.
Lemmas~\ref{lem:lc0} and~\ref{lem:lc3} show that $\{P_i,P_k,L^{ij}\}$ is a
Cartesian decomposition of~$\Omega$. Therefore $\{P_k,P_i \wedge L^{ij}\}$
and $\{P_k, P_i \wedge P_j\}$ are both Cartesian decompositions of~$\Omega$.
Since $P_i \wedge P_j \preccurlyeq P_i \wedge L^{ij}$, this forces
$P_i \wedge P_j = P_i \wedge L^{ij}$.
Likewise, $P_i \wedge P_k = P_i \wedge L^{ik}$.
\end{proof}
The following theorem is a direct consequence of
Definitions~\ref{def:reg} and~\ref{def:refine} and
Lemmas~\ref{lem:lc0}, \ref{lem:lc3} and~\ref{lem:lc4}.
\begin{theorem}
\label{thm:regnice}
For a Latin cube of sort~(LC2), the following conditions are equivalent.
\begin{enumerate}
\item
The Latin cube is regular.
\item
The Latin cube is $\{1,2\}$-regular, $\{1,3\}$-regular and $\{2,3\}$-regular.
\item
The partition $L$ is compatible with each of $P_1\wedge P_2$, $P_1\wedge P_3$
and $P_2\wedge P_3$.
\item Each of $\{P_1,P_2,P_3\}$,
$\{P_1,L^{12},L^{13}\}$, $\{P_2,L^{12},L^{23}\}$ and $\{P_3, L^{13}, L^{23}\}$
is a Cartesian decomposition.
\item
Each of the sets ${\{P_1\wedge P_2, P_1 \wedge P_3, P_2\wedge P_3\}}$,
${\{P_1\wedge P_2, P_1 \wedge P_3, L\}}$, \linebreak
${\{P_1\wedge P_2, P_2\wedge P_3, L\}}$ and
${\{P_1 \wedge P_3, P_2\wedge P_3, L\}}$
generates a Cartesian lattice under taking suprema.
\end{enumerate}
\end{theorem}
The condition that $\{P_1,P_2,P_3\}$ is a Cartesian decomposition
is a part of the definition of a Latin cube. This condition is
explicitly included in item~(d) of Theorem~\ref{thm:regnice} for clarity.
The final result in this section gives us the stepping stone for the proof of
Theorem~\ref{thm:main}.
The proof is quite detailed, and makes frequent use of the
relabelling techniques that we already saw in Sections~\ref{sec:LS}
and~\ref{sesc:quasi}.
\begin{theorem}
\label{thm:bingo}
Consider a Latin cube of sort~(LC2) on an underlying set~$\Omega$,
with coordinate partitions $P_1$, $P_2$ and $P_3$, and letter partition~$L$.
If every three of $P_1 \wedge P_2$, $P_1 \wedge P_3$, $P_2\wedge P_3$ and $L$
are the minimal partitions in a Cartesian lattice on~$\Omega$
then there is a group~$T$ such that, up to relabelling the letters
and the three sets of coordinates,
$\Omega=T^3$ and $L$ is the coset partition defined
by the diagonal subgroup $\{(t,t,t) \mid t \in T\}$.
Moreover, $T$~is unique up to group isomorphism.
\end{theorem}
\begin{proof}
Theorem~\ref{thm:regnice} shows that a Latin cube satisfying this condition
must be regular.
As $\{P_1,P_2,P_3\}$ is a Cartesian decomposition of $\Omega$ and,
by Lemma~\ref{lem:lc0}, $\{P_i,P_j,L^{ik}\}$ is also a Cartesian
decomposition of~$\Omega$ whenever $\{i,j,k\} = \{1,2,3\}$,
the cardinalities of $P_1$, $P_2$, $P_3$, $L^{12}$, $L^{13}$ and $L^{23}$
must all be equal
(using the argument in the proof of Proposition~\ref{p:order}).
Thus we may label the parts of each by the same set~$T$.
We start by labelling the parts of $P_1$, $P_2$ and $P_3$. This identifies
$\Omega$ with $T^3$. At first, these three labellings are arbitrary, but
they are made more specific as the proof progresses.
Let $(a,b,c)$ be a cell of the cube. Because
$P_1\wedge P_2 \preccurlyeq L^{12}$, the part of $L^{12}$ which contains
cell $(a,b,c)$ does not depend on the value of~$c$. Thus
there is a binary operation $\circ$ from $T \times T$ to $T$ such that
$a \circ b$ is the label of the part of $L^{12}$ containing
$\{(a,b,c)\mid c \in T\}$; in other words, $(a,b,c)$ is in part
$a \circ b$ of $L^{12}$, irrespective of the value of $c$.
Lemma~\ref{lem:lc0} and Proposition~\ref{p:order} show that,
for each $a$ in $T$, the function $b \mapsto a \circ b$ is a bijection
from $T$ to~$T$. Similarly, for each $b$ in~$T$, the function
$a \mapsto a \circ b$ is a bijection.
Therefore $(T,\circ)$ is a quasigroup.
Similarly, there are binary operations $\star$ and $\diamond$ on $T$
such that the labels of the parts of $L^{13}$ and $L^{23}$ containing
cell $(a,b,c)$ are $c \star a$ and $b \diamond c$ respectively.
Moreover, $(T,\star)$ and $(T,\diamond)$ are both quasigroups.
Now we start the process of making explicit bijections between some pairs
of the six partitions.
Choose any part of $P_1$ and label it $e$. Then the labels of the parts
of $L^{12}$ can be aligned with those of $P_2$ so that $e \circ b= b$ for
all values of~$b$.
In the quasigroup $(T, \star)$, we may use the column headed $e$ to give
a permutation $\sigma$ of $T$ to align the labels of the parts of~$P_3$
and those of~$L^{13}$ so that $c\star e = c\sigma$ for all values of~$c$.
Let $(a,b,c)$ be a cell of the cube. Because $\{L,P_1\}$ is a Cartesian
decomposition of the cube, there is a unique cell $(e,b',c')$
in the same part of $L$ as $(a,b,c)$. Then
\begin{eqnarray*}
a \circ b & = & e \circ b' = b',\\
c \star a & = & c' \star e = c'\sigma, \quad \mbox{and}\\
b \diamond c& =& b'\diamond c'.
\end{eqnarray*}
Hence
\begin{equation}
b\diamond c = (a\circ b) \diamond ((c \star a)\sigma^{-1})
\label{eq:threeops}
\end{equation}
for all values of $a$, $b$ and $c$ in~$T$.
The quasigroup $(T,\diamond)$ can be viewed as a Latin square with rows
labelled by parts of $P_2$ and columns labelled by parts of $P_3$.
Consider the $2 \times 2$ subsquare shown in Figure~\ref{fig:subsq}. It has
$b_1 \diamond c_1 = \lambda$, $b_1 \diamond c_2 = \mu$,
$b_2 \diamond c_1 = \nu$ and $b_2 \diamond c_2 = \phi$.
\begin{figure}
\[
\begin{array}{c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{c_1} & \multicolumn{1}{c}{c_2}\\
\cline{2-3}
b_1 & \lambda & \mu\\
\cline{2-3}
b_2 & \nu & \phi\\
\cline{2-3}
\end{array}
\]
\caption{A $2 \times 2$ subsquare of the Latin square defined by
$(T,\diamond)$}
\label{fig:subsq}
\end{figure}
Let $b_3$ be any row of this Latin square.
Then there is a unique $a$ in $T$ such
that $a \circ b_1=b_3$. By Equation~(\ref{eq:threeops}),
\begin{eqnarray*}
b_3 \diamond ((c_1 \star a)\sigma^{-1}) & = &
(a \circ b_1) \diamond ((c_1 \star a)\sigma^{-1}) = b_1 \diamond c_1
= \lambda, \quad \mbox{and}\\
b_3 \diamond ((c_2 \star a)\sigma^{-1}) & = &
(a \circ b_1) \diamond ((c_2 \star a)\sigma^{-1}) = b_1 \diamond c_2
= \mu.
\end{eqnarray*}
The unique occurrence of letter $\nu$ in column $(c_1\star a)\sigma^{-1}$ of
this Latin square is in row~$b_4$, where $b_4= a \circ b_2$, because
\[
b_4 \diamond ((c_1 \star a)\sigma^{-1}) =
(a \circ b_2) \diamond ((c_1 \star a)\sigma^{-1}) = b_2 \diamond c_1
= \nu.
\]
Now
\[
b_4 \diamond ((c_2 \star a)\sigma^{-1}) =
(a \circ b_2) \diamond ((c_2 \star a)\sigma^{-1}) = b_2 \diamond c_2
= \phi.
\]
This shows that whenever the letters in three cells of a $2 \times 2$
subsquare are known then the letter in the remaining cell is forced.
That is, the Latin square $(T,\diamond)$
satisfies the quadrangle criterion (Definition~\ref{def:quad}).
By Theorem~\ref{thm:frolov}, this property proves that $(T,\diamond)$ is
isotopic to the Cayley table of a group. By \cite[Theorem~2]{albert},
this group is unique up to group isomorphism.
As remarked at the end of Section~\ref{sesc:quasi}, we can now relabel the
parts of $P_2$, $P_3$ and $L^{23}$ so that $b \diamond c = b^{-1}c$ for
all $b$, $c$ in $T$. Then Equation~(\ref{eq:threeops}) becomes
$b^{-1} c = (a\circ b)^{-1} ((c \star a)\sigma^{-1})$, so that
\begin{equation}
(a\circ b) b^{-1} c = (c \star a)\sigma^{-1}
\label{eq:plod}
\end{equation}
for all $a$, $b$, $c$ in $T$.
Putting $b=c$ in Equation~(\ref{eq:plod}) gives
\begin{equation}
(a \circ c)\sigma = c \star a
\label{eq:plodonon}
\end{equation}
for all $a$, $c$ in $T$, while putting $b=1$ gives
\[
((a \circ 1) c)\sigma = c\star a
\]
for all $a$, $c$ in $T$.
Combining these gives
\begin{equation}
\label{eq:plodon}
a \circ c = (a \circ 1)c = (c\star a)\sigma^{-1}
\end{equation}
for all $a,c\in T$.
We have not yet made any explicit use of the labelling of the parts
of $P_1$ other than $e$, with $e \circ 1=1$.
The map $a \mapsto a \circ 1$ is a bijection
from $T$ to $T$, so we may label the parts of $P_1$ in such a way
that $e=1$ and $a \circ 1 = a^{-1}$ for all $a$ in $T$.
Then Equation~(\ref{eq:plodon}) shows that $a \circ b = a^{-1}b$
for all $a$, $b$ in $T$.
Now that we have fixed the labelling of the parts of $P_1$, $P_2$ and $P_3$,
it is clear that they are the partitions of $T^3$
into right cosets of the subgroups as shown in the first three rows of
Table~\ref{tab:coset}.
Consider the partition $L^{23}$. For $\alpha =(a_1,b_1,c_1)$ and
$ \beta =(a_2,b_2,c_2)$ in~$T^3$, we have (using the notation in Section~\ref{sec:part})
\begin{eqnarray*}
L^{23}[\alpha] = L^{23}[\beta]
& \iff & b_1 \diamond c_1 = b_2 \diamond c_2\\
& \iff & b_1^{-1}c_1 = b_2^{-1}c_2\\
& \iff & \mbox{$\alpha$ and $\beta$ are in the same right coset of $K_{23}$,}
\end{eqnarray*}
where $K_{23} = \{(t_1,t_2,t_2) \mid t_1 \in T,\ t_2 \in T\}$. In other words,
$L^{23}$ is the coset partition of $T^3$ defined by $K_{23}$.
Since $a \circ b = a^{-1}b$, a similar argument shows that $L^{12}$ is the
coset partition of $T^3$ defined by $K_{12}$, where
$K_{12} = \{(t_1,t_1,t_2) \mid t_1 \in T,\ t_2 \in T\}$.
Equation~(\ref{eq:plodonon}) shows that the kernel of the function
$(c,a) \mapsto c \star a$ is the same as the kernel of the function
$(c,a) \mapsto a^{-1}c$, which is in turn the same as the kernel of the function
$(c,a) \mapsto c^{-1}a$. It follows that $L^{13}$ is the
coset partition of $T^3$ defined by $K_{13}$, where
$K_{13} = \{(t_1,t_2,t_1) \mid t_1 \in T,\ t_2 \in T\}$.
Thus the partitions $P_i$ and $L^{ij}$ are the partitions of $T^3$
into right cosets of the subgroups as shown in Table~\ref{tab:coset}.
Lemma~\ref{lem:lc4} shows that the letter partition~$L$ is equal to
$L^{ij} \wedge L^{ik}$ whenever $\{i,j,k\} = \{1,2,3\}$.
Consequently, $L$ is the partition into right cosets of the diagonal
subgroup $\{(t,t,t) \mid t \in T\}$.
\end{proof}
\begin{table}[htbp]
\[
\begin{array}{crcl}
\mbox{Partition} & \multicolumn{3}{c}{\mbox{Subgroup of $T^3$}}\\
\hline
P_1 & & & \{(1,t_2,t_3)\mid t_2 \in T, \ t_3 \in T\}\\
P_2 & & & \{(t_1,1,t_3) \mid t_1 \in T, \ t_3 \in T\}\\
P_3 & & & \{(t_1,t_2,1)\mid t_1 \in T, \ t_2 \in T\}\\
L^{12} & K_{12} & = & \{(t_1,t_1,t_3) \mid t_1 \in T, \ t_3 \in T\}\\
L^{13} & K_{13} & = & \{(t_1,t_2,t_1) \mid t_1 \in T, \ t_2 \in T\}\\
L^{23} & K_{23} & = & \{(t_1,t_2,t_2) \mid t_1 \in T, \ t_2 \in T\}\\
\hline
P_1\wedge P_2 & & & \{(1,1,t)\mid t\in T\}\\
P_1\wedge P_3 & & & \{(1,t,1)\mid t\in T\}\\
P_2\wedge P_3 & & & \{(t,1,1)\mid t\in T\}\\
L & \delta(T,3) & = & \{(t,t,t)\mid t\in T\}
\end{array}
\]
\caption{Coset partitions at the end of the proof of Theorem~\ref{thm:bingo}
and some infima}
\label{tab:coset}
\end{table}
The converse of Theorem~\ref{thm:bingo} was given in Theorem~\ref{th:upfront}.
For $\{i,j,k\}= \{1,2,3\}$, let $H_i$ be the intersection of the subgroups of
$T^3$ corresponding to partitions $P_i$ and $L^{jk}$ in Table~\ref{tab:coset},
so that the parts of $P_i \wedge L^{jk}$ are the right cosets of $H_i$.
Then $H_1 = \{(1,t,t)\mid t \in T\}$ and $H_2 = \{(u,1,u)\mid u \in T\}$. If
$T$ is abelian then $H_1H_2=H_2H_1$ and so the right-coset partitions
of $H_1$ and $H_2$ are compatible. If $T$ is not abelian then $H_1H_2 \ne
H_2H_1$ and so these coset partitions are not compatible. Because we do not
want to restrict our theory to abelian groups, we do not require our collection
of partitions to be closed under infima. Thus we require a join-semilattice
rather than a lattice.
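
This remark is easily confirmed by machine. In the sketch below (ours; the
helper \texttt{cosets\_compatible} simply tests whether $H_1H_2=H_2H_1$,
which by the discussion above governs compatibility of the coset
partitions), the test succeeds for the abelian group $\mathbb{Z}_4$ and
fails for $S_3$.
\begin{verbatim}
from itertools import permutations

def products(H, K, mul):
    # the set H K = {h k : h in H, k in K}, computed coordinatewise in T^3
    return {tuple(mul(h[i], k[i]) for i in range(3)) for h in H for k in K}

def cosets_compatible(T, mul, e):
    H1 = [(e, t, t) for t in T]     # the subgroup {(1, t, t)}
    H2 = [(u, e, u) for u in T]     # the subgroup {(u, 1, u)}
    return products(H1, H2, mul) == products(H2, H1, mul)

assert cosets_compatible(range(4), lambda a, b: (a + b) % 4, 0)  # Z4: abelian

S3 = list(permutations(range(3)))
mul = lambda s, t: tuple(s[t[i]] for i in range(3))
assert not cosets_compatible(S3, mul, (0, 1, 2))                 # S3: not
\end{verbatim}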
\subsection{Automorphism groups}
\begin{theorem}
Suppose that a regular Latin cube $M$ of sort (LC2) arises from a group $T$
by the construction of Theorem~\ref{th:upfront}. Then the group of
automorphisms of $M$ is equal to the diagonal group $D(T,3)$.
\label{t:autDT3}
\end{theorem}
\begin{proof}[Proof (sketch)]
It is clear from the proof of Theorem~\ref{th:upfront} that $D(T,3)$ is
a subgroup of $\operatorname{Aut}(M)$, and we have to prove equality.
Just as in the proof of Theorem~\ref{t:autDT2}, if $G$~denotes the
automorphism group of~$M$, then it suffices to prove that the group of strong
automorphisms of~$M$ fixing the cell $(1,1,1)$ is equal to $\operatorname{Aut}(T)$.
In the proof of Theorem~\ref{thm:bingo}, we choose a part of the partition
$P_1$ which will play the role of the identity of $T$, and using the partitions
we find bijections between the parts of the maximal partitions and show that
each naturally carries the structure of the group $T$. It is clear that
any automorphism of the Latin cube which fixes $(1,1,1)$ will preserve these
bijections, and hence will be an automorphism of $T$. So we have equality.
\end{proof}
\begin{remark}
We will give an alternative proof of this theorem in the next section, in
Theorem~\ref{t:autDTm}.
\end{remark}
\section{Diagonal groups and diagonal semilattices}
\label{sec:diag}
\subsection{Diagonal semilattices}\label{sec:diag1}
Let $T$ be a group, and $m$ be an integer with $m\geqslant2$. Take $\Omega$ to
be the group~$T^m$. Following our convention in Section~\ref{sect:diaggroups},
we will now denote elements of $\Omega$ by $m$-tuples in square brackets.
Consider the following subgroups of $\Omega$:
\begin{itemize}
\item for $1\leqslant i\leqslant m$, $T_i$ is the $i$th coordinate subgroup, the set
of $m$-tuples with $j$th entry $1$ for $j\ne i$;
\item $T_0$ is the diagonal subgroup $\delta(T,m)$ of $T^m$, the set
$\{[t,t,\ldots,t] \mid t\in T\}$.
\end{itemize}
Let $Q_i$ be the partition of $\Omega$ into right cosets of $T_i$ for
$i=0,1,\ldots,m$.
Observe that, by Theorem~\ref{thm:bingo}, the partitions $P_2\wedge P_3$,
$P_1\wedge P_3$, $P_1\wedge P_2$ and $L$ arising from a regular Latin cube
of sort (LC2) are the coset partitions defined by the four subgroups $T_1$,
$T_2$, $T_3$, $T_0$ of $T^3$ just described in the case $m=3$ (see the last
four rows of Table~\ref{tab:coset}).
\begin{prop}
\label{p:diagsemi}
\begin{enumerate}
\item The set $\{Q_0,\ldots,Q_m\}$ is invariant under the diagonal
group $D(T,m)$.
\item Any $m$ of the partitions $Q_0,\ldots,Q_m$ generate a
Cartesian lattice on $\Omega$ by taking suprema.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item It is clear that the set of partitions is invariant under
right translations by elements of $T^m$ and left translations by elements of
the diagonal subgroup $T_0$, by automorphisms of $T$ (acting in the same
way on all coordinates), and under the symmetric group $S_m$ permuting the
coordinates. Moreover, it can be checked that the map
\[[t_1,t_2,\ldots,t_m]\mapsto[t_1^{-1},t_1^{-1}t_2,\ldots,t_1^{-1}t_m]\]
interchanges $Q_0$ and $Q_1$ and fixes the other partitions. So we have
the symmetric group $S_{m+1}$ acting on the whole set
$\{Q_0,\ldots,Q_m\}$. These transformations generate the diagonal group
$D(T,m)$; see Remark~\ref{rem:diaggens}.
\item The set $T^m$ naturally has the structure of an $m$-dimensional
hypercube, and $Q_1,\ldots,Q_m$ are the minimal partitions in the
corresponding Cartesian lattice. For any other set of $m$ partitions,
the assertion follows because the symmetric group $S_{m+1}$ preserves
the set of $m+1$ partitions.
\end{enumerate}
\end{proof}
\begin{defn}
Given a group~$T$ and an integer~$m$ with $m\geqslant 2$, define the partitions
$Q_0$, $Q_1$, \ldots, $Q_m$ as above.
For each subset $I$ of $\{0, \ldots, m\}$, put $Q_I = \bigvee_{i\in I}Q_i$.
The \emph{diagonal semilattice} $\mathfrak{D}(T,m)$ is the set
$\{Q_I \mid I \subseteq \{0,1,\ldots,m\}\}$ of partitions of the set $T^m$.
\end{defn}
Thus the diagonal semilattice $\mathfrak{D}(T,m)$ is the set-theoretic union
of the ${m+1}$ Cartesian lattices in Proposition~\ref{p:diagsemi}(b).
Clearly it admits the diagonal group $D(T,m)$ as a group of automorphisms.
\begin{prop}
\label{p:dsjs}
$\mathfrak{D}(T,m)$ is a join-semilattice, that is, closed under taking
joins. For $m>2$ it is not closed under taking meets.
\end{prop}
\begin{proof}
For each proper subset $I$ of $\{0, \ldots, m\}$, the partition~$Q_I$ occurs
in the Cartesian lattice generated by $\{Q_i \mid i \in K\}$
for every subset $K$ of $\{0,\ldots,m\}$ which contains $I$ and has
cardinality~$m$.
Let $I$ and $J$ be two proper subsets of $\{0,\ldots,m\}$. If
$\left |I \cup J\right| \leqslant m$ then there is a subset~$K$ of
$\{0, \ldots, m\}$ with $\left|K\right|=m$ and $I\cup J \subseteq K$.
Then $Q_I\vee Q_J = Q_{I\cup J}$ in the Cartesian lattice defined by $K$,
and this supremum does not depend on the choice of $K$. Therefore
$Q_I\vee Q_J \in \mathfrak{D}(T,m)$.
On the other hand, if $I\cup J=\{0,\ldots,m\}$, then
\[Q_I\vee Q_J = Q_0 \vee Q_1 \vee \cdots \vee Q_m \succcurlyeq
Q_1 \vee Q_2 \vee \cdots \vee Q_m = U.
\]
Hence $Q_I\vee Q_J=U$, and so $Q_I\vee Q_J\in \mathfrak{D}(T,m)$.
If $m=3$, consider the subgroups
\[H=T_0T_1=\{[x,y,y] \mid x,y\in T\}\quad\mbox{ and }
\quad K=T_2T_3=\{[1,z,w] \mid z,w\in T\}.\]
If $P_H$ and $P_K$ are the corresponding coset partitions, then
\[P_H=Q_{\{0,1\}}\quad \mbox{ and } \quad P_K=Q_{\{2,3\}},\]
which are both in $\mathfrak{D}(T,3)$. Now, by Proposition~\ref{prop:coset},
\[P_H\wedge P_K=P_{H\cap K},\]
where $H\cap K=\{[1,y,y] \mid y\in T\}$; this is a subgroup of $T^m$, but the
coset partition $P_{H\cap K}$ does not belong to $\mathfrak{D}(T,3)$. This example is
easily generalised to larger values of $m$.
\end{proof}
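
For the smallest case $T=\mathbb{Z}_2$, $m=3$, the proposition can be checked
directly. In this sketch (ours), partitions are \texttt{frozenset}s of
\texttt{frozenset}s of cells, the join is computed by merging overlapping
parts, and the meet $P_H\wedge P_K$ is the common refinement.
\begin{verbatim}
from itertools import combinations, product
from functools import reduce

T = (0, 1)
Omega = list(product(T, repeat=3))

def coset_partition(S):             # cosets of a subgroup S of T^3
    return frozenset(frozenset(tuple((w[i] + s[i]) % 2 for i in range(3))
                               for s in S) for w in Omega)

def join(P, Q):                     # supremum via union-find on cells
    parent = {w: w for w in Omega}
    def find(w):
        while parent[w] != w:
            w = parent[w]
        return w
    for part in list(P) + list(Q):
        part = list(part)
        for w in part[1:]:
            parent[find(w)] = find(part[0])
    blocks = {}
    for w in Omega:
        blocks.setdefault(find(w), set()).add(w)
    return frozenset(frozenset(b) for b in blocks.values())

Q0 = coset_partition([(t, t, t) for t in T])    # diagonal subgroup
Q1, Q2, Q3 = (coset_partition([tuple(t if j == i else 0 for j in range(3))
                               for t in T]) for i in range(3))

semilattice = {frozenset(frozenset([w]) for w in Omega)}   # E = Q_empty
for r in range(1, 5):
    for sub in combinations((Q0, Q1, Q2, Q3), r):
        semilattice.add(reduce(join, sub))

# closed under joins by construction, but not under meets:
P_H, P_K = join(Q0, Q1), join(Q2, Q3)
meet = frozenset(frozenset(p & q) for p in P_H for q in P_K if p & q)
assert meet not in semilattice
\end{verbatim}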
When $T$~is finite, Propositions~\ref{p:diagsemi}(b) and~\ref{p:dsjs}
show that $\mathfrak{D}(T,m)$ is a Tjur block structure but is not an
orthogonal block structure when $m>2$ (see Section~\ref{sect:moreparts}).
We will see in the next section that the property in
Proposition~\ref{p:diagsemi}(b) is exactly what is required for the
characterisation of diagonal semilattices. First, we extend
Definition~\ref{def:weak}.
\begin{defn}\label{def:isomsl}
For $i=1$, $2$, let $\mathcal{P}_i$ be a finite set of partitions of a
set $\Omega_i$. Then $\mathcal{P}_1$ is \textit{isomorphic} to
$\mathcal{P}_2$ if there is a bijection $\phi$ from $\Omega_1$ to $\Omega_2$
which induces a bijection from $\mathcal{P}_1$ to $\mathcal{P}_2$ which
preserves the relation $\preccurlyeq$.
\end{defn}
As we saw in Section~\ref{sec:LS}, this notion of isomorphism
is called \textit{paratopism} in the context of Latin squares.
\medskip
The remark before Proposition~\ref{p:diagsemi} shows that a regular Latin
cube of sort (LC2) ``generates'' a diagonal semilattice $\mathfrak{D}(T,3)$
for a group $T$, unique up to isomorphism. The next step is to consider larger
values of $m$.
\subsection{The theorem}\label{sect:mt}
We repeat our axiomatisation of diagonal structures from the introduction.
We emphasise to the reader that we do not assume a Cartesian decomposition on
the set $\Omega$ at the start; the $m+1$ Cartesian decompositions are imposed by
the hypotheses of the theorem, and none is privileged.
\begin{theorem}\label{th:main}
Let $\Omega$ be a set with $|\Omega|>1$, and $m$ an integer at least $2$. Let $Q_0,\ldots,Q_m$
be $m+1$ partitions of $\Omega$ satisfying the following property: any $m$
of them are the minimal non-trivial partitions in a Cartesian lattice on
$\Omega$.
\begin{enumerate}
\item If $m=2$, then the three partitions are the row, column, and letter
partitions of a Latin square on $\Omega$, unique up to paratopism.
\item If $m>2$, then there is a group $T$, unique up to group isomorphism,
such that $Q_0,\ldots,Q_m$ are the minimal non-trivial partitions in a diagonal
semilattice $\mathfrak{D}(T,m)$ on $\Omega$.
\end{enumerate}
\end{theorem}
Note that the converse of the theorem is true: Latin squares (with ${m=2}$)
and diagonal semilattices have the property that their minimal non-trivial
partitions do satisfy our hypotheses.
The general proof for $m\geqslant 3$ is by induction, the base case being $m=3$.
The base case follows from Theorem~\ref{thm:bingo}, as discussed in the
preceding subsection, while the induction step
is given in Subsection~\ref{s:mtinduction}.
\subsection{Setting up}
First, we give some notation.
Let $\mathcal{P}$ be a set of partitions of $\Omega$,
and $Q$ a partition of~$\Omega$. We denote by $\mathcal{P}/\!\!/ Q$ the
following object: take all partitions $P\in\mathcal{P}$ which satisfy
$Q\preccurlyeq P$; then regard each such $P$ as a partition, not of~$\Omega$, but
of~$Q$ (that is, of the set of parts of $Q$).
Then $\mathcal P/\!\!/ Q$ is the set of these partitions of~$Q$.
(We do not write this as $\mathcal{P}/Q$, because this notation has almost the
opposite meaning in the statistical literature cited in
Section~\ref{sec:prelim}.)
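
In computational terms the construction is a filter followed by a
relabelling; the sketch below (ours, with partitions again represented as
collections of \texttt{frozenset}s) is only an illustration.
\begin{verbatim}
def finer(Q, P):
    # Q <= P: every part of Q is contained in some part of P
    return all(any(q <= p for p in P) for q in Q)

def quotient(Ps, Q):
    # the set Ps // Q: keep the partitions above Q, viewed as
    # partitions of the set of parts of Q
    return [frozenset(frozenset(q for q in Q if q <= p) for p in P)
            for P in Ps if finer(Q, P)]
\end{verbatim}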
The next result is routine but should help to familiarise the reader with
this concept.
Furthermore, we will temporarily call a set $\{Q_0,\ldots,Q_m\}$ of partitions
of~$\Omega$ satisfying the hypotheses of
Theorem~\ref{th:main}
a \emph{special set of dimension $m$}.
\begin{prop}\label{p:quots}
Let $\mathcal{P}$ be a set of partitions of $\Omega$, and $Q$ a minimal
non-trivial element of $\mathcal{P}$.
\begin{enumerate}
\item If $\mathcal{P}$ is an $m$-dimensional Cartesian lattice, then
$\mathcal{P}/\!\!/ Q$ is an $(m-1)$-dimensional Cartesian lattice.
\item If $\mathcal{P}$ is the join-semilattice generated by an $m$-dimensional
special set $\mathcal{Q}$, and $Q\in\mathcal{Q}$, then $\mathcal{P}/\!\!/ Q$
is generated by a special set of dimension $m-1$.
\item If $\mathcal{P}\cong\mathfrak{D}(T,m)$ is a diagonal semilattice, then
$\mathcal{P}/\!\!/ Q\cong\mathfrak{D}(T,m-1)$.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item This follows from Proposition~\ref{p:antiiso}, because if $Q=P_I$
where $I = \{1, \ldots, m\} \setminus \{i\}$
then we are effectively just limiting the set of indices to~$I$.
\item
This follows from part~(a).
\item Assume that $\mathcal P=\mathfrak D(T,m)$. Then, since $\operatorname{Aut}(\mathcal P)$
contains $D(T,m)$, which is transitive on $\{Q_0,\ldots,Q_m\}$, we may assume that $Q=Q_m$.
Thus $\mathcal P/\!\!/ Q$ is a set of partitions of $Q_m$.
In the group $T^{m+1}\rtimes\operatorname{Aut}(T)$ generated by elements of types
(I)--(III) in Remark~\ref{rem:diaggens}, the subgroup $T_m$ generated
by right multiplication of the last coordinate by elements of $T$ is normal,
and the quotient is $T^m\rtimes\operatorname{Aut}(T)$. Moreover, the subgroups $T_i$ commute
pairwise, so the parts of $Q_i\vee Q_m$ are the orbits of $T_iT_m$ (for
$i<m$) and give rise to a minimal partition in $\mathfrak{D}(T,m-1)$.
\end{enumerate}
\end{proof}
\subsection{Automorphism groups}
\label{sec:dag}
In the cases $m=2$ and $m=3$, we showed that the automorphism group of the
diagonal semilattice $\mathfrak{D}(T,m)$ is the diagonal group $D(T,m)$. The
same result holds for arbitrary $m$; but this time, we prove this result first,
since it is needed in the proof of the main theorem. The proof below also
handles the case $m=3$.
\begin{theorem}
For $m\geqslant2$, and any non-trivial group $T$, the automorphism group of the
diagonal semilattice $\mathfrak{D}(T,m)$ is the diagonal group $D(T,m)$.
\label{t:autDTm}
\end{theorem}
\begin{proof}
Our proof will be by induction on $m$. The cases $m=2$ and $m=3$ are given by
Theorems~\ref{t:autDT2} and~\ref{t:autDT3}. However, we base the induction at
$m=2$, so we provide an alternative proof for Theorem~\ref{t:autDT3}. So in
this proof we assume that $m>2$ and that the result holds with $m-1$
replacing~$m$.
Recall from Section~\ref{sect:diaggroups} that $\widehat D(T,m)$ denotes the
pre-diagonal group, so that
$D(T,m)\cong \widehat D(T,m)/ \widehat K$, with $ \widehat K$
as in~\eqref{eq:K}.
Suppose that $\sigma:\widehat D(T,m)\to D(T,m)$ is the natural projection
with $\ker\sigma=\widehat K$.
By Proposition~\ref{p:diagsemi}, we know that $D(T,m)$ is a subgroup of $\operatorname{Aut}(\mathfrak{D}(T,m))$, and we have
to show that equality holds. Using the principle of Proposition~\ref{p:subgp},
it suffices to show that the group $\operatorname{SAut}(\mathfrak{D}(T,m))$ of strong
automorphisms of $\mathfrak{D}(T,m)$ is the group $\sigma(T^{m+1}\rtimes\operatorname{Aut}(T))$
generated by
the images of the elements of the pre-diagonal group of types (I)--(III), as given in
Remark~\ref{rem:diaggens}.
Consider $Q_m$, one of the minimal partitions in $\mathfrak{D}(T,m)$, and let
$\overline\Omega$ be the set of parts of $Q_m$. For $i<m$, the collection of
subsets of $\overline\Omega$ which are the parts of $Q_m$ inside a part of
$Q_i\vee Q_m$ is a partition $\overline Q_i$ of $\overline\Omega$.
Proposition~\ref{p:quots}(c) shows that the $\overline Q_i$ are the minimal
partitions of $\mathfrak{D}(T,m-1)$, a diagonal semilattice
on~$\overline\Omega$.
Moreover, the group $\sigma(T_m)$ is the
kernel of the action of $\sigma(T^{m+1}\rtimes\operatorname{Aut}(T))$ on~$\overline\Omega$.
Further, since $T_m\cap \widehat K=1$, $\sigma(T_m)\cong T_m\cong T$.
As in Section~\ref{sect:diaggroups}, let $\widehat H$ be the
stabiliser in $\widehat D(T,m)$ of the element $[1,\ldots,1]$:
then $T_m\cap \widehat H=1$ and so $T_m$ acts faithfully and
regularly on each part of $Q_m$.
So it suffices to show that the same is true of $\operatorname{SAut}(\mathfrak{D}(T,m))$;
in other words, it is enough to show that the subgroup $H$ of $\operatorname{SAut}(\mathfrak{D}(T,m))$
fixing setwise all parts of $Q_m$
and any given point $\alpha$ of $\Omega$ is trivial.
Any $m$ of the partitions $Q_0,\ldots,Q_m$ are the minimal partitions
in a Cartesian lattice of partitions of $\Omega$. Let $P_{ij}$ denote the supremum of the partitions
$Q_k$ for $k\notin\{i,j\}$. Then, for fixed $i$, the partitions $P_{ij}$
(as $j$ runs over $\{0,\ldots,m\}\setminus\{i\}$) are the maximal partitions
of the Cartesian lattice
generated by $\{ Q_j \mid 0\leqslant j\leqslant~m \mbox{ and } j\ne i\}$
and form a Cartesian decomposition of~$\Omega$.
Hence each point of $\Omega$ is uniquely determined
by the parts of these partitions which contain it
(see Definition~\ref{def:cart}).
For distinct $i,j<m$, all parts of $P_{ij}$ are fixed by $H$, since each is a union of
parts of $Q_m$. Also, for $i<m$, the part of $P_{im}$ containing $\alpha$ is
fixed by $H$. By the defining property of the Cartesian decomposition
$\{P_{ij}\mid 0\leqslant j\leqslant m\mbox{ and }j\neq i\}$, we conclude that $H$ fixes every point lying in
the same part of $P_{im}$ as $\alpha$ and this holds for all $i<m$.
Taking $\alpha=[1,\ldots,1]$, the argument in the last two paragraphs shows
in particular that
$H$ fixes pointwise the part $P_{0m}[\alpha]$ of $P_{0m}$ and the part
$P_{1m}[\alpha]$ of $P_{1m}$ containing
$\alpha$. In other words, $H$ fixes pointwise the sets
\begin{align*}
P_{0m}[\alpha]&=\{[t_1,\ldots,t_{m-1},1]\mid t_1,\ldots,t_{m-1}\in T\}\mbox{ and}\\
P_{1m}[\alpha]&=\{[t_1,\ldots,t_{m-1},t_1]\mid t_1,\ldots,t_{m-1}\in T\}.
\end{align*}
Applying, for a given $t\in T$, the same argument to the element $\alpha'=[t,1,\ldots,1,t]$
of $P_{1m}[\alpha]$, we obtain that $H$ fixes pointwise the set
\[
P_{0m}[\alpha']=\{[t_1,\ldots,t_{m-1},t]\mid t_1,\ldots,t_{m-1}\in T\}.
\]
Letting $t$ run through the elements of $T$, the union of the
parts $P_{0m}[\alpha']$ is $\Omega$, and
this implies that $H$ fixes all elements of $\Omega$ and we are done.
\end{proof}
The particular consequence of Theorem~\ref{t:autDTm} that we require in the proof of the
main theorem is the following.
\begin{cor}\label{c:forinduction}
Suppose that $m\geqslant3$. Let $\mathcal P$ and $\mathcal P'$ be
diagonal semilattices isomorphic to $\mathfrak D(T,m)$, and let $Q$ and
$Q'$ be minimal partitions in
$\mathcal P$ and $\mathcal P'$, respectively.
Then each isomorphism $\psi:\mathcal P/\!\!/ Q\to \mathcal P'/\!\!/ Q'$
is induced by an isomorphism $\overline{\psi}: \mathcal P\to \mathcal P'$
mapping $Q$ to $Q'$.
\end{cor}
\begin{proof}
We may assume without loss of generality that $\mathcal P=\mathcal P'=\mathfrak D(T,m)$ and,
since $\operatorname{Aut}(\mathfrak D(T,m))$ induces $S_{m+1}$ on the minimal partitions
$Q_0,\ldots,Q_m$
of $\mathfrak D(T,m)$, we can also suppose that $Q=Q'=Q_m$.
Thus $\mathcal P/\!\!/ Q= \mathcal P'/\!\!/ Q'\cong \mathfrak D(T,m-1)$.
Let $\sigma:\widehat D(T,m)\to D(T,m)$ be the natural projection map,
as in the proof of
Theorem~\ref{t:autDTm}.
The subgroup of $\operatorname{Aut}(\mathfrak D(T,m))$ fixing $Q_m$ is the image
$X=\sigma(T^{m+1}\rtimes (\operatorname{Aut}(T)\times S_m))$ where the subgroup $S_m$ of $S_{m+1}$
is
the stabiliser of the point $m$ in the action on $\{0,\ldots,m\}$.
Moreover, the subgroup $X$ contains $\sigma(T_m)$, the copy of $T$ acting on the last
coordinate of the $m$-tuples, which is regular on each part
of $Q_m$. Put $Y=\sigma(T_m)$. Then $Y$~is the kernel of the induced action
of $X$ on $\mathcal P/\!\!/ Q_m$, which is isomorphic to $
\mathfrak D(T,m-1)$, and so $X/Y\cong D(T,m-1)$. Moreover since $m\geqslant 3$,
it follows from Theorem~\ref{t:autDTm} that
$X/Y = \operatorname{Aut}(\mathfrak D(T,m-1))$. Thus the given map $\psi$ in
$\operatorname{Aut}(\mathfrak D(T,m-1))$
lies in $X/Y$, and we may choose $\overline{\psi}$ as any pre-image of $\psi$ in $X$.
\end{proof}
\subsection{Proof of the main theorem}\label{s:mtinduction}
Now we begin the proof of Theorem~\ref{th:main}. The proof is by induction
on $m$. As we remarked in
Section~\ref{sect:mt},
there is nothing to prove for $m=2$, and the case $m=3$ follows from
Theorem~\ref{thm:bingo}. Thus we assume that $m\geqslant4$. The induction
hypothesis yields that the main theorem is true for dimensions~$m-1$ and~$m-2$.
Given a special set $\{Q_0,\ldots,Q_m\}$ generating a semilattice $\mathcal{P}$,
we know, by Proposition~\ref{p:quots}, that, for each $i$, $\mathcal{P}/\!\!/ Q_i$
is generated by a special
set of dimension $m-1$, and so is isomorphic to $\mathfrak{D}(T,m-1)$ for
some group $T$. Now, $T$ is independent of the choice of $i$; for, if
$\mathcal{P}/\!\!/ Q_i\cong\mathfrak{D}(T_i,m-1)$, and
$\mathcal{P}/\!\!/ Q_j\cong\mathfrak{D}(T_j,m-1)$, then,
by Proposition~\ref{p:quots}(c),
\[
\mathfrak{D}(T_i,m-2)\cong\mathcal{P} /\!\!/ (Q_i\vee Q_j)
\cong\mathfrak{D}(T_j,m-2),
\]
so by induction $T_i\cong T_j$.
(This proof works even when $m=4$, because it is the reduction to $m=3$ that
gives the groups $T_i$ and $T_j$, so that the Latin squares
$\mathfrak{D}(T_i,2)$ and $\mathfrak{D}(T_j,2)$ are both Cayley tables of groups,
and so Theorem~\ref{thm:albert} implies that $T_i\cong T_j$.)
We call $T$ the \emph{underlying group} of the special set.
\begin{theorem}
\label{th:QQ}
Let $\mathcal{Q}$ and $\mathcal{Q}'$ be special sets of dimension $m\geqslant4$
on sets $\Omega$ and $\Omega'$ with the same underlying group $T$.
Then $\mathcal{Q}$ and $\mathcal{Q'}$ are isomorphic in the sense of
Definition~\ref{def:isomsl}.
\end{theorem}
\begin{proof}
Let $\mathcal{P}$ and $\mathcal{P}'$ be the join-semilattices
generated by $\mathcal{Q}$ and $\mathcal{Q}'$ respectively,
where $\mathcal{Q} = \{Q_0, \ldots, Q_m\}$ and
$\mathcal{Q}' = \{Q'_0, \ldots, Q'_m\}$.
We consider the three partitions $Q_1$, $Q_2$, and
$Q_1\vee Q_2$. Each part of $Q_1\vee Q_2$ is partitioned by $Q_1$ and $Q_2$;
these form a $|T|\times|T|$ grid, where the parts of $Q_1$ are the rows and
the parts of $Q_2$ are the columns. We claim that
\begin{itemize}
\item There is a bijection $F_1$ from the set of parts of $Q_1$ to the set of
parts of $Q_1'$ which induces an isomorphism from $\mathcal{P} /\!\!/ Q_1$ to
$\mathcal{P}' /\!\!/ Q_1'$.
\item There is a bijection $F_2$ from the set of parts of $Q_2$ to the set of
parts of $Q_2'$ which induces an isomorphism from $\mathcal{P} /\!\!/ Q_2$ to
$\mathcal{P}' /\!\!/ Q_2'$.
\item There is a bijection $F_{12}$ from the set of parts of $Q_1\vee Q_2$ to
the set of parts of $Q_1'\vee Q_2'$ which induces an isomorphism from
$\mathcal{P} /\!\!/ (Q_1\vee Q_2)$ to $\mathcal{P}' /\!\!/ (Q_1'\vee Q_2')$;
moreover, each of $F_1$ and $F_2$, restricted to the partitions of
$\mathcal{P}/\!\!/ (Q_1\vee Q_2)$, agrees with $F_{12}$.
\end{itemize}
The proof of these assertions is as follows.
As each part of $Q_1 \vee Q_2$ is a union of parts of $Q_1$,
the partition $Q_1 \vee Q_2$ determines a partition $R_1$ of
$Q_1$ which is a minimal partition of $\mathcal P/\!\!/ Q_1$.
Similarly $Q'_1 \vee Q'_2$ determines a minimal partition $R_1'$ of $\mathcal P'/\!\!/
Q_1'$.
Then since $\mathcal P/\!\!/ Q_1\cong \mathcal P'/\!\!/ Q_1'\cong \mathfrak D(T,m-1)$,
by the induction hypothesis, as discussed above,
we may choose an isomorphism
$F_1: \mathcal P/\!\!/ Q_1\to \mathcal P'/\!\!/ Q_1'$
in the first bullet point such that $R_1$ is mapped to $R_1'$.
Now $F_1$ induces an isomorphism
$(\mathcal P/\!\!/ Q_1)/\!\!/ R_1 \to (\mathcal P'/\!\!/ Q'_1)/\!\!/ R_1'$,
and since there are natural isomorphisms from
$(\mathcal P/\!\!/ Q_1)/\!\!/ R_1$ to
$\mathcal P/\!\!/ (Q_1 \vee Q_2)$ and from
$(\mathcal P'/\!\!/ Q'_1)/\!\!/ R_1'$ to
$\mathcal P'/\!\!/ (Q'_1 \vee Q'_2)$,
$F_1$ induces an isomorphism
\[F_{12}: \mathcal P/\!\!/ (Q_1 \vee Q_2) \to
\mathcal P'/\!\!/ (Q'_1 \vee Q'_2).
\]
The join $Q_1 \vee Q_2$ determines a partition
$R_2$ of $Q_2$ which is a minimal partition of $\mathcal P/\!\!/ Q_2$, and
$Q'_1 \vee Q'_2$ determines a minimal partition $R'_2$ of
$\mathcal P'/\!\!/ Q_2'$. Further, we have natural isomorphisms from
$(\mathcal P/\!\!/ Q_2)/\!\!/ R_2$ to $\mathcal P/\!\!/ (Q_1 \vee Q_2)$ and from
$(\mathcal P'/\!\!/ Q'_2)/\!\!/ R'_2$ to $\mathcal P'/\!\!/ (Q'_1 \vee Q'_2)$,
so we may view $F_{12}$ as an isomorphism from
$(\mathcal P/\!\!/ Q_2)/\!\!/ R_2$ to $(\mathcal P'/\!\!/ Q'_2)/\!\!/ R'_2$.
By Corollary~\ref{c:forinduction}, the isomorphism $F_{12}$ is induced by an
isomorphism from $\mathcal{P} /\!\!/ Q_2$ to $\mathcal{P}' /\!\!/ Q_2'$,
and we take $F_2$ to be this isomorphism.
Thus, $F_{12}$ maps each part $\Delta$ of $Q_1\vee Q_2$ to a part $\Delta'$ of
$Q_1'\vee Q_2'$, and $F_1$ maps the rows of the grid on $\Delta$ described above to the rows of
the grid on $\Delta'$, and similarly $F_2$ maps the columns.
Now the key observation is that there is a unique bijection~$F$ from the points
of $\Delta$ to the points of $\Delta'$ which maps rows to rows (inducing~$F_1$)
and columns to columns (inducing~$F_2$). For each point of $\Delta$ is the
intersection of a row and a column, and can be mapped to the
intersection of the image row and column in $\Delta'$.
Thus, taking these maps on each part of $Q_1\vee Q_2$ and combining them,
we see that there is a unique bijection $F\colon\Omega\to\Omega'$ which induces $F_1$
on the parts of~$Q_1$ and $F_2$ on the parts of~$Q_2$. Since $F_1$ is an
isomorphism from $\mathcal{P} /\!\!/ Q_1$ to $\mathcal{P}' /\!\!/ Q_1'$,
and similarly for $F_2$, we see that
\begin{quote}
$F$ maps every element of $\mathcal{P}$ which is above \emph{either}
$Q_1$ or $Q_2$ to the corresponding element of $\mathcal{P}'$.
\end{quote}
To complete the proof, we have to deal with the remaining partitions of $\mathcal P$
and $\mathcal P'$.
We note that every partition in $\mathcal{P}$ has the form
\[Q_I=\bigvee_{i\in I}Q_i\]
for some $I\subseteq\{0,\ldots,m\}$. By the statement proved in the previous paragraph,
we may assume that $I\cap\{1,2\}=\emptyset$ and in particular that
$|I|\leqslant m-1$.
Suppose first that $|I|\leqslant m-2$. Then there is some $k\in\{0,3,\ldots,m\}$
such that $k\not\in I$. Without loss of generality we may assume that
$0\not\in I$.
Since $\{Q_1,\ldots,Q_m\}$ generates a Cartesian lattice, which is closed
under meet, we have
\[Q_I=Q_{I\cup\{1\}}\wedge Q_{I\cup\{2\}},\]
and since the partitions on the right are mapped by $F$ to $Q'_{I\cup\{1\}}$ and
$Q'_{I\cup\{2\}}$, it follows that $F$ maps $Q_I$ to $Q'_I$.
Consider finally the case when $|I|=m-1$; that is, $I=\{0,3,4,\ldots,m\}$.
As $m\geqslant 4$, we have $0, 3\in I$ and may put
$J = I\setminus \{0,3\}=\{4,\ldots,m\}$.
Then, for $i\in\{0,3\}$, $\left| J \cup \{i\} \right|= m-2$, so
the argument in the previous paragraph shows that $F$ maps $Q_{J \cup \{i\}}$
to $Q'_{J \cup \{i\}}$.
Since $Q_I = Q_{J \cup \{0\}} \vee Q_{J \cup \{3\}}$, it follows
that $F$ maps $Q_I$ to $Q'_I$.
\end{proof}
Now the proof of the main theorem follows. For let $\mathcal{Q}$ be a special
set of partitions of $\Omega$ with underlying group $T$.
By Proposition~\ref{p:diagsemi},
the set of minimal partitions in $\mathfrak{D}(T,m)$ has the same property.
By Theorem~\ref{th:QQ}, $\mathcal{Q}$~is isomorphic to this special set,
so the
join-semilattice it generates is isomorphic to~$\mathfrak{D}(T,m)$.
\section{Primitivity and quasiprimitivity}\label{s:pqp}
A permutation group is said to be \emph{quasiprimitive} if all its non-trivial
normal subgroups are transitive. In particular, primitive groups are
quasiprimitive, but a quasiprimitive group may be imprimitive. If $T$ is a
(not necessarily finite) simple group and $m\geqslant 2$, then the diagonal group
$D(T,m)$ is a primitive permutation group of simple diagonal type;
see~\cite{aschsc}, \cite{kov:sd}, or~\cite[Section~7.4]{ps:cartesian}.
In this section, we investigate the primitivity and quasiprimitivity of diagonal
groups for an arbitrary~$T$; our conclusions are in Theorem~\ref{th:primaut} in
the introduction.
The proof requires some preliminary lemmas.
A subgroup of a group~$G$ is \emph{characteristic} if it is
invariant under $\operatorname{Aut}(G)$. We say that $G$~is \emph{characteristically simple}
if its only characteristic subgroups are itself and $1$. We require some
results about abelian characteristically simple groups.
An abelian group $(T,+)$ is said to be \emph{divisible}
if, for every positive integer~$n$ and every $a\in T$,
there exists $b\in T$ such that $nb=a$. The group $T$ is
\emph{uniquely divisible} if,
for all $a\in T$ and $n\in\mathbb{N}$, the element $b\in T$ with $nb=a$ is unique. Equivalently,
an abelian group $T$ is divisible if and only if
the map $T\to T$, $x\mapsto n x$ is surjective for all $n\in\mathbb{N}$, while
$T$ is uniquely divisible if and only if the same map is bijective
for all $n\in\mathbb{N}$. Uniquely divisible groups are also referred to as
\emph{$\mathbb{Q}$-groups}. If $T$ is a uniquely divisible group,
$p\in\mathbb{Z}$, $q\in \mathbb{Z}\setminus\{0\}$ and $a\in T$, then there is
a unique $b\in T$ such that $qb=a$ and we define $(p/q)a=pb$.
This defines a $\mathbb{Q}$-vector space
structure on~$T$. Also note that any non-trivial uniquely divisible group is
torsion-free.
In the following lemma, elements of $T^{m+1}$ are written as
$(t_0,\ldots,t_m)$ with $t_i\in T$,
and $S_{m+1}$ is considered as the symmetric group
acting on the set $\{0,\ldots,m\}$. Moreover, we let $H$ denote
the group $\operatorname{Aut}(T)\times S_{m+1}$; then $H$ acts on $T^{m+1}$ by
\begin{equation}\label{eq:Gomegaact}
(t_0,\ldots,t_m)(\varphi,\pi)=(t_{0\pi^{-1}}\varphi,\ldots,t_{m\pi^{-1}}\varphi)
\end{equation}
for all $(t_0,\ldots,t_m)$ in $T^{m+1}$, $\varphi$ in $\operatorname{Aut}(T)$,
and $\pi$ in $S_{m+1}$.
The proof of statements (b)--(c) depends on the
assertion that bases exist in an arbitrary vector space, which is a well-known
consequence of the Axiom
of Choice. Of course, in special cases, for instance when $T$ is finite-dimensional
over $\mathbb{F}_p$ or over $\mathbb{Q}$, then the use of the Axiom of Choice can be avoided.
\begin{lem}\label{lem:charab}
The following statements hold for any non-trivial abelian
characteristically simple group~$T$.
\begin{enumerate}
\item Either $T$ is an elementary abelian $p$-group or
$T$ is a uniquely divisible group. Moreover, $T$ can be considered
as an $\mathbb{F}$-vector space, where $\mathbb{F}=\mathbb{F}_p$ in the first
case, while $\mathbb F=\mathbb{Q}$ in the second case.
\item $\operatorname{Aut} (T)$ is transitive on the set $T\setminus\{0\}$.
\item Suppose that $m\geqslant 1$ and put
\begin{align*}
\Delta&=\delta(T,m+1)=\{(t,\ldots,t)\in T^{m+1}\mid t\in T\}\mbox{ and }\\
\Gamma&=\left\{(t_0,\ldots,t_m)\in T^{m+1}\mid \sum_{i=0}^mt_i=0\right\}.
\end{align*}
Then $\Delta$ and $\Gamma$ are $H$-invariant subgroups of $T^{m+1}$.
Furthermore, precisely one of the following holds.
\begin{enumerate}
\item $T$ is an elementary abelian $p$-group where $p\mid(m+1)$,
so that $\Delta\leqslant \Gamma$. In particular, $\Gamma/\Delta$ is an
$H$-invariant subgroup of $T^{m+1}/\Delta$, which is proper if $m\geqslant2$.
\item Either $T$ is uniquely divisible or $T$ is an elementary
abelian $p$-group with $p\nmid (m+1)$. Further, in this case,
$T^{m+1}=\Gamma\oplus \Delta$ and $\Gamma$ has no proper, non-trivial
$H$-invariant subgroup.
\end{enumerate}
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item
First note that, for $n\in\mathbb{N}$, both the image $nT$
and the kernel $\{t\in T\mid nt=0\}$ of the map $t\mapsto nt$ are
characteristic subgroups of $T$.
If $T$ is not a divisible group, then there exist $n\in\mathbb{N}$ and
$a\in T$ such that $a \notin nT$. Thus
$nT\neq T$, and hence, since $T$ is characteristically simple, $nT=0$.
In particular, $T$ contains a non-zero element of finite order,
and hence $T$ also contains an element of order $p$ for some prime~$p$.
Since $T$ is abelian, the set $Y=\{t\in T\mid pt=0\}$ is a non-trivial
characteristic subgroup, and so $Y=T$; that is, $T$ is an
elementary abelian $p$-group and it can be
regarded as an $\mathbb F_p$-vector space.
Hence we may assume that $T$ is a non-trivial divisible group. That is,
$nT=T$ for all $n\in\mathbb{N}$, but also, as $T$ is characteristically simple,
$\{t\in T\mid nt=0\}=\{0\}$
for all $n\in \mathbb{N}$. Hence $T$ is uniquely divisible. In this case, $T$ can be viewed
as a $\mathbb{Q}$-vector space, as explained before the statement of this lemma.
\item
By part~(a), $T$ can be considered as a vector space over some field
$\mathbb F$. If $a,b\in T\setminus\{0\}$, then, by extending the sets $\{a\}$ and
$\{b\}$ into $\mathbb F$-bases, we can construct an $\mathbb F$-linear
transformation that takes $a$ to $b$.
\item
The definition of $\Delta$ and $\Gamma$ implies that they are
$H$-invariant, and also that, if $T$ is an elementary abelian $p$-group
such that $p$ divides $m+1$, then $\Delta<\Gamma$, and so $\Gamma/\Delta$ is a
proper $H$-invariant subgroup of $T^{m+1}/\Delta$.
Assume now that
either $T$ is uniquely divisible or $T$ is a $p$-group with $p\nmid(m+1)$.
Then $T^{m+1}=\Delta\oplus \Gamma$ where the decomposition is into the direct
sum of $H$-modules. It suffices to show that,
if $\mathbf{a}=(a_0,\ldots,a_m)$ is a non-trivial element of $\Gamma$,
then the smallest
$H$-invariant subgroup $X$ that contains
$\mathbf{a}$ is equal to $\Gamma$.
The non-zero element $\mathbf a$ of $\Gamma$ cannot be of the form $(b,\ldots,b)$
for $b\in T\setminus\{0\}$,
because $(m+1)b\neq 0$ whether $T$ is uniquely divisible or $T$ is a $p$-group
with $p\nmid(m+1)$. In
particular there exist distinct $i,j$ in $\{0,\ldots,m\}$
such that $a_i\neq a_j$.
Applying an element $\pi$ in $S_{m+1}$,
we may assume without loss of generality
that $a_0\neq a_1$. Applying the transposition $(0,1)\in S_{m+1}$,
we have that
$(a_1,a_0,a_2,\ldots,a_m)\in X$, and so
\[
(a_0,a_1,a_2,\ldots,a_m)-(a_1,a_0,a_2,\ldots,a_m)=(a_0-a_1,a_1-a_0,0,\ldots,0)\in X.
\]
Hence there is a non-zero element $a\in T$ such that $(a,-a,0,\ldots,0)\in X$.
By part~(b), $\operatorname{Aut}(T)$ is transitive on non-zero
elements of $T$ and hence $(a,-a,0,\ldots,0)\in X$ for
all $a\in T$. As $S_{m+1}$ is transitive on pairs of indices $i,j\in\{0,\ldots,m\}$ with
$i\neq j$, this implies that
all elements of the form $(0,\ldots,0,a,0,\ldots,0,-a,0,\ldots,0)\in T^{m+1}$ belong
to $X$, but these elements generate $\Gamma$, and so $X=\Gamma$, as required.
\end{enumerate}
\end{proof}
Non-abelian characteristically simple groups are harder to describe.
A direct product of pairwise isomorphic non-abelian simple groups is
characteristically simple.
Every finite characteristically simple group is of this form, but in the
infinite case this is not true; the first example of a
characteristically simple group not of this form was published by
McLain~\cite{mclain} in 1954, see also Robinson~\cite[(12.1.9)]{djsr}.
\medskip
Now we work towards the main result of this section, the classification of
primitive or quasiprimitive diagonal groups. First we do the case where $T$ is
abelian.
\begin{lem}\label{lem:prabreg}
Let $G$ be a permutation group on a set $\Omega$ and let $M$ be an
abelian regular normal subgroup of $G$. If $\omega\in\Omega$, then
$G=M\rtimes G_\omega$ and the following are
equivalent:
\begin{enumerate}
\item $G$ is primitive;
\item $G$ is quasiprimitive;
\item $M$ has no proper non-trivial subgroup which is invariant under
conjugation by elements of $G_\omega$.
\end{enumerate}
\end{lem}
\begin{proof}
The product decomposition $G=MG_\omega$ follows from the transitivity of $M$, while
$M\cap G_\omega=1$ follows from the regularity of $M$. Hence $G=M\rtimes G_\omega$.
Assertion~(a) clearly implies assertion~(b). The fact that (b) implies (c) follows
from~\cite[Theorem~3.12(ii)]{ps:cartesian} by noting that $M$, being abelian, has no non-trivial inner automorphisms.
Finally, that (c) implies (a) follows directly from~\cite[Theorem~3.12(ii)]{ps:cartesian}.
\end{proof}
To handle the case where $T$ is non-abelian, we need the following definition
and lemma.
A group $X$ is said to be \emph{perfect} if $X'=X$,
where $X'$ denotes the commutator subgroup.
The following lemma is Lemma 2.3 in \cite{charfact}, where the proof can be
found. For $X=X_1\times\cdots\times X_k$ a direct product of groups and
$S\subseteq\{1,\ldots,k\}$, we denote by $\pi_S$ the projection
from $X$ onto $\prod_{i\in S}X_i$.
\begin{lem}\label{comminside}
Let $k$ be a positive integer, let $X_1,\ldots,X_k$ be groups, and suppose, for
$i\in \{1,\ldots,k\}$, that $N_i$ is a perfect subgroup of $X_i$.
Let $X=X_1\times\cdots\times X_k$ and let $K$ be a subgroup of $X$ such that for
all $i$, $j$ with $1\leqslant i<j\leqslant k$, we have
$N_i\times N_j\leqslant \pi_{\{i,j\}}(K)$. Then $N_1\times\cdots\times N_k\leqslant K$.
\end{lem}
Now we are ready to prove Theorem~\ref{th:primaut}. In this proof, $G$ denotes
the group $D(T,m)$ with $m\geqslant2$. As defined earlier in this section, we let
$H=A\times S$, where $A=\operatorname{Aut}(T)$ and $S=S_{m+1}$.
Various properties of diagonal groups whose proofs are straightforward are
used without further comment.
\begin{proof}[Proof of Theorem~\ref{th:primaut}]
We prove (a)~$\Rightarrow$~(b)~$\Rightarrow$~(c)~$\Rightarrow$~(a).
\begin{itemize}
\item[(a)$\Rightarrow$(b)] Clear.
\item[(b)$\Rightarrow$(c)] We show that $T$ is characteristically simple
by proving the contrapositive. Suppose that $N$ is
a non-trivial proper characteristic subgroup of $T$.
Then $N^{m+1}$ is a normal subgroup of $G$, as is readily
checked. We claim that the orbit of the point $[1,1,\ldots,1]\in\Omega$
under $N^{m+1}$ is $N^m$. We have to check that this set is fixed by right
multiplication by $N^m$ (this is clear, and it is also clear that it is a
single orbit), and that
left multiplication of every coordinate by a fixed element
of $N$ fixes $N^m$ (this is also clear). So $D(T,m)$ has an intransitive
normal subgroup, and is not quasiprimitive.
If $T$ is abelian, then it is either an elementary abelian $p$-group or
uniquely divisible. In the former case, if $p\mid(m+1)$, the subgroup
$\Gamma$ from Lemma~\ref{lem:charab} acts intransitively
on $\Omega$, and is normalised by
$H$; so $G$ is not
quasiprimitive, by Lemma~\ref{lem:prabreg}. (The image of $[0,\ldots,0]$
under the element $(t_0,\ldots,t_m)\in\Gamma$ is
$[t_1-t_0,t_2-t_0,\ldots,t_m-t_0]$, which has coordinate sum zero since
$-mt_0=t_0$. So the orbit of $\Gamma$ consists of $m$-tuples with coordinate
sum zero.)
\item[(c)$\Rightarrow$(a)] Assume that $T$ is characteristically simple, and
not an elementary abelian $p$-group for which $p\mid(m+1)$.
If $T$ is abelian, then it is either uniquely divisible or an elementary
abelian $p$-group with $p\nmid(m+1)$. Then
Lemma~\ref{lem:charab}(c) applies; $T^{m+1}=\Gamma\oplus\Delta$, where
$\Delta$ is the kernel of the action of $T^{m+1}$ on $\Omega$, and $\Gamma$ contains no
proper non-trivial $H$-invariant subgroup; so by Lemma~\ref{lem:prabreg},
$G$ is primitive.
So we may suppose that $T$ is non-abelian and characteristically simple.
Then $Z(T)=1$, and so $T^{m+1}$ acts faithfully on $\Omega$,
and its subgroup $R=T^m$ (the set of elements of $T^{m+1}$ of the form
$(1,t_1,\ldots,t_m)$) acts regularly.
Let $L=\{(t_0,1,\ldots,1) \mid t_0\in T\}$.
Put $N=T^{m+1}$. Then $RL=LR=N \cong L\times R$.
We identify $L$ with $T_0$ and $R$ with $T_1 \times \cdots \times T_m$.
Then $N$ is normal in $G$, and $G=NH$.
Let $\omega=[1,\ldots,1]\in\Omega$ be fixed. Then
$G_\omega=H$ and $N_\omega=I$, where $I$ is the subgroup of $A$
consisting of inner automorphisms of~$T$.
To show that $G$ is primitive on $\Omega$, we show that $G_\omega$ is a
maximal subgroup of $G$. So let $X$ be a subgroup of $G$ that properly
contains $G_\omega$. We will show that $X=G$.
Since $S\leqslant X$, we have that $X=(X\cap (NA))S$.
Similarly, as $N_\omega A \leqslant X \cap (NA)$, we find that
$X \cap (N A) = (X \cap N) A$.
So $X = (X \cap N) (A S) = (X \cap N) G_\omega$.
Then, since $G_\omega$ is a proper subgroup of $X$ and $G_\omega \cap N = N_\omega$,
it follows that $X \cap N$ properly contains $N_\omega$.
Set $X_0=X\cap N$.
Thus there exists a pair $(i,j)$ of distinct indices
and an element $(u_0,u_1,\ldots,u_m)$ in $X_0$ such that $u_i\neq u_j$. Since
$(u_i^{-1},\ldots,u_i^{-1}) \in X_0$, it follows that there exists an
element $(t_0,t_1,\ldots,t_m)\in X_0$ such that $t_i=1$ and $t_j\neq~1$.
Since $S\cong S_{m+1}$ normalises $N_\omega A$ and permutes the
direct factors of $N=T_0\times T_1\times \cdots \times T_m$ naturally,
we may assume without loss of generality that $i=0$ and $j=1$, and hence that
there exists an
element $(1,t_1,\ldots,t_m)\in X_0$ with $t_1\neq 1$; that is,
$T_1\cap\pi_{0,1}(X_0)\neq 1$,
where $\pi_{0,1}$ is the projection from $N$ onto $T_0\times T_1$.
If $\psi\in A$, then $\psi$ normalises $X_0$ and acts
coordinatewise on $T^{m+1}$; so $(1,t_1^\psi,\ldots,t_m^\psi)\in X_0$, so that
$t_1^\psi\in T_1\cap \pi_{0,1}(X_0)$. Now,
$\{t_1^\psi \mid \psi \in A\}$ generates a characteristic subgroup of~$T_1$.
Since $T_1$ is characteristically simple, $T_1\leqslant\pi_{0,1}(X_0)$. A
similar argument shows that $T_0\leqslant \pi_{0,1}(X_0)$. Hence
$T_0\times T_1=\pi_{0,1}(X_0)$. Since the group $S\cong S_{m+1}$ acts
$2$-transitively on the direct factors of $N$, and since $S$ normalises $X_0$
(as $S< G_\omega<X$), we
obtain, for all distinct $i,\ j\in\{1,\ldots,m\}$, that
$\pi_{i,j}(X_0)=T_i\times T_j$ (where $\pi_{i,j}$ is the projection onto
$T_i\times T_j$).
Since the $T_i$ are non-abelian characteristically simple groups, they are
perfect. Therefore Lemma~\ref{comminside} implies that $X_0=N$, and hence
$X=(X_0A)S=G$. Thus $G_\omega$ is a maximal subgroup of $G$, and $G$ is
primitive, as required.
\end{itemize}
\end{proof}
In the case $m=1$, diagonal groups behave a little differently. If $T$ is
abelian, then the diagonal group is simply the holomorph of $T$, which is
primitive (and hence quasiprimitive) if and only if $T$ is characteristically
simple. The theorem is true as stated if $T$ is non-abelian, in which case
the diagonal group is the permutation group on $T$ generated by left and right
multiplication, inversion, and automorphisms of~$T$.
\section{The diagonal graph}\label{s:diaggraph}
The diagonal graph is a graph which stands in a similar relation to the
diagonal semilattice as the Hamming graph does to the Cartesian lattice.
In this section, we define it, show that apart from a few small cases its
automorphism group is the diagonal group, and investigate some of its
properties, including its connection with the permutation group property
of \emph{synchronization}.
We believe that this is an interesting class of graphs, worthy of study by
algebraic graph theorists. The graph $\Gamma_D(T,m)$ has appeared in some
cases: when $m=2$ it is the Latin-square graph associated with the Cayley
table of~$T$, and when $T=C_2$ it is the \emph{folded cube}, a
distance-transitive graph.
\subsection{Diagonal graph and diagonal semilattice}
\label{sec:dgds}
In this subsection we define the \emph{diagonal graph} $\Gamma_D(T,m)$ associated
with a diagonal semilattice $\mathfrak{D}(T,m)$. We show that, except for five
small cases (four of which we already met in the context of Latin-square graphs
in Section~\ref{sect:lsautgp}), the
diagonal semilattice and diagonal graph determine each other, and so they have
the same automorphism group, namely $D(T,m)$.
Let $\Omega$ be the underlying set of a diagonal semilattice
$\mathfrak{D}(T,m)$, for $m\geqslant2$ and for a not necessarily finite group $T$. Let $Q_0,\ldots,Q_m$ be the minimal partitions
of the semilattice (as in Section~\ref{sec:diag1}). We define the diagonal graph as follows.
The vertex set is $\Omega$; two vertices are joined if they lie in the same
part of $Q_i$ for some $i$ with $0\leqslant i\leqslant m$. Since parts of distinct $Q_j$, $Q_{j'}$ intersect in at most one point, the value of $i$ is unique. Clearly
the graph is regular with valency $(m+1)(|T|-1)$ (if $T$ is finite).
We represent the vertex set by $T^m$, with $m$-tuples in square brackets.
Then $[t_1,\ldots,t_m]$ is joined to all vertices obtained by changing one
of the coordinates, and to all vertices $[xt_1,\ldots,xt_m]$ for $x\in T$,
$x\ne1$. We say that the adjacency of two vertices differing in the $i$th
coordinate is of \emph{type $i$}, and that of two vertices differing by a
constant left factor is of \emph{type $0$}.
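As a concrete illustration (our own sketch; the helper \texttt{diagonal\_graph} is an invented name), the adjacency just described can be generated directly for a cyclic group $T=\mathbb{Z}_n$, written additively:
\begin{verbatim}
# Python sketch: adjacency structure of the diagonal graph Gamma_D(Z_n, m).
from itertools import product

def diagonal_graph(n, m):
    vertices = list(product(range(n), repeat=m))
    adj = {v: set() for v in vertices}
    for v in vertices:
        for i in range(m):              # adjacencies of types 1, ..., m
            for t in range(n):
                if t != v[i]:
                    adj[v].add(v[:i] + (t,) + v[i+1:])
        for x in range(1, n):           # adjacencies of type 0
            adj[v].add(tuple((x + c) % n for c in v))
    return adj

adj = diagonal_graph(3, 2)              # Latin-square graph of Z_3
assert all(len(nb) == (2 + 1) * (3 - 1) for nb in adj.values())
\end{verbatim}
The final assertion checks the valency $(m+1)(|T|-1)$ noted above.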
The semilattice clearly determines the graph. So, in particular, the group
$D(T,m)$ acts as a group of graph automorphisms.
If we discard one of the partitions $Q_i$, the remaining partitions form the
minimal partitions in a Cartesian lattice; so the corresponding edges
(those of all types other than~$i$) form a
Hamming graph (Section~\ref{sec:HGCD}). So the diagonal graph is the
edge-union of $m+1$ Hamming graphs $\operatorname{Ham}(T,m)$ on the same set of vertices.
Moreover, two vertices lying in a part of $Q_i$ lie at
maximal distance~$m$ in the Hamming graph obtained by removing $Q_i$.
\begin{theorem}
If $(T,m)$ is not $(C_2,2)$, $(C_3,2)$, $(C_4,2)$, $(C_2\times C_2,2)$, or
$(C_2,3)$, then the diagonal graph determines uniquely the diagonal semilattice.
\label{t:autdiaggraph}
\end{theorem}
\begin{proof}
We handled the case $m=2$ in Proposition~\ref{p:lsgraphaut} and the following
comments, so we can assume that $m\geqslant3$.
The assumption that $m\geqslant3$ has as a consequence that the parts of the
partitions $Q_i$ are the maximal cliques of the graph. For clearly they are
cliques. Since any clique of size $2$ or $3$ is contained in a Hamming graph,
we see that any clique of size greater than~$1$ is contained in a
maximal clique, which has this form; and it is the unique maximal clique
containing the given clique. (See the discussion of cliques in Hamming
graphs in the proof of Theorem~\ref{th:cdham}.)
So all the parts of the partitions $Q_i$ are determined by the graph; we
need to show how to decide when two cliques are parts of the same partition.
We call each maximal clique a \emph{line}; we say it is an \emph{$i$-line},
or has \emph{type~$i$}, if it is a part of $Q_i$. (So an $i$-line is a maximal
set any two of whose vertices are type-$i$ adjacent.) We have to show that the
partition of lines into types is determined by the graph structure. This
involves a closer study of the graph.
Since the graph admits $D(T,m)$, which induces the symmetric group $S_{m+1}$
on the set of types of line, we can assume (for example) that if we have
three types involved in an argument, they are types $1$, $2$ and $3$.
Call lines $L$ and $M$ \emph{adjacent} if they are disjoint but there are
vertices $x\in L$ and $y\in M$ which are adjacent. Now the following holds:
\begin{quote}
Let $L$ and $M$ be two lines.
\begin{itemize}\itemsep0pt
\item If $L$ and $M$ are adjacent $i$-lines, then every vertex in $L$ is
adjacent to a vertex in $M$.
\item If $L$ is an $i$-line and $M$ a $j$-line adjacent to $L$, with $i\ne j$,
then there are at most two vertices in $L$ adjacent to a vertex in $M$, and
exactly one such vertex if $m>3$.
\end{itemize}
\end{quote}
For suppose that two lines $L$ and $M$ are adjacent, and suppose first that
they have the same type, say type $1$, and that $x\in L$ and $y\in M$ are
on a line of type~$2$. Then $L=\{[*,a_2,a_3,\ldots,a_m]\}$ and
$M=\{[*,b_2,b_3,\ldots,b_m]\}$, where $*$ denotes an arbitrary element of $T$.
We have $a_2\ne b_2$ but $a_i=b_i$ for
$i=3,\ldots,m$. The common neighbours on the two lines
are obtained by taking the entries $*$ to be equal in the two lines.
(The conditions show that there cannot be an adjacency of type $i\ne 2$ between
them.)
Now suppose that $L$ has type~$1$ and $M$ has type~$2$, with a line of
type~$3$ joining vertices on these lines. Then we have $L=\{[*,a_2,a_3,\ldots,a_m]\}$ and
$M=\{[b_1,*,b_3,\ldots,b_m]\}$, where $a_3\ne b_3$ but $a_i=b_i$ for $i>3$;
the adjacent vertices are obtained
by putting ${*}=b_1$ in $L$ and ${*}=a_2$ in $M$.
If $m>3$, there is no adjacency of any other type between the lines.
If $m=3$, things are a little different. There is one type~$3$ adjacency between
the lines $L=\{[*,a_2,a_3]\}$ and $M=\{[b_1,*,b_3]\}$ with $a_3\ne b_3$, namely
$[b_1,a_2,a_3]$ is adjacent to $[b_1,a_2,b_3]$. There is also one type-$0$
adjacency, corresponding to multiplying $L$ on the left by $b_3a_3^{-1}$:
this makes $[x,a_2,a_3]$ adjacent to $[b_1,y,b_3]$ if and only if
$b_3a_3^{-1}x=b_1$ and $b_3a_3^{-1}a_2=y$, determining $x$ and $y$ uniquely.
So we can split adjacency of lines into two kinds: the first kind when the
edges between the two lines form a perfect matching
(so there are $|T|$ such edges); the second kind where
there are at most two such edges (and, if $m>3$, exactly one). Now two
adjacent lines have the same type if and only if the adjacency is of the first
kind. So, if either $m>3$ or $|T|>2$, the two kinds of adjacency are
determined by the graph.
Make a new graph whose vertices are the lines, two lines adjacent if their
adjacency in the preceding sense is of the first kind. Then lines in the
same connected component of this graph have the same type. The converse is
also true, as can be seen within a Hamming subgraph of the diagonal graph.
Thus the partition of lines into types is indeed determined by the graph
structure, and is preserved by automorphisms of the graph.
Finally we have to consider the case where $m=3$ and $T=C_2$. In general,
for $T=C_2$, the Hamming graph is the $m$-dimensional cube, and has a unique
vertex at distance $m$ from any given vertex; in the diagonal graph, these
pairs of antipodal vertices are joined. This is the graph known as the
\emph{folded cube} (see \cite[p.~264]{bcn}). The arguments given earlier apply
if $m\geqslant4$; but, if $m=3$, the graph is the complete bipartite graph $K_{4,4}$,
and any two disjoint edges are contained in a $4$-cycle.
\end{proof}
\begin{cor}\label{c:sameag}
Except for the cases $(T,m)=(C_2,2)$, $(C_3,2)$, $(C_2\times C_2,2)$, and
$(C_2,3)$, the diagonal semilattice $\mathfrak{D}(T,m)$ and the
diagonal graph $\Gamma_D(T,m)$ have the same automorphism group, namely
the diagonal group $D(T,m)$.
\end{cor}
\begin{proof}
This follows from Theorem~\ref{t:autdiaggraph} and the fact that
$\Gamma_D(C_4,2)$ is the complement of the Shrikhande graph, whose automorphism group is
$D(C_4,2)$: see Section~\ref{sect:lsautgp}.
\end{proof}
\subsection{Properties of finite diagonal graphs}
We have seen some graph-theoretic properties of $\Gamma_D(T,m)$ above.
In this subsection we assume that $T$ is finite and $m\geqslant2$, though we often have to exclude
the case $m=|T|=2$ (where, as we have seen, the diagonal graph is the complete
graph $K_4$).
The \emph{clique number} $\omega(\Gamma)$ of a graph~$\Gamma$
is the number of vertices in its largest clique; the
\emph{clique cover number} $\theta(\Gamma)$ is the smallest number of cliques
whose union contains every vertex; and the \emph{chromatic number}
$\chi(\Gamma)$ is the smallest number of colours required to colour the
vertices so that adjacent vertices receive different colours.
The following properties are consequences of Section~\ref{sec:dgds},
especially the proof of Theorem~\ref{t:autdiaggraph}. We give brief
explanations or pointers to each claim.
\begin{itemize}
\item There are $|T|^m$ vertices, and the valency is $(m+1)(|T|-1)$. (The
number of vertices is clear; each point $v$ lies in a unique part of size
$|T|$ in each of the $m+1$ minimal partitions of the diagonal semilattice.
Each of these parts is a maximal clique, the parts pairwise intersect
only in $v$, and the union of the parts contains all the neighbours of $v$.)
\item Except for the case $m=|T|=2$, the clique number is $|T|$, and the
clique cover number is $|T|^{m-1}$. (The parts of each minimal partition
carry maximal cliques, and thus each minimal partition realises a minimal-size
partition of the vertex set into cliques.)
\item $\Gamma_D(T,m)$ is isomorphic to $\Gamma_D(T',m')$ if and only if
$m=m'$ and $T\cong T'$. (The graph is constructed from the semilattice; and
if $m>2$, or $m=2$ and $|T|>4$, the semilattice is recovered from the graph as
in Theorem~\ref{t:autdiaggraph}; for the remaining cases, see the discussion
after Proposition~\ref{p:lsgraphaut}.)
\end{itemize}
Distances and diameter can be calculated as follows. We define two sorts of
adjacency: (A1) is $i$-adjacency for $i\ne0$, while (A2) is $0$-adjacency.
\subsubsection*{Distances in $\Gamma_D(T,m)$} We observe first that, in any
shortest path, adjacencies of fixed type occur
at most once. This is because different factors of $T^{m+1}$ commute, so
we can group those in each factor together.
We also note that distances cannot exceed $m$, since any two vertices are
joined by a path of length at most $m$ using only edges of sort (A1) (which
form a Hamming graph). So a path of smallest length is contained within a
Hamming graph.
Hence, for any two vertices $t=[t_1,\ldots,t_m]$ and $u=[u_1,\ldots,u_m]$, we
compute the distance in the graph by the following procedure (implemented in the sketch after the list):
\begin{itemize}
\item[(D1)] Let $d_1=d_1(t,u)$ be the Hamming distance between the vertices
$[t_1,\ldots,t_m]$
and $[u_1,\ldots,u_m]$. (This is the length of the shortest path not using a
$0$-adjacency.)
\item[(D2)] Calculate the quotients $u_it_i^{-1}$ for $i=1,\ldots,m$. Let
$\ell$ be the maximum number of times that a non-identity element of $T$ occurs
as one of these quotients, and set $d_2=m-\ell+1$. (We can apply left
multiplication by this common quotient to find a vertex at distance one from
$t$; then use right multiplication by $m-\ell$ appropriate elements to make the
remaining elements agree. This is the length of the shortest path using a
$0$-adjacency.)
\item[(D3)] Now the graph distance $d(t,u)=\min\{d_1,d_2\}$.
\end{itemize}
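The procedure is easy to implement. The following sketch (ours) works additively in $T=\mathbb{Z}_n$, so the quotient $u_it_i^{-1}$ becomes the difference $u_i-t_i$:
\begin{verbatim}
# Python sketch of the distance procedure (D1)-(D3) in Gamma_D(Z_n, m).
from collections import Counter

def diagonal_distance(t, u, n):
    m = len(t)
    d1 = sum(ti != ui for ti, ui in zip(t, u))            # (D1)
    quots = Counter((ui - ti) % n for ti, ui in zip(t, u))
    ell = max((c for q, c in quots.items() if q != 0), default=0)
    d2 = m - ell + 1                                      # (D2)
    return min(d1, d2)                                    # (D3)

# [1,1,1] is one type-0 step away from [0,0,0] in Gamma_D(Z_3, 3):
assert diagonal_distance((0, 0, 0), (1, 1, 1), 3) == 1
\end{verbatim}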
\subsubsection*{Diameter of $\Gamma_D(T,m)$} An easy argument shows that the diameter of the graph is
$m+1-\lceil (m+1)/|T|\rceil$ which is at most
$m$, with equality if and only if $|T|\geqslant m+1$. The bound $m$ also follows
directly from the fact that, in the previous procedure, both $d_1$
and $d_2$ are at most $m$.
If $|T|\geqslant m+1$, let $1,t_1,t_2,\ldots,t_m$ be pairwise distinct elements
of~$T$. It is easily
checked that $d([1,\ldots,1],[t_1,\ldots,t_m])=m$. For clearly $d_1=m$;
and for $d_2$ we note that all the quotients are distinct, so $\ell=1$ and $d_2=m$.
\subsubsection*{Chromatic number}
This has been investigated in two special cases: the case $m=2$ (Latin-square
graphs) in \cite{ghm}, and the case where $T$ is a non-abelian finite simple
group in \cite{bccsz} in connection with synchronization.
We have not been able to compute the chromatic number in all cases;
this section describes
what we have been able to prove.
The argument in~\cite{bccsz} uses the truth of the
\emph{Hall--Paige conjecture} by
Wilcox~\cite{wilcox}, Evans~\cite{evans}
and Bray et al.~\cite{bccsz},
which we briefly discuss.
(See \cite{bccsz} for the history of the proof of this conjecture.)
\begin{defn}
A \emph{complete mapping} on a group $G$ is a bijection $\phi:G\to G$ for
which the map $\psi:G\to G$ given by $\psi(x)=x\phi(x)$ is also a bijection.
The map $\psi$ is the \emph{orthomorphism} associated with $\phi$.
\end{defn}
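For small groups the definition can be tested by brute force. The following sketch (ours; \texttt{is\_complete\_mapping} is an invented name) does this for cyclic groups, written additively; its two assertions agree with the Hall--Paige criterion stated below.
\begin{verbatim}
# Python sketch: phi (a list of images) is a complete mapping of Z_n
# when both phi and x -> x + phi(x) are bijections.
from itertools import permutations

def is_complete_mapping(phi, n):
    if sorted(phi) != list(range(n)):
        return False
    return sorted((x + phi[x]) % n for x in range(n)) == list(range(n))

assert is_complete_mapping([0, 1, 2], 3)   # identity map, odd order
# Z_4 has a cyclic Sylow 2-subgroup, so no complete mapping exists:
assert not any(is_complete_mapping(list(p), 4)
               for p in permutations(range(4)))
\end{verbatim}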
In a Latin square, a \emph{transversal} is a set of cells, one in each row,
one in each column, and one containing each letter; an \emph{orthogonal mate}
is a partition of the cells into transversals.
It is well known
(see also~\cite[Theorems~1.4.1 and 1.4.2]{DK:book})
that the following three conditions on a finite group $G$ are
equivalent. (The original proof is in \cite[Theorem~7]{paige}.)
\begin{itemize}\itemsep0pt
\item $G$ has a complete mapping;
\item the Cayley table of $G$ has a transversal;
\item the Cayley table of $G$ has an orthogonal mate.
\end{itemize}
The \emph{Hall--Paige conjecture} \cite{hp} (now, as noted, a theorem),
asserts the following:
\begin{theorem}\label{th:hp}
The finite group $G$ has a complete mapping if and only if either $G$ has odd
order or the Sylow $2$-subgroups of $G$ are non-cyclic.
\end{theorem}
Now let $T$~be a finite group and let $m$~be an integer greater
than~$1$, and consider the diagonal graph $\Gamma_D(T,m)$. The chromatic
number of a graph cannot be smaller than its clique number. We saw at the
start of this section that the clique number is $|T|$ unless $m=2$ and $|T|=2$.
\begin{itemize}
\item Suppose first that $m$ is odd. We give the vertex $[t_1,\ldots,t_m]$
the colour
$u_1u_2 \cdots u_m$ in $T$, where $u_i=t_i$ if $i$~is odd
and $u_i=t_i^{-1}$ if $i$~is even.
If two vertices lie in a part
of $Q_i$ with $i>0$, they differ only in the $i$th coordinate, and clearly
their colours differ. Suppose that $[t_1,\ldots,t_m]$ and $[s_1,\ldots,s_m]$
lie in the same part of $Q_0$, so that $s_i=xt_i$ for $i=1,\ldots,m$,
where $x\ne1$. Put $v_i=s_i$ if $i$ is odd and $v_i=s_i^{-1}$ if $i$~is even.
Then $v_iv_{i+1} = u_iu_{i+1}$ whenever $i$ is even, so
the colour of the second vertex is
\[v_1v_2 \cdots v_m = v_1 u_2 \cdots u_m =xu_1 u_2 \cdots u_m,\]
which is different from that of the first vertex since $x\ne1$. (This colouring is verified mechanically in the sketch following this list.)
\item Now suppose that $m$ is even and assume in this case that the Sylow
$2$-subgroups of $T$ are trivial or non-cyclic. Then, by
Theorem~\ref{th:hp}, $T$~has a complete mapping~$\phi$. Let $\psi$ be
the corresponding orthomorphism. We define the colour of the vertex
$[t_1,\ldots,t_m]$ to be
\[t_1^{-1}t_2t_3^{-1}t_4\cdots t_{m-3}^{-1}t_{m-2}t_{m-1}^{-1}\psi(t_m).\]
An argument similar to but a little more elaborate than in the other case
shows that this is a proper colouring. We refer to \cite{bccsz} for details.
\end{itemize}
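Both constructions can be checked mechanically. The following sketch (ours; all names are invented) verifies the odd-$m$ colouring $t_1t_2^{-1}t_3$ for $T=S_3$ and $m=3$:
\begin{verbatim}
# Python sketch: v = [t1,t2,t3] |-> t1 * t2^(-1) * t3 is a proper
# colouring of Gamma_D(S_3, 3).  Permutations are tuples; mult composes.
from itertools import product, permutations

def mult(a, b):
    return tuple(a[b[i]] for i in range(len(b)))

def inv(a):
    r = [0] * len(a)
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

T = list(permutations(range(3)))     # the symmetric group S_3
e = tuple(range(3))                  # identity element
m = 3

def colour(v):
    return mult(mult(v[0], inv(v[1])), v[2])

def neighbours(v):
    nbs = set()
    for i in range(m):               # adjacencies of types 1, ..., m
        for t in T:
            if t != v[i]:
                nbs.add(v[:i] + (t,) + v[i+1:])
    for x in T:                      # type 0: common left factor
        if x != e:
            nbs.add(tuple(mult(x, c) for c in v))
    return nbs

for v in product(T, repeat=m):
    assert all(colour(w) != colour(v) for w in neighbours(v))
\end{verbatim}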
With a little more work we get the following theorem, a
contribution to the general question concerning the chromatic number of
the diagonal graphs. Let $\chi(T,m)$ denote the chromatic
number of $\Gamma_D(T,m)$.
\begin{theorem}\label{thm:chrom}
\begin{enumerate}
\item If $m$ is odd, or if $|T|$ is odd, or if the Sylow $2$-subgroups of
$T$ are non-cyclic, then $\chi(T,m)=|T|$.
\item If $m$ is even, then $\chi(T,m)\leqslant\chi(T,2)$.
\end{enumerate}
\end{theorem}
All cases in (a) were settled above; we turn to~(b).
A \emph{graph homomorphism} from $\Gamma$ to $\Delta$ is a map from the
vertex set of $\Gamma$ to that of $\Delta$ which maps edges to edges.
A proper $r$-colouring of a graph $\Gamma$ is a homomorphism from $\Gamma$
to the complete graph $K_r$. Since the composition of homomorphisms is a
homomorphism, we see that if there is a homomorphism from $\Gamma$ to
$\Delta$ then there is a colouring of $\Gamma$ with $\chi(\Delta)$ colours,
so $\chi(\Gamma)\leqslant\chi(\Delta)$.
\begin{theorem}\label{thm:diagepi}
For any $m\geqslant 3$ and non-trivial finite group $T$, there is a homomorphism from $\Gamma_D(T,m)$
to $\Gamma_D(T,m-2)$.
\end{theorem}
\begin{proof} We define a map by mapping a vertex $[t_1,t_2,\ldots,t_m]$ of
$\Gamma_D(T,m)$ to the vertex $[t_1t_2^{-1}t_3,t_4,\ldots,t_m]$ of
$\Gamma_D(T,m-2)$, and show that this map is a homomorphism. If
two vertices of $\Gamma_D(T,m)$ agree in all but position~$j$, then their
images agree in all but position $1$ (if $j\leqslant 3$) or $j-2$ (if $j>3$).
Suppose that $t_i=xs_i$ for $i=1,\ldots,m$. Then
$t_1t_2^{-1}t_3=xs_1s_2^{-1}s_3$, so the images of $[t_1,\ldots,t_m]$ and
$[s_1,\ldots,s_m]$ are joined. This completes the proof.
\end{proof}
This also completes the proof of Theorem~\ref{thm:chrom}.
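In additive notation the map in this proof reads $[t_1,\ldots,t_m]\mapsto[t_1-t_2+t_3,t_4,\ldots,t_m]$, and edge-preservation can be confirmed by exhaustion, as in the following minimal sketch (ours) for $T=\mathbb{Z}_3$ and $m=4$:
\begin{verbatim}
# Python sketch: f maps every edge of Gamma_D(Z_3, 4) to an edge
# of Gamma_D(Z_3, 2).
from itertools import product

n, m = 3, 4

def nbrs(v):
    out, k = set(), len(v)
    for i in range(k):
        for t in range(n):
            if t != v[i]:
                out.add(v[:i] + (t,) + v[i+1:])
    for x in range(1, n):
        out.add(tuple((x + c) % n for c in v))
    return out

def f(v):
    return ((v[0] - v[1] + v[2]) % n,) + v[3:]

for v in product(range(n), repeat=m):
    assert all(f(w) in nbrs(f(v)) for w in nbrs(v))
\end{verbatim}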
\medskip
The paper \cite{ghm} reports new results on the chromatic number of a
Latin-square graph; in particular, if $|T|\geqslant 3$ then
$\chi(T,2)\leqslant 3|T|/2$. Its authors also report a conjecture of Cavenagh,
which claims that $\chi(T,2)\leqslant |T|+2$,
and they prove this conjecture in the case where $T$ is abelian.
Payan~\cite{Payan} showed that graphs in a class he called ``cube-like''
cannot have chromatic number~$3$. Now $\Gamma_D(C_2,2)$,
which is the complete graph~$K_4$, has chromatic number~$4$; and the
folded cubes $\Gamma_D(C_2,m)$ are ``cube-like'' in Payan's sense.
It follows from Theorems~\ref{thm:chrom} and~\ref{thm:diagepi} that the
chromatic number of the folded cube $\Gamma_D(C_2,m)$ is $2$ if $m$~is odd and
$4$ if $m$~is even. So the bound in Theorem~\ref{thm:chrom}(b) is attained if
$T\cong C_2$.
\subsection{Synchronization}\label{sec:Synch}
A permutation group $G$ on a finite set $\Omega$ is said to be
\emph{synchronizing} if, for any map $f:\Omega\to\Omega$ which is not a
permutation, the transformation monoid $\langle G,f\rangle$ on $\Omega$
generated by $G$ and $f$ contains a map of rank~$1$ (that is, one which maps
$\Omega$ to a single point). For the background of this notion in automata
theory, we refer to \cite{acs:synch}.
The most important tool in the study of synchronizing groups is the following
theorem \cite[Corollary~4.5]{acs:synch}. A graph is \emph{trivial} if it
is complete or null.
\begin{theorem}\label{th:nonsynch}
A permutation group $G$ is synchronizing if and only if no non-trivial
$G$-invariant graph has clique number equal to chromatic number.
\end{theorem}
From this it immediately follows that a synchronizing group is transitive
(if $G$ is intransitive, take a complete graph on one orbit of~$G$), and
primitive (take the disjoint union of complete graphs on the blocks in a
system of imprimitivity for~$G$). Now, by the O'Nan--Scott theorem
(Theorem~\ref{thm:ons}), a
primitive permutation group preserves a Cartesian or diagonal semilattice or
an affine space, or else is almost simple.
\begin{theorem}
If a group $G$ preserves a Cartesian decomposition, then it is non-synchro\-nizing.
\end{theorem}
This holds because the Hamming graph has clique number equal to chromatic
number. (We saw in the proof of Theorem~\ref{th:cdham} that the clique number of
the Hamming graph is equal to the
cardinality of the alphabet. Take the alphabet $A$ to be an abelian group;
also use $A$ for the set of colours, and give the $n$-tuple
$(a_1,\ldots,a_n)$ the colour $a_1+\cdots+a_n$. If two $n$-tuples are
adjacent in the Hamming graph, they differ in just one coordinate, and so
get different colours.)
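This colouring is easy to demonstrate computationally; a short sketch of ours, in additive notation:
\begin{verbatim}
# Python sketch: colour Ham(Z_q, n) by the coordinate sum mod q;
# adjacent tuples differ in one coordinate, so colours differ.
from itertools import product

q, n = 3, 4
colour = lambda v: sum(v) % q
for v in product(range(q), repeat=n):
    for i in range(n):
        for t in range(q):
            if t != v[i]:
                assert colour(v[:i] + (t,) + v[i+1:]) != colour(v)
\end{verbatim}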
In \cite{bccsz}, it is shown that a primitive diagonal group whose socle
contains $m+1$ simple factors with $m>1$ is non-synchronizing.
In fact, considering Theorem~\ref{th:primaut}, the following more general result
is valid.
\begin{theorem}
If $G$ preserves a diagonal semilattice $\mathfrak{D}(T,m)$ with $m>1$ and $T$
a finite group of order greater than~$2$, then $G$ is non-synchronizing.
\end{theorem}
\begin{proof}
If $T$ is not characteristically simple then Theorem~\ref{th:primaut} implies
that $G$~is imprimitive and so it is non-synchronizing. Suppose that $T$ is
characteristically simple and let $\Gamma$ be the diagonal graph
$\Gamma_D(T,m)$. Since we have excluded the case $|T|=2$, the clique number of
$\Gamma$ is $|T|$, as we showed in the preceding subsection. Also, either $T$
is an elementary abelian group of odd order or the Sylow 2-subgroups of $T$
are non-cyclic. (This is clear unless $T$ is simple, in which case it follows
from Burnside's Transfer Theorem, see \cite[(39.2)]{asch}.) So, by
Theorem~\ref{thm:chrom}, $\chi(\Gamma)=|T|$. Now Theorem~\ref{th:nonsynch}
implies that $D(T,m)$ is non-synchronizing;
since $G\leqslant D(T,m)$, also $G$~is non-synchronizing.
\end{proof}
\begin{remark}
It follows from the above that a synchronizing permutation group must be of one
of the following types: affine (with the point stabiliser a primitive linear
group); simple diagonal with socle the product of two copies of a non-abelian
simple group; or almost simple. In the first and third cases, some but not all
such groups are synchronizing; in the second case, no synchronizing example
is known.
\end{remark}
\section{Open problems}\label{s:problems}
Here are a few problems that might warrant further investigation.
For $m\geqslant 3$, Theorem~\ref{th:main} characterised $m$-dimensional special sets of
partitions as minimal partitions in join-semilattices $\mathfrak D(T,m)$ for a
group $T$. However, for $m=2$, such special sets arise from an arbitrary quasigroup $T$.
The automorphism group of the join-semilattice generated by a 2-dimensional special
set is the autoparatopism group of the quasigroup $T$ and, for $|T|>4$,
it also coincides with
the automorphism group of the corresponding Latin-square graph
(Proposition~\ref{p:autlsg}).
Since we wrote the first draft of the paper, Michael Kinyon has pointed out to
us that the Paige loops~\cite{paige:loops} (which were shown by
Liebeck~\cite{liebeck} to be the only finite simple Moufang loops which are not
groups) have vertex-primitive autoparatopism groups.
\begin{problem}
Determine whether there exists a quasigroup $T$, not isotopic to a group or
a Paige loop, whose
autoparatopism group is primitive.
This is equivalent to requiring that the automorphism group of the corresponding
Latin-square graph is vertex-primitive; see Proposition~\ref{p:autlsg}.
\end{problem}
If $T$ is a non-abelian finite simple group and $m\geqslant 3$,
then the diagonal group $D(T,m)$ is a maximal subgroup of the
symmetric or alternating group~\cite{LPS}. What happens in the infinite
case?
\begin{problem}
Find a maximal subgroup of $\operatorname{Sym}(\Omega)$ that contains
the diagonal group $D(T,m)$ if $T$ is an infinite simple group. If $\Omega$
is countably infinite, then by~\cite[Theorem~1.1]{macpr},
such a maximal subgroup exists.
(For a countable set, \cite{covmpmek} describes maximal subgroups
that stabilise a Cartesian lattice.)
\end{problem}
\begin{problem}
Investigate the chromatic number $\chi(T,m)$ of the
diagonal graph $\Gamma_D(T,m)$ if $m$ is even and $T$ has no complete mapping.
In particular, either show that the bound in Theorem~\ref{thm:chrom}(b)
is always attained (as we noted, this is true for $T=C_2$) or improve this bound.
\end{problem}
For the next case where the Hall--Paige conditions fail, namely $T=C_4$,
the graph $\Gamma_D(T,2)$ is the complement of the Shrikhande graph, and has
chromatic number $6$; so, for any even $m$, the chromatic number of
$\Gamma_D(T,m)$ is $4$, $5$ or $6$, and the sequence of chromatic numbers is
non-increasing.
If $T$ is a direct product of $m$ pairwise isomorphic non-abelian simple groups,
with $m$ an integer and $m>1$, then $D(T,m)$ preserves a Cartesian lattice
by \cite[Lemma~7.10(ii)]{ps:cartesian}. Here $T$ is not necessarily finite,
and groups with this property are called FCR (finitely completely reducible) groups.
However there are other infinite characteristically simple groups,
for example the McLain group~\cite{mclain}.
\begin{problem}
Determine whether there exist characteristically simple (but not simple) groups $T$
which are not FCR-groups, and integers $m>1$, such that $D(T,m)$ preserves a
Cartesian lattice.
It is perhaps the case that $D(T,m)$ does not preserve a Cartesian lattice
for these groups $T$; and we ask further whether $D(T,m)$ might still preserve some
kind of structure that has more automorphisms than the diagonal semilattice.
\end{problem}
\begin{problem}\label{p2}
Describe sets of more than $m+1$ partitions of
$\Omega$, any $m$ of which are the minimal elements in a Cartesian lattice.
\end{problem}
For $m=2$, these are equivalent to sets of mutually orthogonal Latin squares.
For $m>2$, any $m+1$ of the partitions are the minimal elements in a
diagonal semilattice $\mathfrak{D}(T,m)$. Examples are known when $T$ is
abelian. One such family is given as follows. Let $T$ be the additive group
of a field $F$ of order $q$, where $q>m+1$; let $F=\{a_1,a_2,\ldots,a_q\}$.
Then let $W=F^m$. For $i=1,\ldots,q$, let $W_i$ be the subspace
spanned by $(1,a_i,a_i^2,\ldots,a_i^{m-1})$, and let $W_0$ be the subspace
spanned by $(0,0,\ldots,0,1)$. The coset partitions of $W$ given by these
$q+1$ subspaces have the property that any $m$ of them are the minimal elements
in a Cartesian lattice of dimension $m$ (since any $m$ of the given vectors
form a basis of $W$). Note the connection with MDS codes and geometry: the
$1$-dimensional subspaces are the points of a normal rational curve in
$\mathrm{PG}(m-1,F)$. See~\cite{btb}.
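The basis property reduces to the non-vanishing of Vandermonde-type determinants, and can be verified directly, as in the following sketch (ours; $q$ is taken prime for simplicity and \texttt{det\_mod\_p} is an invented helper):
\begin{verbatim}
# Python sketch: any m of the q+1 vectors (1, a, ..., a^(m-1)) and
# (0, ..., 0, 1) form a basis of F_q^m (here q = 7, m = 3).
from itertools import combinations

def det_mod_p(rows, p):
    # Gaussian elimination over F_p; returns the determinant up to
    # sign, which is zero exactly when the matrix is singular.
    M = [list(r) for r in rows]
    k, d = len(M), 1
    for c in range(k):
        piv = next((r for r in range(c, k) if M[r][c] % p), None)
        if piv is None:
            return 0
        M[c], M[piv] = M[piv], M[c]
        d = d * M[c][c] % p
        s = pow(M[c][c], p - 2, p)      # inverse of the pivot mod p
        for r in range(c + 1, k):
            fac = M[r][c] * s % p
            for j in range(c, k):
                M[r][j] = (M[r][j] - fac * M[c][j]) % p
    return d

p, m = 7, 3
vecs = [[pow(a, i, p) for i in range(m)] for a in range(p)]
vecs.append([0] * (m - 1) + [1])
assert all(det_mod_p(c, p) != 0 for c in combinations(vecs, m))
\end{verbatim}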
For which non-abelian groups $T$ do examples with $m>2$ exist?
\begin{problem}
With the hypotheses of Problem~\ref{p2}, find a good upper bound
for the number of partitions, in terms of $m$ and $T$.
\end{problem}
We note one trivial bound: the number of such partitions cannot exceed
$m+|T|-1$. This is well-known when $m=2$ (there cannot be more than $|T|-1$
mutually orthogonal Latin squares of order $|T|$). Now arguing inductively
as in the proof of Proposition~\ref{p:quots}, we see that increasing $m$ by
one can increase the number of partitions by at most one.
\medskip
Since the first draft of this paper was written, three of the authors and
Michael Kinyon have written a paper \cite{bckp} addressing (but by no means
solving) the last two problems above.
\section*{Acknowledgements} Part of the work was done while the authors were visiting
the Southern University of Science and Technology (SUSTech), Shenzhen, in 2018, and
we are grateful (in particular to Professor Cai Heng Li)
for the hospitality that we received.
The authors would like to thank the
Isaac Newton Institute for Mathematical Sciences, Cambridge,
for support and hospitality during the programme
\textit{Groups, representations and applications: new perspectives}
(supported by \mbox{EPSRC} grant no.\ EP/R014604/1),
where further work on this paper was undertaken.
In particular we acknowledge a Simons Fellowship (Cameron) and a Kirk Distinguished
Visiting Fellowship (Praeger) during this programme. Schneider thanks the Centre for the Mathematics of Symmetry
and Computation
of The University of Western Australia and
Australian Research Council Discovery Grant DP160102323
for hosting his visit
in 2017 and
acknowledges the
support of the CNPq projects \textit{Produtividade em Pesquisa}
(project no.: 308212/2019-3)
and \textit{Universal} (project no.: 421624/2018-3).
We are grateful to Michael Kinyon for comments on an earlier version of the
paper and to the anonymous referee for his or her careful reading of the manuscript.
\section{Introduction}
The majority of electronic structure methods are built upon the orbital picture. In the simplest models, electrons are understood to behave essentially independently, interacting only with the average field produced by the other electrons. This picture is acceptable when it is possible to assign orbitals unambiguously as occupied or unoccupied, i.e., when the energy gap between the occupied and unoccupied orbitals is large compared with the kinetic energy of the valence electrons. The simplest wavefunction ansatz, i.e., a single Slater determinant with a set of $N$ (the number of electrons) occupied orthonormal spin orbitals optimized to produce the lowest energy as in Hartree-Fock theory, already produces a good first approximation. Further expansion in terms of Slater determinants is obtained by including singly-, doubly-, ... excited Slater determinants with respect to the Hartree-Fock Slater determinant, leading to a better approximation of the electronic wavefunction that is most often dominated by the Hartree-Fock determinant \cite{helgaker_book}. Systems of this type are \emph{weakly-correlated} and are generally well described by density functional theory (DFT) and coupled-cluster theory\cite{bartlett:2007} with singles, doubles, and (perturbative) triples.
However, when it is difficult to label orbitals as occupied or unoccupied, this picture breaks down. The number of important Slater determinants grows exponentially with the system size, so a single Slater determinant is not a qualitative representation of the electronic wavefunction. Such systems are \emph{strongly-correlated}. State-of-the-art methods include the density matrix renormalization group (DMRG)\cite{RN1776,RN1779,RN1780,RN1781,RN1782,RN1783,RN1785,RN1786,RN1787,RN1788,RN1789,RN1790,RN1791,RN1792} and Slater determinant Monte Carlo \cite{RN1793,RN1794,RN1795,RN1796,RN1797,RN1798,RN1799,RN1800,RN1801,RN1802,RN1804,RN1805,RN1806,RN1807}. The high computational cost of these methods has motivated the pursuit of approximate methods for treating strongly-correlated systems, generally with mean-field cost. This contribution is a step in that direction: we employ the eigenvectors of a schematic system as a variational ansatz. The key idea is to work in a framework in which strong correlation is described by the mean field. In this new basis, the electronic wavefunction will have a short expansion dominated by a single contribution, though it will not necessarily be a Slater determinant.
Thus, we have been studying wavefunctions built as products of geminals, i.e. pairs of electrons.
Geminal wavefunctions have been proposed since the founding days of quantum chemistry, as they tie in with the intuitive chemical picture of Lewis bonds as pairs of electrons \cite{hurley:1953}. Unfortunately, the most general geminal wavefunctions come with a computational cost that scales exponentially with the system size, hence they were soon abandoned for other methods. The intrinsic reason for the pernicious cost can be inferred from the antisymmetrized product of interacting geminals (APIG), which is a general product of closed-shell singlet geminals \cite{silver:1969}.
The expansion coefficients in the basis of Slater determinants are \emph{permanents} of the geminal coefficients, and are combinatorially difficult to evaluate in general \cite{minc:1978}. There are several approaches to simplifying the problem to one that is tractable, each of which amounts to making particular approximations. The first is to make all the geminals identical, resulting in the antisymmetrized geminal power (AGP)\cite{coleman:1997}, or equivalently a number-projection of the Bardeen-Cooper-Schrieffer (BCS) ansatz\cite{bardeen:1957,ring:1980}. AGP is well studied and is easy to employ, but suffers the major drawback of not being size-consistent. The second is to partition the orbitals in such a way that in each geminal, each orbital has one partner as in generalized valence bond-perfect pairing (GVB-PP)\cite{goddard:1967}, a set of unique partners as in the antisymmetrized product of strongly-orthogonal geminals (APSG)\cite{surjan:1999}, or one major occupied contribution as in the antisymmetrized product of 1-reference orbital geminals (AP1roG\cite{peter}). In a series of papers, we have proposed and investigated AP1roG as a computationally facile wavefunction to describe strong correlation due to bond-breaking\cite{johnson:2013,peter,boguslawski:2014a,boguslawski:2014b, boguslawski:2014c, tecmer:2014}. It was found that AP1roG systematically reproduces ground-state energies of doubly-occupied configuration interaction (DOCI) calculations for molecular systems\cite{peter}, even for large system sizes\cite{shepherd:2014}. The key ingredient in the AP1roG formalism is that the Schr\"{o}dinger equation is solved projectively with respect to a set of selected reference states, very much in the spirit of coupled-cluster theory (CC). In particular, it is equivalent to pair-coupled-cluster doubles (pCCD)\cite{pCCD,henderson:2014a,henderson:2014b,bulik:2015}. As a result, the permanents one needs to compute all become very easy to evaluate. Indeed, the computational bottleneck in the AP1roG calculations is the orbital optimization (OO), rather than the computation of the geminal coefficients in the AP1roG wavefunction. The energies of geminal theories are strongly dependent on the orbital pairing scheme used\cite{boguslawski:2014c,limacher:2014a}. It was found that Hartree-Fock orbitals are typically well-suited for pairing correlations around equilibrium geometries, whereas pairing occurs preferentially in localized orbitals in the bond-dissociation regime\cite{limacher:2014a}. Geminal theories perform well in the latter regime as strong correlations tend to dominate weak ones. Efforts to incorporate weak correlation in these geminal wavefunctions rely on multi-reference perturbation theory (MRPT)\cite{kobayashi:2010,limacher:2014b}, the random-phase approximation (RPA)\cite{pastorczak:2015}, or on a generalization of the algebraic structure of spin-singlet geminal theory to include spin-triplet excitations as well\cite{johnson:2017}. The main hurdle on the way to an all-inclusive geminal theory is the systematic inclusion of missing correlations, as in CC theory or truncated configuration-interaction (CI) approaches\cite{helgaker_book}. This is one of the main motivations for the approach herein presented. It has recently been observed that the \emph{seniority} scheme provides a new means to organize the Hilbert space for configuration interaction approaches in a hierarchical way.
The seniority quantum number counts the number of electrons that are \emph{not} paired\cite{bytautas:2011}. In this framework, the DOCI method corresponds to the seniority-zero rung on the ladder. DOCI is size-extensive in the correct basis and captures the majority of strong correlation, at the cost of the combinatorial scaling typical of full CI methods, albeit now in pair space.
With this in mind, we follow a third approach in this contribution, in which we propose a structured geminal wavefunction such that the required permanents may be easily evaluated. Specifically, we employ the ground-state eigenvectors of the reduced BCS Hamiltonian, or Richardson Hamiltonian\cite{RN1587}, as a variational wavefunction ansatz. It is well-established that the Richardson, or Richardson-Gaudin (RG), model is a quantum integrable system for which the eigenvectors can be obtained using Bethe-Ansatz techniques\cite{dukelsky:2004,ortiz:2005}. From a quantum chemistry point of view, it is highly remarkable that these eigenvectors have the structure of a geminal wavefunction, completely characterized by means of the single-electron model parameters, the pairing strength, and a set of so-called rapidities. There are as many rapidities as there are electron pairs, which means that the eigenvectors can be determined by solving a set of equations for the rapidities with a computational scaling that is linear with the system size, rather than the typical combinatorial scaling. Moreover, the norms, scalar products, and 1- \& 2-body reduced density matrices (1-RDM \& 2-RDM) can all be computed with a polynomial cost. This opens an avenue for a variational geminal theory in quantum chemistry. We have already reported first results for LiH, Li$_2$ and HF dissociation curves in previous work\cite{tecmer:2014}, so we will focus on the mathematical details in the present paper.
It should be emphasized that in our approach the object being optimized is the model Hamiltonian (in this case RG), not simply an ansatz for the wavefunction. As a result, we obtain a complete set of eigenvectors with which to construct perturbative corrections, Green's functions etc., all of which are physically well-founded and interpretable.
Similar ideas using the framework of exactly-solvable models as a many-body expansion technique are being explored outside the field of quantum chemistry. In nuclear structure physics, eigenvectors of RG models are being used as a starting point for a CI approach\cite{debaerdemacker:2017}. In condensed matter physics, a variational RG approach is used to treat integrability-breaking interactions in central-spin problems\cite{claeys:2017a}, and a CI framework has been developed for non-integrable spin chains in the truncated spectrum approximation\cite{james:2018}. Recently, we have developed the analogue of Hartree-Fock as a Bethe ansatz to serve as a bridge to the present contribution\cite{laurie}.
In section \ref{sec:ansatz} we outline the basics of RG models, introduce the eigenvectors and develop the variational principle to be employed. In section \ref{sec:numbers} we present numerical results for 4-, 6-, 8- and 10-electron atomic systems as well as dissociation curves for H$_2$, H$_4$, H$_6$, H$_8$ and N$_2$. We formulate our conclusions in section \ref{sec:conclusions}.
\section{Variational Ansatz} \label{sec:ansatz}
\subsection{Eigenvectors of the Reduced BCS Hamiltonian}
We employ a pseudospin representation of su(2) for a set of spatial orbitals $\{i\}$, each of which can contain a single pair of opposite spin electrons. For each spatial orbital there are three operators:
\begin{align}
S^+_i = a^{\dagger}_{i\uparrow} a^{\dagger}_{i\downarrow}, \quad S^-_i = a_{i\downarrow}a_{i\uparrow}, \quad S^z_i = \frac{1}{2}\left( a^{\dagger}_{i\uparrow}a_{i\uparrow} + a^{\dagger}_{i\downarrow}a_{i\downarrow} -1 \right),
\label{eq:pseudospin}
\end{align}
where $a^{\dagger}_{i\uparrow} \; (a_{i\downarrow})$ creates (removes) an up- (down-)spin electron in spatial orbital $i$, etc. $S^+_i$ adds a pair of electrons to spatial orbital $i$ and $S^-_i$ removes a pair. Each spatial orbital can only hold one pair. Acting on a doubly-occupied spatial orbital, $S^z_i$ gives $\frac{1}{2}$, while acting on an empty spatial orbital $S^z_i$ gives $-\frac{1}{2}$. Thus $S^z_i$ ``measures'' whether spatial orbital $i$ is full or empty. For singly occupied orbitals, its action is zero. For doubly-degenerate spin orbitals, the seniority quantum number can be obtained as the expectation value of the operator \cite{bytautas:2011,alcoba:2014}
\begin{align}
\Omega_i = a^{\dagger}_{i\uparrow}a_{i\uparrow} + a^{\dagger}_{i\downarrow}a_{i\downarrow} - 2a^{\dagger}_{i\uparrow}a_{i\uparrow}a^{\dagger}_{i\downarrow}a_{i\downarrow}.
\end{align}
We work only in the seniority zero sector $\braket{\Omega_i}=0,\;\forall i$ in this paper, although it is perfectly possible to extend the formalism to other sectors. This extension is one of the strengths of the present approach.
Using the fermionic anticommutation relations, it is easily verified that the operators \eqref{eq:pseudospin} commute for distinct spatial orbitals, so that the structure constants of their Lie algebra may be summarized as
\begin{align}
\left[S^z_i, S^{\pm}_j\right] = \pm \delta_{ij}S^{\pm}_i, \quad
\left[S^+_i, S^-_j\right] = 2 \delta_{ij}S^z_i.
\end{align}
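These relations are easy to verify numerically. A minimal Python sketch (our illustration, assuming NumPy) represents each spatial orbital in the seniority-zero sector as a two-level ``pair mode'' with basis states $\ket{\text{empty}}$ and $\ket{\text{pair}}$, which suffices to realize the su(2) relations:
\begin{verbatim}
import numpy as np

sp = np.array([[0., 0.], [1., 0.]])   # S+ : |empty> -> |pair>
sm = sp.T                             # S- : |pair>  -> |empty>
sz = np.diag([-0.5, 0.5])             # Sz : -1/2 empty, +1/2 doubly occupied

def site_op(op, i, K):
    """Embed a single-site operator at site i among K sites via Kronecker products."""
    mats = [np.eye(2)] * K
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

K = 3
for i in range(K):
    for j in range(K):
        Szi, Spj, Smj = site_op(sz, i, K), site_op(sp, j, K), site_op(sm, j, K)
        d = 1.0 if i == j else 0.0
        assert np.allclose(Szi @ Spj - Spj @ Szi,  d * Spj)  # [Sz_i, S+_j] = +delta S+_i
        assert np.allclose(Szi @ Smj - Smj @ Szi, -d * Smj)  # [Sz_i, S-_j] = -delta S-_i
        assert np.allclose(site_op(sp, i, K) @ Smj - Smj @ site_op(sp, i, K),
                           2.0 * d * Szi)                    # [S+_i, S-_j] = 2 delta Sz_i
print("pseudospin commutation relations verified for K =", K)
\end{verbatim}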
With $\hat{n}_i = 2S^z_i +1$, which counts the electrons in spatial orbital $i$, the reduced BCS Hamiltonian\cite{RN1587,RN1171,RN810} for a system with $K$ spatial orbitals is
\begin{align}
\hat{H}_{BCS} = \frac{1}{2}\sum^{K}_{i} \varepsilon_i \hat{n}_i - g \sum^K_{ij} S^+_i S^-_j,
\end{align}
where the parameters defining the system are the single particle spectrum $\{\varepsilon_i\}$, and the pairing strength $g$. In this convention, a positive $g$ represents an attractive pairing interaction. The eigenvectors are products of electron pairs distributed over the entire space of orbitals, each pair being characterized by a complex number $u$ (which we call a rapidity). Such an electron pair is denoted
\begin{align}
\mathbb{S}^+(u) = \sum^K_{i} \frac{S^+_i}{u - \varepsilon_i},
\end{align}
and for a system with $M$ pairs of electrons, the states
\begin{align}
\ket{\{ u \}} &= \prod^{M}_{a} \mathbb{S}^+(u_a) \ket{\theta}
\label{eq:ABA}
\end{align}
are eigenvectors of the reduced BCS Hamiltonian provided the rapidities satisfy a set of coupled non-linear equations, called Richardson's equations
\begin{align} \label{eq:RichEquations}
\frac{2}{g} +\sum^K_{i}\frac{1}{u_a -\varepsilon_i} + \sum^M_{b\neq a} \frac{2}{u_b -u_a} =0, \quad \forall\; a=1,\ldots,M.
\end{align}
The state $\ket{\theta}$ is the vacuum, defined such that
\begin{align}
S^-_i \ket{\theta} = 0, \quad \forall \;i=1,\ldots,K
\end{align}
meaning that it is destroyed by all pair removal operators. In this contribution we take $\ket{\theta}$ to be the empty state, but it could easily be taken as a set of non-interacting unpaired electrons (Slater determinant). The reduced BCS Hamiltonian was first solved by Richardson\cite{RN1587,RN1171} and elaborated by Gaudin \cite{RN810}. Thus as a shorthand, we will refer to the state \eqref{eq:ABA} as a Richardson-Gaudin (RG) state.
For a system with $M$ pairs distributed among $K$ spatial orbitals, there are $\binom{K}{M}$ eigenvectors corresponding to the $\binom{K}{M}$ solutions of Richardson's equations. These equations are highly non-linear, with singularities hampering a straightforward numerical characterization of the eigenvectors. In the last decade, many new numerical methods have been developed to properly control and possibly avoid the singularities in the equations. These methods include clusterization methods \cite{rombouts:2004}, Heine-Stieltjes connections\cite{guan:2012}, probabilistic approaches\cite{pogosov:2012}, pseudo-deformations of the su(2) pairing algebra \eqref{eq:pseudospin}\cite{RN1448}, and, most recently, eigenvalue-based methods\cite{faribault:2011,claeys:2015}. In this work, we employed eigenvalue-based methods.
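For weak coupling, where the ground-state rapidities remain real and close to the $M$ lowest single-particle energies, even a generic root finder suffices. A minimal Python sketch (our addition, assuming SciPy's \texttt{fsolve}; the more robust methods cited above are needed near singular points or at stronger coupling):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def richardson_residual(u, eps, g):
    """Left-hand side of Richardson's equations for each rapidity u_a."""
    res = np.empty_like(u)
    for a, ua in enumerate(u):
        res[a] = (2.0 / g
                  + np.sum(1.0 / (ua - eps))
                  + 2.0 * sum(1.0 / (u[b] - ua)
                              for b in range(len(u)) if b != a))
    return res

eps = np.array([0.0, 1.0, 2.0, 3.0])   # single-particle spectrum, K = 4
M, g = 2, 0.05                         # two pairs, weak attractive coupling

# g -> 0 limit: u_a -> eps_a for the M lowest levels, so seed just below them
u0 = eps[:M] - 0.1
u = fsolve(richardson_residual, u0, args=(eps, g))
print("rapidities:", u)   # in this convention the eigenvalue is sum_a u_a
\end{verbatim}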
\subsection{Energy Functional}
We will now outline a variational method employing the eigenvectors of the reduced BCS Hamiltonian (as a model system) to approximate solutions of a Coulomb Hamiltonian describing physical electrons. The reduced BCS Hamiltonian is defined by $K+1$ parameters: the single particle energies $\{ \varepsilon\}$ and the pairing strength $g$, which are the variational parameters. Our purpose is to employ the RG state $\ket{\{u\}}$ as a variational ansatz for a Coulomb Hamiltonian,
\begin{align}
\hat{H}_C &= \sum^K_{ij} h_{ij} \sum_{\sigma} a^{\dagger}_{i\sigma}a_{j\sigma} + \frac{1}{2} \sum^K_{ijkl} V_{ijkl} \sum_{\sigma\tau} a^{\dagger}_{i\sigma}a^{\dagger}_{j\tau}a_{l\tau}a_{k\sigma},
\end{align}
with $\sigma$ and $\tau$ spin variables. The one-electron $h_{ij}$ and two-electron integrals $V_{ijkl}$ are calculated in a basis of orthonormal spatial orbitals $\{\phi\}$:
\begin{align}
h_{ij} & = \int d\mathbf{r} \; \phi^*_i(\mathbf{r}) \left( -\frac{1}{2} \nabla^2 -\sum_I \frac{Z_I}{\vert \mathbf{r} -\mathbf{R}_I \vert} \right) \phi_j (\mathbf{r}) \\
V_{ijkl} &= \int d\mathbf{r}_1 d\mathbf{r}_2 \frac{\phi^*_i(\mathbf{r}_1)\phi^*_j(\mathbf{r}_2)\phi_k(\mathbf{r}_1)\phi_l(\mathbf{r}_2)}{\vert \mathbf{r}_1 - \mathbf{r}_2 \vert}
\end{align}
with \textbf{R}$_I$ and $Z_I$ being the positions and charges of the nuclei.
Thus, with the RG state $\ket{\{u\}}$ as an ansatz, our approximation to the ground state energy is
\begin{align}
E[\{\varepsilon\},g] &= \min_{\{\varepsilon\},g} \frac{\braket{\{u\}|\hat{H}_C|\{u\}}}{\braket{\{u\}|\{u\}}}
\label{eq:energy_functional}
\end{align}
The RG state is the ground state of the reduced BCS Hamiltonian, which is in turn defined by the parameters $\{\varepsilon\}$ and $g$. Thus the energy is to be minimized over these parameters. We do not optimize over rapidities, as they are dictated as the solutions of Richardson's equations for a set of $\{\varepsilon\},g$. It is of paramount importance that we may evaluate \eqref{eq:energy_functional} with a reasonable cost. This is possible thanks to the structure of \eqref{eq:ABA}. The 1-body reduced density matrix (1-RDM) is diagonal and doubly-degenerate, as the $\alpha$ and $\beta$ electrons are treated identically. We adopt the convention
\begin{align}
\gamma_i = \frac{1}{2} \braket{ \{u\} | \hat{n}_i | \{u\} } = \braket{ \{u\} | S^z_i | \{u\} } + \frac{1}{2} \braket{\{u\} | \{u\}}. \label{eq:d1d}
\end{align}
Here, $\hat{n}_i$ counts the number of electrons in the spatial orbital $i$, so the elements of the 1-RDM count the number of pairs in each site. They are real numbers between zero and one. The 2-body reduced density matrix (2-RDM) has two non-zero pieces: the \emph{pair correlation function},
\begin{align}
P_{ij} = \braket{\{u\} | a^{\dagger}_{i\uparrow} a^{\dagger}_{i\downarrow} a_{j\downarrow} a_{j\uparrow} | \{u\} } = \braket{ \{u\} | S^+_i S^-_j | \{u\} } \label{eq:d2p}
\end{align}
and the \emph{diagonal correlation function}
\begin{align}
D_{ij} = \frac{1}{4}\braket{ \{u\} | \hat{n}_i\hat{n}_j | \{u\} } = \braket{ \{u\} | S^z_i S^z_j | \{u\} } + \frac{1}{2} \gamma_i + \frac{1}{2} \gamma_j - \frac{1}{4} \braket{\{u\} | \{u\}} . \label{eq:d2d}
\end{align}
The diagonal elements $P_{ii}$ and $D_{ii}$ correspond to the same elements of the 2-RDM, so to avoid double counting we set the elements $D_{ii} = 0$. The state \eqref{eq:ABA} is not normalized, and hence neither are the expressions for the correlation functions \eqref{eq:d1d}, \eqref{eq:d2p}, and \eqref{eq:d2d}.
The energy expression becomes:
\begin{align} \label{eq:su2EnergyExpression}
E \braket{ \{u\} | \{u \} }&= 2\sum^K_{i}h_{ii} \gamma_i +\sum^K_{ij} [(2V_{ijij}-V_{ijji})D_{ij} + V_{iijj}P_{ij}]
\end{align}
where the summations are performed over only the spatial orbital index.
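In practice this contraction amounts to a handful of tensor operations once the correlation functions are in hand. A minimal Python sketch (our addition, assuming NumPy; \texttt{gamma}, \texttt{D}, \texttt{P} are the unnormalized correlation functions defined above):
\begin{verbatim}
import numpy as np

def rg_energy(h, V, gamma, D, P, norm):
    """Evaluate the energy expression; gamma, D, P are unnormalized."""
    E  = 2.0 * np.einsum('ii,i->', h, gamma)
    E += 2.0 * np.einsum('ijij,ij->', V, D) - np.einsum('ijji,ij->', V, D)
    E += np.einsum('iijj,ij->', V, P)
    return E / norm

# toy shapes only, to show the call signature
K = 3
rng = np.random.default_rng(0)
h, V = rng.normal(size=(K, K)), rng.normal(size=(K, K, K, K))
gamma, D, P = rng.random(K), rng.random((K, K)), rng.random((K, K))
print(rg_energy(h, V, gamma, D, P, norm=1.0))
\end{verbatim}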
The norm and correlation functions of \eqref{eq:ABA} are discussed in Refs.~\cite{RN1171,RN1643,RN1355,RN1586,RN1362,claeys:2017b}. The norm of \eqref{eq:ABA} is obtained from the determinant
\begin{align}
\braket{\{u\} | \{u\}} &= \det G
\end{align}
with the elements of the so-called Gaudin matrix
\begin{align} \label{eq:GaudinMatrix}
G_{ab} &=
\begin{cases}
\sum^{K}_{i} \frac{1}{(u_{a}-\varepsilon_{i})^{2}} -2\sum^{M}_{c\neq a}\frac{1}{(u_{a}-u_{c})^{2}} & a=b\\ \frac{2}{(u_{a}-u_{b})^{2}} & a\neq b
\end{cases}.
\end{align}
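Assembling the Gaudin matrix at on-shell rapidities and taking its determinant is direct. A sketch continuing the small example above (our addition, assuming NumPy and SciPy):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def richardson_residual(u, eps, g):
    return np.array([2.0 / g + np.sum(1.0 / (ua - eps))
                     + 2.0 * sum(1.0 / (u[b] - ua)
                                 for b in range(len(u)) if b != a)
                     for a, ua in enumerate(u)])

def gaudin_matrix(u, eps):
    """Gaudin matrix at on-shell rapidities; det G = <{u}|{u}>."""
    M = len(u)
    G = np.empty((M, M))
    for a in range(M):
        for b in range(M):
            if a == b:
                G[a, a] = (np.sum(1.0 / (u[a] - eps) ** 2)
                           - 2.0 * sum(1.0 / (u[a] - u[c]) ** 2
                                       for c in range(M) if c != a))
            else:
                G[a, b] = 2.0 / (u[a] - u[b]) ** 2
    return G

eps = np.array([0.0, 1.0, 2.0, 3.0])
u = fsolve(richardson_residual, eps[:2] - 0.1, args=(eps, 0.05))
print("norm <{u}|{u}> = det G =", np.linalg.det(gaudin_matrix(u, eps)))
\end{verbatim}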
The normalized 1-RDM may be written
\begin{align} \label{eq:1RDM}
\gamma_{i} = \frac{1}{2} \left(1 - \frac{2^{M}}{\braket{\{u\} | \{u\}}} \frac{\det(Q_i)}{\prod^{M}_{a=1}\prod^M_{b\neq a}(u_a - u_b )}\right)
\end{align}
where the matrix $Q_i$ is defined:
\begin{align}
(Q_i)_{ba} =
\begin{cases}
\prod^M_{c\neq a} (u_c - u_a) \left(\frac{1}{2} \sum^{K}_{k} \frac{1}{(\varepsilon_{k}-u_{a})^2} -\sum^M_{d\neq a} \frac{1}{(u_d - u_a)^2} - \frac{1}{(\varepsilon_i -u_a)^2} \right) & a=b \\
\prod^M_{c\neq a} (u_c - u_a) \left(\frac{1}{(u_b - u_a)^2} -\frac{1}{(\varepsilon_i - u_b)^2} \right) & a\neq b
\end{cases}
\end{align}
To arrive at these expressions, the interested reader is referred to Ref.~\cite{RN1586}.
Unnormalized expressions for both $P_{ij}$ and $D_{ij}$ can be written as sums of determinants related to the Gaudin matrix:
\begin{align}
P_{ij} = \sum^{M}_{a} \frac{u_a - \varepsilon_i}{u_a - \varepsilon_j} \det A^{(i,j)}_{a}
\end{align}
\begin{align}
D_{ij} = -\frac{1}{2}\sum^{M}_{a} \left( \det A^{(i,j)}_{a} + \det A^{(j,i)}_{a} \right) + \frac{1}{2} \left( \gamma_{i} + \gamma_{j} \right)
\end{align}
The matrices $A^{(i,j)}_{a}$ may appear unusual at first sight, as they are the result of column operations which have condensed a double sum of determinants into a single sum; their $c$th column is given by:
\begin{align}
\left( A^{(i,j)}_{a} \right)_{c} &=
\begin{cases}
\vec{G}_{c}-\frac{(\varepsilon_i -u_c)(u_a - u_{c+1})}{(\varepsilon_i - u_{c+1})(u_a - u_c)}\vec{G}_{c+1} & c < a-1 \\
\vec{G}_{c} +\frac{2(\varepsilon_{j}-u_{a})(\varepsilon_{i}-u_{a-1})}{u_{a-1} - u_{a}} \vec{B} & c = a-1 \\
\vec{C} & c = a \\
\vec{G}_{c} & c > a
\end{cases}
\end{align}
where $\vec{G}_{c}$ denotes the \textit{c}th column of the Gaudin matrix Eq. \eqref{eq:GaudinMatrix}, $\vec{B}$ is the column vector:
\begin{align}
\vec{B}_{k} = \frac{(2 u_{k}-\varepsilon_{i}-\varepsilon_{j})}{(u_{k}-\varepsilon_{i})^2(u_{k}-\varepsilon_{j})^2},
\end{align}
and $\vec{C}$ is the column vector:
\begin{align}
\vec{C}_{k} = \frac{1}{(u_{k}-\varepsilon_{i})^2}.
\end{align}
With explicit expressions for the correlation functions, we can evaluate the energy functional \eqref{eq:su2EnergyExpression} with a cost of $\mathcal{O}(N^6)$: each element of the 2-RDM requires a single summation over determinants, and there are $N^2$ elements to compute. Through optimal book-keeping and storage of computed determinants, it would be possible to improve the scaling, though for our purposes we consider this a dead end. More optimal expressions for the correlation functions exist that we will report in a following publication. Our initial guess for the variational parameters $\{ \varepsilon\}$ was based on the diagonal elements of the 1-electron integrals, perturbed with some random noise. For $g$, we started with a small negative value. While the reduced BCS Hamiltonian has $K+1$ parameters, two degrees of freedom are fixed by the choice of scale and reference point for the energy. Thus we could optimize over $K-1$ parameters, but we found that allowing all $K+1$ parameters to vary led to more robust convergence.
The non-linear relationship between the reduced BCS Hamiltonian parameters $\{ \varepsilon\},g$ and the rapidities $\{u\}$ suggests that numerical gradients of the energy functional are not an effective tool for minimization. Indeed, we have confirmed this with our preliminary numerical tests. We instead chose to use the Nelder-Mead simplex algorithm\cite{neldermead}, which worked effectively. There is always the danger that Nelder-Mead will find the wrong optimum, though we have mitigated this issue by preconditioning with the covariance matrix adaptation evolution strategy \cite{cma}.
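The outer loop is then ordinary derivative-free minimization over the $K+1$ parameters. A wiring sketch (our addition, assuming SciPy, and omitting the CMA-ES preconditioning step; the true objective -- the Richardson solve plus the RDM contractions -- is abstracted behind a placeholder \texttt{rg\_energy}, given a quadratic body here only so that the snippet runs stand-alone):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

K = 4

def rg_energy(params):
    eps, g = params[:K], params[K]
    # placeholder objective (assumption): replace with solving Richardson's
    # equations at (eps, g), building the RDMs, and evaluating the energy
    # functional of eq. (su2EnergyExpression)
    return np.sum((eps - np.arange(K)) ** 2) + (g - 0.1) ** 2

h_diag = np.arange(K, dtype=float)       # seed from 1-electron integral diagonal
x0 = np.concatenate([h_diag + 0.01 * np.random.randn(K), [-0.05]])
result = minimize(rg_energy, x0, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 20000})
print("optimal eps:", result.x[:K], " g:", result.x[K])
\end{verbatim}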
\section{Numerical Results} \label{sec:numbers}
Calculations were performed for a series of four-, six-, eight-, and ten-electron atomic systems as well as for dissociation of hydrogen chains and molecular nitrogen. Results are compared with doubly-occupied configuration interaction (DOCI)\cite{RN1702} and full configuration interaction (CI). Full CI calculations were performed with psi4\cite{psi4,detci} and verified with an in-house code, DOCI calculations were performed with an in-house code, and RHF calculations were performed with Gaussian 16\cite{g16}. We have noted previously that seniority-zero wavefunction ans\"{a}tze favour localized, valence-bond-like orbitals, rather than the delocalized orbitals obtained from RHF. Thus, dissociation curves were computed both in the basis of RHF orbitals and the basis of orbital-optimized DOCI (OO-DOCI) orbitals. Orbital optimization in the OO-DOCI calculations was performed as MC-SCF calculations in the complete doubly-occupied Slater determinant basis with a Newton-Raphson scheme for the optimization of the determinant and orbital coefficients as implemented in GAMESS(US)\cite{GAMESS}. To explore the orbital optimization space more thoroughly, several starting bases were constructed, including the RHF orbitals, FCI natural orbitals and random orbitals obtained from rotating the other bases.
The variational RG results should always be compared with DOCI: when computed in the same set of orbitals, RG is a strictly variational approximation to DOCI. For atoms, calculations were performed with STO-6G and with aug-cc-pVDZ, while dissociation curves were computed with STO-6G. As our results are proof of principle, our algorithm is not optimal, which unfortunately limits the size of system we can treat. However, results with STO-6G will isolate effects of strong-correlation. Effects of weak correlation are minimal in STO-6G as there are limited weak excitations possible. Thus, dissociation curves computed with STO-6G are meaningful and relevant.
\subsection{Atoms}
Raw energetic results for atomic systems are reported in Table \ref{sto_absdata} (STO-6G) and Table \ref{aug_absdata} (aug-cc-pVDZ). Each atomic system considered was necessarily treated as a closed-shell singlet. All calculations were performed with the RHF orbitals. Again, for a given basis, in this case RHF orbitals, the best possible result in the space of seniority-zero wavefunctions is DOCI. Thus, we summarize the deviations from DOCI in Table \ref{doci_dev}.
\begin{table}[h]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{a)} & \thead{Be} & \thead{B$^+$} & \thead{C$^{2+}$} & \thead{N$^{3+}$} & \thead{O$^{4+}$} & \thead{F$^{5+}$} & \thead{Ne$^{6+}$} \\
\hline
RHF & -14.50336 & -24.19056 & -36.34155 & -50.87786 & -67.89148 & -87.35546 & -109.32595 \\
RG & -14.55578 & -24.25254 & -36.40430 & -50.94130 & -67.95846 & -87.42542 & -109.39974 \\
DOCI & -14.55578 & -24.25254 & -36.40430 & -50.94130 & -67.95847 & -87.42542 & -109.39974 \\
FCI & -14.55609 & -24.25289 & -36.40457 & -50.94153 & -67.95870 & -87.42566 & -109.40001 \\
\hline
\end{tabular}
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{b)} & \thead{Be$^{2-}$} & \thead{B$^{-}$} & \thead{C} & \thead{N$^{+}$} & \thead{O$^{2+}$} & \thead{F$^{3+}$} & \thead{Ne$^{4+}$} \\
\hline
RHF & -13.61385 & -24.01092 & -37.46352 & -53.64180 & -72.65698 & -94.54252 & -119.37771 \\
RG & -13.65525 & -24.06267 & -37.52018 & -53.70354 & -72.72618 & -94.61900 & -119.46229 \\
DOCI & -13.65525 & -24.06267 & -37.52018 & -53.70356 & -72.72618 & -94.61900 & -119.46238 \\
FCI & -13.70391 & -24.12611 & -37.59286 & -53.78592 & -72.82119 & -94.72667 & -119.58397 \\
\hline
\end{tabular}
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{c)} & \thead{Be$^{4-}$} & \thead{B$^{3-}$} & \thead{C$^{2-}$} & \thead{N$^{-}$} & \thead{O} & \thead{F$^{+}$} & \thead{Ne$^{2+}$} \\
\hline
RHF & -11.16645 & -21.79925 & -36.25543 & -53.76411 & -74.37443 & -98.27513 & -125.52797 \\
RG & -11.19071 & -21.83084 & -36.29171 & -53.80525 & -74.42158 & -98.32892 & -125.58872 \\
DOCI & -11.19071 & -21.83089 & -36.29171 & -53.80525 & -74.42189 & -98.32892 & -125.58872 \\
FCI & -11.23923 & -21.89417 & -36.36427 & -53.88751 & -74.51682 & -98.43650 & -125.71022 \\
\hline
\end{tabular}
\caption{\label{sto_absdata} Absolute energies (a.u.) computed with the STO-6G basis set for a) four-electron systems, b) six-electron systems and c) eight-electron systems.}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{a)}& \thead{Be} & \thead{B$^+$} & \thead{C$^{2+}$} & \thead{N$^{3+}$} & \thead{O$^{4+}$} & \thead{F$^{5+}$} & \thead{Ne$^{6+}$} \\
\hline
RHF & -14.57238 & -24.23501 & -36.40165 & -51.06854 & -68.23528 & -87.89994 & -110.06183 \\
RG & -14.59411 & -24.27572 & -36.46351 & -51.14659 & -68.32741 & -88.00447 & -110.17753 \\
DOCI & -14.59430 & -24.27614 & -36.46414 & -51.14733 & -68.32777 & -88.00475 & -110.17811 \\
FCI & -14.61747 & -24.29450 & -36.47469 & -51.15446 & -68.33310 & -88.00914 & -110.18197 \\
\hline
\end{tabular}
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{b)} & \thead{Be$^{2-}$} & \thead{B$^{-}$} & \thead{C} & \thead{N$^{+}$} & \thead{O$^{2+}$} & \thead{F$^{3+}$} & \thead{Ne$^{4+}$} \\
\hline
RHF & -14.42518 & -24.47453 & -37.59848 & -53.75628 & -72.92427 & -95.09707 & -120.26904 \\
RG & -14.44946 & -24.50437 & -37.62045 & -53.80071 & -72.98480 & -95.17516 & -120.36321 \\
DOCI & -14.45682 & -24.51240 & -37.62790 & -53.80695 & -72.99078 & -95.17980 & -120.36782 \\
FCI & -14.50771 & -24.57900 & -37.71348 & -53.88304 & -73.06314 & -95.24915 & -120.43495 \\
\hline
\end{tabular}
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{c)} & \thead{Be$^{4-}$} & \thead{B$^{3-}$} & \thead{C$^{2-}$} & \thead{N$^{-}$} & \thead{O} & \thead{F$^{+}$} & \thead{Ne$^{2+}$} \\
\hline
RHF & -13.94252 & -24.00987 & -37.35787 & -54.22984 & -74.67005 & -98.64114 & -126.12742 \\
RG & -13.96797 & -24.04881 & -37.40351 & -54.25914 & -74.70246 & -98.68616 & -126.18290 \\
DOCI & -13.97955 & -24.06759 & -37.42464 & -54.27194 & -74.71325 & -98.69661 & -126.19256 \\
FCI & -14.10357 & -24.15193 & -37.54951 & -54.41593 & -74.84971 & -98.82188 & -126.31146 \\
\hline
\end{tabular}
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{d)}& \thead{Be$^{6-}$} & \thead{B$^{5-}$} & \thead{C$^{4-}$} & \thead{N$^{3-}$} & \thead{O$^{2-}$} & \thead{F$^{-}$} & \thead{Ne} \\
\hline
RHF & -13.14242 & -23.11023 & -36.43155 & -53.48638 & -74.43570 & -99.42828 & -128.49635 \\
RG & -13.16941 & -23.14395 & -36.48787 & -53.54620 & -74.47711 & -99.46036 & -128.52650 \\
DOCI & -13.17847 & -23.16554 & -36.51415 & -53.59230 & -74.50614 & -99.48046 & -128.54457 \\
FCI & -13.41990 & -23.25556 & -36.67971 & -53.78809 & -74.72569 & -99.67132 & -128.71147 \\
\hline
\end{tabular}
\caption{\label{aug_absdata} Absolute energies (a.u.) computed with the aug-cc-pVDZ basis set for a) four-electron systems, b) six-electron systems, c) eight-electron systems and d) ten-electron systems.}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{a)}& \thead{Be} & \thead{B} & \thead{C} & \thead{N} & \thead{O} & \thead{F} & \thead{Ne} \\
\hline
4e & 1.94E-6 & 1.43E-6 & 5.47E-7 & 2.53E-7 & 3.30E-7 & 8.86E-8 & 3.15E-7 \\
6e & 2.20E-7 & 5.93E-7 & 2.98E-8 & 2.34E-5 & 1.07E-7 & 8.87E-7 & 8.58E-5 \\
8e & 8.33E-8 & 4.89E-5 & 1.10E-8 & 2.58E-8 & 3.17E-4 & 4.68E-8 & 6.78E-7 \\
\hline
\end{tabular}
\begin{tabular}{|l|r|r|r|r|r|r|r|}
\thead{b)}& \thead{Be} & \thead{B} & \thead{C} & \thead{N} & \thead{O} & \thead{F} & \thead{Ne} \\
\hline
4e & 1.88E-4 & 4.23E-4 & 6.30E-4 & 7.46E-4 & 3.59E-4 & 2.74E-4 & 5.74E-4 \\
6e & 7.36E-3 & 8.03E-3 & 7.44E-3 & 6.24E-3 & 5.98E-3 & 4.64E-3 & 4.61E-3 \\
8e & 1.16E-2 & 1.88E-2 & 2.11E-2 & 1.28E-2 & 1.08E-2 & 1.04E-2 & 9.67E-3 \\
10e & 9.06E-3 & 2.16E-2 & 2.08E-2 & 2.88E-2 & 2.90E-2 & 2.01E-2 & 1.81E-2 \\
\hline
\end{tabular}
\caption{\label{doci_dev} Deviations (a.u.) of the variational RG energies from DOCI, computed in the a) STO-6G and b) aug-cc-pVDZ basis sets.}
\end{table}
For the four-electron series there is one pair of electrons deeply entrenched in the 1s core, while the second pair resides principally in the 2s spatial orbital. The 2s-2p gap shrinks as the central charge becomes more positive, and thus the electronic configurations with the second pair occupying the 2p spatial orbitals become important, which makes these systems strongly correlated. All the important Slater determinants in the physical wavefunction are seniority zero, so DOCI is near-exact, as are our variational RG results.
In the six-electron series there are two pairs of electrons in the valence orbitals, and the systems are once again strongly-correlated. However, DOCI is not as good a treatment, as there is weak-correlation from open-shell singlet states missing from DOCI. The same is generally true for the results of the eight-electron series.
For the ten-electron series the dominant effect is weak electron correlation. The 2s-2p gap again gets smaller as the central charge increases, but the 2p-3s gap remains large, and thus there is a single Slater determinant which dominates the physical wavefunction. As a result, DOCI is not quantitatively accurate and neither is our variational calculation. We do not report results for STO-6G as HF is full CI for this case.
In each case we were able to reproduce from half to two-thirds of the correlation energy obtainable by DOCI, which is the best-case scenario for this wavefunction ansatz. To recover the complete DOCI correlation energy, one way to proceed is to write an expansion in terms of eigenvectors of the reduced BCS Hamiltonian. As we already recover the majority of the correlation energy, we are optimistic that such an expansion is short, and dominated by \emph{one} RG state. We are now pursuing this line of reasoning and will report in the future.
\subsection{Dissociation curves: RHF orbitals}
The prototypical strongly-correlated systems are bond-dissociation curves. As it is a two-electron problem, we expect to be able to dissociate the hydrogen molecule perfectly. Indeed this is the case, as can be seen in Figure \ref{H2_curves}, where the RG, DOCI, and FCI curves overlap. The error with respect to DOCI is very small.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H2_RG.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H2_RG_delta_DOCI.png}
\end{subfigure}
\caption{a) Bond dissociation curves for H$_2$. RG and DOCI were computed in the basis of RHF orbitals. RG, DOCI and FCI coincide and hence are not distinguishable. b) Energy difference between RG and DOCI for H$_2$. All results were computed with the STO-6G basis set.}
\label{H2_curves}
\end{figure}
Moving to the simultaneous dissociation of linear H$_4$ into four hydrogen atoms, as shown in Figure \ref{H_chain_curves}, the results are no longer exact. As all calculations are in the RHF basis, DOCI and FCI differ appreciably. The error for RG with respect to DOCI is no longer zero, but grows continuously to a maximum before dropping off substantially. At the critical point, where the deviation is maximal, more than one RG state is required to match with DOCI. The same trends are observed for H$_6$ and H$_8$.
\begin{figure}[h]
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H4_RG.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H4_RG_delta_DOCI.png}
\end{subfigure}
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H6_RG.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H6_RG_delta_DOCI.png}
\end{subfigure}
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H8_RG.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H8_RG_delta_DOCI.png}
\end{subfigure}
\caption{a), c), e) Bond dissociation curves for H$_4$, H$_6$ and H$_8$. b), d), f) Energy difference between RG and DOCI for H$_4$, H$_6$ and H$_8$. All results were computed with the STO-6G basis set. RG and DOCI were computed in the basis of RHF orbitals.}
\label{H_chain_curves}
\end{figure}
\subsection{Dissociation curves: OO-DOCI orbitals}
Dissociation curves for the hydrogen chains were also computed in the basis of OO-DOCI orbitals, in which the DOCI curve is much closer to the full CI result. Curves are plotted for H$_4$, H$_6$ and H$_8$ in Figure \ref{H_chain_DOCI_curves}. The results for each of the hydrogen chains are the same. The error with respect to DOCI in the RG curve grows continuously before decreasing to less than 1 mH at dissociation. That the error tends to zero is a strong indication that the method is size-consistent.
\begin{figure}[h]
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H4_RG_OODOCI.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H4_RG_delta_OODOCI.png}
\end{subfigure}
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H6_RG_OODOCI.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H6_RG_delta_OODOCI.png}
\end{subfigure}
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H8_RG_OODOCI.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{H8_RG_delta_OODOCI.png}
\end{subfigure}
\caption{a), c), e) Bond dissociation curves for H$_4$, H$_6$ and H$_8$. b), d), f) Energy difference between RG and DOCI for H$_4$, H$_6$ and H$_8$. All results were computed with the STO-6G basis set. RG and DOCI were computed in the basis of OO-DOCI orbitals.}
\label{H_chain_DOCI_curves}
\end{figure}
Dissociation curves were also calculated for the nitrogen molecule, and are plotted in Figure \ref{N2_DOCI_curves}. Similar to the case of the hydrogen chains, RG differs from DOCI near the minimum, but approaches the DOCI curve much more quickly. There is a curve crossing near 5.2 Bohr, which indicates that more than one RG state is required near that point. At dissociation, the RG and DOCI curves agree to within a tenth of a millihartree.
\begin{figure}[h]
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{N2_RG_OODOCI.png}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{N2_RG_delta_OODOCI.png}
\end{subfigure}
\caption{a) Bond dissociation curves for N$_2$. b) Energy difference between RG and DOCI for N$_2$. All results were computed with the STO-6G basis set. RG and DOCI were computed in the basis of OO-DOCI orbitals.}
\label{N2_DOCI_curves}
\end{figure}
\section{Conclusions} \label{sec:conclusions}
We have performed variational calculations for chemical systems employing the ground state eigenvector of the exactly solvable reduced BCS Hamiltonian. The key idea is that this treatment is a mean-field of pairs of electrons, rather than a mean-field of individual electrons, as in conventional orbital-based approaches. Analogous to the way Hartree-Fock is the dominant contribution to the wavefunction of a system with weakly-correlated electrons, the present method is the dominant contribution to a wavefunction of a system with weakly-correlated pairs of electrons.
Our results serve as a starting point to develop a many-body theory for pairs of electrons. We are satisfied that they qualitatively reproduce DOCI. They also highlight issues to be addressed in upcoming contributions. It is obvious that RHF orbitals are not optimal for seniority-zero wavefunctions, as we have studied previously. Weak correlation of pairs, or inter-pair correlation, is missing, and perturbation theories will need to be developed. Finally, while our method scales polynomially, the degree of the polynomial must be reduced for the method to be competitive. All of these problems are solvable, and we are currently addressing them.
\section{Data Availability}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Acknowledgements}
P.A.J. and P.W.A. were supported by NSERC and Compute Canada. P.W.A. also thanks Canarie. P.W.A. and S.D.B. thank the Canada Research Chairs. We thank Toon Verstraelen for his implementation of the covariance matrix adaptation evolution strategy, and Pieter Claeys for his Richardson's equations solver.
\input{sections/01_introduction}
\input{sections/02_relwork}
\input{sections/03_task}
\input{sections/04_model}
\input{sections/05_system}
\input{sections/06_case}
\input{sections/07_study}
\input{sections/08_lessons}
\input{sections/09_conclude}
\bibliographystyle{ACM-Reference-Format}
\section{Limitation}
\textbf{Diversity of Tested Domains.} In this work, we only worked with the FICO dataset. Although the visualization and interaction designs of our system are informed by multiple interviews and collaborations with experts and data scientists, it is essential to provide model interpretation in many other domains such as medicine and criminal justice. At present, our tool can be generalized to explain any user-defined tabular data from different domains.
Feedback from more application domains would help the further development of our explanation tool. Meanwhile, the present tool only supports numerical data, which limits the usage of our approach in tasks such as image classification or speech recognition.
\textbf{Lack of Quantitative Studies.} Another limitation comes from the lack of quantitative studies. Although the interviews with experts are insightful, a well-designed quantitative study could help us understand the merits and demerits more precisely. For instance, we could evaluate the performance of the tasks proposed in Section 3.1 from a more objective perspective.
\textbf{Explanation of Projection.} An intrinsic limitation of the dimensionality reduction stems from data scientists' unfamiliarity with it. On the one hand, multidimensional projection (MDP) is a simple and straightforward way of presenting an overview of multidimensional data. On the other hand, some data scientists are not familiar with MDP techniques and are confused by the scaling and distances in the projection at first glance. This also limits their interaction with the projection view.
\section{Conclusion and Future Work}
In this work, through an iterative design process with expert machine learning researchers and practitioners, we identified a list of goals and tasks of explaining a machine learning model, designed and
developed a novel visual analytics tool in the Jupyter notebook environment to assist the exploration of machine learning model explanations at a subpopulation level. We conducted semi-structured interviews with five data scientists. Our results show that data scientists have many reasons for seeking interpretability and appreciate interactive explanations. Although some of them were initially unfamiliar with interactive visual approaches, they gave positive feedback when performing the analytic tasks after training. From our study, it is clear that there is intense interest in explanatory interfaces for machine learning, while such tools remain scarce. As discussed in the previous section, we identified a few limitations in this work. We are particularly interested in further adapting our approaches to data and tasks in more domains and investigating more options for visual explanations for model users.
\section{Introduction}
With the advance of computing power,
machine learning (ML) produces accurate prediction models that can be applied to address important societal problems
like financial fraud detection \cite{junque2014corporate}, drug discovery \cite{lavecchia2015machine}, and natural disaster prediction \cite{rouet2017machine}.
On the one hand, people aim at training models with high accuracy, which is often achieved by complex decision boundaries that capture subtle variations in the data.
On the other hand, stakeholders and end-users expect scientifically rigorous explanations from the models
that provide understanding, protect the safety, and ensure ethics \cite{doshi2017towards}.
To balance the development between model sophistication and human understanding,
a burgeoning research field of explainable artificial intelligence (XAI) has arisen.
The general goal of XAI is to develop human-understandable explanations of what a model has learned.
In general, the two main scopes of understanding how model works are general overviews of model behavior (\textit{global explanations})
and precise decision details of each instance (\textit{local explanations}).
\revise{
These explanation models target only the features in the dataset; thus, they apply across different types of tasks such as classification, regression, language translation, and object recognition.
}
Global explanations are mechanisms that describe how a model works overall using simpler logic or approximations such as rules \cite{lakkaraju2016interpretable,letham2015interpretable} or multiple linear models \cite{caruana2015intelligible,ustun2017optimized}.
Local explanations focus on generating sparse interpretable vectors like prototypes \cite{chen2018looks,li2018deep}, concepts \cite{kim2017interpretability},
or feature weights \cite{ribeiro2016should,shrikumar2017learning} for each input data.
Both play an essential role in model interpretability and complement each other.
For example, users leverage global explanation to evaluate whether the model achieves some general goals like learning the hierarchy of different classes \cite{bilal2017convolutional} from the dataset.
Afterward, they may require some sanity check on individual data to verify that their understanding is consistent with the internal structures of the model \cite{kim2017interpretability}.
\revise{
As a result, there is a need to take the granularity of explanations to an appropriate \textit{subpopulation} level. A subpopulation, in other words a subset of the instances' explanations in the dataset, provides an overview of decision characteristics from different major parts of the data. It acts as a bridge between an overly coarse global view and the extremely detailed information of a single instance. Thus, exploring subpopulations allows users to find a proper balance in the data exploration process. At the same time, the challenge of understanding model explanations through subpopulations is straightforward to state -- how to find the best partition of a dataset of instances' explanations. As with clustering, there are many ways to partition a dataset. Finding the best subpopulation, in other words a subpopulation analysis, is a computationally challenging and human-centered question.
}
Furthermore, to realize the potential of model interpretability for end-users,
we need the explanations to be provided in an integrated platform and in a human-centric way.
Recently, information visualization has been receiving much attention as a medium for model explanations \cite{hohman2018visual},
and different visual analytics systems have been developed to address this challenge \cite{hohman2019gamut,kahng2017cti,liu2017towards,ming2019protosteer}.
Intuitively, visualization enhances model interpretability since graphical representations have been shown useful to communicate complex statistics \cite{tufte2001visual}.
Using projections, clustering, and interactions \cite{keim2002information}, visual analytics allow users to interpret large amounts of information, revealing intrinsic global patterns while maintaining the ability to explore details.
Thus, combining visual analytics and model explanation techniques provides a promising area of improving machine learning model interpretability.
In this work, we take the problem of understanding model interpretability as a subpopulation analysis of local explanations.
If we treat the local explanation of each input instance as the target, we aim at visualizing and
displaying the similarity and dissimilarity of all local explanations together, which allows us to discover the main decision rationales (i.e., clusters)
as well as more detailed considerations (e.g., outliers).
\revise{ Also, as these explanation methods work for a wide range of machine learning applications, we are interested in the potential of analyzing explanations as a standalone goal. In this way, the output can inherit the flexibility of model explanations and be embedded in a wider machine learning process.}
Having this objective in mind, we designed \textsc{SUBPLEX},
a visual analytics system that visualizes machine learning model explanations at a subpopulation level. \revise{ We also develop it as a widget in the computational notebook to study the opportunity of analyzing model explanations as a standalone task.}
Working as a team of 5 visualization researchers and 3 data scientists,
we combine the concepts of subpopulation analysis in visualization and real industrial tasks on model interpretability \revise{for model developers} to derive the workflow of model interpretation from local explanations at scale.
In short, our contributions include:
\begin{enumerate}
\item An overview of combining subpopulation analysis and machine learning explanation into a visualization system, including a discussion of tasks, techniques, and visual design.
\item A discussion of user evaluation on the workflow of model interpretation from understanding local explanations from the dataset.
\end{enumerate}
\section{System Design}
With the subpopulation generated from the local attributions, as discussed in Section~\ref{sec:clustering},
we present an interactive visualization system, \texttt{SUBPLEX}, with coordinated views to support the exploration of attribution groups.
It consists of (a) a projection view that maps the attributions onto a 2D plane and
(b) a subpopulation view that summarizes the attribution values from each cluster. \revise{These views act as the primary visual understanding channels of the explanations from the model and the dataset (\textbf{R.2}).}
A categorical color scheme is used to encode each subpopulation throughout the whole system.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig/projection}
\caption{Projection technique of local attribution: For each subpopulation,
a fixed number of control points are extracted and they are mapped to the visual space using Multidimensional Scaling (MDS).
Control points guide the projection of the remaining points using the Local Affine Multidimensional Projection (LAMP) technique.
}
\label{fig:projection}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{fig/projection_comparison.pdf}
\caption{Comparison of the three dimensionality reduction techniques (LAMP, T-SNE, MDS) from the perspectives of visual representations and time performance: (A) shows the scatter plots based on the three projection techniques, where three clusters/groups can be clearly observed in all projection results; (B) illustrates the efficiency of three techniques in the line chart, where LAMP shows the best scalability in terms of time.}
\label{fig:projection_exp}
\end{figure}
\subsection{Projection View}
The projection view maps all attribution vectors in a two dimensional layout (Figure~\ref{fig:system}(A)).
While projection techniques like Multidimensional Scaling (MDS) and t-SNE are popular choices,
we have opted for a projection technique that best fits subpopulation analysis, the Local Affine Multidimensional Projection (LAMP) \cite{joia2011local}.
Since cluster labels are provided for each attribution vector, a supervised dimensionality reduction method can be employed to perform the mapping while preserving/emphasizing cluster structures~\cite{nonato:tvcg:2019}.
The LAMP technique relies on a set of control points to map high-dimensional data to the visual space. More specifically, each control point has a weight associated with each point mapped by LAMP. The larger the weight, the closer to the corresponding control point the point is mapped. In order to further emphasize the clusters, weights between points and control points from the same class are increased (in our implementation, weights are increased by 30\%) while weights of outer-class control points are not changed.
Control points are randomly chosen from each class and mapped by classical MDS~\cite{borg2003modern}, also shrinking inner-class distances by 30\%.
The procedure is illustrated in Figure~\ref{fig:projection}.
With fewer control points, the projection creates a clearer separation, making the cluster structure easier to see in the final projection layout.
Furthermore, the medoid of each subpopulation (i.e., point with lowest pairwise distance within the group) is encoded as a clickable square
so that when it is clicked, the points in the subpopulation will be highlighted.
Our motivation for using LAMP as the projection technique is also illustrated in Figure~\ref{fig:projection_exp}.
We generate a synthetic dataset with 3 clusters and 30 attributes and compare the speed and performance of LAMP, MDS, and t-SNE.
While all of the projection outputs are similar, LAMP has a much faster running time (\textbf{R.1}).
Since the system provides an interactive workflow, fast computations are desirable.
\revise{ Besides, identifying outliers is a vital operation when browsing a projection (\textbf{R.3}). To make outliers more salient, we provide a function that highlights outliers detected by outlier-detection algorithms in the projection, making the projections more informative.}
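The system does not prescribe a particular detector; a minimal Python sketch of such a flagging step (our illustration, using scikit-learn's local outlier factor on the 2-D projection coordinates as one possible choice):
\begin{verbatim}
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
coords = np.vstack([rng.normal(0, 0.3, (200, 2)),   # a dense projected cluster
                    rng.uniform(-3, 3, (5, 2))])     # a few stray points
labels = LocalOutlierFactor(n_neighbors=20).fit_predict(coords)
outlier_idx = np.flatnonzero(labels == -1)            # -1 marks outliers
print("highlighted outliers:", outlier_idx)
\end{verbatim}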
\subsection{Subpopulation View}
The subpopulation view provides detailed information for the properties of each group of the subpopulations (Figure~\ref{fig:system}(B)).
The details are shown as a list of feature importances depicted with bar charts and histograms.
The bar chart (Figure~\ref{fig:system}(B)(i)) shows the average attribution value of a feature among all points in a subpopulation group juxtaposed horizontally.
The histogram (Figure~\ref{fig:system}(B)(ii)) shows the distributions of the points in each subpopulation group in a superposed layout.
Each distribution's values (i.e., height) are normalized by the size of its subpopulation.
To facilitate the exploration of data in different priorities (\textbf{R.4}), sorting is provided for each of the columns.
For the columns regarding each subpopulation, \texttt{SUBPLEX} sorts by the values (i.e. the length of the bar).
However, to sort the distributions,
we aim at prioritizing the distributions that deviate the most across different subpopulations.
To calculate the distances between two distributions,
we use the earth mover's distance (EMD) \cite{rubner1998metric}.
Informally, it measures the minimum amount of work required to transform one distribution into another by moving ``distribution mass''.
Given this distance metric, the distributions with a larger sum of pairwise distances are given a higher priority.
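A sketch of this ranking criterion (our illustration, assuming SciPy's one-dimensional \texttt{wasserstein\_distance} as the EMD):
\begin{verbatim}
from itertools import combinations
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# attributions[k]: (n_k, n_features) attribution matrix of subpopulation k
attributions = [rng.normal(loc=k, size=(100, 5)) for k in range(3)]

def priority(f):
    """Sum of pairwise EMDs of feature f's distributions across groups."""
    return sum(wasserstein_distance(a[:, f], b[:, f])
               for a, b in combinations(attributions, 2))

order = sorted(range(attributions[0].shape[1]), key=priority, reverse=True)
print("feature display order (most divergent first):", order)
\end{verbatim}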
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{fig/interaction}
\caption{Interacting with multiple coordinated views to refine the subpopulation result.
(A) Selecting the subset of attribution view in the projection view.
(B) Inspecting the details of the selected subset.
(C) Adding or removing the subset as a new subpopulation.
}
\label{fig:interaction}
\end{figure}
\subsection{Interaction}
Interaction plays an important role in facilitating data exploration between two views and human-in-the-loop analysis to provide synergy to the visual outcomes and results (\textbf{R.5}).
\revise{ The whole subpopulation analysis is an interactive computational workflow in which a user first defines a number of partitions and then refines the final partition by brushing and filtering the instances in the system.}
\texttt{SUBPLEX} supports the following user interactions (Figure~\ref{fig:interaction}):
\begin{enumerate}
\item[-]\textit{Brushing:} Brushing is enabled for users to select a subset of attribution vectors in the projection view.
The system provides a lasso selection so that users can draw an irregular shape to include a group of potentially similar points (Figure~\ref{fig:interaction}(A)).
To examine the behavior of selected attributions,
the bar charts in the subpopulation view are split into two in which the selected subsets in each subpopulation are highlighted with the bar charts with strokes (Figure~\ref{fig:interaction}(B)).
\item[-]\textit{Adding and removing subpopulations:}
After inspecting the details such as the average attribution values and distributions for each feature for the selected subset (Figure~\ref{fig:interaction}(B)),
users can extract the subset as a new subpopulation (Figure~\ref{fig:interaction}(C)) so that the subset now exists as an individual group in the system
(i.e., have a new color, bars, and distributions).
\end{enumerate}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{fig/dataflow}
\caption{
System architecture as a widget integrated into the Jupyter notebook.
Information such as ML model results is fed into the system, and the subpopulation output can be fed back into the notebook as variables.
}
\label{fig:dataflow}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{fig/jupyter}
\caption{
Apart from the visual analytics interface, \texttt{SUBPLEX} also provides APIs to transfer information between the interface and the Jupyter notebook.
(A) Users can set the selection in the interface by passing a list of indices.
(B) Users can output the information of the selected attributions in the interface as (i) a Pandas data frame containing all selected attributions as instances or
(ii) the overall statistics of the selected attributions within each subpopulation.
}
\label{fig:jupyter}
\end{figure}
\subsection{Integration into Jupyter notebook}
The visual analytics system is designed as an extension for data platforms like Jupyter notebook,
since we aim at creating a seamless workflow between model development and model understanding.
The system provides the following API calls to extract information from the visual analytics platform
or to interact with the platform programmatically for customized data inspection and analysis (Figure~\ref{fig:jupyter}).
\begin{enumerate}
\item[-] \verb|set_selection(|\textit{data}\verb|)|: As it might be infeasible or imprecise to select the attributions only through brushing and clicking,
users can also select the attributions by passing an array of indices to this function to highlight the selection programmatically (Figure~\ref{fig:jupyter}(A)).
\item[-] \verb|get_selected_instances()|:
When users select a subpopulation by clicking the medoid (i.e., the square in the projection) or by brushing a subset of attributions in the projection view,
users can call this function in the notebook to return the indices of the highlighted attributions as a \verb|Pandas| dataframe (Figure~\ref{fig:jupyter}(B)(i)).
\item[-] \verb|get_selected_groups()|: Similar to the above function,
users can call this function to return the aggregated subpopulation attribution values from the highlighted subset as a \verb|Pandas| dataframe (Figure~\ref{fig:jupyter}(B)(ii)).
\end{enumerate}
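A typical notebook round trip with these calls might look as follows (a sketch: the widget instance name \verb|subplex| and the output file name are hypothetical, while the three API calls are the ones listed above):
\begin{verbatim}
# Highlight instances programmatically instead of brushing.
subplex.set_selection([3, 17, 42, 108])

# Pull the highlighted attributions back as a Pandas dataframe
# and continue the analysis with native Pandas functions.
selected = subplex.get_selected_instances()
print(selected.describe())

# Aggregated per-subpopulation statistics of the selection.
groups = subplex.get_selected_groups()
groups.to_csv("selected_groups.csv")
\end{verbatim}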
\subsection{Implementation}
In this work, we implement a Jupyter Widget (ipywidget), using D3\cite{bostock2011d3} and the Backbone\footnote{\url{https://backbonejs.org/}} framework for visualization. Apart from supporting, by default, the projection results generated by LAMP\cite{joia2011local} and clusters identified by K-means clustering\cite{park2009simple}, we enable user-defined clustering labels and projection results to be visualized in this widget.
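To give a sense of how such a widget keeps the notebook and the D3 front end in sync, the following is a minimal sketch of the standard ipywidgets pattern (the class, view, and module names are hypothetical; the actual implementation carries more synced state, such as projection coordinates and cluster labels):
\begin{verbatim}
import ipywidgets as widgets
from traitlets import List, Unicode

class SubplexWidget(widgets.DOMWidget):
    # Names binding this Python model to its JavaScript view.
    _view_name = Unicode("SubplexView").tag(sync=True)
    _view_module = Unicode("subplex").tag(sync=True)
    # Traitlets tagged sync=True are mirrored between the kernel
    # and the browser: a lasso selection in D3 updates `selection`,
    # and assigning to it highlights points in the front end.
    selection = List([]).tag(sync=True)

widget = SubplexWidget()
widget.observe(lambda change: print(change["new"]), names="selection")
\end{verbatim}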
\section{User Evaluation}
To better understand how \texttt{SUBPLEX} is applied to ML model interpretation in general,
we conducted semi-structured interviews with additional data scientists.
The interview consisted of a walkthrough of, and open-ended discussion about, every visual component and interaction of the system,
and aimed at addressing the following usability questions:
\begin{enumerate}
\item[\textbf{Q1}] How do general data scientists \revise{perceive} the tasks (\textbf{T.1-3}) supported by subpopulation analysis?
\item[\textbf{Q2}] How do data scientists perceive each visual component in terms of model interpretation?
\item[\textbf{Q3}] What do data scientists prefer for visual analytics on model interpretation?
\end{enumerate}
\subsection{Participants}
We interviewed 5 data scientists (two male, three female).
The participants had experience building models ranging from three months to five years.
In the following sections and paragraphs, we will use the title ``scientist'' to refer to any interviewee,
since their jobs primarily focused on ML model development.
Our recruitment goal, to avoid sample bias, was to seek a diverse pool of candidates to \revise{provide general impressions of subpopulation analysis in model interpretation}, not to quantify any task effectiveness from the general public.
To convey the results in statistics and numbers, other methods, such as quantitative usability tasks and surveys,
could complement our findings. \looseness=-1
\subsection{Interview Design}
Each interview lasted one hour.
Each participant first received an introduction of the system and the dataset (i.e., the credit scoring system used in Section~\ref{sec:case}) used in the demonstration.
Once the users were familiar with the settings,
we let them explore the system and dataset and explain to the interviewer the functionalities of the different components in the interface.
They were asked about their impressions of, and concerns about, the interface, and to comment on its usefulness and relevance to model interpretability.
\section{Results}
\subsection{Usefulness on Solving Three Use Cases}
\noindent\textit{Idea generation from subpopulation comparisons.}
When the participants used \texttt{SUBPLEX} to explore the attribution subpopulations,
they constantly compared different features among different subpopulations to identify
whether some features remained prevalent after bias removal or adversarial attacks.
They observed some surprisingly high attribution values in the features that the developers had permuted in one or two subpopulations.
Thus, they raised concerns that the ML models were overfitted, that data leakage had occurred, or that the explanation method did not generate legitimate explanations.
This gives us some insight into the hypothesis generation enabled by such tools and workflows.
While the process of interpretation is not standardized,
we recognize the process of generating explanations as a creative process that involves many judgments, questions, and suggestions,
in which the interpretation methods themselves are also judged.
Instead of giving one explanation to describe the behavior of each instance,
providing multiple explanations at a time increases users' scrutiny of the performance of the workflow and models,
which corroborates existing work \cite{collaris2018instance,hohman2019gamut,krause2018user}
on the need to increase users' deliberation while developing insights from the models.
We also observed an additional consideration of \textit{granularity} when evaluating the model explanations.
Participants often selected outliers in the projection view to inspect the distributions of points that were not close to the center.
They were used to understanding model performance by observing the behavior of the majority of the data and deriving reasoning from groups of similar points.
With projections provided, they were more eager and curious to select subsets of corner points and question those points' features.
This gives us the insight that, with subpopulation analysis, anomalous data in the visual interface receives more attention.
Participants also mentioned that the tool gave them the idea of population segmentation when browsing different subpopulations.
\subsection{Perception of Visual Components for Model Interpretability}
\noindent\textit{Pursuit of simplicity on system interaction.}
Our participants went through many trial-and-error processes while exploring \texttt{SUBPLEX}'s functionality.
They first tried to understand the projection by selecting different subsets of points through brushing;
then they exported the subsets and inspected the statistics carefully to see whether the different results revealed distinctions within the data.
Some of the participants mentioned that although the interface was simple and intuitive,
they needed extra effort to correlate the visual cues with the details of the model explanations in order to summarize the behavior of the ML models on this dataset.
As a result, we observe that simplicity helps remove the burden of visual understanding so that users have more bandwidth to focus on model interpretation.
\noindent\textit{Trade-off between trust and efficiency on visual encodings.}
During the exploration, the visual component that all participants paid great attention to was the projection view.
Most of them expressed skepticism towards the spatial layout because they had knowledge of dimensionality reduction techniques.
However, they all agreed that it was troublesome to inspect all features in the subpopulations because it was difficult to remember and analyze many features at once.
For example, one participant mentioned,
\textit{``...The most confusing thing is again what are the points... the location of these points... like what does this space actually mean... it seems quite abstract to me right now.''}
Projections have been related to concerns about trust \cite{sedlmair2012dimensionality}, and this has to be handled carefully in the case of interpretation,
since such techniques are prevalent in many clustering and dimensionality reduction tasks in visual analytics. This response motivates further studies to evaluate human trust when combining explanation and clustering processes.
\noindent\textit{Flexibility between programmable interface and visual analytics interface.}
Our participants questioned the methodology behind the subpopulation generation when they were exploring the attribution data.
Also, they paid a considerable amount of attention to the distribution plots to explore further details of the features in the subpopulations.
As a result, they asked for the statistics to be exported so that they could compute further details used in their daily operations.
The participants' feedback suggests the importance of integrating a visual analytics system inside the loop of the programming platform.
Trust in interpretation models could be improved if users are granted more engagement with the data exploration pipeline.
One participant mentioned,
\textit{``...the ranking is interesting.
I do not trust it because I do not know how the numbers are generated.
Maybe I can export the distributions to see how values are generated... say, Shapley value, or min/max value, Partial Dependency Plots...''}
\subsection{Visual Analytics for Model Interpretation}
\noindent\textit{ Relationship between visualization literacy and ML model interpretability.}
Some of our participants had raised concerns about the encodings of the projection view.
The first question they asked was, \textit{``where are the axes in the scatterplot?''}
And after we explained that the points were projections of the data,
they continued by questioning, \textit{``so what do the locations of the points mean?''}
After we explained that projections were 2D planes that approximated the similarities among the points,
the participants showed great interest in such a visual data mining technique.
One participant mentioned,
\textit{``Maybe send me like a little bit more information about how dimensionality reduction is calculated. It is absolutely interesting.''}
Therefore, we observe that to address interpretability through visual analytics,
users need to know how to interpret the visual encodings first.
Although mapping numbers to visual encodings enables a more intuitive reasoning process,
it is important to make sure users are well taught to read the visualization first.
\noindent\textit{Visual information-seeking mantra in model interpretation.}
We observed how our participants used the projection and the detail table to examine the subpopulation information.
Our participants often analyzed the data with the following steps:
they first observed an overview of the whole dataset in the projection. Then they analyzed each subpopulation by switching the rankings according to the subpopulation being inspected.
The model interpretation from such a workflow helped establish model understanding in line with the visual information-seeking mantra \cite{shneiderman1996eyes}:
``overview first, zoom and filter, then details on demand.''
Our initial observation suggests that future explanation models could present data representations in such a way to achieve a well-rounded understanding across ML models and input data.
\section{Design Process and Rationale}
\subsection{Addressing Real World Goals to Understand Model Interpretability}
Interpretability is a vague concept that can be as general as understanding a logical reasoning process, or as niche as developing designs and tools that solve a real-world problem requiring experts to understand black-box models for decision making.
Our motivation for contributing to the current literature comes from a year-long collaboration with a retail finance institution
in which we have implemented a model explanation interface for the credit scoring system by exchanging ideas between the finance experts and visualization researchers.
\revise{ The experts are mainly model developers who have sufficient knowledge of the data and the models. Thus, their motivation for using model explanation methods is to leverage the exploration of important features to address their interpretability goals.}
By addressing the everyday model explanation tasks in the financial operations,
we developed a new perspective on model interpretation through careful consideration of subpopulation analysis and visual design.
While there are no guarantees of completeness,
our system design and design rationale are based on the goals of understanding black-box model behavior in the credit scoring system.
Each goal is accompanied by an example of a model interpretability question related to decision making in financial operations.
\begin{enumerate}
\item[\textbf{G.1}] \textbf{How does the model explain different groups of customers?}
In a retail financial institution, practitioners aim at developing models that can be applied to a considerably large number of customers to improve efficiency, while ensuring a degree of discriminatory power across different populations so that the model is not over-generalized with simple rules.
For example, an ideal model should learn to use different features for customers with different demographics while maintaining the use of default rates on the general public.
\item[\textbf{G.2}] \textbf{What does the model learn after removing bias features?}
The term \textit{bias} here does not only mean features related to machine learning fairness but also dominating features that
may decrease the diversity of granting credit to different users.
For example, experts would like to see what the next most influential features affecting credit scoring are
when the number of mortgages the customer owns is set aside, so that more interesting features can be discovered for future financial products.
\item[\textbf{G.3}] \textbf{Are the model's predictions affected by spurious information?}
This is a model debugging problem that developers need to consider very carefully when they put the model into production.
A typical way to examine this in practice is to include some false or random variables in the model and see how the populations are affected by the addition.
For example, the developers would like to know whether the population with a low default rate can receive a good credit score by increasing their length of credit history.
If so, there may be a chance to ``cheat'' the model with adversarial attacks.
\end{enumerate}
\revise{
\subsection{Breaking Down the Goals into Tasks}
The above three goals, while motivating a visual analytics solution, do not by themselves yield a design rationale for our system. It is therefore important to extract the low-level details and actions from these three high-level goals to identify the key needs for developing a visual analytics system. These details can be analyzed and mapped to system-level task requirements. To acquire the low-level tasks, we examined our experts' workflows through their analyses in Jupyter notebooks. Jupyter notebook is a mainstream data analytics platform that allows data scientists to execute Python scripts to model data and return results in a list of sequential cells. We studied the notebooks of five data scientists working on these goals and extracted the workflow of the data analysis by browsing the data operations in each cell of the notebooks sequentially.
Once we obtain the workflow of data operations to address those goals, we formulate the whole analytics workflow as an exclusive and exhaustive Hierarchical Task Abstraction (HTA) \cite{annett2003hierarchical}. HTA is a popular approach in the HCI community to summarize the tasks conducted by the end-users. It incorporates a set of goals and low-level tasks as a hierarchy to help researchers understand both the necessary tasks and the goals and process. Recently, it has been used by design studies in visual analytics application development \cite{chan2019motion,zhang2018idmvis} as well.
The breakdown of the goals can be seen in Figure~\ref{fig:hta}. In general, each goal can be achieved by around three to five main themes of data analysis, which consists of summarizing a model's decision rationale, selecting an interesting portion of instances and features, and applying further data operations. By grouping the lowest level tasks among the three goals, we summarize the overall \textbf{\textcolor{junglegreen}{task requirement}} in Figure~\ref{fig:hta}:
\begin{itemize}
\item[\textbf{T.1}] \textbf{Interactive clustering to generate subpopulation of local explanations.}
All three use cases require an overview of instances' explanations to understand the model's decision rationale.
Therefore, a clustering result of instances based on the similarity of their explanations helps users identify decision paths for the major population as well as the outliers in the dataset. While an initial partition can be generated by automated algorithms to kick-start the subpopulation analysis, users also need to refine the results, such as merging or splitting clusters, so that the groups of explanations suit their analytics purposes. For example, for model debugging (\textbf{G.3}), the purpose of clustering is to isolate the instances for which the model relies heavily on spurious information to make decisions. Tailoring the clustering results is thus needed to provide the desired data for further analysis. In other words, users combine \textit{data mining algorithms} and \textit{interactions} to address the tasks.
\item[\textbf{T.2}] \textbf{Visual analysis of explanation partitions.}
Once the subpopulation of local attributions is finalized, users need to inspect the characteristics of each subpopulation to decide which features or instances should be the focus of further data analysis or model refinement. We observe that even with basic plotting libraries in Jupyter notebook, our experts still apply a workflow of visual analysis: they first inspect an overview of feature importance over the dataset, then search for an interesting subset of data and focus on its details, such as the size of the subset and its most-used features. Thus, users require the system to display an \textit{overview} as well as \textit{details-on-demand} to identify a more focused group of data and features for further analysis.
\item[\textbf{T.3}] \textbf{Seamless integration of data analysis pipeline and infrastructure.}
As the subpopulation analysis is a part of the whole model interpretation workflow (i.e., the middle stage between data preprocessing and data communication or model refinement), it is essential to integrate this whole stage of analysis into the current programming infrastructure so that we can reduce the overhead of switching between different platforms or storing many intermediate files.
The whole subpopulation analysis should take its input inside the Jupyter notebook and output results back to the notebook. In this way, users can assess the results and save the output as variables to recycle written code, conducting iterative analysis and different trial-and-error experiments to facilitate creativity.
\end{itemize}
}
\revise{
\subsection{Design Rationale for Visual Analytics}
Given a set of tasks we summarized in \textbf{T.1-3} and the exchange of ideas with our domain experts,
we formulate the design rationale of our visual analytics system:
\begin{enumerate}
\item[\textbf{R.1}] \textbf{Visual and interactive clustering of local explanations.} The system should provide ways to cluster the instance explanations from the trained model. Also, it should provide flexibility for the user to adjust and refine the results of the clustering to create partitions that suit various objectives.
\item[\textbf{R.2}] \textbf{Focus on explanations in the whole interface.} Since the local explanation models work for a variety of tasks, including but not limited to classification, translation, and object detection, our whole framework and interface should focus on the data generated by the explanation method to achieve generic usage.
\item[\textbf{R.3}] \textbf{Display of similarity and difference among instances and general as well as outlying behavior}.
For data within the same group, the model explains them similarly; otherwise, there are differences in the attribution values. At the same time, the size of a group also indicates whether its instances represent general or outlying behavior. The system should display these properties.
\item[\textbf{R.4}] \textbf{Focus on data variety but not design variety.} Data scientists often use a well-known set of visual encodings to display the outcomes of machine learning models. Our solution should respect their mental model and provide the desired workflow and interactions to address the problems.
\item[\textbf{R.5}] \textbf{Widget based system implementation leveraging the infrastructure and utility in Jupyter notebook.}
Since the workflow of visual exploration sits in between data operations, which heavily use multiple Python libraries such as scikit-learn and TensorFlow, our system should be embedded in the same environment. The interface should not only take inputs from user interactions but also provide APIs for querying and manipulating data in the interface.
\end{enumerate}
}
To address these requirements,
we employ \textit{subpopulation} visual analysis, which is common in analyzing the similarity of observations and finding groups in datasets \cite{wenskovitch2017towards}.
The visual analytics approach consists of two main components to facilitate the sensemaking process:
\begin{itemize}
\item \textbf{Partitional Clustering}:
Partitioning the whole population into different clusters allows users to observe a clear split of the data groups by their feature values.
Subpopulations can be clearly defined by automated algorithms so that data characterized by different features, and the intrinsic decision-making processes in the models, can be revealed by different clusters (\textbf{T.2} and \textbf{T.3}).
\item \textbf{Projection}:
This allows the data to be spatially organized on the display according to similarity, so that community structure and outliers can be observed.
Users can observe whether there are significant groups and whether some data points deviate substantially from the majority of the population,
which is useful for general model understanding (\textbf{T.1}).
\end{itemize}
Nonetheless, such a form of visual analytics is not trivial, especially given the task requirements of model explanation and the data format of the explanation models; we therefore propose the methodology and visual design in the following sections.
\section{Subpopulation Model for Black-Box Explanation}
In this section, we describe the framework that we apply to produce the explanation subpopulations for visual analytics.
We first explain the representation of local explanations for the input data.
Then we describe the data model that takes these explanations and produces the subpopulation analysis.
\revise{
\subsection{Background of Local Explanation Models}
We first give a background of the mainstream models that generate local explanations of a machine learning model's decisions on a dataset.
Local explanations are popular (setting aside logical models such as decision trees or rules) because these methods provide an independent and highly customized explanation for each instance. When explanations are not aggregated into general decisions or rules, they remain more faithful to the original model.
In general, to generate a local explanation for an instance, explanation algorithms usually take one of the following approaches:
\begin{enumerate}
\item \textit{Locality}: The algorithm searches for the neighbors of an instance, then fits the subset with a linear model such that the higher the gradient of a feature in the linear model, the more important the feature is to the prediction of the selected instances.
\item \textit{Perturbation}: Instead of using other instances to generate explanations, one can perturb the values of an instance's attributes and observe whether the output changes significantly. Sensitivity of a feature implies that its value lies near the decision boundary of the machine learning model. Thus, a feature that is sensitive to perturbation has high influence on the instance.
\item \textit{Backpropagation}: Since complex models like neural networks contain a series of propagations of weights from the input to the output neurons to produce predictions, one can invert the process and backpropagate the active neurons from the output to the input data, locating the portion of the original data that causes the neuron activations in the output. Such a portion indicates the important features that explain the model's decision.
\end{enumerate}
}
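As a concrete illustration of the perturbation approach above, the following sketch (a deliberately simplified variant of what methods in this family do, not a production algorithm; the function name is our own) scores each feature of one instance by how much the model's predicted probability moves when that feature alone is perturbed:
\begin{verbatim}
import numpy as np

def perturbation_attribution(model, x, n_samples=100,
                             scale=0.1, seed=0):
    """Sensitivity-style attribution for one instance x:
    perturb each feature in isolation and measure the mean
    absolute shift of the positive-class probability."""
    rng = np.random.default_rng(seed)
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    attributions = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        shifted = np.tile(x, (n_samples, 1))
        shifted[:, j] += rng.normal(0.0, scale, size=n_samples)
        probs = model.predict_proba(shifted)[:, 1]
        attributions[j] = np.abs(probs - base).mean()
    return attributions
\end{verbatim}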
\subsection{Data Interpretation Representation}
The first question in generating explanations is: what constitutes an explanation, in other words an \textit{attribution}, for a data point that a human can understand?
Although there is no formal technical definition of interpretability,
the popular explanation models generate attributions in similar ways.
Current models usually express the \textit{attribution} for a data point as a sparse or skewed vector where each value inside the vector is a human-understandable object.
For example, additive feature attribution methods like LIME \cite{ribeiro2016should}, DeepLift \cite{shrikumar2017learning}, and GAM \cite{hastie2017generalized} output the explanation as a list of feature importances for each data point (i.e., this data point has these features), and prototype learning methods \cite{li2018deep,ming2019interpretable} explain each data point with a list of similarities to other data points (i.e., this data point ``looks'' like that data point).
Therefore, we can define the attribution $a_i$ for each input point $i$ as a set of real-valued weights mapped to an explanation space with $m$ components:
\begin{equation} \label{eqn:attribution}
a_i = \left\{ w_1, w_2, \ldots, w_m \right\}, \quad w_j \in \mathbb{R},
\end{equation}
where each weight $w_j$ represents the attribution value for one feature.
Although there are no hard constraints when generating the attribution vector, the methods, in general, try to achieve the following objectives:
\begin{enumerate}
\item \textit{Sparsity}:
The attribution vector should not contain many weights with high values (i.e., most of the $w_j$ in Equation~\ref{eqn:attribution} are close to zero).
This ensures that the data can be explained using a small set of features, taking into account the human short-term memory limit of a few items (e.g., no more than seven \cite{miller1956magical}).
\item \textit{Diversity}: As only a few items should be shown to explain a data point,
it is also crucial to ensure that the features shown are not similar to each other.
This objective often co-exists with sparsity, as choosing the most distinctive and discriminative features results in a sparse,
and thus less redundant, set of explanations.
\end{enumerate}
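To make Equation~\ref{eqn:attribution} concrete, the sketch below (our own illustration; \verb|attribution_matrix| is a hypothetical helper, shown here for binary classification) stacks one sparse LIME attribution vector per instance, so each row is an $a_i$ whose non-zero weights are capped by \verb|num_features|:
\begin{verbatim}
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def attribution_matrix(X, model, feature_names, num_features=5):
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names, mode="classification")
    A = np.zeros_like(X, dtype=float)
    for i, x in enumerate(X):
        exp = explainer.explain_instance(
            x, model.predict_proba, num_features=num_features)
        # as_map() yields (feature index, weight) pairs for the
        # explained label; all other weights stay at zero (sparsity).
        for feat_idx, weight in exp.as_map()[1]:
            A[i, feat_idx] = weight
    return A
\end{verbatim}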
\subsection{Generating Attribution Subpopulation} \label{sec:clustering}
Once the attribution for each data point is generated,
subpopulations can be discovered by clustering the attributions by similarity.
The main challenge we need to address is how to compute the distances between the attribution vectors
so that the clustering is accurate and efficient.
Since an attribution is a sparse vector with many values (e.g., one per training sample in prototype learning),
clustering the attributions with the Euclidean distance
suffers from the \textit{curse of dimensionality} and is easily distorted by small perturbations.
Also, even an efficient clustering algorithm such as K-means requires $O(k \cdot i \cdot n \cdot d)$ time,
where $k$ is the number of clusters, $i$ is the number of iterations, $n$ is the number of data points, and $d$ is the number of dimensions.
While the number of iterations $i$ can be tuned and the computation over the $n$ data points can be parallelized,
if we do not keep the dimension $d$ within a small range,
the running time inhibits interactive analysis (\revise{\textbf{R.1}}).
To prepare the data for the clustering algorithms more efficiently,
we propose the use of Principal Component Analysis (PCA) to transform the sparse attribution vectors into low-dimensional vectors that preserve as much information as possible by maximizing variance.
The Euclidean distances between these vectors then represent the cluster structure more faithfully.
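A minimal sketch of this pipeline with scikit-learn (the function name \verb|cluster_attributions| and its default parameter values are illustrative, not the system's exact settings):
\begin{verbatim}
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

def cluster_attributions(A, n_clusters=5, n_components=10, seed=0):
    """Compress sparse attribution vectors with PCA before K-means,
    capping the dimension d in the O(k*i*n*d) cost at n_components."""
    pipeline = make_pipeline(
        PCA(n_components=n_components, random_state=seed),
        KMeans(n_clusters=n_clusters, random_state=seed, n_init=10),
    )
    return pipeline.fit_predict(A)  # one cluster label per instance
\end{verbatim}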
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{fig/attr_boxplot}
\caption{Distribution of attribution values on the synthetic dataset.
The first half of the data (left) is explained by feature A and the second half (right) is explained by feature B,
illustrated by distributions of higher values in the corresponding box plots.}
\label{fig:boxplot}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{fig/experiment_attr}
\caption{Accuracy and run time of clustering with the addition of sparse noise columns.
Accuracy decreases drastically and run time increases as the attribution vector becomes sparser.
With PCA applied to the attribution vectors before clustering,
the accuracy becomes more robust to noise and the computation much faster.}
\label{fig:attr_exp}
\end{figure}
We now illustrate the effectiveness of this approach with the following experiment on a synthetic dataset.
The dataset $D$ consists of two classes, two features (A and B), and 10,000 points in total.
The first half of the dataset is predicted by feature A and the second half by feature B.
This is achieved by assigning feature values in the following way:
\begin{align*}
D_{1,2,\ldots,5000} &:\quad
\text{feature A} \sim \begin{cases}
U[0,1] & \text{if class 1}\\
U[1,2] & \text{if class 2}
\end{cases},
\qquad \text{feature B} \sim U[1,2]\\
D_{5001,5002,\ldots,10000} &:\quad
\text{feature A} \sim U[2,3],
\qquad \text{feature B} \sim \begin{cases}
U[3,4] & \text{if class 1}\\
U[4,5] & \text{if class 2}
\end{cases}
\end{align*}
We split the dataset with an 80/20 train/test split and achieve a test accuracy of 99.95\% with a random forest classifier.
We run LIME on all the data with this classifier, which generates attribution vectors with the characteristics shown in Figure~\ref{fig:boxplot}.
Overall, data points that are explained by a feature to a greater extent are assigned higher attribution values on the corresponding feature.
To illustrate the effect of noise and the effectiveness of our attribution transformation approach, we expand the attribution vectors by adding columns with values sampled from a uniform distribution between 0 and 0.5,
which mimics the behavior of noise.
We add between 1,000 and 10,000 noise columns and examine whether K-means clustering can group the attributions into the same two groups as the assignment in Figure~\ref{fig:boxplot}.
We run K-means clustering with and without PCA multiple times and record the accuracy in terms of the Rand index \cite{santos2009use} as well as the average run time.
The result can be seen in Figure~\ref{fig:attr_exp}.
The results show that by transforming the attribution vectors with PCA,
the clustering becomes robust to the sparsity of the attributions,
which makes subpopulation generation feasible from the local attribution data.
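The experiment can be reproduced in spirit with the following scaled-down sketch (our own illustration: idealized attribution vectors stand in for the LIME output, the adjusted Rand score stands in for the Rand index, and the sizes are reduced to keep memory modest):
\begin{verbatim}
import time
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n = 2000
# Idealized attributions: the first half is explained by feature A,
# the second half by feature B (cf. the box plots above).
A = np.zeros((n, 2))
A[: n // 2, 0] = rng.uniform(0.5, 1.0, n // 2)
A[n // 2 :, 1] = rng.uniform(0.5, 1.0, n // 2)
truth = np.repeat([0, 1], n // 2)

for n_noise in (500, 1000, 2000):
    X = np.hstack([A, rng.uniform(0.0, 0.5, (n, n_noise))])
    for use_pca in (False, True):
        Xr = (PCA(n_components=2, random_state=0).fit_transform(X)
              if use_pca else X)
        t0 = time.time()
        labels = KMeans(n_clusters=2, random_state=0,
                        n_init=10).fit_predict(Xr)
        print(n_noise, use_pca,
              round(adjusted_rand_score(truth, labels), 3),
              round(time.time() - t0, 2))
\end{verbatim}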
\section{Related Work}
In this section, \revise{we first discuss how visualizations are applied in the area of machine learning model explanation. We then discuss the human factors in interpretable machine learning. Finally, we explain the motivations for subpopulation analysis and how it can help with model interpretation.}
\subsection{Visualization for Model Explanation }
With the increase in the complexity of machine learning models in recent years, the interpretation of machine learning models has been highly valued. The machine learning \revise{and visualization} communities have long been working on the explanation of machine learning models to improve fairness \cite{masafumi2010study}, support debugging \cite{amershi2015modeltracker,liu2017visual,parikh2011human}, support comparisons \cite{zhang2018manifold}, and gain trust from end-users \cite{luo2016automatically}.
There are two distinct categories of model explanation approaches: ``white-box'' approaches and ``black-box'' approaches~\cite{molnar2019}. White-box, or \textit{intrinsic}, approaches tend to restrict the internal logic and structure of a model so that human observers can understand why a decision is made. Black-box, or \textit{post-hoc}, approaches try to explain how the inputs relate to the output without showing the internal working mechanism of a model.
White-box approaches are applicable to intrinsically interpretable models, such as decision trees, rule-based models, and linear models. For example, a gallery of tree visualizations for decision trees can be found at treevis.net\cite{schulz2011treevis}. BOOSTVis\cite{liu2017visual} helps model diagnosis during the training process of tree boosting. Rule-based models are composed of logical representations and can be expressed as lists of IF-THEN or IF-THEN-ELSE statements. People can gain insights into a linear model by using projection-based methods\cite{caragea2001gaining}.
Black-box models, whose internals are generally opaque and uninterpretable, cannot be interpreted using a white-box approach. Although these models are known to provide better performance in many cases\cite{guidotti2019survey}, using black-box models in high-stakes scenarios may result in increased risk\cite{modarres2018towards}, lower trust, and limited adoption due to the lack of interpretability \cite{luo2016automatically}. Thus, there is a surge of interest in the interpretation of black-box models (see the survey~\cite{guidotti2019survey} for more details). In contrast to white-box approaches, black-box model explanations focus more on the relationship between input and output without looking at the internal structure of the model. In our work, we provide model diagnosis functions for data scientists who work with black-box models and need to understand the models with domain knowledge, so black-box approaches are adopted.
In general, black-box explanations can be divided into three classes, as categorized in the survey~\cite{guidotti2019survey}: \textit{Model Explanation}, where the explanation involves the whole logic of the model; \textit{Outcome Explanation}, which explains why a specific decision is made for a given object; and \textit{Model Inspection}, which focuses on providing a representation (visual or textual) for understanding some specific property of the black-box model or of its predictions when the input changes. To provide a \textit{model explanation} for a global understanding of the model, an interpretable surrogate model needs to be trained to approximate the black-box model. This surrogate should mimic the behavior of the black box while remaining understandable by humans. However, the complexity of a surrogate model increases as it approximates the black box more closely. Ming \textit{et al.}\cite{ming2018rulematrix} present the trade-off between model complexity (related to interpretability) and fidelity to the original model. To work with the real inputs and model outputs while keeping the explanation understandable, we include outcome explanations and model inspection in this project.
Instead of understanding a surrogate model, \textit{outcome explanations} and \textit{model inspection} work in the original data space. For decision-makers, the explanation of a given case is helpful when making decisions. To this end, interactive visual systems have been proposed to understand specific explanations more effectively and efficiently. For example, Krause \textit{et al.}\cite{explainer17} leverage instance-level explanations, measures of local feature relevance that explain single instances, in an interactive visual system to help domain experts investigate, understand, and diagnose model decisions. In many recent applications\cite{krause2016interacting,ribeiro2016should}, both \textit{outcome explanations} and \textit{model inspection} are integrated with visualization to help users understand model decisions. Prospector\cite{krause2016interacting} provides interactive partial dependence diagnostics that present how features affect the prediction overall, together with localized inspection for a better understanding of how and why specific data points are predicted as they are. LIME\cite{ribeiro2016should} proposes an algorithm for finding the attributions of different features by adding perturbations to the original input, and then highlights the attributions relevant to the prediction in a visualization.
For model inspection of complex models, such as deep neural networks, visualizations can aid developers in understanding the internal structures of the model. For example, TensorBoard\cite{wongsuphasawat2017visualizing} visualizes the underlying dataflow graph of a deep learning model. Liu \textit{et al.}\cite{liu2016towards} use a hybrid visualization that embeds debugging information in the neural network visualization to help experts understand, diagnose, and refine deep CNNs. Tzeng \textit{et al.}\cite{tzeng2005opening} introduce the visualization of weights in neural networks for a single instance or a set of data instances to build more understanding of, and confidence in, using artificial neural networks. ActiVis\cite{kahng2017activis} not only visualizes the internal structure of neural network models but also supports model exploration at both the instance and subset level.\\
\revise{
\hrule
\textbf{Take-away}: Visualizations are widely applied in the area of machine learning explanation to help humans gain a better understanding of a machine learning model. In our work, we treat the model to be explained as a black box. According to the task analysis in the next section, our research focuses on explaining the model from the data space, which falls into the categories of model inspection and outcome explanation. Furthermore, inspecting model behaviors requires explanations at multiple levels of granularity, such as an instance-level explanation for a single model decision and a subpopulation-level explanation for a group of instances for which the model makes decisions for similar reasons. We include more details in a later section, discussing how subpopulation analysis, with the help of visualization, can assist model interpretation.
\hrule
}
\subsection{Human Factors in Interpretable Machine Learning}
\revise{Since the end-users of machine learning interpretations are humans themselves, it is crucial to address real-world user needs for understanding AI and to generate human-friendly explanations for users ranging from model developers to domain experts and decision-makers.\\
Lipton gives an overview of machine learning model interpretability~\cite{lipton2018mythos}, summarizing the properties of interpretable models and arguing that humans need model interpretation so that they can build trust in the model and make more informed, fair, and ethical decisions. From the perspective of human-computer interaction, there is more than reasoning to consider when deeming a model interpretable and useful. The awareness of reasoning \cite{rader2018explanations}, trust \cite{glass2008toward, ribeiro2016should}, alignment with user expectations \cite{eslami2018communicating}, justice \cite{binns2018s}, contrastive reasoning \cite{lim2009and}, and human-in-the-loop analysis \cite{lage2018human} are all factors that can affect users' willingness to apply machine learning models to an application scenario.\\
Human decision making, the step that follows model understanding, is also an essential factor for generating interpretable models. Whether users are willing to make decisions based on a model depends on the model's accuracy \cite{amershi2010examining,fiebrink2011human}, its variance across different inputs and outputs \cite{stumpf2009interacting}, and the availability of performance reports \cite{trivedi2017interactive}.\\
Although many explainable AI (XAI) algorithms have been proposed, as stated in the previous subsection, a recent study~\cite{liao2020questioning} reports interview results with industry practitioners revealing that creating explainable AI products remains a challenge because of the variance in user needs for explainability, discrepancies between algorithmic explanations and human explanations, and a lack of support for design practices. Furthermore, the HCI community has called for interdisciplinary collaboration~\cite{abdul2018trends} and user-centered approaches to explainability~\cite{wang2019designing} to bridge the gap between XAI algorithms and user needs for sufficient transparency. \\
\hrule
\textbf{Take-away}: Given the history of previous work on XAI, it is essential yet challenging to design an XAI product that addresses the real issues in explaining a machine learning model. In our work, we present a hierarchical task analysis in a later section, which maps the design goals to multi-level tasks, and describe how our designs evolved during the collaboration with data scientists from industry.
\hrule
}
\subsection{Subpopulation Visualization}
Clustering algorithms help discover groups of similar objects in a dataset. Clustering has become a popular unsupervised learning method~\cite{trevor2009elements}, typically used early in the process of exploratory data analysis. Cluster analysis has a heuristic nature that encourages the exploration of data~\cite{dubes1980clustering}. Inspired by this, we believe that data analysts can benefit from generating hypotheses at the subpopulation level. Dimension reduction algorithms and clustering algorithms are both frequently used techniques in visual analytics. Both categories of techniques assist analysts in performing tasks regarding the similarity of observations and finding groups in datasets\cite{wenskovitch2017towards}. \revise{In terms of model inspection and outcome explanation, the exploration of subpopulations reveals groups that can be explained by similar reasons, as well as outliers for which the model exhibits abnormal behavior patterns.} In the following subsections, we explain the usage of \revise{clustering visualization and dimensionality reduction methods, which assist subpopulation analysis.}
\subsubsection{Visualizing Clusters}
\hfill\\
In general, there are three categories of visualization of clusters: (1) visualizing membership of clusters, focusing on presenting the groups that data instances belong to; (2) visualizing the content of clusters, aiming at demonstrating the feature values or properties of data instances in a cluster; (3) cluster optimization, where the visual system enables users to modify the membership of instances to reach a customized clustering result.
Saket \textit{et al.} studied three options for encoding group membership: nodes colored by cluster membership, nodes with cluster colors and links, and colored space-filling regions. Jianu \textit{et al.}\cite{jianu2014display} further explored the options of LineSets\cite{alper2011design}, GMap\cite{gansner2010gmap}, and BubbleSets\cite{collins2009bubble}. The visualization of clusters or groups provides a straightforward way of showing data distribution. A recent application of clustering for explainable machine learning is CNN2DT\cite{jiavisualizing}, where bubble sets are used to highlight the regions of neurons in a CNN with the same label.
In recent years, many interactive systems have also included the visualization of cluster content to help users explore clustering results. For example, a heat map, as applied in the Hierarchical Clustering Explorer (HCE) \cite{seo2002interactively}, shows the overall feature values in clusters. Parallel coordinates\cite{inselberg1985plane} are another type of chart widely used for multidimensional data. Their application in ClusterVision~\cite{kwon2018clustervision} enables a data distribution overview and useful cluster comparison. However, parallel coordinates can become cluttered when too much data is visualized ~\cite{bertini2005springview, yuan2009scattering}.
A third type of visual system for clustering is designed for cluster optimization. For example, Packer \textit{et al.} ~\cite{packer2013visual} use heuristics to suggest interesting algorithmic settings for exploration. SOMFlow~\cite{sacha2018somflow} enables further data partitioning of existing clustering output. Moreover, ClusterVision~\cite{kwon2018clustervision} can retrieve new clustering results recommended based on users' input.
\subsubsection{Dimensionality Reduction in Visualization}
\hfill\\
A recent work\cite{nonato2018multidimensional} surveys Multidimensional Projection (MDP) methods, properties, errors, and tasks. MDP algorithms such as t-SNE\cite{maaten2008visualizing}, UMAP\cite{mcinnes2018umap}, LAMP\cite{joia2011local}, PCA\cite{wold1987principal}, and MDS\cite{borg2003modern} are widely used in the visualization community. As for the visual representation of MDP, most dimension reduction outputs are shown in scatterplots or node-link diagrams\cite{wenskovitch2017towards}. For instance, Andromeda\cite{self2016bridging} integrates a 2D projection view to support communication between a user and high-dimensional data analysis. Kandogan introduces Star Coordinates\cite{kandogan2000star}, which arranges coordinates on a circle sharing the same origin at the center, for cluster discovery and multi-factor analysis tasks. Besides numerical data, text data\cite{alsakran2011streamit, bradel2014multi} and image data\cite{mamani2013user} can also be encoded in a scatterplot using MDP techniques.
With only the layout resulting from an MDP mapping, we get a basic point cloud where groups and neighborhoods are indicative of similarity among the involved instances. However, content-based enrichment techniques that build upon the proximity of similar instances in the visual space can be exploited to depict additional information associated with particular instances or groups. For example, FacetAtlas\cite{cao2010facetatlas} exploits cluster-based enrichment to highlight the clusters in a projection view. Though clustering and dimensionality reduction algorithms were initially used independently, recent works have incorporated algorithms from both families into the same visualization systems. As pointed out in the survey \cite{wenskovitch2017towards}, there are six different options for pipelines combining dimension reduction algorithms and clustering algorithms. In our work, we achieve our design goals by integrating cluster analysis on multidimensional data, so multidimensional projection with cluster-based enrichment is considered in our visual designs.\\
\revise{
\hrule
Take-away: Clustering and dimensionality reduction algorithms are widely used for subpopulation analysis, which assists interactive model inspection and outcome explanation. Inspired by this, our work provides an interactive approach to subpopulation-level model exploration to help users gain a better interpretation of the model and data.
\hrule
}
\section{Use Case Scenario}
\label{sec:case}
In this section, we demonstrate three usage scenarios regarding the use of \texttt{SUBPLEX}\xspace to address the interpretability goals of machine learning experts in understanding important features, investigating bias features, and debugging the model (\textbf{G.1-3}). We used a credit score evaluation dataset~\cite{explaina31} consisting of 6,600 applicants with 37 features and trained a neural network on the application result (accept/reject).
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{fig/case1}
\caption{First use case of identifying important features among the subpopulations. (1) Some instances have no significant attributions overall, meaning they are noise and not useful in the subpopulation analysis. (2) The removal of meaningless instances is done by clicking the clusters to export the instances to the notebook, which subsequently removes the instances from the data frame. (3) The updated view provides a clear inspection of each cluster's rationale in terms of features with high attribution values. When a cluster is clicked in the detail view, the features are sorted by the selected cluster's values.}
\label{fig:case1}
\end{figure*}
\subsection{Use Case 1: Finding Important Features in Subpopulations}
The first use case explores how our domain experts use \texttt{SUBPLEX}\xspace to identify the model's behavior through different granularities of subpopulation explanations.
\noindent\textbf{Preparing subpopulations through interactive clustering and data cleaning.}
To begin with, our expert first imports the attributions
into the system and tries to cluster them with different numbers of clusters. Each clustering and projection process takes around three seconds in total. He then identifies that the original attribution data has five clusters with visibly distinct behavior in the detail view (Figure~\ref{fig:case1}). Among the clusters, he discovers that each has a different set of high-valued attributions, except one that has no significant attributions at all. Based on the definition of attribution, these are the instances that are hard to explain with the explanation model (Figure~\ref{fig:case1}(1)). Thus, from a data cleaning perspective, our expert selects the cluster by clicking on the medoid, then filters the instances in the data frame to remove them from the widget (Figure~\ref{fig:case1}(2)).
\noindent\textbf{Identifying significant features among subpopulations.} After removing the instances with low attribution values, our expert discovers five unique rationales between the model and the dataset from the subpopulations. By sorting the attribution values for each subpopulation, he identifies each group's characteristics from the long bars in the detail view (Figure~\ref{fig:case1}(3)): the first group contains high attributions on the features related to the number of recent inquiries (``MSinceMostRecentInqexcl7days'') and delinquent trades (``NumTrades60Ever2DerogPubRec''); the second group contains features related to the customer's age of trade lines (``AverageMInFile''); the third group consists of features regarding risk estimates (``ExternalRiskEstimate''); the fourth group is about features concerning the absence of a delinquency record (``MaxDelq2PublicRecLast12M = `unknown delinquency'''); and the final group concerns the existence of delinquency (``MaxDelq2PublicRecLast12M=`30 days delinquent'''). The results reveal that while the model has a diversified rationale on different portions of the dataset, each rationale contributes some unique traits for evaluating credit risk from different perspectives.
\noindent\textbf{Exporting and preparing the results.} As a result, the expert exports the results to data frames by clicking the medoids. When the visual exploration is completed, he proceeds to refine the final visual results by plotting the instances with static visualization libraries like \texttt{matplotlib}. The static charts are then used in other presentation formats like PowerPoint for communication in future internal meetings. All in all, \texttt{SUBPLEX}\xspace provides a comprehensive visual exploration of the model's attributions while being used tightly within the same programming environment.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{fig/case2}
\caption{Second use case: evaluating model performance after the removal of bias features. A: The first cluster reveals the instances that do not have any significant attributions after the most important features are removed. B: The second cluster contains instances with some useful features for making decisions.}
\label{fig:case2}
\end{figure*}
\subsection{Use Case 2: Evaluating Model Performance After the Removal of Bias Features}
The second use case concerns how our domain expert pushes the model to explore new rationales by removing the useful features identified in the previous experiments.
\noindent\textbf{Removing the Useful Features.} To remove the useful features, the expert replaces their values with random numbers. Therefore, when the model is trained on this dataset, the attributions of these features become negligible. To explore the outcome of this model, our expert imports the attribution data into the system to explore different groups of attributions.
\noindent\textbf{Evaluating the Model's Capability on Different Subpopulations.} After some experimenting, our expert discovers a clear separation of instances in the projection view when the number of clusters is set to two. The characteristics of the two clusters are very obvious in the detail view. One cluster has two strong feature attributions related to the absence of delinquency (``MaxDelq2PublicRecLast12M=`current and never delinquent''' and ``MaxDelqEver=`current and never delinquent'''). The other has no significant features at all. Therefore, by exporting the subpopulations and inspecting the cluster sizes, our expert understands that the model does not make consistent decisions on two-thirds of the dataset. For the remaining instances, it uses the clean delinquency record as the basis for its decisions. Thus, our expert summarizes the influence of the important features in the dataset as the rationale for customers without a clean delinquency record. He also saves these two populations in separate files for further experiments.
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{fig/case3}
\caption{The third use case concerns debugging the model by adding meaningless features. Some clusters, after (1) being interactively combined by the user, (2) reveal significant attribution values on the meaningless features.}
\label{fig:case3}
\end{figure*}
\subsection{Use Case 3: Debugging the Model's Architecture by Adding Noisy Features}
The last case focuses on model debugging, in which the domain expert attempts to adversarially influence the model by including noisy and meaningless features. This is done by adding features with values sampled from a normal distribution to the dataset.
\noindent\textbf{Inspecting the Model's Attributions.} After training the model and generating the attributions, our expert inspects the subpopulations of the attributions in the detail view (Figure~\ref{fig:case3}). By sorting the features in each cluster, our expert makes an interesting observation. While the clusters with a clear rationale (i.e., long bars for some features) do not have high values among the noisy features, the clusters without a clear rationale have relatively longer bars on these noisy features. The expert then groups all the similar instances across different clusters to obtain a finer view (Figure~\ref{fig:case3}(2)).
\noindent\textbf{Insights and Actions from the Observations.} Thus, our expert obtains the following insight: the instances that do not follow the mainstream rationale seem to be more easily affected by noisy features. From a neural network perspective, this makes sense: due to its small population size, the unique behavior does not affect the gradients inside the network during batch processing. As a result, our expert decides to explore the possibility of data augmentation, generating more similar data so that the network's general logic adapts to these highly customized instances. Moreover, he also reports the findings to caution against using the model for niche decisions.
}
\section{Lessons Learned}
We have learned two lessons in the process of collaborating with the machine learning experts. \\
First, it is increasingly important to integrate a visual analytics tool into the development environment where data scientists are familiar with training their models. At the beginning of this project, we went through a few iterations of the visual system on the web, that is, building the tool as a traditional web application hosted on a local or remote server. However, our collaborators proposed making it an interactive Jupyter widget because they wanted to stay in the Jupyter notebook environment where they build their machine learning models. Data scientists are familiar with the coding workflow, so we need to let them interact with the visual analytics tool in a way they are used to. Another drawback of a separate web application is that it requires additional I/O operations, such as saving data to files and uploading the data to the server. Staying in the development environment makes it much easier to transfer the data used for visualizations. Moreover, it is also flexible to retrieve the desired data from the tool so that users can explore it further later on. For example, it is convenient to get an array of the data points selected by brushing in the projection view, which enables our users to do more analysis on the selected subset using native Python functions. \\
Second, data scientists want to know the details of the data processing steps involved in generating explanations. In our work, we use clustering and dimensionality reduction methods to assist the subpopulation analysis. During the iterations of tool development, we were asked to surface information about the processing steps. We first added textual information about what the processing steps are (e.g., running clustering, running dimensionality reduction). After a few interviews with users, we realized that they also wanted to know which clustering and dimensionality reduction algorithms we were using, as well as the parameters of each processing step. So, in the latest version of the tool, we enable data scientists to initialize the widget with customized objects for the clustering or dimensionality reduction algorithm.
}